\section{The Algorithm} In order to make inference efficient, we train a non-linear regressor that maps input patches $Y$ to sparse representations $Z$. We consider the following nonlinear mapping: \begin{eqnarray} F(Y;G,W,D) = G \tanh( W Y + D) \label{eq:regressor} \end{eqnarray} where $W \in {\cal R}^{m \times n}$ is a filter matrix, $D \in {\cal R}^m$ is a vector of biases, $\tanh$ is the hyperbolic tangent non-linearity, and $G\in {\cal R}^{m \times m}$ is a diagonal matrix of {\em gain} coefficients allowing the outputs of $F$ to compensate for the scaling of the input, given that the reconstruction performed by $B$ uses bases with unit norm. Let $P_f$ collectively denote the parameters that are learned in this predictor, $P_f=\{G,W,D\}$. The goal of the algorithm is to make the prediction of the regressor, $F(Y;P_f)$, as close as possible to the optimal set of coefficients: $Z^* = \arg \min_Z {\cal L}(Y,Z;B)$ in eq.~(\ref{eq:bp}). This optimization can be carried out separately {\em after} the problem in eq.~(\ref{eq:bp}) has been solved. However, training becomes much faster by {\em jointly} optimizing $P_f$ and the set of bases $B$. This is achieved by adding another term to the loss function in eq.~(\ref{eq:bp}), enforcing the representation $Z$ to be as close as possible to the feed-forward prediction $F(Y;P_f)$: \begin{eqnarray} {\cal L}(Y,Z;B,P_f) = \|Y - BZ \|_2^2 + \lambda \|Z\|_1 + \alpha \|Z - F(Y;P_f) \|_2^2 \label{eq:loss} \end{eqnarray} Minimizing this loss with respect to $Z$ produces a representation that simultaneously reconstructs the patch, is sparse, and is not too different from the predicted representation. If multiple solutions to the original loss (without the prediction term) exist, minimizing this compound loss will drive the system towards producing basis functions and optimal representations that are easily predictable. After training, the function $F(Y;P_f)$ will provide good and smooth approximations to the optimal sparse representations. Note that a linear mapping would not be able to produce sparse representations using an overcomplete set because of the non-orthogonality of the filters; therefore, a non-linear mapping is required. \subsection{2.1 Learning} The goal of learning is to find the optimal value of the basis functions $B$, as well as the value of the parameters in the regressor $P_f$. Learning proceeds by an on-line block coordinate gradient descent algorithm, alternating the following two steps for each training sample $Y$: \begin{enumerate} \item keeping the parameters $P_f$ and $B$ constant, minimize ${\cal L}(Y,Z;B,P_f)$ of eq.~(\ref{eq:loss}) with respect to $Z$, starting from the initial value provided by the regressor $F(Y;P_f)$. In our experiments we use gradient descent, but any other optimization method can be used; \item using the optimal value of the coefficients $Z$ provided by the previous step, update the parameters $P_f$ and $B$ by one step of stochastic gradient descent. The update is $U \leftarrow U - \eta \deri{{\cal L}}{U}$, where $U$ collectively denotes $\{P_f,B\}$ and $\eta$ is the step size. The columns of $B$ are then re-scaled to unit norm. \end{enumerate} Interestingly, we recover different algorithms depending on the value of the parameter $\alpha$: \begin{itemize} \item {\it $\alpha = 0$}. The loss of eq.~(\ref{eq:loss}) reduces to the one in eq.~(\ref{eq:bp}). The learning algorithm becomes similar to Olshausen and Field's sparse coding algorithm~\cite{olshausen-field-97}. 
The regressor is trained {\em separately} from the set of basis functions $B$. \item {\it $\alpha \in (0,+\infty)$}. The parameters are updated taking into account the constraint on the representation as well, using the same principle employed by SESM training~\cite{ranzato-nips07}, for instance. \item {\it $\alpha \rightarrow +\infty$}. The additional constraint on the representation (the third term in eq.~(\ref{eq:loss})) becomes an equality, i.e. $Z = F(Y;P_f)$, and the model becomes similar to an auto-encoder neural network with a sparsity regularization term acting on the internal representation $Z$ instead of a regularization acting on the parameters $P_f$ and $B$. \end{itemize} In this paper, we always set $\alpha = 1$. However, sec.~\ref{sec:experiments} shows that training the regressor after training the set of bases $B$ yields similar performance in terms of recognition accuracy. When the regressor is trained afterwards, the approximate representation is usually less sparse and the overall training time increases considerably. Finally, additional experiments not reported here show that training the system as an auto-encoder ($\alpha \rightarrow +\infty$) provides a very fast and efficient algorithm that can produce good representations when the dimensionality of the representation is not much greater than the input dimensionality, i.e. $m \simeq n$. When the sparse representation is highly overcomplete, the block-coordinate descent algorithm with $\alpha \in (0,+\infty)$ provides better features. \subsection{2.2 Inference} Once the parameters are learned, inferring the representation $Z$ can be done in two ways. \\ {\bf Optimal inference} consists of setting the representation to $Z^* = \arg \min_z {\cal L}$, where ${\cal L}$ is defined in eq.~(\ref{eq:loss}), by running an iterative gradient descent algorithm involving two possibly large matrix-vector multiplications at each iteration (one for computing the value of the objective, and one for computing the derivatives through $B$). \\ {\bf Approximate inference}, on the other hand, sets the representation to the value produced by $F(Y;P_f)$ as given in eq.~(\ref{eq:regressor}), involving only a forward propagation through the regressor, i.e. a single matrix-vector multiplication. \section{Experiments} \label{sec:experiments} First, we demonstrate that the proposed algorithm (PSD) is able to produce good features for recognition by comparing it to other unsupervised feature extraction algorithms: Principal Components Analysis (PCA), the Restricted Boltzmann Machine (RBM)~\cite{Hinton-rbm-cd}, and the Sparse Encoding Symmetric Machine (SESM)~\cite{ranzato-nips07}. Then, we compare the recognition accuracy and inference time of the PSD feed-forward approximation to the feature sign algorithm~\cite{lee-nips-06} on the Caltech 101 dataset~\cite{feifei_cvpr04}. Finally, we investigate the stability of representations under naturally changing inputs. \subsection{3.1 Comparison against PCA, RBM and SESM on MNIST} The MNIST dataset has a training set with 60,000 handwritten digits of size 28x28 pixels, and a test set with 10,000 digits. Each image is preprocessed by normalizing the pixel values so that their standard deviation is equal to 1. In this experiment the sparse representation has 256 units. This internal representation is used as a global feature vector and fed to a linear regularized logistic regression classifier. 
Fig.~\ref{fig:sparsity-error-comparison-nips07} shows the comparison between PSD (using feed-forward approximate codes) and PCA, SESM \cite{ranzato-nips07}, and RBM \cite{Hinton-DeepAutoencoder}. Even though PSD provides the {\bf worst reconstruction error}, it can achieve the {\bf best recognition accuracy} on the test set for different numbers of training samples per class. \begin{figure}[tbh] \begin{centering} \includegraphics[width=1\textwidth,height=0.25\textwidth]{figs/sparsity-error-comparison-nips07} \par\end{centering} \caption{Classification error on MNIST as a function of reconstruction error using raw pixel values and PCA, RBM, SESM and PSD features. Left to right: 10-100-1000 samples per class are used for training a linear classifier on the features. The unsupervised algorithms were trained on the first 20,000 training samples of the MNIST dataset~\cite{MNIST}. } \label{fig:sparsity-error-comparison-nips07} \end{figure} \subsection{3.2 Comparison with Exact Algorithms} In order to quantify how well our jointly trained predictor given in eq.~(\ref{eq:regressor}) approximates the optimal representations obtained by minimizing the loss in eq.~(\ref{eq:loss}) and the optimal representations that are produced by an exact algorithm minimizing eq.~(\ref{eq:bp}) such as feature sign~\cite{lee-nips-06} (FS), we measure the average signal to noise ratio\footnote{$SNR = 10 \log_{10} (\sigma^2_{signal} / \sigma^2_{noise})$} (SNR) over a test dataset of 20,000 natural image patches of size 9x9. The data set of images was constructed by randomly picking 9x9 patches from the images of the Berkeley dataset converted to gray-scale values, and these patches were normalized to have zero mean and unit standard deviation. The algorithms were trained to learn sparse codes with 64 units\footnote{Principal Component Analysis shows that the effective dimensionality of 9x9 natural image patches is about 47 since the first 47 principal components capture 95\% of the variance in the data. Hence, a 64-dimensional feature vector is actually an overcomplete representation for these 9x9 image patches.}. We compare representations obtained by ``PSD Predictor'' using the {\em approximate} inference, ``PSD Optimal'' using the {\em optimal} inference, ``FS'' minimizing eq.~(\ref{eq:bp}) with~\cite{lee-nips-06}, and ``Regressor'' that is separately trained to approximate the exact optimal codes produced by FS. The results given in table~\ref{table:snr} show that the PSD direct predictor achieves about the same SNR on the true optimal sparse representations produced by FS as the Regressor that was trained to predict these representations. \begin{table}[bt] \caption{Comparison between representations produced by FS~\cite{lee-nips-06} and PSD. In order to compute the SNR, the noise is defined as $(Signal-Approximation)$. } \begin{centering} \begin{tabular}{|l|c|} \hline Comparison (Signal / Approximation) & Signal to Noise Ratio (SNR)\tabularnewline \hline \hline 1. PSD Optimal / PSD Predictor & 8.6\tabularnewline \hline 2. FS / PSD Optimal & 5.2\tabularnewline \hline 3. FS / PSD Predictor & 3.1\tabularnewline \hline 4. FS / Regressor & 3.2\tabularnewline \hline \end{tabular} \par\end{centering} \label{table:snr} \end{table} Despite the lack of absolute precision in predicting the exact optimal sparse codes, the PSD predictor achieves even better performance in recognition. 
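To make the preceding comparisons concrete, the following is a minimal numpy sketch (ours, not the authors' implementation) of the predictor of eq.~(\ref{eq:regressor}), the compound loss of eq.~(\ref{eq:loss}), one on-line block-coordinate training step from sec.~2.1, and the single matrix-vector approximate inference of sec.~2.2; all dimensions, step sizes and iteration counts are illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 81, 64                      # input dim (9x9 patch), code dim
lam, alpha, eta = 0.5, 1.0, 0.01   # sparsity, prediction weight, SGD step

B = rng.standard_normal((n, m)); B /= np.linalg.norm(B, axis=0)  # unit-norm bases
W = 0.1 * rng.standard_normal((m, n))                            # filters
D = np.zeros(m)                                                  # biases
g = np.ones(m)                                                   # diagonal gains G

def predict(Y):
    """Feed-forward predictor F(Y) = G tanh(W Y + D)."""
    return g * np.tanh(W @ Y + D)

def grad_Z(Y, Z):
    """(Sub)gradient of the compound loss with respect to Z."""
    return (-2 * B.T @ (Y - B @ Z) + lam * np.sign(Z)
            + 2 * alpha * (Z - predict(Y)))

def infer_optimal(Y, n_steps=100, step=0.01):
    """Optimal inference: gradient descent on the loss in Z, starting at F(Y)."""
    Z = predict(Y)
    for _ in range(n_steps):
        Z -= step * grad_Z(Y, Z)
    return Z

def train_step(Y):
    """One on-line block-coordinate step: infer Z, then SGD on B and P_f."""
    global B, W, D, g
    Z = infer_optimal(Y)
    r = Y - B @ Z                          # reconstruction residual
    e = Z - predict(Y)                     # prediction residual
    t = np.tanh(W @ Y + D)
    dpre = 2 * alpha * e * g * (1 - t**2)  # backprop through G tanh(W Y + D)
    B += eta * 2 * np.outer(r, Z)
    g += eta * 2 * alpha * e * t
    W += eta * np.outer(dpre, Y)
    D += eta * dpre
    B /= np.linalg.norm(B, axis=0)         # re-project columns to unit norm

Y = rng.standard_normal(n)                 # stand-in for a preprocessed patch
train_step(Y)
Z_fast = predict(Y)                        # approximate inference: one mat-vec
\end{verbatim}
The inner minimization over $Z$ mirrors the plain gradient descent mentioned in sec.~2.1; any other sparse solver could be substituted there.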
The Caltech 101 dataset is pre-processed in the following way: {\bf 1)} each image is converted to gray-scale, {\bf 2)} it is down-sampled so that the longest side is 151 pixels, {\bf 3)} the mean is subtracted and each pixel is divided by the image standard deviation, {\bf 4)} the image is locally normalized by subtracting the weighted local mean from each pixel and dividing it by the weighted norm if the latter is larger than 1, with weights forming a 9x9 Gaussian window centered on each pixel, and {\bf 5)} the image is 0-padded to 143x143 pixels. 64 feature detectors (produced by either FS or the PSD predictor) were plugged into an image classification system that {\bf A)} used the sparse coding algorithms convolutionally to produce 64 feature maps of size 128x128 for each pre-processed image, {\bf B)} applied an absolute value rectification, {\bf C)} computed an average down-sampling to a spatial resolution of 30x30, and {\bf D)} used a linear SVM classifier to recognize the object in the image (see fig.~\ref{fig:recog_sys}(b)). Using this system with 30 training images per class we can achieve $53\%$ accuracy on the Caltech 101 dataset. \begin{figure}[tb] \begin{centering} \includegraphics[width=0.73\textwidth]{figs/recog2} \par\end{centering} \caption{{\bf a)} 256 basis functions of size 12x12 learned by PSD, trained on the Berkeley dataset. Each 12x12 block is a column of matrix $B$ in eq.~(\ref{eq:loss}), i.e. a basis function. {\bf b)} Object recognition architecture: linear adaptive filter bank, followed by $abs$ rectification, average down-sampling and linear SVM classifier.} \label{fig:recog_sys} \end{figure} Since FS finds exact sparse codes, its representations are generally sparser than those found by the PSD predictor trained with the same value of the sparsity penalty $\lambda$. Hence, we compare the recognition accuracy against the {\em measured} sparsity level of the representation as shown in fig.~\ref{fig:acc_nfilters}(b). Not only is PSD able to achieve better accuracy than exact sparse coding algorithms, but it also does so much more efficiently. Fig.~\ref{fig:acc_nfilters}(a) demonstrates that our feed-forward predictor extracts features more than 100 times faster than feature sign. In fact, the speed up is over 800 when the sparsity is set to the value that gives the highest accuracy shown in fig.~\ref{fig:acc_nfilters}(b). Finally, we observe that these sparse coding algorithms are somewhat inefficient when applied convolutionally. Many feature detectors are translated versions of each other as shown in fig.~\ref{fig:recog_sys}(a). Hence, the resulting feature maps are highly redundant. This might explain why the recognition accuracy tends to saturate when the number of filters is increased as shown in fig.~\ref{fig:acc_nfilters}(c). \begin{figure}[tb] \begin{centering} \includegraphics[width=0.75\textwidth]{figs/accuracy} \par\end{centering} \caption{{\bf a)} Speed up for inferring the sparse representation achieved by the PSD predictor over FS for a code with 64 units. The feed-forward extraction is more than 100 times faster. {\bf b)} Recognition accuracy versus measured sparsity (average $\ell^1$ norm of the representation) of the PSD predictor compared to the representation of the FS algorithm. A difference within 1\% is not statistically significant. 
{\bf c)} Recognition accuracy as a function of the number of basis functions.} \label{fig:acc_nfilters} \end{figure} \subsection{3.3 Stability} In order to quantify the stability of PSD and FS, we investigate their behavior under naturally changing input signals. For this purpose, we train a basis set with 128 elements, each of size 9x9, using the PSD algorithm on the Berkeley~\cite{Berkeley} dataset. This basis set is then used with FS on the standard ``foreman'' test video together with the PSD Predictor. We extract 784 uniformly distributed patches from each frame, for a total of 400 frames. \begin{figure}[tbh] \begin{centering} \includegraphics[width=0.71\textwidth,height=0.18\textwidth]{figs/stability} \par\end{centering} \caption{Conditional probabilities for sign transitions between two consecutive frames. For instance, $P(-|+)$ shows the conditional probability of a unit being negative given that it was positive in the previous frame. The figure on the right is used as a baseline, showing the conditional probabilities computed on pairs of {\em random} frames.} \label{fig:stability} \end{figure} For each patch, a 128-dimensional representation is calculated using both FS and the PSD predictor. The stability is measured by the number of times a unit of the representation changes its sign (negative, zero, or positive) between two consecutive frames. Since the PSD predictor does not generate exact zero values, we threshold its output units in such a way that the average number of zero units equals the one produced by FS (roughly, only $4\%$ of the units are non-zero). The transition probabilities are given in Figure~\ref{fig:stability}. It can be seen from this figure that the PSD predictor generates a more stable representation of slowly varying natural frames compared to the representation produced by the exact optimization algorithm. \section{Summary and Future Work} Sparse coding algorithms can be used as a pre-processor in many vision applications and, in particular, to extract features in object recognition systems. To the best of our knowledge, no sparse coding algorithm is computationally efficient because inference involves some sort of iterative optimization. We showed that sparse codes can actually be approximated by a feed-forward regressor without compromising the recognition accuracy, while making the recognition process very fast and suitable for use in real-time systems. We proposed a very simple algorithm to train such a regressor. In the future, we plan to train the model convolutionally in order to make the sparse representation more efficient, and to build hierarchical deep models by sequentially replicating the model on the representation produced by the previous stage, as successfully proposed in~\cite{Hinton-DeepAutoencoder}. \section{Introduction} Object recognition is one of the most challenging tasks in computer vision. Most methods for visual recognition rely on handcrafted features to represent images. It has been shown that making these representations adaptive to image data can improve performance on vision tasks, as demonstrated in~\cite{lecun-98} in a supervised learning framework and in~\cite{elad-cvpr-06,ranzato-cvpr-07} using unsupervised learning. In particular, learning sparse representations can be advantageous since features are more likely to be linearly separable in a high-dimensional space and they are more robust to noise. 
Many sparse coding algorithms have been shown to learn good local feature extractors for natural images~\cite{olshausen-field-97,ksvd,mairal-cvpr-08,lee-nips-06,ranzato-06}. However, the application of these methods to vision problems has been limited due to the prohibitive cost of calculating sparse representations for a given image~\cite{mairal-cvpr-08}. In this work, we propose an algorithm named Predictive Sparse Decomposition (PSD) that can simultaneously learn an overcomplete linear basis set, and produce a smooth and easy-to-compute approximator that predicts the optimal sparse representation. Experiments demonstrate that the predictor is over 100 times faster than the fastest sparse optimization algorithm, and yet produces features that yield better recognition accuracy on visual object recognition tasks than the optimal representations produced through optimization. \subsection{1.1 Sparse Coding Algorithms} Finding a representation $Z \in {\cal R}^m$ for a given signal $Y \in {\cal R}^n$ by a linear combination of an overcomplete set of basis vectors, the columns of the matrix $B \in {\cal R}^{n \times m}$ with $m>n$, has infinitely many solutions. In optimal sparse coding, the problem is formulated as: \begin{eqnarray} \min_Z ||Z||_0 {\;\;\rm s.t.\;\;} Y=BZ \label{eq:L0} \end{eqnarray} where the $\ell^0$ ``norm'' is defined as the number of non-zero elements in a given vector. Unfortunately, the solution to this problem requires a combinatorial search, intractable in high-dimensional spaces. Matching Pursuit methods~\cite{Mallat:1993bs} offer a greedy approximation to this problem. Another way to approximate this problem is to make a convex relaxation by turning the $\ell^0$ norm into an $\ell^1$ norm~\cite{chen99atomic}. This problem, dubbed Basis Pursuit in the signal processing community, has been shown to give the same solution as eq.~(\ref{eq:L0}), provided that the solution is sparse enough~\cite{donoho-sparse}. Furthermore, the problem can be written as an unconstrained optimization problem: \begin{eqnarray} {\cal L}(Y,Z;B)=\frac{1}{2} ||Y-BZ||_2^2 + \lambda||Z||_1 \label{eq:bp} \end{eqnarray} This particular formulation, called Basis Pursuit Denoising, can be seen as minimizing an objective that penalizes the reconstruction error using a linear basis set and the sparsity of the corresponding representation. Many recent works have focused on efficiently solving the problem in eq.~(\ref{eq:bp})~\cite{efron02least,ksvd,lee-nips-06,Murray,ThreshCircuit,mairal-cvpr-08}. Yet, inference requires running some sort of iterative minimization algorithm that is always computationally expensive. Additionally, some algorithms are also able to {\em learn} the set of basis functions. The learning procedure finds the $B$ matrix that minimizes the same loss as in eq.~(\ref{eq:bp}). The columns of $B$ are constrained to have unit norm in order to prevent trivial solutions where the loss is minimized by scaling down the coefficients while scaling up the bases. Learning proceeds by alternating the optimization over $Z$ to infer the representation for a given set of bases $B$, and the minimization over $B$ for the given set of optimal $Z$ found at the previous step. Loosely speaking, basis functions learned on natural images under sparsity constraints are localized oriented edge detectors reminiscent of Gabor wavelets.
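As an illustration of this alternating scheme (a sketch under our own choices, not one of the algorithms cited above), the inference step for eq.~(\ref{eq:bp}) can be carried out by iterative soft-thresholding, followed by a projected gradient step on $B$:
\begin{verbatim}
import numpy as np

def ista(Y, B, lam, n_steps=200):
    """Minimize 1/2 ||Y - B Z||^2 + lam ||Z||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(B, 2) ** 2          # Lipschitz constant of the smooth part
    Z = np.zeros(B.shape[1])
    for _ in range(n_steps):
        Z = Z - B.T @ (B @ Z - Y) / L
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft threshold
    return Z

def dictionary_step(Y, B, Z, eta=0.01):
    """One gradient step on B for fixed Z, then re-project columns to unit norm."""
    B = B + eta * np.outer(Y - B @ Z, Z)
    return B / np.linalg.norm(B, axis=0)

rng = np.random.default_rng(0)
n, m = 81, 128                             # overcomplete: m > n
B = rng.standard_normal((n, m)); B /= np.linalg.norm(B, axis=0)
for _ in range(10):                        # alternate inference and basis update
    Y = rng.standard_normal(n)             # stand-in for a natural image patch
    Z = ista(Y, B, lam=0.5)
    B = dictionary_step(Y, B, Z)
\end{verbatim}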
\section{Introduction} \setcounter{equation}{0} Vortices with non-Abelian gauge groups (usually $U(N_c)$), as well as extended flavour symmetry, are host to a wealth of unique and surprising properties (\cite{Hanany:2003hp},\cite{Auzzi:2003fs},\cite{SYmon},\cite{Hanany:2004ea},\cite{Gorsky:2004ad},\cite{Eto:2006mz} and \cite{Hindmarsh:1992yy},\cite{Achucarro:1999it},\cite{Shifman:2006kd},\cite{Eto:2007yv}). Non-Abelian colour symmetry leads to non-Abelian strings, which bear a more complex charge structure than in the Abelian Higgs model. This is materialised by an internal degree of freedom, an undetermined modulus that points in a certain direction in an internal symmetry space, found to be $\mathbb{CP}(N_c-1)$. As a consequence, quantising the soliton leads to the study of fluctuations of these parameters in time and along the length of the string, i.e. a two-dimensional non-linear sigma model which captures the physics of the vortex string worldsheet. Much is known about the maximally supersymmetric NLSM. When considering a lesser number of supercharges, one finds that the worldsheet theory becomes a particular type of heterotically deformed, $(0,2)$ supersymmetric Non-Linear Sigma Model. Indeed, it is possible to construct non-Abelian vortices from spacetime field theories with fewer supersymmetries than $\mathcal{N}=2$, for instance by adding a mass term to the scalar multiplet components of the full gauge supermultiplet, making the spacetime theory $\mathcal{N}=1$, and then observing the consequences on the worldsheet. It was originally suggested by Shifman and Yung \cite{Shifman:2005st} that the resulting NLSM would have at least $\mathcal{N}=(1,1)$ supersymmetry, with extra fermionic degrees of freedom. In addition, this process does not spoil the K\"{a}hler nature of the target space at hand, and thus would lead to an enhancement back to the full $(2,2)$ theory. This statement often goes by the name of Zumino's theorem \cite{Zumino:1979et}. It did not seem very surprising that these objects benefited from supersymmetric enhancement, since it had been previously proven that this exact phenomenon happens on domain walls \cite{Ritz:2004mp}. This came into tension with a different perspective offered by Edalati and Tong \cite{Edalati:2007vk}, who, with the help of a brane model, suggested that this statement was untrue: while $\mathbb{CP}(N_c-1)$ alone can indeed not be deformed in a way that breaks some but not all of the supersymmetry, the \textit{full} target space that the string explores is $\mathbb{C}\times\mathbb{CP}(N_c-1)$. Indeed, in addition to the internal gauge modulus, there is an ever-present translational modulus which describes the position of the string in the transverse directions. This degree of freedom, and its supersymmetric partners, are usually completely decoupled from whatever internal structure the string may also have. Edalati and Tong argued that, from the worldsheet perspective, it is possible to construct a term that mixes the fermionic sectors in both components of this target manifold (the super-translational and super-orientational fermions) in a fully target-space invariant way, without entailing a deformation of the manifold itself, thus producing an $\mathcal{N}=(0,2)$ theory.\footnote{For a discussion of general aspects of 2D $\mathcal{N}=(0,2)$ theories see e.g. \cite{Witten:2005px}.} This hypothesis was then proven explicitly when this term was derived from the ground up in the spacetime theory \cite{Shifman:2008wv}. 
It was indeed the case that fermionic zero modes in different sectors have some overlap and do not decouple when the supersymmetry breaking potential is turned on, producing exactly the Edalati-Tong heterotic deformation. Many properties of the worldsheet theory were then investigated (\cite{Cui:2010si},\cite{Cui:2011rz},\cite{Chen:2014efa}). It is therefore relevant to ask whether this phenomenon happens for the other type of internal modulus that a generic vortex string may possess: the size modulus. When the number of flavours $N_f$ exceeds the number of colours, the BPS vortex string that occurs in such a theory is no longer fully local. That is, while in a usual Abrikosov-Nielsen-Olesen string every field that constitutes the vortex decays exponentially at a certain distance away from the core, it is found that the fields in a flavour-enhanced string decay as rational functions, defined by a characteristic arbitrary size modulus \cite{Achucarro:1999it}, in a fashion very analogous to the size parameter of the instanton solution \cite{Belavin:1975fg}. Rather surprisingly, the appearance of this extra scale, provided it is much larger than the core width, allows an explicit analytic solution to the BPS equations, albeit an approximate one, to be written. Such semi-local strings also present idiosyncratic challenges to investigate: because its constitutive fields decay so slowly, the theory requires an infra-red cutoff mechanism in order to regulate integrals over the directions transverse to the string: such integrals are borderline (logarithmically) divergent. However, with this compromise alone, it is then possible to create a consistent worldsheet picture of the string. It has been argued that this was no obstruction to further analysis, as any large logarithmic factor can simply be removed by wavefunction normalisation, so that we should expect the worldsheet picture to make sense in any case \cite{Shifman:2011xc},\cite{Koroteev:2011rb}. This led to some very fruitful investigation of the dynamics of these semi-local strings: most recently, it was found that a non-Abelian semi-local vortex string, with two colours and four flavours, is conformal, has a full 10D target space and is therefore a true critical superstring \cite{SYcstring},\cite{Koroteev:2016zqu}. In this work, we wish to start by investigating the possibility of such heterotically deformed worldsheets in the simplest field theory that bears these semi-local vortices, namely $\mathcal{N}=2$ SQED with two flavours. Even in this simple setup there is a wealth of unique phenomena that have become apparent: it was recently found that these basic semi-local vortices, once made closed, can have an extra type of internal winding number, in addition to the usual vortex number, and that both of them would combine to form a soliton with non-zero Hopf index \cite{Ireson:2018bdw}. After checking some of the basic building blocks of the worldsheet theory, we turn on a 4D mass deformation $\mu$, and attempt to solve the modified Dirac equations for the fermion zero modes. At small $\mu$ the picture is very clear: these zero modes become non-holomorphic (in a precise sense to be explained in time), thus allowing for a non-zero overlap between supertranslational and supersize modes of the expected shape: \begin{equation} \zeta_R \partial_L\bar{\rho} \chi_R + \text{ H.c.} \end{equation} This is formally identical to the kind of term derived in the non-Abelian string case, being naturally constrained by target-space geometry. 
\section{Bulk theory} \setcounter{equation}{0} Our basic four-dimensional model is a ${\mathcal N}=2\;$ supersymmetric Abelian $U(1)$ gauge theory deformed by an ${\mathcal N}=1\;$ mass term $\mu$ for the neutral gauge scalar supermultiplet, in the following way. The ${\mathcal N}=2\;$ vector multiplet contains the gauge boson $A_{\mu}$, two gauginos $\lambda^{\alpha 1}$ and $\lambda^{\alpha 2}$ and the complex neutral scalar field $a$, where $\alpha$ is the spinor index, $\alpha=1,2$. The complex scalar $a$ and one of the gauginos $\lambda^2$ form a neutral ${\mathcal N}=1\;$ chiral supermultiplet ${\mathcal A}$. Adding a mass $\mu$ to this neutral supermultiplet breaks ${\mathcal N}=2\;$ supersymmetry in the bulk down to ${\mathcal N}=1\;$. In the limit of $\mu \to \infty$ the neutral multiplet decouples and the theory flows to ${\mathcal N}=1\;$ SQED. The model also has a matter sector consisting of $N_f=2$ ``electron" matter hypermultiplets charged with respect to the gauge $U(1)$. In addition, we will introduce a Fayet--Iliopoulos $D$-term for the $U(1)$ gauge field which triggers the scalar electron condensation. Let us first discuss the undeformed theory with ${\mathcal N}=2\;$$\!.$ The superpotential has the form \begin{equation} {\mathcal W}_{{\mathcal N}=2} =\frac{1}{\sqrt 2} \sum_{A=1}^2 \tilde Q_A {\mathcal A} Q^A \,, \label{superpot} \end{equation} where $Q^A$ and $\tilde Q_A$ ($A=1,2$) represent two matter hypermultiplets. The flavor index is denoted by $A$. Next, we add a superpotential, \begin{equation} {\mathcal W}_{br}=\frac{\mu}{2} {\mathcal A}^2\,. \label{msuperpotbr} \end{equation} Clearly, the mass term (\ref{msuperpotbr}) splits ${\mathcal N}=2\;$ supermultiplets, breaking ${\mathcal N}=2\;$ supersymmetry down to ${\mathcal N}=1\;$. Note that in \eqref{superpot} we set the electron masses to zero. As was shown in \cite{Shifman:2005st} and \cite{Edalati:2007vk} (see also the review \cite{SYrev}), in this case the deformed theory supports 1/2 BPS-saturated flux-tube solutions at the classical level. The massive versions of the deformed ${\mathcal N}=2\;$ theory were studied in \cite{IYmu,IYmusemi}. The bosonic part of our $U(1)$ theory has the form \begin{equation} S=\int d^4x \left\{ \frac1{4g^2}\left(F_{\mu\nu}\right)^2 + \frac1{g^2} \left|\partial_{\mu}a\right|^2 +\left|\nabla_{\mu}q^{A}\right|^2 + \left|\nabla_{\mu} \bar{\tilde{q}}^{A}\right|^2 +V(q^A,\tilde{q}_A,a)\right\}\,, \label{model} \end{equation} where \begin{equation} \nabla_\mu=\partial_\mu -\frac{i}{2}\; A_{\mu}, \label{defnabla} \end{equation} while $g$ is the gauge coupling constant. Note that we work in Euclidean space. The potential $V(q^A,\tilde{q}_A,a)$ in the Lagrangian (\ref{model}) is a sum of various $D$ and $F$ terms, \begin{eqnarray} V(q^A,\tilde{q}_A,a) &=& \frac{g^2}{8} \left(\bar{q}_A q^A - \tilde{q}_A \bar{\tilde{q}}^A-\xi\right)^2 +\frac{g^2}{2}\left| \tilde{q}_A q^A + \sqrt{2}\, \mu a \right|^2 \nonumber\\[3mm] &+& \frac12\sum_{A=1}^2 |a|^2 \left[ |q^A|^2 + |\bar{\tilde{q}}^A|^2\right] , \label{pot} \end{eqnarray} where the sum over repeated flavor indices $A$ is implied. We also introduced the Fayet--Iliopoulos $D$-term for the $U(1)$ field, with the FI parameter $\xi$ in (\ref{pot}). Note that the Fayet--Iliopoulos term does not break ${\mathcal N}=2\;$ supersymmetry \cite{matt,VY}. The parameter which does break ${\mathcal N}=2\;$ down to ${\mathcal N}=1\;$ is $\mu$ in (\ref{msuperpotbr}). 
Let us briefly review the vacuum structure and the mass spectrum of perturbative excitations in our bulk model \eqref{model}, see \cite{SYrev} for details. The Fayet--Iliopoulos term triggers the spontaneous breaking of the gauge symmetry. The vacuum expectation values (VEVs) of the scalar electrons (selectrons) can be chosen as \begin{equation} \langle q^{A}\rangle =\sqrt{ \xi}\, \left( \begin{array}{c} 1 \\ 0 \\ \end{array} \right),\,\,\,\langle \bar{\tilde{q}}^{A}\rangle =0, \qquad A =1,2\,, \label{qvev} \end{equation} while the VEV of the neutral scalar field vanishes, \begin{equation} \langle a\rangle =0. \label{avev} \end{equation} The choice of vacuum in \eqref{qvev} is not unique: our theory has a Higgs branch, a manifold in the space of VEVs of the $q^A, \tilde q_A$ fields where the scalar potential \eqref{pot} vanishes. The dimension of this non-compact Higgs branch is four. To see this, note that we have eight real scalars $q^A, \tilde q_A$ subject to three conditions associated with the vanishing of the two terms in the first line of \eqref{pot}. Also one phase is gauged. Overall we have \begin{equation} {\rm dim} {\mathcal H} = 8-3-1=4, \label{dimH} \end{equation} which is the dimension of the Higgs branch. Four massless scalars correspond to the lowest components of one short hypermultiplet. A generic vacuum on this Higgs branch does not support BPS string solutions. The reason is that for a generic vacuum the mass of the photon is not equal to the mass of the Higgs field, the condition needed for a string to be BPS. However, the compact two-dimensional base of the Higgs branch defined by the condition \begin{equation} \langle \tilde{q}_{A}\rangle =0 \end{equation} does support BPS strings \cite{VY,EvlY}. Below in this paper we restrict ourselves to the base of the Higgs branch, and since all vacua on the base are physically equivalent we take the vacuum \eqref{qvev} as a particular representative. Since the $U(1)$ gauge group is broken by selectron condensation, the gauge boson becomes massive. From (\ref{model}) we get the photon mass \begin{equation} m_{\gamma}=\frac{g}{\sqrt{2}} \sqrt{\xi}\,. \label{phmass} \end{equation} To get the masses of the scalar bosons we expand the potential (\ref{pot}) near the vacuum (\ref{qvev}), (\ref{avev}) and diagonalize the corresponding mass matrix. Then, one component of the eight real scalars $q^A, \tilde q_A$, namely ${\rm Im}\, q^1$, is eaten by the Higgs mechanism. Another component, namely ${\rm Re}\, q^1$, acquires a mass (\ref{phmass}) equal to the mass of the photon. It becomes a scalar component of the massive ${\mathcal N}=1\;$ vector $U(1)$ gauge multiplet. This component is the Higgs field in our theory, since it develops a VEV, see \eqref{qvev}. The coincidence of masses ensures the presence of BPS strings in our vacuum. The other four real scalar components of the fields $\tilde{q}_{1}$ and $a$ produce the following states: two states acquire the mass \begin{equation} m^{+}=\frac{g}{\sqrt{2}} \sqrt{\xi\lambda^{+}}\,, \label{u1m1} \end{equation} while the mass of the other two states is given by \begin{equation} m^{-}=\frac{g}{\sqrt{2}}\sqrt{\xi\lambda^{-}}\,, \label{u1m2} \end{equation} where $\lambda^{\pm}$ are the two roots of the quadratic equation \begin{equation} \lambda^2-\lambda(2+\omega^2) +1=0\,. \label{queq} \end{equation} Here we introduced the ${\mathcal N}=2\;$ supersymmetry breaking parameter, \begin{equation} \omega=\frac{g^2\mu}{m_{\gamma}}\,. 
\label{omega} \end{equation} In the large-$\mu$ limit the larger mass $m^{+}$ becomes \begin{equation} m^{+}= m_{\gamma} \omega=g^2\mu\,. \label{amass} \end{equation} Clearly, in the limit $\mu\to \infty$ this is the mass of the heavy neutral scalar $a$. At $\omega\gg 1$ this field decouples and can be integrated out. In this limit the scalar $\tilde q_1$ becomes the lowest component of the chiral multiplet with the lower mass $m^{-}$. Equation (\ref{queq}) gives for this mass \begin{equation} m^{-}= \frac{m_{\gamma}}{\omega} = \frac{\xi}{2\mu}\,. \label{light} \end{equation} Furthermore, the four real components $q^2, \tilde q_2$ of the second flavor are massless and live on the Higgs branch. In the limit of infinite $\mu$ the mass \eqref{light} tends to zero. This fact reflects the enhancement of the Higgs branch in ${\mathcal N}=1\;$ SQED. Below we will also need the fermionic part of the action of the model (\ref{model}), \begin{eqnarray} S_{\rm ferm} &=& \int d^4 x\left\{ \frac{i}{g^2}\bar{\lambda}_f \bar{\partial}\hspace{-0.65em}/\lambda^{f} + \bar{\psi}_A i\bar\nabla\hspace{-0.65em}/ \psi^A + \tilde{\psi}_A i\nabla\hspace{-0.65em}/ \bar{\tilde{\psi}}^A \right. \nonumber\\[3mm] &+& \frac{i}{\sqrt{2}}\,\left[ \bar{q}_{Af}(\lambda^f\psi^A)+ (\tilde{\psi}_A\lambda_f)q^{fA} +(\bar{\psi}_A\bar{\lambda}_f)q^{fA}+ \bar{q}^f_A(\bar{\lambda}_f\bar{\tilde{\psi}}^A)\right] \nonumber\\[3mm] &+& \left. \frac{i}{\sqrt{2}}\, a (\tilde{\psi}_A\psi^A) +\frac{i}{\sqrt{2}}\, a (\bar{\psi}_A \bar{\tilde{\psi}}^A) -\frac{\mu}2 (\lambda^2)^2 \right\}\,, \label{fermact} \end{eqnarray} where $(\psi^{\alpha})^{A}$ and $(\tilde{\psi}^{\alpha})_{A}$ are matter fermions. Contraction of the spinor indices is assumed inside parentheses. We write the selectron fields in (\ref{fermact}) as doublets of the $SU(2)_{R}$ group which is present in the ${\mathcal N}=2\;$ theory \begin{equation} q^{fA}=(q^A,\bar{\tilde{q}}^A)\,, \end{equation} where $f=1,2$ is the $SU(2)_R$ index; this makes manifest the existence of two sets of supersymmetry operators in the ${\mathcal N}=2\;$ case. Similarly, $\lambda^{\alpha f}$ stands for the gaugino $SU(2)_R$ doublet. Note that the last term is the ${\mathcal N}=1\;$ deformation in the fermion sector of the theory induced by the breaking parameter $\mu$. It involves only the $f=2$ component of $\lambda$, explicitly breaking the $SU(2)_{R}$ invariance. From \eqref{fermact} one can see that the fermions of the second flavor, in much the same way as the bosons, are massless in the vacuum \eqref{qvev}. This will be important later. \section{Semilocal strings in the $\mathcal{N}=2$ theory} \setcounter{equation}{0} \subsection{Vortex BPS Equations for a Static Solution} We work in Euclidean space, labelling our coordinates $(t,x,y,z)$. We will assume that the string we produce is aligned in the $z$ direction. As we explained in the previous section, the potential \eqref{pot} has an infinite Higgs branch and we restrict ourselves to its base submanifold with $\tilde{q}^{1,2}=0$. The base of the Higgs branch is now compact and defined by \begin{equation} |q^1|^2 + |q^2|^2 =\xi \end{equation} where both $q^{1,2}$ are complex fields, so the base of the Higgs branch has the structure of $\mathbb{CP}(1)$. At spatial infinity, the vortex configuration is expected to wrap around the vacuum manifold in a non-trivial way: thus we expect that the vortex will behave like the $\mathbb{CP}(1)$ instanton lump solution at large distances from the core, while close to the core it should behave just like a standard ANO string. 
The instanton lump is endowed with a dimensionful modulus, a size parameter $\rho$ which controls the spreading of the solution in space\footnote{For details see e.g. \cite{Rajaraman:1982is}.}: the vortex should be similarly spread out away from the core; this is why it is called semi-local. Let us introduce a number of profiles for the various bosonic fields in the theory: \begin{align} q^{1A}&\equiv q^{A}=\left(\begin{array}{c} \phi_1(r) \\ \phi_2(r) e^{-i\theta } \end{array} \right),\quad q^{2A}\equiv -i\tilde{q}^{A} =0\nonumber \\ A_i&=\varepsilon_{ij}\frac{x^j}{r^2}f(r) \end{align} Here we assume the boundary conditions \begin{equation} \phi_A(0) =0, \quad \phi_1(\infty)=\sqrt{\xi}, \quad \phi_2(\infty)= 0, \quad f(0) =1, \quad f(\infty ) =0 \label{bc} \end{equation} which ensure that the scalar fields tend at $r\to\infty$ to their vacuum expectation values \eqref{qvev}. We have defined this Ansatz in the singular gauge, where $A$ is ill-defined at the origin but decays at infinity. We will assume that all of the profile functions are positive in order to fix various sign choices related to supercharges. From the supersymmetry transformations of our initial theory we obtain the BPS equations, and we also determine which fermionic variations are preserved by our choices, namely $\epsilon^{12}Q_{12},\,\epsilon^{21}Q_{21}$. The other two will not leave the solution invariant but generate supertranslational modes. Firstly we consider the scalar equations: \begin{equation} r \partial_r \phi_1 = + f \phi_1,\quad r \partial_r \phi_2 + \phi_2 = + f \phi_2 \label{BPS1} \end{equation} They are very similar in nature, differing only in the linear part, which means they can potentially be related to each other by the right transformation. This is in fact the case: if $\phi_1$ obeys its equation of motion then we are free to take \begin{equation} \phi_2 = \frac{\rho}{r}\phi_1\equiv \frac{\rho}{r}\phi \end{equation} for some unknown constant length scale $\rho$, to obtain a solution to the second BPS scalar equation. This new length scale, the size modulus, defines a new regime for the spreading of the solutions, and is responsible for the semi-local nature of the vortex. As a consequence of its appearance, the various fields constituting the vortex will decay as rational functions of $r,\rho$. The single undetermined scalar profile function inside $q^{1}$ is then relabeled $\phi$. We will see later on in Eq.(\ref{explicitsol}) that this parameter is exactly analogous to the $\mathbb{CP}(1)$ instanton lump size modulus that our solution must at some level reproduce, given the vacuum manifold. The sign of the right-hand side is fixed by the supercharges we chose, as well as by the requirement that the profiles introduced in the Ansatz are positive. Then $\phi$ should be regular at the origin and reach the vacuum expectation value at infinity, so $\phi$ is an increasing function of $r$; $f$ being positive in our Ansatz confirms this. In addition, the BPS equations also produce the following constraint for the gauge profile $f$: \begin{equation} -\frac{1}{r}\partial_r f +g^2\left(\phi^2\left(1+\frac{\rho \bar{\rho}}{r^2}\right) -\xi \right) =0\,. \label{BPS4} \end{equation} Immediately, this allows us to write super-translational zero modes for the theory. 
They are generated by $\epsilon^{11}Q_{11},\,\epsilon^{22}Q_{22}$, which act non-trivially on the BPS string solution, enabling us to use the BPS equations to simplify the zero modes: \begin{align} \delta \bar{\psi}^{1}_{\dot{2}}&= i\sqrt{2}\bar{\slashed{D}}_{\dot{2}1}\bar{q}_{A}\epsilon^{11}=-2\sqrt{2}\left( \frac{x+iy}{r^2}\right) f(r) \phi(r) \epsilon^{11}\,, \nonumber\\ \delta \bar{\psi}^{2}_{\dot{2}}&= i\sqrt{2}\bar{\slashed{D}}_{\dot{2}1}\bar{q}_{A}\epsilon^{11}=+2\sqrt{2}\left( \frac{x+iy}{r^2}\right) \left(1-f(r)\right) \phi(r) \frac{\bar{\rho}e^{i\theta}}{r} \epsilon^{11} \,,\nonumber\\ \delta \bar{\tilde{\psi}}^{1}_{\dot{1}}&=i\sqrt{2}\bar{\slashed{D}}_{\dot{1}2}\bar{q}_{A}\epsilon^{22}=2\sqrt{2}\left( \frac{x-iy}{r^2}\right) \phi(r)f(r) \epsilon^{22} \,,\nonumber \\ \delta \bar{\tilde{\psi}}^{2}_{\dot{1}}&=i\sqrt{2}\bar{\slashed{D}}_{\dot{1}2}\bar{q}_{A}\epsilon^{22}=-2\sqrt{2}\left( \frac{x-iy}{r^2}\right) \left(1-f(r) \right) \phi(r) \frac{\rho e^{-i\theta}}{r} \epsilon^{22} \,,\nonumber \\ \delta\lambda^{11} &=+2D^3 (\tau^{3})^{1}_{1} \epsilon^{11}= -2i g^2\left( \phi^2 \left( 1 + \left|\frac{\rho}{r}\right|^2 \right) - \xi\right) \epsilon^{11}\,,\nonumber\\ \delta\lambda^{22} &= -2D^3 \left( \tau^{3}\right) ^{1}_{1} \epsilon^{22}= +2i g^2 \left( \phi^2\left( 1 + \left|\frac{\rho}{r}\right|^2\right) - \xi\right) \epsilon^{22}\,. \label{stzm} \end{align} All others vanish identically by virtue of the BPS equations. The fermionic parameters $\epsilon^{11},\,\epsilon^{22}$ can be turned into dynamical worldsheet variables; we preemptively label them $\zeta_L,\,\zeta_R$ respectively. They are the fermionic superpartners on the worldsheet of the translational zero mode of the vortices. The second set of zero modes is generated by $\epsilon^{12},\,\epsilon^{21}$, which usually act trivially on the string solution. However, adding slow variations of $\rho$ in $(t,z)$ changes this: then, we can write zero modes depending on derivatives of $\rho$. Computing them requires a bit more effort, since in this case the fermionic parameters connect in an unobvious way to the associated worldsheet dynamical fermions, as opposed to the previous case. For starters, we need to update our gauge Ansatz: in order to retain gauge invariance, new components of the gauge field must be turned on. For $k=(t,z)$ \begin{equation} A_k=-i(\rho^*\partial_k \rho - \rho \partial_k \rho^*)\gamma(r) \end{equation} which introduces a new radial profile function $\gamma(r)$, constrained by the gauge equations of motion. In the case of non-Abelian vortices, where a similar analysis was conducted leading to super-orientational modes, this extra gauge profile function was solved for explicitly by studying its equation of motion, and an exact solution was found in terms of the profile $\phi$ alone. This is not as easy in the present case: since the $\rho$ modulus enters every radial profile in the Ansatz, the minimisation equation is much more complicated. In a previous work, a complete solution was found in the low energy limit by sending $m_W=g^2\sqrt{\xi}$ to infinity, or, more precisely, by placing oneself sufficiently far from the core whose width is defined by $(g^2\sqrt{\xi})^{-1}$. 
Then, the solution takes the following form \cite{Achucarro:1999it,Shifman:2011xc}: \begin{equation} \phi(r)=\frac{\sqrt{\xi}r}{\sqrt{r^2+\bar{\rho}\rho}},\quad f(r)=\frac{|\rho|^2}{r^2+|\rho|^2},\quad\gamma(r)=\frac{1}{2\left(r^2+|\rho|^2 \right) } \label{explicitsol} \end{equation} Far from the core, the structure is indeed, as predicted, very similar to the $\mathbb{CP}(1)$ instanton. The semi-local nature of the vortices is clearly seen at this distance from the core, but it is only an approximate solution to the various equations at hand. In particular it leaves the gauge BPS equation (\ref{BPS4}) somewhat vacuous: while the matter equation is solved exactly by this solution, the gauge one is only approximately solved; it is an asymptotic solution. In order to ensure we write precise statements and algebraic relations, we would like to stick to exact, if implicit, profile solutions. \subsection{Modulus Fluctuations: Holomorphy Equations} Surprisingly, it is possible to find a great deal of information about the implicit solution for $\gamma$, so long as we impose holomorphy of the SUSY variations. Assuming nothing about the function $\gamma$, we can write out the full SUSY variations of the matter and gauge fermions under transformations with parameters $\epsilon^{12},\,\epsilon^{21}$ when we assume that $\rho$ is no longer a constant modulus, but actually has a dependence on $(t,z)$. The resulting variations are of course no longer vanishing; in general they are a function of both $\partial \rho$ and $\partial \bar{\rho}$. Constraints on $\gamma$ then occur when we impose that these transformations should be \textit{holomorphic}: after a fermionic variation we expect the fermionic zero mode to only depend on exactly one of $\partial \rho$ and $\partial \bar{\rho}$, not both. One may note that this simplification already happens when using the approximate but explicit solutions detailed in Eq.(\ref{explicitsol}). The simplest case is the variation of $\bar{\tilde{\psi}}^{2}$, since it involves the field $q^2$: it already has a very direct dependence on $\rho$ and not its conjugate, so we expect its variation should be proportional to $\partial \rho$ only. This gives us a first constraint on $\gamma$: with the sign of $A_{t,z}$ chosen above, we have \begin{equation} \partial_{|\rho|^2}\phi=-\gamma\phi \label{BPS2} \end{equation} We expect $\phi$ to decrease with $\rho$ since the latter is a size modulus: it controls the spreading of the profile in space. This is consistent with choosing $\phi,\,\gamma$ positive. This assumption then produces a holomorphic dependence on $\partial \rho$ for $\bar{\tilde{\psi}}^{1}$, which is obvious since those two fields were already related by a previously-used BPS equation. Secondly, let us also observe what additional conditions are imposed by the gaugino supersize zero mode; we obtain a second equation for $\gamma$: \begin{equation} \partial_{|\rho|^2} f = \pm r \partial_r \gamma \label{BPS3} \end{equation} where the sign controls which of the two zero modes depends on $\partial \rho$; the other zero mode will depend on the conjugate. Unlike in the matter case, there is no good heuristic to determine which needs to be true. As it happens, however, while we \textit{a priori} could pick either, this choice is actually forced onto us: indeed, the scalar BPS equation (\ref{BPS1}) and the scalar holomorphy equation (\ref{BPS2}) generate this third one, as can be seen by expressing $\partial_r \partial_{|\rho|^2} \phi$ in two different but equal ways. 
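Explicitly, using only the scalar BPS equation (\ref{BPS1}) and the scalar holomorphy equation (\ref{BPS2}) (we spell out this short cross-derivative computation for completeness), \begin{align} \partial_{|\rho|^2}\left( r\,\partial_r \phi\right) &= \partial_{|\rho|^2}\left( f \phi\right) = \left( \partial_{|\rho|^2} f\right)\phi - f\gamma\phi\,,\nonumber\\ r\,\partial_r\left( \partial_{|\rho|^2}\phi\right) &= -\,r\,\partial_r\left( \gamma\phi\right) = -r\left(\partial_r\gamma\right)\phi - f\gamma\phi\,, \end{align} and equating the two mixed derivatives yields $\partial_{|\rho|^2} f = - r\,\partial_r\gamma$. 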
The sign choices made in the previous cases thus dictate that this sign should be negative. Furthermore, physically, $f$ is also expected to increase with $\rho$, since this gauge field should vanish for vanishingly small $\rho$, whereas $\gamma$ is expected to decrease with $r$. Again it can be noted that the explicit solutions for the profiles satisfy these holomorphy relations exactly: \begin{equation} \partial_{|\rho|^2} f = - r \partial_r \gamma \end{equation} By differentiating the gaugino BPS equation (\ref{BPS4}) with respect to $|\rho|^2$, and using the newly generated identities involving $\gamma$, we obtain precisely the equation of motion for this additional profile, the same equation obtained by substituting $A_k$ into the action directly \cite{Shifman:2011xc}: \begin{equation} \frac{1}{r}\partial_r\left( r \partial_r \gamma\right) + g^2\left(-2\phi^2\left(1+\frac{\rho \bar{\rho}}{r^2}\right) \gamma + \frac{\phi^2}{r^2} \right) =0 \label{rhomin} \end{equation} Thus, we have shown that by using the above first-order equations, both the BPS and the holomorphy equations, we obtain in principle a solution to the $\gamma$ equation of motion. Again one notices that the explicit solution (\ref{explicitsol}) satisfies this equation only asymptotically. \subsection{Computing Supersize Zero Modes} Once this is done, the supersize zero modes are of the correct form for interpretation as being proportional to worldsheet fermion zero modes. That is, every fermionic parameter comes multiplied by one of $\partial \rho$ or $\partial \bar{\rho}$, which, using the worldsheet SUSY variations, can then be wholly replaced by a worldsheet fermion zero mode; see the review \cite{SYrev}, where a similar procedure was used to calculate superorientational fermionic zero modes for the non-Abelian string. In total, we get the following expressions for the super-size modes: \begin{align} \delta \bar{\tilde{\psi}}^{2}_{\dot{1}}&=+i\sqrt{2}\epsilon^{12}\frac{e^{-i\theta}}{r}\left(2\bar{\rho}\rho \gamma(r) -1 \right)\phi(r) \left(\partial_0 + i \partial_3 \right) \rho \,,\nonumber\\ \delta \bar{\psi}^{2}_{\dot{2}}&=- i\sqrt{2}\epsilon^{21}\frac{e^{+i\theta}}{r}\left(2\bar{\rho}\rho \gamma(r) -1 \right)\phi(r) \left(\partial_0 - i \partial_3 \right) \bar{\rho} \,,\nonumber\\ \delta \bar{\tilde{\psi}}^{1}_{\dot{1}}&= i\sqrt{2}\epsilon^{12}\left(2\gamma(r) \phi(r) \bar{\rho} \right) \left(\partial_0 + i \partial_3 \right) \rho \,,\nonumber \\ \delta \bar{\psi}^{1}_{\dot{2}}&=-i\sqrt{2}\epsilon^{21}\left(2\gamma(r) \phi(r) \rho \right) \left(\partial_0 - i \partial_3 \right) \bar{\rho}\,,\nonumber\\ \delta \lambda^{11} &= +2\frac{\epsilon^{21}(x-iy)\gamma(r)}{r}\rho\left(\partial_0 - i \partial_3 \right)\bar{\rho} \,,\nonumber\\ \delta \lambda^{22} &= -2\frac{\epsilon^{12}(x+iy)\gamma(r)}{r}\bar{\rho}\left(\partial_0 + i \partial_3 \right)\rho\,. \label{sszm1} \end{align} These solutions can then be further simplified by substituting for the derivatives of $\rho$ using worldsheet supersymmetry, i.e. by constructing fermionic zero modes on the worldsheet from the variations of $\rho$. 
We introduce two spinor-valued parameters $\eta,\, \xi$ to generate two SUSY transformations: \begin{equation} \delta \chi_\alpha=i\sqrt{2}\slashed{\partial}_{\alpha\beta}\rho \left( \eta^\beta + i \xi^\beta\right),\quad \delta \bar{\chi}_\alpha=i\sqrt{2}\slashed{\partial}_{\alpha\beta}\bar{\rho} \left( \eta^\beta -i \xi^\beta\right) \end{equation} or in components \begin{align} \delta \chi_R&=i\sqrt{2}\left(\partial_0 + i \partial_3 \right) \rho \left( \eta^2 + i \xi^2\right),\quad \delta \bar{\chi}_R=i\sqrt{2}\left( \partial_0 + i \partial_3\right) \bar{\rho} \left( \eta^2 -i \xi^2\right)\,,\nonumber\\ \delta \chi_L&=i\sqrt{2}\left(\partial_0 - i \partial_3 \right) \rho \left( \eta^1 + i \xi^1\right),\quad \delta \bar{\chi}_L=i\sqrt{2}\left( \partial_0 - i \partial_3\right) \bar{\rho} \left( \eta^1 -i \xi^1\right)\,. \end{align} This sign convention reflects the fact that we are in Euclidean space. Thus, we identify $\epsilon^{12}=(\eta^2+i\xi^2)$ and $\epsilon^{21}=(\eta^1 - i\xi^1)$, enabling us to write the final form of the supersize zero modes: \begin{align} \delta \bar{\tilde{\psi}}^{2}_{\dot{1}}&=+\frac{e^{-i\theta}}{r}\left( \left(2\bar{\rho}\rho \gamma(r) -1 \right)\phi(r)\right) \delta\chi_R \,,\nonumber\\ \delta \bar{\psi}^{2}_{\dot{2}}&=- \frac{e^{+i\theta}}{r}\left( \left(2\bar{\rho}\rho \gamma(r) -1 \right)\phi(r)\right) \delta \bar{\chi}_L\,,\nonumber\\ \delta \bar{\tilde{\psi}}^{1}_{\dot{1}}&= +\left(2\gamma(r) \phi(r) \bar{\rho} \right) \delta\chi_R\,,\nonumber \\ \delta \bar{\psi}^{1}_{\dot{2}}&=-\left(2\gamma(r) \phi(r) \rho \right) \delta \bar{\chi}_L\,,\nonumber\\ \delta \lambda^{11} &= -i\sqrt{2}\frac{(x-iy)\partial_r\gamma(r)}{r}\rho \delta \bar{\chi}_L \,,\nonumber\\ \delta \lambda^{22} &= +i\sqrt{2}\frac{(x+iy)\partial_r\gamma(r)}{r}\bar{\rho} \delta\chi_R\,. \label{sszm2} \end{align} Inserting these into the spacetime action, one readily gets kinetic terms for these fermions, forming a full $(2,2)$ sigma model on the worldsheet. In order to define useful normalisation constants due to integration over the transverse spacetime, let us quickly check the form of this Lagrangian. \subsection{$(2,2)$ Supersymmetric Worldsheet Elements} First we compute the kinetic term for the size modulus, which involves integrating over the profiles. We again find some simplifications when using the first-order equations (\ref{BPS2}),(\ref{BPS3}),(\ref{BPS4}) and the minimisation equation (\ref{rhomin}). Indeed, we get two terms that contribute to a kinetic term for $\rho$: one from the gauge field and one from the scalars. From the former we have \begin{equation} \frac{1}{g^2}F_{ik}F_{ik}=\frac{4\bar{\rho}\rho}{g^2}\left( \partial_r\gamma\right)^2=-2\bar{\rho}\rho\gamma\frac{2}{g^2}\left(\frac{1}{r}\partial_r \left( r\partial_r\gamma \right) \right) +(\text{total derivative}), \end{equation} and from the latter \begin{align} \left(Dq_i \right)^\dagger \left(Dq_i \right)&=\frac{\phi^2}{r^2} + 4\bar{\rho}\rho \gamma\frac{\phi^2}{r^2}\left(-1+r^2\gamma + \bar{\rho}\rho\gamma \right) \nonumber\\ &=\frac{\phi^2}{r^2} + 2\bar{\rho}\rho \gamma\left(2\phi^2\gamma\left( 1+\frac{\bar{\rho}\rho}{r^2}\right) - 2\frac{\phi^2}{r^2} \right)\,. \end{align} We have written both these components conspicuously in order to make apparent the terms that also appear in Eq.(\ref{rhomin}). 
Summing these two and applying the minimisation condition, the full integral which produces the $\rho$ kinetic term simplifies massively and we obtain \begin{equation} \mathcal{L}_{\rho,\text{kin.}}=I\left( \partial \rho \partial \bar{\rho}\right) =\left( 2\pi\int r\,dr\left( \frac{\phi^2}{r^2}\left( 1 -2\bar{\rho}\rho\gamma\right) \right) \right) \left( \partial \rho \partial \bar{\rho}\right) \end{equation} We can check that this produces the right result by inserting the explicit solution (\ref{explicitsol}). The integral this produces is divergent: the field profiles do not decay fast enough at large $r$. We impose an infrared cutoff, integrating only up to a large length $L_{IR}$ in the plane transverse to the string; the integral then gives, cf. \cite{Shifman:2006kd}, \begin{align} I&=2\pi\int_0^{L_{IR}} dr\left(\frac{\xi r}{r^2 + \bar{\rho}\rho} \right) \left(1-\frac{\bar{\rho}\rho}{r^2+\bar{\rho}\rho} \right)\nonumber\\ &=\pi\xi\left(\log\left(1+\frac{L_{IR}^2}{\bar{\rho}\rho} \right) - \frac{L_{IR}^2}{L_{IR}^2+\bar{\rho}\rho} \right) \sim \pi\xi \log\left( \frac{L_{IR}^2}{\bar{\rho}\rho}\right), \end{align} where we treat the infra-red (IR) logarithm $\log{(L_{IR}/|\rho|)}\gg 1$ as a large parameter. Clearly, the IR logarithm here comes from the profile function of the second massless flavor, see Sec. 2. Note that modes with logarithmically IR-divergent norms are on the borderline between normalizable and non-normalizable modes. Usually such modes are considered as ``localized'' on the string, while power-like non-normalizable modes are associated with the vacuum rather than with the string. We follow this rule and include the modulus $\rho$ in our effective world sheet theory on the string, see \cite{Shifman:2006kd,Shifman:2011xc}. The metric on $\rho$ is K\"{a}hlerian, originating from the following potential \begin{equation} \mathcal{K}(\rho,\bar{\rho})=\bar{\rho}\rho \log\left( \frac{L_{IR}^2}{\bar{\rho}\rho}\right) \label{kpot} \end{equation} Note that with logarithmic accuracy we do not differentiate the IR logarithm. An entirely analogous computation with the fermionic supersize modes produces, reassuringly, the exact same kinetic normalisation for the worldsheet fermions, \begin{equation} \pi\xi \log\left( \frac{L_{IR}^2}{\bar{\rho}\rho}\right)\, i \left(\bar{\chi}_R \partial_L \chi_R + \bar{\chi}_L \partial_R \chi_L \right). \end{equation} Note that the IR logarithm here comes from the $1/r$ tails of the massless fermions $\psi^2$, $\tilde{\psi}^2$ of the second flavor in \eqref{sszm2}, while the massive fermions of the first flavor decay faster at infinity and do not produce IR logarithms, see Sec.~2. Thus, the ${\mathcal N}= \left(2,2\right)\; $ supersymmetric world sheet theory on the string at $\mu=0$ reads \begin{eqnarray} S_{2D} &=& \int d^2 x \;\pi\xi \left\{\log\left( \frac{L_{IR}^2}{|\rho|^2}\right)\,\left[ |\partial_k \rho|^2 + i \bar{\chi}_R \partial_L \chi_R + i \bar{\chi}_L \partial_R \chi_L \right] \right. \nonumber\\[3mm] &+& \left. |\partial_k x^i|^2 + i \bar{\zeta}_R \partial_L \zeta_R + i \bar{\zeta}_L \partial_R \zeta_L \right\} \label{wcN=2} \end{eqnarray} with logarithmic accuracy, where $k=0,3$ labels the world sheet coordinates. Here we have also included the translational modes $x^i$, $i=1,2$ and their superpartners $\zeta_{L}$, $\zeta_{R}$, see \eqref{stzm}. We see that the translational and size sectors do not interact. We will see later that this changes once we switch on the $\mu$ deformation. 
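As a quick cross-check of the statements above (our own verification, not part of the original derivation), a short sympy computation confirms that the explicit profiles of Eq.(\ref{explicitsol}) satisfy the scalar BPS equation (\ref{BPS1}) and both holomorphy relations exactly, and reproduces the regulated normalisation integral; here rho2 stands for $|\rho|^2$ and L for $L_{IR}$.
\begin{verbatim}
import sympy as sp

r, rho2, xi, L = sp.symbols('r rho2 xi L', positive=True)

phi   = sp.sqrt(xi) * r / sp.sqrt(r**2 + rho2)
f     = rho2 / (r**2 + rho2)
gamma = 1 / (2 * (r**2 + rho2))

# Scalar BPS equation  r d_r phi = f phi  (exact)
print(sp.simplify(r * sp.diff(phi, r) - f * phi))             # -> 0
# Holomorphy relations  d_{rho2} phi = -gamma phi ,  d_{rho2} f = -r d_r gamma
print(sp.simplify(sp.diff(phi, rho2) + gamma * phi))          # -> 0
print(sp.simplify(sp.diff(f, rho2) + r * sp.diff(gamma, r)))  # -> 0

# Kinetic normalisation integral, cut off at r = L
integrand = xi * r / (r**2 + rho2) * (1 - rho2 / (r**2 + rho2))
I = 2 * sp.pi * sp.integrate(integrand, (r, 0, L))
target = sp.pi * xi * (sp.log(1 + L**2 / rho2) - L**2 / (L**2 + rho2))
print(sp.N((I - target).subs({xi: 1, rho2: 2, L: 7})))        # -> 0 (numerically)
\end{verbatim}
The gauge BPS equation (\ref{BPS4}), by contrast, is only satisfied asymptotically by these profiles, in line with the remarks of the previous subsection.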
\section{Deforming the Spacetime Theory} \label{sec2} \setcounter{equation}{0} Now that the worldsheet theory has been constructed, we are able to observe how it responds to modifications of the spacetime theory. Specifically, we have enough supercharges to allow a further partial breaking of supersymmetry, while still retaining a supersymmetric worldsheet as an end product. Let us see how this happens. We now add a SUSY breaking superpotential \eqref{msuperpotbr} to the spacetime theory to produce an $\mathcal{N}=1$ Lagrangian. It gives a mass term to the gauge scalar $a$ and one of the gauginos $\lambda^{\alpha 2}$, which form a SUSY doublet $\mathcal{A}$. Upon taking the large $\mu$ limit, this decouples the extra adjoint fields and one gets a theory similar to $\mathcal{N}=1$ SQED, with extra flavour and particular charges. This potential preserves $\epsilon^{11}Q_{11},\,\epsilon^{21}Q_{21}$ so that the string solution now only has two supercharges left, generated by $\epsilon^{21}Q_{21}$ and its conjugate. However, the other charges still generate fermionic zero modes, for small $\mu$ at least. By general considerations on index theorems, a small deformation of this kind cannot cause fermion zero modes to drop out of the spectrum. Though still existent, the fermionic zero modes are affected by these modifications. Those proportional to the parameters preserved by the addition of this $\mu$ term do not change. Thus, both in the supertranslational case in Eq.(\ref{stzm}) and the super-size case in Eq.(\ref{sszm2}), $\delta\bar{\psi}_{\dot{2}}$ and $\delta\lambda^{11}$ (proportional to $\epsilon^{11}=\zeta_L$ or $\epsilon^{21}\propto \bar{\chi}_L$) do not change, while $\delta\bar{\tilde{\psi}}_{\dot{1}}$ and $\delta\lambda^{22}$ (proportional to $\epsilon^{22}=\zeta_R$ or $\epsilon^{12}\propto \bar{\chi}_R$) get modified profiles that become $\mu$ dependent. By analysing the Dirac equation, it is possible to find approximate solutions for these profiles as a perturbation series in $\mu$ for small values of the deformation. The modifications of these profiles make the fermion zero modes overlap, thus causing interactions between the supertranslational and supersize modes and creating a general $\mathcal{N}=(0,2)$ worldsheet theory that does not benefit from any supersymmetry enhancement. This kind of enhancement is especially easy to fall into in our case. Indeed, any supersymmetric NLSM whose target space is a K\"{a}hler manifold is automatically $\mathcal{N}=(2,2)$, which we referred to as Zumino's theorem. Since the target spaces for both of our basic coordinates, the translational mode $(y\pm iz)$ and the size mode $\rho,\,\bar{\rho}$, are both complex one-dimensional manifolds, they are automatically K\"{a}hler (the K\"{a}hler form is necessarily closed as it is a top-form). The most sure-fire way to ensure no enhancement occurs accidentally is then to couple fermionic variables from both target spaces together, without, of course, changing the structure of the bosonic coordinates, i.e.\ without deforming the manifold itself. \subsection{Dirac Equations for Spacetime Fermions} Once the theory is deformed by the added potential, the fermionic zero modes will generically no longer be holomorphic. That is, they may depend on a worldsheet spinor and on its conjugate, and in different ways at that. In this spirit we suggest writing the fermionic zero modes in a generic form, with arbitrary profile functions, for which the Dirac equation then provides a constraint. 
The full Dirac equations are \begin{align} &\frac{i}{g^2} \left( \slashed{{D}}\bar{\lambda}\right)^{f}_{\dot{\alpha}} +{i\sqrt{2}} \left( {\psi}^A_{\dot{\alpha}} \bar{q}^{Af} + q^{Af} {\tilde{\psi}}^A_{\dot{\alpha}} \right) -\mu \delta^f_2 {\lambda}^2_{\dot{\alpha}}=0\,,\nonumber\\ &i \left( \slashed{D}\bar{\psi}\right)^\alpha +i\sqrt{2}\bar{q}_f\lambda^{\alpha f} =0,\quad i \left( \slashed{D}\bar{\tilde{\psi}}\right)_\alpha +i\sqrt{2}q^f\lambda_{\alpha f} =0\,. \end{align} Convenient parametrisations for the modified profiles are the following. For the supertranslational modes: \begin{align} \lambda^{22}&=\lambda_0(r) \zeta_R + \lambda_1(r) \frac{x+iy}{r}\bar{\zeta}_R \,,\nonumber\\ \bar{\tilde{\psi}}^1_{\dot{1}}&=\left(\frac{x-iy}{r}\psi^1_0(r)\zeta_R +\psi^1_1(r)\bar{\zeta}_R \right)\,,\nonumber\\ \bar{\tilde{\psi}}^2_{\dot{1}}&=\frac{\rho e^{-i\theta}}{r}\left(\frac{x-iy}{r}\psi^2_0(r)\zeta_R +\psi^2_1(r)\bar{\zeta}_R \right)\,. \end{align} This produces the following profile equations \begin{align} &\partial_r \lambda_0 -ig^2\sqrt{2} \phi \left(\psi^1_0+\frac{\bar{\rho}\rho}{r^2} \psi^2_0\right) -g^2 \mu \lambda_1=0 \,,\nonumber \\ &\partial_r \lambda_1 +\frac{\lambda_1}{r} - ig^2\sqrt{2} \phi \left(\psi^1_1+\frac{\bar{\rho}\rho}{r^2}\psi^2_1 \right)-g^2 \mu \lambda_0=0\,,\nonumber\\ &\partial_r \psi^{1,2}_0 +\frac{1}{r}\psi^{1,2}_0\left( 1-f\right) -i\sqrt{2}\phi\lambda_0=0\,,\nonumber\\ &\partial_r \psi^{1,2}_1 -\frac{f}{r}\psi^{1,2}_1 -i\sqrt{2}\phi\lambda_1=0\,. \label{trmodeeqs} \end{align} For the super-size modes we propose the following parametrisation: \begin{align} \lambda^{22}&=\frac{x+i y}{r}\lambda_+(r)\bar{\rho}\chi_R + \lambda_-(r)\rho \bar{\chi}_R \,,\nonumber\\ \bar{\tilde{\psi}}^1_{\dot{1}}&=\left(\psi^1_+(r)\bar{\rho}\chi_R +\frac{x-iy}{r}\psi^1_-(r) \rho\bar{\chi}_R \right)\,,\nonumber\\ \bar{\tilde{\psi}}^2_{\dot{1}}&=\frac{ e^{-i\theta}}{r}\bar{\rho} \rho\left(\psi^2_+(r)\chi_R +\frac{x-iy}{r}\psi^2_-(r) \frac{\rho}{\bar{\rho}}\bar{\chi}_R\right)\,. \end{align} This leads to the following profile constraints: \begin{align} &\partial_r \lambda_+ + \frac{\lambda_+}{r} -ig^2\sqrt{2}\left(\psi^1_+ +\frac{\bar{\rho}\rho}{r^2}\psi^2_+ \right)\phi -g^2\mu\lambda_- = 0 \,,\nonumber \\ &\partial_r \lambda_- -ig^2\sqrt{2}\left(\psi^1_- +\frac{\bar{\rho}\rho}{r^2}\psi^2_- \right)\phi -g^2\mu\lambda_+ = 0 \,,\nonumber \\ &\partial_r \psi^{1,2}_+ -\frac{f}{r}\psi^{1,2}_+ -i\sqrt{2}\phi\lambda_+=0\,,\nonumber\\ &\partial_r \psi^{1,2}_- +\frac{1}{r}\psi^{1,2}_-\left( 1-f\right) -i\sqrt{2}\phi\lambda_-=0\,. \label{sizemodeeq} \end{align} These parametrisations were chosen to satisfy several conditions: first, they should capture the features present when $\mu=0$ (in particular complex phases and singularities); second, the matter profiles should be scalars of consistent mass dimension; third, the profiles should be invariant under phase rotations acting on $\rho$ and its superpartner. \subsection{Small $\mu$ Solutions} The equations obtained at small $\mu$ can be solved order by order. The $(+)$ and $(0)$ profiles are the only ones that survive taking $\mu\rightarrow 0$, so these profiles contain only even powers of $\mu$, whereas the $(-)$ and $(1)$ profiles capture all the odd powers of $\mu$. The Dirac equation then couples these two together in a consistent, order by order expansion. Thus, we can start by writing the $(+)$ and $(0)$ profiles at zeroth order, from which we can compute the others. 
This gives us, in the translational case: \begin{align} \lambda_0 = 2ig^2 \left( \phi^2\left(1+\frac{\bar{\rho}\rho}{r^2} \right)-\xi \right) ,\quad \psi^1_0=2\sqrt{2}\frac{f \phi}{r},\quad \psi^2_0=2\sqrt{2}\frac{(f-1) \phi}{r}\,. \end{align} By virtue of the BPS equations, these profiles are a solution to the Dirac equations above for vanishing $\mu$. We then use these to source the equations for $\lambda_1,\, \psi_1$: given the high degree of similarity between the $(0)$ and $(1)$ equations, differing only by terms linear in the profile functions, we try a solution of the form \begin{equation} \lambda_1=b(r)\lambda_0,\quad \psi_{1,2}=b(r)\psi_0 \end{equation} for some unknown function $b$. The equations for the $(1)$ profiles reduce to two conditions on $b$, namely \begin{equation} \partial_r b +\frac{b}{r} +\mu g^2 =0,\quad \partial_r b -\frac{b}{r}=0\,. \end{equation} This is solved by $b(r)=-\frac{\mu g^2r}{2}$ (indeed, $\partial_r b=b/r=-\mu g^2/2$, so both conditions are satisfied). The $(1)$ profiles are therefore \begin{align} &\lambda_1 =-i\mu g^4r \left( \phi^2\left(1+\frac{\bar{\rho}\rho}{r^2} \right)- \xi \right),\nonumber\\ &\psi^1_1=-\sqrt{2}\mu g^2f \phi,\quad \psi^2_1=-\sqrt{2}\mu g^2(f-1) \phi\,. \end{align} This is entirely analogous to the local non-Abelian case. For the supersize moduli, the $(+)$ profiles at zeroth order are \begin{align} \lambda_+ =i\sqrt{2}\partial_r \gamma,\quad \psi^1_+=2\gamma \phi ,\quad \psi^2_+=2\gamma \phi - \frac{\phi}{\bar{\rho}\rho}\,. \end{align} The zeroth order equation for the $(+)$ profiles reduces to the extremisation equation for $\gamma$. Thanks to our parametrisation, we can apply the same kind of trick again to find the $(-)$ profiles: writing \begin{equation} \lambda_-=-\frac{\mu g^2r}{2}(\lambda_+-i \sqrt{2} c(r)),\,\psi^{1,2}_-= -\frac{\mu g^2r}{2} \psi^{1,2}_+, \end{equation} we obtain a solution to the Dirac equation when \begin{equation} c=-\frac{2}{r}\gamma\,. \end{equation} This gives the following profiles: \begin{align} &\lambda_-=-\frac{i\mu g^2 \sqrt{2}r}{2}\left(\partial_r \gamma + \frac{2}{r}\gamma \right),\nonumber\\ &\psi^{1}_- = -\mu g^2r \gamma \phi ,\quad \psi^2_- = -\mu g^2r \left( \gamma \phi-\frac{\phi}{2 \rho \bar{\rho}}\right) \,. \end{align} With these profiles, the supersize zero modes can be checked to be non-singular at the origin and normalizable at infinity to order $O(\mu)$ by using the explicit solution \eqref{explicitsol}, up to a caveat we detail in Appendix \ref{ap2}. We can now feed these profiles into the kinetic terms of the 4d fermions and observe any mixing between worldsheet modes. At this level we can expect three changes to occur, i.e.\ three coefficients that can depart from their expected values. The changes affect the $\zeta_R,\,\chi_R$ worldsheet fermions, so their respective kinetic terms can change normalisation: label them $I_{\zeta\zeta},\,I_{\chi\chi}$. But also, we expect a mixing term between these two fields to occur: if the shape of the interactions remains K\"{a}hlerian, then Zumino's theorem applies and one would observe an enhancement of the number of supersymmetries. If $\mu=0$ this coefficient vanishes, since, for instance, $\lambda_0$ and $\lambda_+$ have no overlap: one comes multiplied by $\frac{x+i y}{r}$ while the other does not, and similarly for the matter fermions. 
It is clear that at leading order in $\mu$, the fermion kinetic constant for $\zeta$ does not change from its initial value, which one can show is the integral of a total derivative by using the Maxwell equation \begin{equation} I_{\zeta \zeta} = \int rdrd\theta \quad \left( \left( \frac{1}{r}\partial_r f(r)\right)^2 +\frac{1}{r} J\right) = \left[ \frac{1}{r}f(r)\partial_r f(r) \right]^\infty_0 =1 \,. \end{equation} For precisely the same reasons, at order $O(\mu)$ the $\chi$ normalisation does not change either. In that case, a caveat must be raised; the details are given in Appendix \ref{ap2}. Now, with these solutions, the zero modes from the translational and size moduli are able to mix, leading to the sought-after term on the worldsheet: \begin{equation} { \pi g^2\xi\mu} \log\left( \frac{L_{\text{IR}}^2}{\bar{\rho}\rho} \right) \left(\zeta_R \chi_R \partial_L\bar{\rho} +\text{c.c.} \right) \label{suggest} \end{equation} where we keep only terms which contain IR logarithms. Here again the IR logarithm comes from the massless fermion of the second flavor. The shape of this resulting term is in fact fixed by supersymmetry and target space invariance, as we will see in the next section. In obtaining this expression, we again used the fact that radial variations of $\rho$ are negligible, since they are systematically small in comparison to $L_{IR}$. This enabled us to justify treating the logarithmic factors in the kinetic terms as constants and changing normalisation to remove them; here it enables us to write \begin{equation} \bar{\rho}\partial \rho \approx -\rho \partial {\bar{\rho}} \end{equation} which simplifies the computation to the result quoted above. Thus our world sheet theory to order $O(\mu)$ becomes \begin{align} S_{2D} &= \int d^2 x \;\pi\xi \left\{\log\left( \frac{L_{\text{IR}}^2}{|\rho|^2}\right)\,\left[ |\partial_k \rho|^2 + i \bar{\chi}_R \partial_L \chi_R + i \bar{\chi}_L \partial_R \chi_L \right] \right. \nonumber\\ &+ \left. i \bar{\zeta}_R \partial_L \zeta_R + g^2\mu \log\left( \frac{L_{\text{IR}}^2}{|\rho|^2} \right) \left(\zeta_R \partial_L \bar{\rho}\chi_R +\text{c.c.} \right) \right\}, \label{wcN=1} \end{align} where we drop the translational moduli $x^i$ and $\zeta_L$, which are sterile. The mixing term, by its very existence, breaks the $\mathcal{N}=\left(2,2\right)$ supersymmetry, as has been discussed. Absorbing, with logarithmic accuracy, square roots of the IR logarithms into the normalization of $\chi_R$ and $\rho$, we finally arrive at the action \begin{align} S_{2D} &= \int d^2 x \;\pi\xi \left\{ |\partial_k \rho|^2 + i \bar{\chi}_R \partial_L \chi_R + i \bar{\chi}_L \partial_R \chi_L \right. \nonumber\\ &+ \left. i \bar{\zeta}_R \partial_L \zeta_R + g^2\mu \left(\zeta_R \partial_L\bar{\rho} \chi_R +\text{c.c.} \right) \right\}. \label{wcN=1final} \end{align} We see that the mixing term does not contain an IR logarithm either and is of order $g^2\mu$. As we mentioned, the shape of this term is expected from supersymmetry: there exists a specific way of combining $(0,2)$ superfields that generates a mixing term of this form. However, the above result is not the complete answer: along with this new term, extra four-fermion interactions are generated by the $F$ terms. In order to determine the full expression, let us turn to this formalism to generate the remainder of the Lagrangian. \subsection{Superspace Action} We have found that the worldsheet theory develops a deformation term that breaks $(2,2)$ supersymmetry. 
This term mixes fermions living in different target spaces, while the bosonic coordinates of the manifolds do not mix. The leftover supersymmetry after this breaking is most easily exhibited by writing a $(0,2)$ superfield formulation of the Lagrangian. We introduce three superfields, whose expansions in chiral superspace coordinates are \begin{equation} A=\rho + \theta \sqrt{2}\chi_L ,\quad B=\chi_R + \sqrt{2}\theta F_s,\quad C=\zeta_R +\sqrt{2}\theta F_t \end{equation} where $F_s,\,F_t$ are auxiliary fields leading to four-fermion interactions. We also introduce the K\"{a}hler 1-forms $\mathcal{K}_{z},\mathcal{K}_{\bar{z}}$, which are complex conjugate and arbitrary. They would derive, in a $(2,2)$ setting expressed in $(0,2)$ notation, from the K\"{a}hler potential by \begin{equation} \mathcal{K}_{z} = \partial_z \mathcal{K}\,. \end{equation} We then define the metric of the space by $G_{z\bar{z}}=\partial_{\bar{z}} \mathcal{K}_{z}=\overline{{G_{\bar{z}z}}}$. The $\mathcal{N}=(2,2)$ Lagrangian, written in this $(0,2)$ language, takes the following form, first in a generic formulation and then in our specific case: \begin{align} \mathcal{L}_{(2,2)}=&\pi\xi\int d^2\theta\, \left(i \mathcal{K}_{z} \partial_R A + \text{c.c.} + G_{z\bar{z}} B^\dagger B \right) \\ =&\pi\xi\int d^2\theta\, \left(i \log\left( \frac{L_{IR}^2}{ A^\dagger A}\right) \left(A^\dagger\partial_R A - \partial_R A^\dagger A \right)\right. \nonumber\\ &\left.+ \log\left( \frac{L_{IR}^2}{ A^\dagger A}\right) B^\dagger B \right) \end{align} where the K\"{a}hler potential is the one given in Eq.(\ref{kpot}). Then, a term that explicitly breaks $(2,2)$ supersymmetry can be found by coupling $B$ and $A^\dagger$ directly, without involving $A$. The following term is suitable: \begin{align} \mathcal{L}_{(0,2)}=&\pi\xi\mu g^2\int d^2\theta\,\left( \mathcal{K}_{\bar{z}} B C+\text{c.c.} \right) \\ =&\pi\xi g^2\mu\int d^2\theta\, \log\left( \frac{L_{IR}^2}{ A^\dagger A}\right) \left( A^\dagger B C+\text{c.c.} \right) \,. \end{align} This addition to the Lagrangian does indeed produce the term suggested in Eq.(\ref{suggest}) \begin{equation} g^2 \mu \chi_R \zeta_R \partial_L \bar{\rho} +\text{ H.c.} \end{equation} along with further quartic fermion couplings from the F-terms present in the fermionic multiplets. It is clearly a violation of $\mathcal{N}=(2,2)$ supersymmetry as it involves a fermionic multiplet which does not have a paired bosonic multiplet. In total, once the rescaling by the kinetic logarithms has been performed, the Lagrangian we obtain from superspace takes the following form: \begin{align} \mathcal{L}=& \partial^\mu \bar{\rho} \partial_\mu \rho + i \bar{\chi}\slashed{\partial}\chi + i \zeta^\dagger_R \partial_L \zeta_R + g^2\mu\left( \zeta_R \chi_R \partial_L \bar{\rho} +\text{ H.c.}\right)\nonumber\\ & + g^4 \mu^2 \left(\zeta^\dagger_R \zeta_R \right)\left( \chi_L^\dagger \chi_L \right) + g^4\mu^2\left(\chi^\dagger_R \chi_R \right) \left(\chi_L^\dagger \chi_L \right) \,. \end{align} This is now manifestly $(0,2)$-supersymmetric, as required. \section{Conclusion} We have investigated properties of supersymmetric non-linear sigma models that arise as the worldsheet Lagrangian of semi-local strings in SQED. The scalar modulus $\rho$ that these strings are endowed with seems very different from the internal colour modulus of non-Abelian strings, but we have shown they are similar in at least one aspect: a heterotic deformation affects their worldsheets in very similar ways. 
When a mass is turned on for the gauge scalar multiplet in four dimensions, in both cases a coupling occurs between fermionic degrees of freedom originally defined in different target spaces on the worldsheet. This breaks the full $(2,2)$ supersymmetry in a way that cannot benefit from any accidental enhancement. For this structure, an explicitly $(0,2)$ superspace action can be written, which manifestly violates $(2,2)$ supersymmetry. It is nevertheless the case that $\rho$ retains some idiosyncratic features: the asymptotic explicit solution of the field equations that exists in this case proves to be a powerful tool to study the properties of semi-local strings. Given that the modulus $|\rho|$ enters every asymptotic spatial profile we wish to write in the theory, the computation of the zero modes and of the worldsheet theory quickly becomes involved, but eventually reduces to the expected result. We expect it to become even more difficult to perform, if possible at all, for a large-$\mu$ worldsheet. This exercise will be left for future work. \section*{Acknowledgments} This work is supported in part by DOE grant DE-SC0011842. The work of A.Y. was supported by William I. Fine Theoretical Physics Institute at the University of Minnesota and by Russian Foundation for Basic Research Grant No. 18-02-00048. \begin{appendices} \section{Conventions} We work in Euclidean space. We make the following choice of $\sigma$-matrices \begin{equation} \sigma^{\mu\alpha\beta}=\left(\mathbbm{1},\left( \begin{array}{cc} 0 & -i \\ -i & 0 \end{array} \right) ,\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right) , \left( \begin{array}{cc} -i & 0 \\ 0 & i \end{array} \right) \right),\quad \bar{\sigma}^\mu_{\alpha\beta} = \left( \mathbbm{1},-\sigma^{i\alpha\beta}\right) \end{equation} $SU(2)$ indices, either spinorial or from $R$-symmetry, are contracted with the following tensor \begin{equation} \varepsilon^{\alpha\beta}=\left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right) = \varepsilon^{\dot{\alpha}\dot{\beta}},\quad \varepsilon_{\alpha\beta} = \varepsilon_{\dot{\alpha}\dot{\beta}} = - \varepsilon^{\alpha\beta} \end{equation} From our choices in spacetime, the worldsheet gamma matrices necessarily become \begin{equation} \gamma=\left( \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) ,\left(\begin{array}{cc} 0 & i \\ -i & 0 \end{array} \right) \right) \end{equation} \section{Asymptotic Expansions on the Worldsheet} \label{ap2} In Section \ref{sec2}, we compute corrections to coefficients of worldsheet couplings. This is a conceptually subtle process, mainly because the field $\chi_R$ is only logarithmically normalisable, though arguments have been put forward that this apparent divergence can be safely removed through field redefinitions. At order $\mu$, for reasons explained above, there is no contribution to the normalisation; one expects such contributions to arise at higher (even) powers of $\mu$. However, we are performing perturbation theory in a setting with an explicit IR cutoff, i.e.\ a large but finite length scale $L$ in the problem. Since $\mu$ has dimensions of mass, one generically expects that terms depending on high enough powers of $\mu$ will come multiplied by positive powers of $L$. In particular, $\lambda_-$ is constructed from a square-log-divergent profile times a factor of $r$, so it decays even more slowly at infinity than $\lambda_+$ and will thus lead to a correction that goes as $\mu^2 L^2$. 
This is symptomatic of doing perturbation theory in settings with IR cutoffs: the full series cannot be trusted, since $\mu$ cannot be smoothly turned off without passing through the IR cutoff regime. This phenomenon is referred to as singular perturbation theory, characteristic of dynamics on multiple scales; the appropriate framework is asymptotic analysis rather than ordinary perturbation theory. Hence, we suggest truncating the series at the order up to which it is meaningful. \section{Useful Transverse Integration Identities} Computing the transverse integrals that yield worldsheet elements can, in the case of semilocal strings, involve a large number of terms in the integrand, all contributing towards a small class of possible terms allowed by the worldsheet symmetries. It is useful to keep at hand a list of frequently-used identities for quick reference. Integrals are performed over the plane transverse to the string solution, and systematically involve functions of the radial coordinate only. Where the integrand can be integrated over the entire plane to produce a finite result, that result is used; some integrals, however, must be cut off, at $|\rho|$ for small $r$ and at $L_{IR}$ for large $r$. The general form of the integrands at hand can usually be reduced to the following type of integral: \begin{equation} \int rdr\, \frac{1}{(r^2+\bar{\rho}\rho)^n}=\frac{1}{2\left(n-1\right)}\left(\bar{\rho}\rho \right)^{1-n},\quad n>1 \end{equation} When $n=1$ the integral requires regularisation: \begin{equation} \int rdr\, \frac{1}{(r^2+\bar{\rho}\rho)}=\frac{1}{2}\log\left(\frac{L_{IR}^2}{\bar{\rho}\rho} \right) \end{equation} A combination of these two integral types produces the characteristic K\"{a}hler metric of the size modulus. For the deformed worldsheet at small $\mu$, these formul{\ae} are enough to produce the result. \end{appendices} \addcontentsline{toc}{section}{References}
\section{Introduction}\label{Introduction6} Polling queues find applications when multiple products compete for a common resource. In a polling queue, a single server serves multiple queues of products, visiting the queues one at a time in a fixed cyclic manner. In manufacturing, polling queues have been used to model the flow of multiple products undergoing manufacturing operations in a factory. In healthcare, polling queues have been used to model the flow of different types of patients through various activities in a hospital or clinic. In transportation, polling queues have been used to model multiple traffic flows in a transportation network. Comprehensive surveys on the analysis of polling queues can be found in $\left(\text{Takagi }\cite{Takagi2000}, \text{Vishnevskii \& Semenova }\cite{Vishnevskii2006}\right)$.\\ While a majority of existing research on polling queues focuses on the single-station polling queue, this work focuses on the analysis of a tandem network of polling queues with setups. Our motivation for studying tandem networks of polling queues with setups is derived from our collaboration with a large manufacturer of rolled aluminum products $\left(\text{RAP}\right)$ whose manufacturing operations can be modeled as a tandem network of polling queues. At this facility, the manufacturing process involves steps like rolling of aluminum ingots into plates, heat treating to improve properties, stretching the plates to improve straightness, aging to cure the metal, sawing the plates into smaller pieces, and conducting ultrasonic inspection to check material properties. In this case, each manufacturing operation can be modeled as a polling queue, processing different types of alloys, and incurring a setup when the equipment switches from one type of product to another in a sequential manner. A particular product may be processed through a series of these operations based on either a predetermined or probabilistic sequence of operations. In such a setting, estimates of mean waiting time can help managers release and schedule jobs, quote lead times for customers, and improve coordination with downstream operations.\\ Tandem networks of polling queues also find applications in factories of process/semi-process industries such as chemical, plastic, and food industries where significant setup times are incurred when a machine switches from producing one type of product to another. To reduce cost, manufacturers often produce their products in batches, and use an exhaustive policy, i.e., serve all products waiting in a queue before switching over to another product type. Thus, determining the impact of setup times on waiting times is of key interest to the managers.\\ Despite the importance of tandem networks of polling queues, there have been limited studies of such networks. Exact analysis of polling models is only possible in some cases, and even then numerical techniques are usually required to obtain waiting times at each queue. We propose a decomposition based approach for the analysis of the performance of tandem networks of polling models. Our research makes two key contributions. First, we provide a computationally efficient method that exploits the structure of the state-space to provide solutions for tandem polling queues with setups. In particular, we use a partially-collapsible state-space approach that captures or ignores queue length information as needed in the analysis. 
We show that this approach reduces computational complexity and provides reasonable accuracy in performance estimation. Second, we investigate the impact of different manufacturing settings, such as the location of bottleneck stations, asymmetry in waiting times, and setup times, on system performance measures. We find that the location of the bottleneck station and differences in service rates can have a significant impact on the waiting times.\\ The rest of the paper is organized as follows. In Section \ref{LiteratureReview6}, we provide a brief literature review on polling queues and analysis of tandem network of queues. We describe the system in Section \ref{SystemDescription6} and the approach used to analyze the two-station system in Section \ref{Subsystem1} and Section \ref{Subsystem2}. In Section \ref{NumericalResults6}, we validate our approach and provide useful numerical insights. Finally, we conclude and provide future extensions in Section \ref{Conclusions6}. \section{Literature Review}\label{LiteratureReview6} Polling queues and their applications have been an active field of research for the past few decades. Takagi \cite{Takagi2000}, Vishnevskii and Semenova \cite{Vishnevskii2006}, and Boona et al. \cite{Boona11} provide comprehensive surveys of polling queues and their applications. We group our discussion of the literature into three categories$\colon$ polling queues with zero setups, polling queues with non-zero setups, and networks of polling queues.\\ \textbf{Polling queues with zero setups}$\colon$ One of the earliest techniques for analyzing polling queues with zero setups uses a \emph{server vacation model}, where the server periodically leaves a queue and takes a vacation to serve other queues. Fuhrmann et al. \cite{Fuhrmann85} use such a vacation model to study a symmetric polling station with $Q$ queues served in a cyclic order by a single server and determine the expressions for sojourn times under exhaustive, gated, and $k$-limited service disciplines. They show that the stationary number of customers in a single-station polling queue (summed over all the queues) can be written as the sum of three independent random variables$\colon\left(i\right)$ the stationary number of customers in a standard M/G/1 queue with a dedicated server, $\left(ii\right)$ the number of customers in the system when the server begins an arbitrary vacation (changeover), and $\left(iii\right)$ the number of arrivals in the system during the changeover. Boxma et al. \cite{Boxma87} use a stochastic decomposition to estimate the amount of work (time needed to serve a specific number of customers) in cyclic-service systems with hybrid service strategies (e.g., semi-exhaustive for the first product class, exhaustive for the second and third product classes, and gated for the remaining product classes) and use the decomposition results to obtain a pseudo-conservation law for such cyclic systems.\\ \textbf{Polling queues with non-zero setups}$\colon$ Several studies have used transform methods to find the distributions of waiting times, cycle times, and queue lengths in a single-station polling queue with setups. Cooper et al. \cite{RBCooper96} propose a decomposition theorem for polling queues with non-zero switchover times and show that the mean waiting time is the sum of two terms$\colon\left(\text{1}\right)$ the mean waiting time in a ``corresponding'' model in which the switchover times are zero, and $\left(\text{2}\right)$ a simple term that is a function of the mean switchover times. Srinivasan et al. 
\cite{Srinivasan95} use Laplace–Stieltjes Transform $\left(\text{LST}\right)$ methods to compute the moments of the waiting times in a polling system of $R$ queues with nonzero setup times under exhaustive and gated service. The proposed algorithm requires estimation of parameters with $\log{\left(R\mathcal{E}\right)}$ complexity, where $\mathcal{E}$ is the desired level of accuracy. Once the parameters have been calculated, mean waiting times may be computed with $\mathcal{O}\left(R\right)$ elementary operations. Borst and Boxma \cite{Borst97} generalize the approach used by Srinivasan et al. \cite{Srinivasan95} to derive the joint queue length distribution for any service policy. Boxma et al. \cite{Boxma09} analyze a polling system of $R$ queues with setup times operating under a gated policy and determine the LST of the cycle times under different scheduling disciplines such as FIFO and LIFO. They show that the LST of the cycle times depends only on the polling discipline at each queue and is independent of the scheduling discipline used within each queue.\\ In addition to LST techniques, mean value analysis has also been used to estimate performance measures for polling queues with nonzero setups. Hirayama et al. \cite{Hirayama04} develop a method for obtaining the mean waiting times conditioned on the state of the system at an arrival epoch. Using this analysis, they obtain a set of linear functional equations for the conditional waiting times. By applying a limiting procedure, they derive a set of $R(R +1)$ linear equations for the unconditional mean waiting times, which can be solved in $\mathcal{O}\left(R^6\right)$ operations. Winands et al. \cite{Winands06} calculate the mean waiting times in a single-station multi-class polling queue with setups for both exhaustive and gated service disciplines. They use mean value analysis to determine the mean waiting times at the polling queue. They derive sets of $R^{2}$ and $R\left(R +1\right)$ linear equations for the waiting times in the case of exhaustive and gated service, respectively. In these studies of polling queues using LST techniques or mean value analysis, the authors have restricted their scope to single-station polling queues. Extending these approaches to a tandem network of polling queues would increase the computational complexity considerably. Therefore, in our work, we propose a decomposition based approach.\\ $\textbf{Network of polling queues}\colon$ Altman and Yechiali \cite{Altman94} study a closed queueing network for token ring protocols with $Q$ polling stations, where a product, upon completion of service, is routed to another queue probabilistically. They determine explicit expressions for the probability generating function for the number of products at various queues. However, the system considered is a closed system with $N$ products in circulation, which could be a restrictive assumption in some applications. Jennings \cite{Jennings08} conducts a heavy traffic analysis of two polling queues in series and proves limit theorems for the exhaustive and gated disciplines for the diffusion-scaled, two-dimensional total workload process. Suman and Krishnamurthy (\cite{Suman18} -- \cite{Suman21}) study a two-product two-station tandem network of polling queues with finite buffers using a matrix-geometric approach. However, the analysis is restricted to systems with small buffer capacity. 
In comparison, this paper analyzes an open network of two polling queues with exogenous arrivals using decomposition.\\ \section{System Description and Overview of Approach}\label{SystemDescription6} In this section, we describe the system and provide an overview of the approach to estimate performance measures for the system. \subsection{System Description}\label{System Description} We analyze a tandem network of two polling queues with infinite capacity, each serving two product types, indexed by $i$, for $i ={}1, 2$, operating under an \emph{independent polling strategy}. Products of type $i$ arrive from the outside world to their respective queue at station 1 according to independent Poisson processes with parameter $\lambda_i$. Each product type is served by a single server at station $j$, for $j ={}1, 2$ in a fixed cyclic manner $\left(\text{see Figure } \ref{fig:mesh6.1}\right)$ following an exhaustive service policy. Under the \emph{independent polling strategy}, at each station, the server switches to serve products of the other type after emptying the queue being served, independent of the state of the other station. After service at station 1, the product proceeds from station 1 to station 2, and exits the system after the service is completed at station 2. Service times for product $i$ have an exponential distribution with parameter $\mu_{ij}$ at station $j$. When a server switches from queue $i'$ to queue $i$, for $i' ={}1, 2$ and $i' \neq i$, at station $j$, the server incurs a setup time $H_{ij}$ that has an exponential distribution with rate $\mu_{s_{ij}}$. We assume that the setups are state independent, i.e., the server incurs a setup time at the polled queue whether or not products are waiting at the queue. We also assume that setup times are independent of service times and of the other queue. Note that the system is stable when $\sum_{i={}1}^{2} \lambda_{i}\mu_{ij}^{-1}< 1$ for each $j$. We assume this condition holds for our system. \\ \graphicspath {{Figures/}} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.32]{ModelDescriptionCh6} \caption{Network of two-products two-stations polling queue.} \label{fig:mesh6.1} \end{center} \end{figure} The goal is to calculate the following system performance measures$\colon$ (i) average buffer level, $\mathbb{E}\left[L_{ij}\right]$, defined as the average amount of material stored in the buffer for product type $i$ at station $j \left[\text{parts}\right]$ and (ii) average waiting time, $\mathbb{E}\left[W_{i}\right]$, defined as the average time required by products to go through stations 1 and 2 $\left[\text{time units}\right]$.\\ To solve the system described above using a conventional Markov chain $\left(\text{MC}\right)$ approach, we would need to use a six-tuple state space resulting in over 2.5 million states for a system with a buffer size of 20. To address this curse-of-dimensionality, we propose a new approach based on decomposition. We first describe the general approach and provide details in Sections \ref{Subsystem1} and \ref{Subsystem2}.\\ \subsection{Overview of Approach}\label{Approach} The main idea is to decompose the two-station polling queue into two subsystems$\colon SS\left(k\right)$ for $k ={}1, 2$ as shown in Figure \ref{fig:mesh6.2}, and study each subsystem independently. Subsystem $SS\left(1\right)$ comprises only station 1 of the system. We use exact analysis methods for subsystem $SS\left(1\right)$ to obtain performance measures at station 1. 
Subsystem $SS\left(2\right)$ comprises both station 1 and station 2. We analyze subsystem $SS\left(2\right)$ to estimate performance measures at station 2. Since arrivals at station 2 depend on departures from station 1, the analysis of subsystem $SS\left(2\right)$ requires joint analysis of station 1 and station 2. In solving the subsystem $SS\left(2\right)$, we make use of the fact that the service policy adopted by the server is exhaustive at both stations, and that the queue of the product type being served empties before the server switches to serve the other product. We exploit this fact to define the `partially-collapsible state-space' needed to analyze subsystem $SS\left(2\right)$. In this partially-collapsible state-space, the size of the state-space is varied depending on the information that needs to be retained to conduct the analysis. We use a combination of state-space descriptions with four-tuples and five-tuples to model the relevant state transitions in subsystem $SS\left(2\right)$, depending on whether the server at station 1 is performing a setup or serving products, respectively. This approach helps reduce the state complexity and yet yields good approximations for the performance measures at station 2. The details are provided in the next section. \graphicspath {{Figures/}} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.70]{BuildingBlock} \caption{Decomposition of system into subsystem $SS\left(1\right)$ and $SS\left(2\right)$.} \label{fig:mesh6.2} \end{center} \end{figure} \newpage \section{Analysis of Subsystem $SS\left(1\right)$}\label{Subsystem1} In subsystem $SS\left(1\right)$, we consider only station 1 of the system described in Figure \ref{fig:mesh6.1}. That is, we consider a system with a single server serving two product types, as shown in Figure \ref{fig:mesh6.3}. We analyze this subsystem to estimate performance measures for station 1. It should be noted that the subsystem $SS\left(1\right)$ can be analyzed using the mean value approach in Winands et al. \cite{Winands06} or the Laplace transform approach in Boxma et al. \cite{Boxma09}, but we use an exact Markov chain analysis instead. Our approach gives stationary distributions of the queue lengths in addition to the mean queue lengths, which can be useful for managerial decisions. Furthermore, the Markov chain approach also provides a better context for the partially-collapsible state-space approach used for analyzing $SS\left(2\right)$. \graphicspath{{Figures/}} \begin{figure}[H] \begin{center} \includegraphics[scale=0.40]{SS1} \caption{Illustration of subsystem $SS\left(1\right)$.} \label{fig:mesh6.3} \end{center} \end{figure} The state of the subsystem $SS\left(1\right)$ at a given time epoch forms a continuous time Markov chain defined by the tuple $\Big(\,l_{11}, l_{21}, r_{i1}\,\Big)$, where $l_{i1}$ is the number of products of type $i$, and $r_{i1}$ takes the value $S_{i1}$ or $U_{i1}$, for $i = 1, 2$, depending on whether the server is performing a setup for product $i$ or is processing product $i$. Note that $l_{11}$ and $l_{21}$ can take integer values greater than or equal to zero. Let $q\Big[\left(l_{11}, l_{21}, r_{i1}\right), \left(l_{11}', l_{21}', r_{i1}'\right)\Big]$ denote the transition rate from the state $\left(l_{11}, l_{21}, r_{i1}\right)$ to the state $\left(l_{11}', l_{21}', r_{i1}'\right)$ for $\left(r_{i1}, r_{i1}'\right) \in \{S_{11}, S_{21}, U_{11}, U_{21}\}$. The transitions for the subsystem $SS\left(1\right)$ are summarized below in Table \ref{Table:6.1}. 
{\renewcommand{\arraystretch}{1.30} \begin{table}[H] \centering \caption{Transitions for the subsystem $SS\left(1\right)$.}\label{Table:6.1} \begin{tabular}{| C{4cm} | C{4cm} | C{3cm} | C{3cm}|} \hline \textbf{From state} & \textbf{To state} & \textbf{Condition} & \textbf{Transition rate out}\\ \hline $\left(l_{11}, l_{21}, S_{i1}\right)$ & $\left(l_{11}, l_{21}, S_{i'1}\right)$ & $l_{i1} ={} 0$ & \multirow{2}{*}{$\mu_{s_{i1}}$}\\ $\left(l_{11}, l_{21}, S_{i1}\right)$ & $\left(l_{11}, l_{21}, U_{i1}\right)$ & $l_{i1} > 0$ & \\ \hline $\left(l_{11}, l_{21}, U_{11}\right)$ & $\left(0, l_{21}, S_{21}\right)$ & $l_{11} ={} 1$ & \multirow{2}{*}{$\mu_{11}$}\\ $\left(l_{11}, l_{21}, U_{11}\right)$ & $\left(l_{11}-1, l_{21}, U_{11}\right)$ & $l_{11} > 1$ & \\ \hline $\left(l_{11}, l_{21}, U_{21}\right)$ & $\left(l_{11}, 0, S_{11}\right)$ & $l_{21} ={} 1$ & \multirow{2}{*}{$\mu_{21}$}\\ $\left(l_{11}, l_{21}, U_{21}\right)$ & $\left(l_{11}, l_{21}-1, U_{21}\right)$ & $l_{21} > 1$ & \\ \hline $\left(l_{11}, l_{21}, S_{11}\right)$ & $\left(l_{11}+1, l_{21}, S_{11}\right)$ & \multirow{4}{*}{--} & \multirow{4}{*}{$\lambda_{1}$}\\ $\left(l_{11}, l_{21}, S_{21}\right)$ & $\left(l_{11}+1, l_{21}, S_{21}\right)$ & &\\ $\left(l_{11}, l_{21}, U_{11}\right)$ & $\left(l_{11}+1, l_{21}, U_{11}\right)$ & &\\ $\left(l_{11}, l_{21}, U_{21}\right)$ & $\left(l_{11}+1, l_{21}, U_{21}\right)$ & &\\ \hline $\left(l_{11}, l_{21}, S_{11}\right)$ & $\left(l_{11}, l_{21}+1, S_{11}\right)$ & \multirow{4}{*}{--} & \multirow{4}{*}{$\lambda_{2}$}\\ $\left(l_{11}, l_{21}, S_{21}\right)$ & $\left(l_{11}, l_{21}+1, S_{21}\right)$ & &\\ $\left(l_{11}, l_{21}, U_{11}\right)$ & $\left(l_{11}, l_{21}+1, U_{11}\right)$ & &\\ $\left(l_{11}, l_{21}, U_{21}\right)$ & $\left(l_{11}, l_{21}+1, U_{21}\right)$ & &\\ \hline \end{tabular} \end{table} Let $\pi\left(l_{11}, l_{21}, r_{i1}\right)$ be the steady-state probability of state $\left(l_{11}, l_{21}, r_{i1}\right)$. 
The Chapman-Kolmogorov (CK) equations for the Markov chain for subsystem $SS\left(1\right)$ to and from states $\left(l_{11}, l_{21}, S_{11}\right)$ and $\left(l_{11}, l_{21}, U_{11}\right)$ are given by Equations $\left(\ref{eqn6.1}\right) - \left(\ref{eqn6.8}\right)$.\\ \setlength{\abovedisplayskip}{0pt} \setlength{\belowdisplayskip}{0pt} \setlength{\abovedisplayshortskip}{0pt} \setlength{\belowdisplayshortskip}{0pt} \begin{align*} \intertext{For $l_{11} ={}0, l_{21} ={}0\colon$} \left(\lambda_{1}+\lambda_{2}+\mu_{s_{11}}\right)\pi\left(0, 0, S_{11}\right)= {}\mu_{s_{21}}\pi\left(0, 0, S_{21}\right)+\mu_{21}\pi\left(0, 1, U_{21}\right)\numberthis\label{eqn6.1}\\ \intertext{For $l_{11} >{}0, l_{21} ={}0\colon$} \left(\lambda_{1}+\lambda_{2}+\mu_{s_{11}}\right)\pi\left(l_{11}, 0, S_{11}\right)= {}\\ \lambda_{1}\pi\left(l_{11}-1, 0, S_{11}\right)+\mu_{s_{21}}\pi\left(l_{11}, 0, S_{21}\right)+\mu_{21}\pi\left(l_{11}, 1, U_{21}\right)\numberthis\label{eqn6.2}\\ \intertext{For $l_{11} ={}0, l_{21} >{}0\colon$} \left(\lambda_{1}+\lambda_{2}+\mu_{s_{11}}\right)\pi\left(0, l_{21}, S_{11}\right)= {}\lambda_{2}\pi\left(0, l_{21}-1, S_{11}\right)\numberthis\label{eqn6.3}\\ \intertext{For $l_{11} >{}0, l_{21} >{}0\colon$} \left(\lambda_{1}+\lambda_{2}+\mu_{s_{11}}\right)\pi\left(l_{11}, l_{21}, S_{11}\right)= {}\lambda_{1}\pi\left(l_{11}-1, l_{21}, S_{11}\right)+\lambda_{2}\pi\left(l_{11}, l_{21}-1, S_{11}\right)\numberthis\label{eqn6.4}\\ \intertext{For $l_{11} ={}1, l_{21} ={}0\colon$} \left(\lambda_{1}+\lambda_{2}+\mu_{11}\right)\pi\left(1, 0, U_{11}\right)= {}\mu_{s_{11}}\pi\left(1, 0, S_{11}\right)+\mu_{11}\pi\left(2, 0, U_{11}\right)\numberthis\label{eqn6.5}\\ \intertext{For $l_{11} ={}1, l_{21} >{}0\colon$} \left(\lambda_{1}+\lambda_{2}+\mu_{11}\right)\pi\left(1, l_{21}, U_{11}\right)= {}\\ \lambda_{2}\pi\left(1, l_{21}-1, U_{11}\right)+\mu_{s_{11}}\pi\left(1, l_{21}, S_{11}\right)+\mu_{11}\pi\left(2, l_{21}, U_{11}\right)\numberthis\label{eqn6.6}\\ \intertext{For $l_{11} >{}1, l_{21} ={}0\colon$} \left(\lambda_{1}+\lambda_{2}+\mu_{11}\right)\pi\left(l_{11}, 0, U_{11}\right)= {}\\ \lambda_{1}\pi\left(l_{11}-1, 0, U_{11}\right)+\mu_{s_{11}}\pi\left(l_{11}, 0, S_{11}\right)+\mu_{11}\pi\left(l_{11}+1, 0, U_{11}\right)\numberthis\label{eqn6.7}\\ \intertext{For $l_{11} >{}1, l_{21} >{}0\colon$} \left(\lambda_{1}+\lambda_{2}+\mu_{11}\right)\pi\left(l_{11}, l_{21}, U_{11}\right) = {}\\ \lambda_{1}\pi\left(l_{11}-1, l_{21}, U_{11}\right)+\lambda_{2}\pi\left(l_{11}, l_{21}-1, U_{11}\right)+\mu_{s_{11}}\pi\left(l_{11}, l_{21}, S_{11}\right)+\\ \mu_{11}\pi\left(l_{11}+1, l_{21}, U_{11}\right)\numberthis\label{eqn6.8}\\ \end{align*} We can similarly write balance equations for states of the form $\left(l_{11}, l_{21}, S_{21}\right)$ and $\left(l_{11}, l_{21}, U_{21}\right)$. The normalization condition is written as$\colon$ \begin{align*} \mathop{\sum\sum\sum}_{\substack{S_{i1}\in\{S_{11}, S_{21}\}\\ \left(l_{11}, l_{21}\right)\in\mathbb{Z}}}\pi\left(l_{11}, l_{21}, S_{i1}\right) +\mathop{\sum\sum\sum}_{\substack{U_{i1}\in\{U_{11}, U_{21}\}\\ l_{i1}\in\mathbb{Z}^{+}, l_{i'1}\in\mathbb{Z}}}\pi\left(l_{11}, l_{21}, U_{i1}\right)= {}1\numberthis\label{eqn6.9}\\ \end{align*} Using Equations $\left(\ref{eqn6.1}\right)-\left(\ref{eqn6.9}\right)$, we obtain the values of all steady state probabilities for subsystem $SS\left(1\right)$. 
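In practice, the system of equations $\left(\ref{eqn6.1}\right)-\left(\ref{eqn6.9}\right)$ is infinite and must be truncated at a sufficiently large buffer level before it can be solved numerically. The following sketch illustrates one such computation for subsystem $SS\left(1\right)$, built directly from the transitions in Table \ref{Table:6.1}; the truncation level and all parameter values are illustrative assumptions and not part of the model.
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) parameters: lambda_i, mu_i1, mu_s_i1.
lam = [0.3, 0.2]
mu  = [1.0, 1.0]
mus = [2.0, 2.0]
N = 20   # buffer truncation level (approximation of the infinite
         # state space; arrivals beyond N are dropped)

# Server phase r: 0 = S_11, 1 = S_21, 2 = U_11, 3 = U_21.
states = [(l1, l2, r) for l1 in range(N + 1)
          for l2 in range(N + 1) for r in range(4)]
idx = {s: k for k, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

def add(frm, to, rate):
    if to in idx:          # ignore transitions leaving the truncated box
        Q[idx[frm], idx[to]] += rate

for (l1, l2, r) in states:
    s = (l1, l2, r)
    add(s, (l1 + 1, l2, r), lam[0])     # type-1 arrival
    add(s, (l1, l2 + 1, r), lam[1])     # type-2 arrival
    if r == 0:                          # setup for product 1 completes
        add(s, (l1, l2, 2) if l1 > 0 else (l1, l2, 1), mus[0])
    elif r == 1:                        # setup for product 2 completes
        add(s, (l1, l2, 3) if l2 > 0 else (l1, l2, 0), mus[1])
    elif r == 2 and l1 > 0:             # serving product 1 (exhaustive)
        add(s, (l1 - 1, l2, 2) if l1 > 1 else (0, l2, 1), mu[0])
    elif r == 3 and l2 > 0:             # serving product 2 (exhaustive)
        add(s, (l1, l2 - 1, 3) if l2 > 1 else (l1, 0, 0), mu[1])

np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi Q = 0 together with the normalization sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Mean buffer levels and waiting times (cf. the expressions below);
# with truncation, the throughput is only approximately lambda_i.
L = [sum(p * s[i] for p, s in zip(pi, states)) for i in range(2)]
W = [L[i] / lam[i] for i in range(2)]
print("L_11, L_21 =", L)
print("W_11, W_21 =", W)
\end{verbatim}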
Using the steady state probabilities, we obtain expressions for the average throughput $TH_{i1}$, average buffer level $L_{i1}$, and average waiting time $W_{i1}$ of product type $i$, for $i={}1, 2$, at station 1, which are given by Equation $\left(\ref{eqn6.10}\right)$, Equation $\left(\ref{eqn6.11}\right)$, and Equation $\left(\ref{eqn6.12}\right)$, respectively.\\ \begin{equation}\label{eqn6.10} TH_{i1} = {}\mu_{i1}\mathop{\sum\sum}\limits_{l_{i1}\in\mathbb{Z}^{+} \text{ } l_{i'1}\in\mathbb{Z}}\pi\left(l_{11}, l_{21}, U_{i1}\right)={}\lambda_{i} \end{equation} \begin{equation}\label{eqn6.11} L_{i1} = {}\mathop{\sum\sum\sum}_{\substack{r\in\{{S_{11}, S_{21}, U_{11}, U_{21}\}}\\ \left(l_{11}, l_{21}\right)\in\mathbb{Z}}} l_{i1}\cdot\pi\left(l_{11}, l_{21}, r\right) \end{equation} \begin{align*} W_{i1}& = {}L_{i1}TH_{i1}^{-1}\numberthis\label{eqn6.12} \end{align*} \section{Analysis of Subsystem $SS\left(2\right)$}\label{Subsystem2} Subsystem $SS\left(2\right)$ comprises the two-product two-station tandem polling queue described in Section \ref{SystemDescription6} and shown in Figure \ref{fig:mesh6.1}. We perform a joint analysis of station 1 and station 2 by analyzing the Markov chain with state space aggregation. This combined analysis is necessary to incorporate the interdependencies between station 1 and station 2. \subsection{Steady State Probabilities for $SS\left(2\right)$}\label{Subsystem2StateSpace} To model the transitions in subsystem $SS\left(2\right)$, we use a partially-collapsible state-space description. In this description, we retain partial but relevant buffer level information for station 1, and complete and detailed buffer level information for station 2 at all times. We exploit the following two scenarios$\colon$ \begin{enumerate}[(a)] \item When the server is performing setup for product $i$ at station 1, we do not track the buffer levels for any of the products at station 1, as no products are being served at station 1. We note that if $l_{i1} > 0$ at the end of the setup, the server at station 1 will finish its setup with rate $\mu_{s_{i1}}$ and begin to serve product $i$, in which case we will need to retrieve the buffer level information for product $i$ at station 1. The queue length retrieval for product $i$ is important to determine when the server will switch from serving product $i$ to performing setup for product $i'$. If $l_{i1} ={} 0$, the server will switch to perform setup for product $i'$, in which case we again do not need the buffer level information for product $i'$ during its setup phase.\\ \item When the server is serving product $i$ at station 1, we only track the buffer level for product $i$ at station 1, to capture the increment in buffer levels of product $i$ at station 2, and to determine when the server switches from serving product $i$ to performing setup for product $i'$ at station 1.\\ \end{enumerate} Through the use of this partially-collapsible state-space description, we are able to reduce the size of the state-space from a six-tuple description to a combination of four-tuple and five-tuple states. 
Our analysis shows that this loss in information does not significantly compromise the accuracy in estimates of performance measures.\\ Specifically, the state of the subsystem $SS\left(2\right)$ at time \emph{t} forms a continuous time Markov chain defined using the following two types of states, depending on the activity of the server at station 1 at time $t\colon$ \begin{enumerate}[(i)] \item $\Big(S_{i1}, \,l_{12}, l_{22}, R_{i2}\,\Big)-$ When the server is performing setup at station 1$\colon$ In the state space, $S_{i1}$ represents setup for product type $i$ at station 1, $l_{i2}$ is the buffer level for type $i$ products at station 2, and $R_{i2}$ takes the value $S_{i2}$ or $U_{i2}$, for $i = 1, 2,$ depending on whether the server at station 2 is performing a setup for product $i$ or is processing product $i$.\\ \item $\Big(l_{i1}, U_{i1}, \,l_{12}, l_{22}, R_{i2}\,\Big)-$ When the server is serving products at station 1$\colon$ In the state space, $l_{i1}$ represents the buffer level of the product being served at station 1, $U_{i1}$ represents service for product type $i$ at station 1, $l_{i2}$ is the buffer level of type $i$ products at station 2, and $R_{i2}$ takes the value $S_{i2}$ or $U_{i2}$, for $i = 1, 2,$ depending on whether the server at station 2 is performing a setup for product $i$ or is processing product $i$.\\ \end{enumerate} Next, we describe the state transitions for the subsystem $SS\left(2\right)$. We summarize all the state transitions for the subsystem $SS\left(2\right)$ in Table \ref{Table:6.2} below and provide an explanation for the non-trivial state transitions $q\Big[\left(S_{i1}, l_{12}, l_{22}, R_{i2}\right),\left(l_{i1}, U_{i1}, l_{12}, l_{22}, R_{i2}\right)\Big]$ when $l_{i1} >{} 0$, and the state transitions $q\Big[\left(S_{i1}, l_{12}, l_{22}, R_{i2}\right),\left(S_{i'1}, l_{12}, l_{22}, R_{i2}\right)\Big]$ otherwise; in both cases the state of the server at station 2, $R_{i2}$, is unchanged. Let $p_{i\left(l_{i1}\right)}$ be the probability that there are $l_{i1}$ type $i$ products at station 1 after the server completes the setup for queue $i$. Thus, with probability $p_{i\left(l_{i1}\right)}$, for $l_{i1} >{} 0$, there can be $l_{i1}$ type $i$ products in the queue at station 1 after the server completes setup for product $i$. In this case, the transition $q\Big[\left(S_{i1}, l_{12}, l_{22}, R_{i2}\right),\left(l_{i1}, U_{i1}, l_{12}, l_{22}, R_{i2}\right)\Big]$ occurs with rate $p_{i\left(l_{i1}\right)}\mu_{s_{i1}}$, and the server switches to serve product $i$ at station 1. Alternatively, with probability $p_{i\left(0\right)}$, the queue for product $i$ at station 1 can be empty after the server completes setup for product $i$. Since the setups are state independent and there are 0 products in queue $i$, the transition $q\Big[\left(S_{i1}, l_{12}, l_{22}, R_{i2}\right),\left(S_{i'1}, l_{12}, l_{22}, R_{i2}\right)\Big]$ occurs with rate $p_{i\left(0\right)}\mu_{s_{i1}}$. We determine the probability $p_{i\left(l_{i1}\right)}$ in Section \ref{DeterminationOfProbability}.\\ {\renewcommand{\arraystretch}{1.40} \begin{table}[h!] 
\centering \caption{Transitions for the subsystem $SS\left(2\right)$.}\label{Table:6.2} \begin{tabular}{| C{5cm} | C{5cm} | C{2.0cm} | C{2.0cm}|} \hline \textbf{From state} & \textbf{To state} & \textbf{Condition} & \textbf{Transition rate out}\\ \hline \multicolumn{4}{|l|}{\textbf{Transitions at station 1.}}\\ \hline $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, R_{i2}\right)$ & $\left(l_{i1}+1, U_{i1}, l_{12}, l_{22}, R_{i2}\right)$ & & $\lambda_{i}$\\ \hline $\left(S_{i1}, l_{12}, l_{22}, R_{i2}\right)$ & $\left(S_{i'1}, l_{12}, l_{22}, R_{i2}\right)$ & $l_{i1} ={} 0$ & $p_{i\left(0\right)}\mu_{s_{i1}}$\\ \hline $\left(S_{i1}, l_{12}, l_{22}, R_{i2}\right)$ & $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, R_{i2}\right)$ & $l_{i1} > 0$ & $p_{i\left(l_{i1}\right)}\mu_{s_{i1}}$\\ \hline \multicolumn{4}{|l|}{\textbf{Transitions at station 1 and station 2.}}\\ \hline $\left(1, U_{11}, l_{12}, l_{22}, R_{i2}\right)$ & $\left(S_{21}, l_{12}+1, l_{22}, R_{i2}\right)$ & $l_{11} ={} 1$ & \multirow{2}{*}{$\mu_{11}$}\\ $\left(l_{11}, U_{11}, l_{12}, l_{22}, R_{i2}\right)$ & $\left(l_{11}-1, U_{11}, l_{12}+1, l_{22}, R_{i2}\right)$ & $l_{11} > 1$ &\\ \hline $\left(1, U_{21}, l_{12}, l_{22}, R_{i2}\right)$ & $\left(S_{11}, l_{12}, l_{22}+1, R_{i2}\right)$ & $l_{21} ={} 1$ & \multirow{2}{*}{$\mu_{21}$}\\ $\left(l_{21}, U_{21}, l_{12}, l_{22}, R_{i2}\right)$ & $\left(l_{21}-1, U_{21}, l_{12}, l_{22}+1, R_{i2}\right)$ & $l_{21} > 1$ &\\ \hline \multicolumn{4}{|l|}{\textbf{Transitions at station 2.}}\\ \hline $\left(S_{i1}, l_{12}, l_{22}, S_{i2}\right)$ & $\left(S_{i1}, l_{12}, l_{22}, S_{i'2}\right)$ & $l_{i2} ={} 0$ & \multirow{4}{*}{$\mu_{s_{i2}}$}\\ $\left(S_{i1}, l_{12}, l_{22}, S_{i2}\right)$ & $\left(S_{i1}, l_{12}, l_{22}, U_{i2}\right)$ & $l_{i2} >{} 0$ &\\ $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, S_{i2}\right)$ & $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, S_{i'2}\right)$ & $l_{i2} ={} 0$ &\\ $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, S_{i2}\right)$ & $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, U_{i2}\right)$ & $l_{i2} >{} 0$ &\\ \hline $\left(S_{i1}, l_{12}, l_{22}, U_{12}\right)$ & $\left(S_{i1}, 0, l_{22}, S_{22}\right)$ & $l_{12} ={} 1$ & \multirow{4}{*}{$\mu_{12}$}\\ $\left(S_{i1}, l_{12}, l_{22}, U_{12}\right)$ & $\left(S_{i1}, l_{12}-1, l_{22}, U_{12}\right)$ & $l_{12} > 1$ &\\ $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, U_{12}\right)$ & $\left(l_{i1}, U_{i1}, 0, l_{22}, S_{22}\right)$ & $l_{12} ={} 1$ &\\ $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, U_{12}\right)$ & $\left(l_{i1}, U_{i1}, l_{12}-1, l_{22}, U_{12}\right)$ & $l_{12} > 1$ &\\ \hline $\left(S_{i1}, l_{12}, l_{22}, U_{22}\right)$ & $\left(S_{i1}, l_{12}, 0, S_{12}\right)$ & $l_{22} ={} 1$ & \multirow{4}{*}{$\mu_{22}$}\\ $\left(S_{i1}, l_{12}, l_{22}, U_{22}\right)$ & $\left(S_{i1}, l_{12}, l_{22}-1, U_{22}\right)$ & $l_{22} > 1$ &\\ $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, U_{22}\right)$ & $\left(l_{i1}, U_{i1}, l_{12}, 0, S_{12}\right)$ & $l_{22} ={} 1$ &\\ $\left(l_{i1}, U_{i1}, l_{12}, l_{22}, U_{22}\right)$ & $\left(l_{i1}, U_{i1}, l_{12}, l_{22}-1, U_{22}\right)$ & $l_{22} > 1$ &\\ \hline \end{tabular} \end{table} The CK equations for the Markov chain for subsystem $SS\left(2\right)$ are illustrated in Equations $\left(\ref{eqn6.13}\right) - \left(\ref{eqn6.20}\right)$.\\ \begin{align*} \intertext{For $l_{12} \geq {}0, l_{22} ={}0\colon$} \left(\mu_{s_{11}}+\mu_{s_{12}}\right)\pi\left(S_{11}, l_{12}, 0, S_{12}\right)= {}\\ \mu_{s_{22}}\pi\left(S_{11}, l_{12}, 0, S_{22}\right)+\mu_{22}\pi\left(S_{11}, l_{12}, 1, U_{22}\right)+p_{2\left(0\right)}\mu_{s_{21}}\pi\left(S_{21}, l_{12}, 0, S_{12}\right)\numberthis\label{eqn6.13}
\intertext{For $l_{12} \geq {}0, l_{22} >{}0\colon$} \left(\mu_{s_{11}}+\mu_{s_{12}}\right)\pi\left(S_{11}, l_{12}, l_{22}, S_{12}\right)= {}\\ p_{2\left(0\right)}\mu_{s_{21}}\pi\left(S_{21}, l_{12}, l_{22}, S_{12}\right)+\mu_{21}\pi\left(1, U_{21}, l_{12}, l_{22}-1, S_{12}\right)\numberthis\label{eqn6.14} \intertext{For $l_{12} \geq {}0, l_{22} ={}0\colon$} \left(\mu_{12}+\mu_{s_{11}}\right)\pi\left(S_{11}, l_{12}, 0, U_{12}\right)= {}\\ \mu_{s_{12}}\pi\left(S_{11}, l_{12}, 0, S_{12}\right)+\mu_{12}\pi\left(S_{11}, l_{12}+1, 0, U_{12}\right)+p_{2\left(0\right)}\mu_{s_{21}}\pi\left(S_{21}, l_{12}, 0, U_{12}\right)\numberthis\label{eqn6.15} \intertext{For $l_{12} \geq {}0, l_{22} >{}0\colon$} \left(\mu_{12}+\mu_{s_{11}}\right)\pi\left(S_{11}, l_{12}, l_{22}, U_{12}\right)={}\\ \mu_{s_{12}}\pi\left(S_{11}, l_{12}, l_{22}, S_{12}\right)+\mu_{12}\pi\left(S_{11}, l_{12}+1, l_{22}, U_{12}\right)+\\ p_{2\left(0\right)}\mu_{s_{21}}\pi\left(S_{21}, l_{12}, l_{22}, U_{12}\right)+\mu_{21}\pi\left(1, U_{21}, l_{12}, l_{22}-1, U_{12}\right)\numberthis\label{eqn6.16} \intertext{For $l_{11}={}1, l_{12} = {}0, l_{22} ={}0\colon$} \left(\lambda_{1}+\mu_{11}+\mu_{s_{12}}\right)\pi\left(1, U_{11}, 0, 0, S_{12}\right)= {}\\ \mu_{s_{22}}\pi\left(1, U_{11}, 0, 0, S_{22}\right)+\mu_{22}\pi\left(1, U_{11}, 0, 1, U_{22}\right)+p_{1\left(1\right)}\mu_{s_{11}}\pi\left(S_{11}, 0, 0, S_{12}\right)\numberthis\label{eqn6.17} \intertext{For $l_{11}={}1, l_{12} > {}0, l_{22} ={}0\colon$} \left(\lambda_{1}+\mu_{11}+\mu_{s_{12}}\right)\pi\left(1, U_{11}, l_{12}, 0, S_{12}\right)=\\ \mu_{s_{22}}\pi\left(1, U_{11}, l_{12}, 0, S_{22}\right)+\mu_{22}\pi\left(1, U_{11}, l_{12}, 1, U_{22}\right)+\\ p_{1\left(1\right)}\mu_{s_{11}}\pi\left(S_{11}, l_{12}, 0, S_{12}\right)+\mu_{11}\pi\left(2, U_{11}, l_{12}-1, 0, S_{12}\right)\numberthis\label{eqn6.18} \intertext{For $l_{11}={}1, l_{12} = {}0, l_{22} >{}0\colon$} \left(\lambda_{1}+\mu_{11}+\mu_{s_{12}}\right)\pi\left(1, U_{11}, 0, l_{22}, S_{12}\right) = {}p_{1\left(1\right)}\mu_{s_{11}}\pi\left(S_{11}, 0, l_{22}, S_{12}\right)\numberthis\label{eqn6.19} \intertext{For $l_{11}={}1, l_{12} > {}0, l_{22} >{}0\colon$} \left(\lambda_{1}+\mu_{11}+\mu_{s_{12}}\right)\pi\left(1, U_{11}, l_{12}, l_{22}, S_{12}\right) = {}\\ p_{1\left(1\right)}\mu_{s_{11}}\pi\left(S_{11}, l_{12}, l_{22}, S_{12}\right)+\mu_{11}\pi\left(2, U_{11}, l_{12}-1, l_{22}, S_{12}\right)\numberthis\label{eqn6.20}\\ \end{align*} Similarly, we can write balance equations for the remaining states of the form $\Big(S_{i1}, \,l_{12}, l_{22}, R_{i2}\,\Big)$ and $\Big(l_{i1}, U_{i1}, \,l_{12}, l_{22}, R_{i2}\,\Big)$. 
The normalization condition is written as$\colon$\\ \begin{align*} &\mathop{\sum\sum\sum}_{\substack{S_{i1}\in\{S_{11}, S_{21}\}\\ \left(l_{12}, l_{22}\right)\in\mathbb{Z}}}\pi\left(S_{i1}, l_{12}, l_{22}, S_{12}\right) +\mathop{\sum\sum\sum}_{\substack{U_{i1}\in\{U_{11}, U_{21}\}\\ \left(l_{i1}, l_{12}\right)\in\mathbb{Z}^{+}, l_{22}\in\mathbb{Z}}}\pi\left(l_{i1}, U_{i1}, l_{12}, l_{22}, U_{12}\right)\\ +&\mathop{\sum\sum\sum}_{\substack{S_{i1}\in\{S_{11}, S_{21}\}\\ \left(l_{12}, l_{22}\right)\in\mathbb{Z}}}\pi\left(S_{i1}, l_{12}, l_{22}, S_{22}\right) +\mathop{\sum\sum\sum}_{\substack{U_{i1}\in\{U_{11}, U_{21}\}\\ \left(l_{i1}, l_{22}\right)\in\mathbb{Z}^{+}, l_{12}\in\mathbb{Z}}}\pi\left(l_{i1}, U_{i1}, l_{12}, l_{22}, U_{22}\right) = {}1\\\numberthis\label{eqn6.21} \end{align*} Using Equations $\left(\ref{eqn6.13}\right)-\left(\ref{eqn6.21}\right)$, we obtain the estimates of all steady state probabilities for subsystem $SS\left(2\right)$. Using the steady state probabilities, we obtain estimates of the average throughput $TH_{i2}$, average buffer level $L_{i2}$, average waiting time $W_{i2}$, and system waiting time $W_{i}$, of product type $i$, for $i={}1, 2$ at station 2, these are given by Equations $\left(\ref{eqn6.22}\right)-\left(\ref{eqn6.25}\right)$.\\ \begin{align*} TH_{i2} = {}\mu_{i2}\Big[\mathop{\sum\sum\sum}_{\substack{\left(l_{12}, l_{22}\right)\in\mathbb{Z}^{+}\\ S_{i1}\in\{S_{11}, S_{21}\}}} \pi\left(S_{i1}, l_{12}, l_{22}, U_{i2}\right) &+\mathop{\sum\sum\sum}_{\substack{\left(l_{11}, l_{12}, l_{22}\right)\in\mathbb{Z}^{+}}} \pi\left(l_{11}, U_{11}, l_{12}, l_{22}, U_{i2}\right)\\ &+\mathop{\sum\sum\sum}_{\substack{\left(l_{21}, l_{12}, l_{22}\right)\in\mathbb{Z}^{+}}} \pi\left(l_{21}, U_{21}, l_{12}, l_{22}, U_{i2}\right)\Big]={}\lambda_{i}\numberthis\label{eqn6.22} \end{align*} \begin{align*} L_{i2} = {}\mathop{\sum\sum\sum\sum}_{\substack{r\in\{S_{12}, S_{22}, U_{12}, U_{22}\}\\ \left(l_{12}, l_{22}\right)\in\mathbb{Z}^{+}\\ S_{i1}\in\{S_{11}, S_{21}\} }} l_{i2}\cdot\pi\left(S_{i1}, l_{12}, l_{22}, r\right) &+\mathop{\sum\sum\sum\sum}_{\substack{r\in\{S_{12}, S_{22}, U_{12}, U_{22}\}\\ \left(l_{11}, l_{12}, l_{22}\right)\in\mathbb{Z}^{+}}} l_{i2}\cdot\pi\left(l_{11}, U_{11}, l_{12}, l_{22}, r\right)\\ &+\mathop{\sum\sum\sum\sum}_{\substack{r\in\{S_{12}, S_{22}, U_{12}, U_{22}\}\\ \left(l_{21}, l_{12}, l_{22}\right)\in\mathbb{Z}^{+}}} l_{i2}\cdot\pi\left(l_{21}, U_{21}, l_{12}, l_{22}, r\right)\numberthis\label{eqn6.23} \end{align*} \begin{align*} W_{i2}& = {}L_{i2}TH_{i2}^{-1}\numberthis\label{eqn6.24}\\ W_{i} & = {}W_{i1}+W_{i2}, & i ={} 1, 2. \numberthis\label{eqn6.25} \end{align*} \subsection{Determination of $p_{i\left(l_{i1}\right)}$}\label{DeterminationOfProbability} Next, we explain how we determine $p_{i\left(l_{i1}\right)}$. We know that $H_{ij}$ is the setup time for product $i$ at station $j$. Let $H_{j}$ be the sum of setup times for product 1 and 2 at station $j$, i.e., $H_{j} ={}H_{1j}+H_{2j}$. Further, let $V_{ij}$ denote the visit period of queue $i$, the time the server spends serving products at queue $i$ excluding setup time at station $j$. We define intervisit period $I_{ij}$ of queue $i$ at station $j$ as the time between a departure epoch of the server from queue $i$ and its subsequent arrival to this queue at station $j$. 
$I_{1j}$ and $I_{2j}$ can be written as
\begin{align*}
I_{1j} = {} H_{2j}+V_{2j}+H_{1j}\\
I_{2j} = {} H_{1j}+V_{1j}+H_{2j}\numberthis\label{eq:6.26}
\end{align*}
Next, we define the cycle length at station $j$, $C_{j}$, as the time between two successive arrivals of the server at a particular queue at station $j$. Then, the relationship between $C_{j}$, $I_{ij}$, and $V_{ij}$ can be written as Equation $\left(\ref{eq:6.27}\right)$, and is shown in Figure \ref{fig:mesh6.4}.
\begin{equation}\label{eq:6.27}
C_{j}=H_{1j}+V_{1j}+H_{2j}+V_{2j}
\end{equation}
\graphicspath {{Figures/}}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.45]{PossionArrivals}
\caption{Depiction of intervisit period $I_{21}$.}
\label{fig:mesh6.4}
\end{center}
\end{figure}
We know that $p_{i\left(l_{i1}\right)}$ is the probability that there are $l_{i1}$ type $i$ products at station 1 after the server completes the setup for queue $i$. Since the stations follow an exhaustive service policy, to calculate $p_{i\left(l_{i1}\right)}$ we need to determine the probability of a given number of Poisson arrivals at station 1 during the time interval when the server is not serving products of type $i$ at station 1, i.e., during the intervisit period of queue $i$. Note that this intervisit period is a random variable; we approximate its probability density function (pdf) using estimates of its first and second moments by the method of moments.\\
Let the first moment and the variance of the setup time for product $i$ at station $j$ be $\mu_{s_{ij}}^{-1}$ and $\sigma_{s_{ij}}^{2}$, respectively. Let $\displaystyle\mathop{\mathbb{E}}\left[H_{j}\right]$ be the mean total setup time for products 1 and 2 at station $j$. Then,\\
\begin{equation}\label{eq:6.28}
\displaystyle\mathop{\mathbb{E}}\left[H_{j}\right] = {}\mu_{s_{1j}}^{-1}+\mu_{s_{2j}}^{-1}
\end{equation}
Next, let the traffic intensity $\rho_{ij}$ at queue \emph{i} of station \emph{j} be defined as $\rho_{ij}={}\lambda_{i}/\mu_{ij}$, and the total traffic intensity at station \emph{j}, $\rho_{j}$, be defined as $\rho_{j}={}\sum_{i={}1}^{2} \rho_{ij}$. Note that this traffic intensity does not include the setup times. Hence, the effective load on the station is considerably higher. The mean cycle length at station $j$ is given by Equation $\left(\ref{eq:6.29}\right)$.\\
\begin{equation}\label{eq:6.29}
\displaystyle\mathop{\mathbb{E}}\left[C_{j}\right] = \frac{\displaystyle\mathop{\mathbb{E}}\left[H_{j}\right]}{1 - \rho_{j}}
\end{equation}
Since the server is working a fraction $\rho_{ij}$ of the time on queue $i$, the mean visit period of queue $i$ is given by
\begin{equation}\label{eq:6.30}
\displaystyle\mathop{\mathbb{E}}\left[V_{ij}\right]={}\rho_{ij}\displaystyle\mathop{\mathbb{E}}\left[C_{j}\right]
\end{equation}
Therefore, the mean intervisit period, $\displaystyle\mathop{\mathbb{E}}\left[I_{ij}\right]$, of queue $i$ at station $j$ can be written as\\
\begin{equation}\label{eq:6.31}
\displaystyle\mathop{\mathbb{E}}\left[I_{ij}\right] = \mathbb{E}\left[C_{j}\right] - \mathbb{E}\left[V_{ij}\right]
\end{equation}
The variance of the intervisit period, $\sigma_{I_{i1}}^{2}$, of queue $i$ at station 1 is given by Equation $\left(\ref{eq:6.32}\right)$.
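To make Equations $\left(\ref{eq:6.28}\right)$--$\left(\ref{eq:6.31}\right)$ concrete, the sketch below evaluates the mean setup, cycle, visit, and intervisit times for a set of placeholder rates, and then previews the two steps described next: matching a Gamma density to the intervisit period (its variance, Equation $\left(\ref{eq:6.32}\right)$, is given below; a placeholder number is used here) and mixing it with Poisson arrivals as in Equation $\left(\ref{eq:6.36}\right)$. None of the numerical values are taken from the experiments of Section \ref{NumericalResults6}.
\begin{verbatim}
import numpy as np
from scipy.stats import gamma, poisson
from scipy.integrate import quad

# Placeholder inputs for station 1: arrival, service, and setup rates.
lam  = [1.0, 1.0]     # lambda_i
mu   = [4.0, 4.0]     # mu_i1
mu_s = [1.0, 1.0]     # mu_{s_i1}; mean setup time = 1/mu_{s_i1}

E_H = sum(1.0 / m for m in mu_s)            # Eq. (6.28)
rho = [l / m for l, m in zip(lam, mu)]      # rho_i1 = lambda_i / mu_i1
E_C = E_H / (1.0 - sum(rho))                # Eq. (6.29)
E_V = [r * E_C for r in rho]                # Eq. (6.30)
E_I = [E_C - v for v in E_V]                # Eq. (6.31)

# Gamma fit to the intervisit period of queue 1 by matching its mean and
# variance; the variance would come from Eq. (6.32) below (placeholder here).
var_I1 = 4.0
alpha  = E_I[0] ** 2 / var_I1               # shape, from Eqs. (6.34)-(6.35)
beta   = var_I1 / E_I[0]                    # scale

# Eq. (6.36): probability of l type-1 arrivals during the intervisit period.
def p1(l):
    f = lambda t: poisson.pmf(l, lam[0] * t) * gamma.pdf(t, alpha, scale=beta)
    return quad(f, 0.0, np.inf)[0]

print([round(p1(l), 4) for l in range(4)])
\end{verbatim}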
Equation $\left(\ref{eq:6.32}\right)$ is based on the analysis by Eisenberg \cite{Eisenberg72}.\\
\begin{align*}
\sigma_{I_{i1}}^{2}=\sigma_{s_{i'1}}^{2}+\frac{\rho_{i'1}^{2}\left(\lambda_{i}T_{i1}^{2}C+\sigma_{s_{i'1}}^{2}\right) + \left(1-\rho_{i1}\right)^{2}\left(\lambda_{i'}T_{i'1}^{2}C+\sigma_{s_{i1}}^{2}\right)}{\left(1-\rho_{11}-\rho_{21}\right)\left(1-\rho_{11}-\rho_{21}+2\rho_{11}\rho_{21}\right)}\numberthis\label{eq:6.32}
\end{align*}
Next, we use the values of $\mathbb{E}\left[I_{i1}\right]$ and $\sigma_{I_{i1}}^{2}$ from Equations $\left(\ref{eq:6.31}\right)$ and $\left(\ref{eq:6.32}\right)$ to approximate the pdf of $I_{i1}$ by a Gamma distribution. We choose the Gamma distribution since the intervisit period $I_{i1}$ is the sum of possibly non-identical exponential random variables, i.e., the setup times of queues $i$ and $i'$ and the visit period of queue $i'$. Recall that for a random variable $Z$ having a Gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$, the pdf is given by Equation $\left(\ref{eq:6.33}\right)$; the mean $\mathbb{E}\left[Z\right]$ and the variance $Var\left[Z\right]$ are given by Equations $\left(\ref{eq:6.34}\right)$ and $\left(\ref{eq:6.35}\right)$, respectively.\\
\begin{align*}
f_{I_{i1}}\left(t\right)&={}\frac{1}{\Gamma\left(\alpha\right)\beta^\alpha}t^{\alpha-1}e^{-t/\beta}\numberthis\label{eq:6.33}\\
\mathbb{E}\left[Z\right]&={} \alpha\beta\numberthis\label{eq:6.34}\\
Var\left[Z\right] &={} \alpha\beta^{2}\numberthis\label{eq:6.35}
\end{align*}
Finally, using $f_{I_{i1}}\left(t\right)$, we determine $p_{i\left(l_{i1}\right)}$, i.e., the probability that there are $l_{i1}$ type $i$ products after the server completes the setup for queue $i$ at station 1. Let $\mathbb{N}_{i}\left(t\right)$ be the number of arrivals of product $i$ at station 1 in time $t$. Since the service policy is exhaustive at both stations, the number of type $i$ products left behind when the server finishes serving queue $i$ at a station is zero. Thus, the number of type $i$ products at the end of the setup for queue $i$ at station 1 is equal to the number of exogenous arrivals of type $i$ products at station 1 during the intervisit period of queue $i$. Let $l_{i1}$ be the number of type $i$ products that arrive at station 1 during the intervisit period $I_{i1}$. As the arrivals of exogenous products at station 1 are Poisson, we estimate $p_{i\left(l_{i1}\right)}$ using Equation $\left(\ref{eq:6.36}\right)$ given below.
\begin{equation}\label{eq:6.36}
p_{i\left(l_{i1}\right)}={}\Pr\left[\mathbb{N}_{i}\left(I_{i1}\right) = {}l_{i1}\right] = {}\int_{0}^{\infty}\Pr\left[\mathbb{N}_{i}\left(t\right) = {}l_{i1}\right] f_{I_{i1}}\left(t\right)dt
\end{equation}
\section{Numerical Results}\label{NumericalResults6}
In this section, we present the results of the numerical experiments performed using the decomposition approach described in Section \ref{Subsystem1} and Section \ref{Subsystem2}. To study the accuracy of our proposed decomposition approach, a simulation model was built using the Arena software (\href{https://www.arenasimulation.com/}{www.arenasimulation.com}). In the simulation model, the stations were modeled as `process' modules with the `seize delay release' action, and the products as `entities'. While products of a particular type are processed at a station, the products of the other type are held using the `hold' process.
At the same time, the `hold' process scans the queue length and releases the products of the other type when the queue length of the served product type becomes zero. A total of 10 replications were performed with a warm-up period of 50 and a replication length of 500; the replication length was set to 10 days. A total of 1 million entities were processed in this duration. The simulation ran for approximately 10 minutes for each of the experimental settings.\\
To further study how our proposed approach performs against simpler models, we also compared it with a simple decomposition approach that treats the system as two independent polling stations. We compare the mean waiting times obtained using the proposed decomposition approach with those obtained from the simulation model and from the simple decomposition under four different experimental settings. In the first set, we compare the results under station and product symmetry. In the second set, we compare the results under station asymmetry that arises due to differences in processing rates between stations, and in the third, we compare the results under product asymmetry that arises due to differences in processing rates between products. Finally, in the fourth set, we compare the results under both station and product asymmetry. We define Error $\left(\Delta_{W_{i}}\right)$ as $|\frac{W_{i_{S}} - W_{i_{D}}}{W_{i_{S}}}|$, where $W_{i_{S}}$ and $W_{i_{D}}$ are the mean waiting times for product $i$ obtained from the simulation and the decomposition approach, respectively. As expected, the throughput from the decomposition model matches the throughput from the simulation model, and the comparisons of $L_{ij}$ and $W_{ij}$ give similar insights. Therefore, we focus our attention only on insights related to $W_{ij}$ in the discussion below.
\subsection{Model Validation}
\textbf{Station and Product Symmetry}$\colon$ We set the arrival rate $\lambda_{i}$ to 1.00 for both product types at station 1 and the setup rate $\mu_{s_{ij}} \in \{1.00, 1.50, 2.00, 5.00\}$ for both products at both stations. We vary the service rates $\mu_{ij}$ between 2.86 and 4.00 so that the load at station $j$, $\rho_{j}$, varies from 0.50 to 0.70 in increments of 0.10. As mentioned in Section \ref{Subsystem2}, this load does not include the setup times. Hence, the effective load on the system is considerably higher; in fact, it is always 1, since the server is always either serving or setting up. We also set high values for the buffer sizes so that the loss in system throughput is less than 0.1\%. The results of this comparison are summarized in Table \ref{T:6.3}. Note that, as we analyze a symmetric system under this setting, $W_{1j} = {} W_{2j}$ for $j ={}1, 2$ and $W_{1} = {} W_{2}$. We do not compare the waiting times at station 1 in our experiments, as we use an exact approach to determine them.
{\renewcommand{\arraystretch}{1.35} \begin{table}[H] \centering \caption{Performance analysis of systems with product and station symmetry.}\label{T:6.3} \begin{tabular}{|C{1.0cm}|C{1.0cm}||C{1.2cm}|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.0cm}|} \hline \multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{s_{ij}}^{-1} = {}1/1.00, \text{high setup times}$}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 4.00 & 0.5 & 2.50 & 2.57 & 5.07 & 2.50 & 2.56 & 5.06 & -0.46 & -0.26\\ 3.33 & 0.6 & 3.00 & 2.99 & 5.99 & 3.00 & 3.02 & 6.02 & 1.03 & 0.56\\ 2.86 & 0.7 & 3.83 & 3.67 & 7.50 & 3.83 & 3.73 & 7.56 & 1.53 & 0.69\\ \hline \multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{s_{ij}}^{-1} = {}1/1.50, \text{high-medium setup times}$}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 1.50 & 0.5 & 1.83 & 1.87 & 3.70 & 1.83 & 1.89 & 3.72 & 1.32 & 0.67\\ 1.50 & 0.6 & 2.26 & 2.24 & 4.50 & 2.26 & 2.28 & 4.54 & 1.75 & 0.88\\ 1.50 & 0.7 & 2.94 & 2.82 & 5.76 & 2.94 & 2.88 & 5.82 & 2.03 & 1.01\\ \hline \multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{s_{ij}}^{-1} = {}1/2.00, \text{medium-low setup times}$}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 2.00 & 0.5 & 1.50 & 1.51 & 3.01 & 1.50 & 1.54 & 3.04 & 2.08 & 1.05\\ 2.00 & 0.6 & 1.88 & 1.84 & 3.72 & 1.88 & 1.89 & 3.77 & 2.54 & 1.27\\ 2.00 & 0.7 & 2.49 & 2.37 & 4.86 & 2.49 & 2.45 & 4.94 & 3.18 & 1.58\\ \hline \multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{s_{ij}}^{-1} = {}1/5.00, \text{low setup times}$}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 5.00 & 0.5 & 0.89 & 0.87 & 1.76 & 0.89 & 0.90 & 1.79 & 2.76 & 1.57\\ 5.00 & 0.6 & 1.20 & 1.14 & 2.34 & 1.20 & 1.20 & 2.40 & 4.94 & 2.37\\ 5.00 & 0.7 & 1.69 & 1.55 & 3.24 & 1.69 & 1.65 & 3.34 & 5.87 & 2.83\\ \hline \end{tabular} \end{table} \normalsize It can be noted that the error in waiting times estimate at station 2 is less than 6\% while the error in system's waiting time estimates is less than 3\% for all the tested values of traffic intensity for symmetric systems using our proposed method.\\ In the arena model simulation for all the setup settings, we see that when we vary the traffic intensity at the stations, the waiting times at station 2 which was higher than the waiting times at station 1 for lower traffic values becomes smaller for higher traffic values. This trend is captured by our proposed approach. 
Further, our approach is able to identify the bottleneck station in the product and station symmetry settings by capturing the interactions of a tandem polling system. Although for space reasons we do not report results from the simple decomposition in the paper, we would like to point out that $i)$ the simple decomposition approach is unable to capture this trend in waiting times, and $ii)$ the simple decomposition approach yields the same performance measure values for both stations, as it analyzes the two stations independently.\\
\textbf{Station Asymmetry Because of Different Processing Rates}$\colon$ In this experiment setting, we analyze the impact of station asymmetry by examining the effects of upstream and downstream bottlenecks. To study an upstream bottleneck, we vary the service rate $\mu_{i2}$ at station 2 from 2.86 to 4.00 while keeping the service rates $\mu_{i1}$ at station 1 constant at 2.50 for both product types. Under these settings, $\rho_{1} = 0.80$ and $\rho_{2}$ varies from 0.50 to 0.70 in increments of 0.10. Next, to study a downstream bottleneck, we vary the service rates $\mu_{i1}$ at station 1 between 2.86 and 4.00 for both product types while keeping the service rates $\mu_{i2}$ at station 2 constant at 2.50 for both product types. Under these settings, $\rho_{2} = 0.80$ and $\rho_{1}$ varies from 0.50 to 0.70 in increments of 0.10. The results of this analysis are summarized in Table \ref{T:6.4}. We set the arrival rate $\lambda_{i}$ to 1.00 for both product types at station 1 and the setup rate $\mu_{s_{ij}} \in \{1.00, 1.50, 2.00, 5.00\}$ for both products at both stations. Since we have only station asymmetry, $W_{1j} = {} W_{2j}$ for $j ={}1, 2$ and $W_{1} = {} W_{2}$.
\newpage
{\renewcommand{\arraystretch}{1.40}
\begin{table}[H]
\centering
\caption{Performance analysis of systems with station asymmetry.}\label{T:6.4}
\begin{tabular}{|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}|}
\hline
\multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{i1} = {}2.50, \rho_{1} = {}0.80, \mu_{s_{ij}}^{-1} = {}1/1.00$, high setup times, station 1 bottleneck}\\
\hline
\multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\
\hline
&&&&&&&&&\\[-1em]
$\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\
\hline
4.00 & 0.50 & 5.50 & 2.25 & 7.75 & 5.50 & 2.25 & 7.75 & 0.02 & 0.01\\
3.33 & 0.60 & 5.50 & 2.67 & 8.17 & 5.50 & 2.68 & 8.18 & 0.29 & 0.10\\
2.86 & 0.70 & 5.50 & 3.42 & 8.92 & 5.50 & 3.45 & 8.95 & 0.92 & 0.36\\
\hline
\multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{i2} = {}2.50, \rho_{2} = {}0.80, \mu_{s_{ij}}^{-1} = {}1/1.00$, high setup times, station 2 bottleneck}\\
\hline
\multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\
\hline
&&&&&&&&&\\[-1em]
$\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\
\hline
4.00 & 0.50 & 2.50 & 5.86 & 8.36 & 2.50 & 5.88 & 8.38 & 0.41 & 0.29\\
3.33 & 0.60 & 3.00 & 5.71 & 8.71 & 3.00 & 5.74 & 8.74 & 0.57 & 0.38\\
2.86 & 0.70 & 3.83 & 5.47 & 9.30 & 3.83 & 5.52 & 9.35 & 0.97 & 0.57\\
\hline
\hline
\multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{i1} = {}2.50, \rho_{1} = {}0.80,
\mu_{s_{ij}}^{-1} = {}1/1.50$, high-medium setup times, station 1 bottleneck}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 4.00 & 0.50 & 4.33 & 1.67 & 6.00 & 4.33 & 1.69 & 6.02 & 1.18 & 0.33\\ 3.33 & 0.60 & 4.33 & 2.05 & 6.38 & 4.33 & 2.06 & 6.39 & 0.49 & 0.16\\ 2.86 & 0.70 & 4.33 & 2.67 & 7.00 & 4.33 & 2.72 & 7.05 & 1.84 & 0.71\\ \hline \multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{i2} = {}2.50, \rho_{2} = {}0.80, \mu_{s_{ij}}^{-1} = {}1/1.50$, high-medium setup times, station 2 bottleneck}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 4.00 & 0.50 & 1.83 & 4.53 & 6.36 & 1.83 & 4.56 & 6.39 & 0.66 & 0.47\\ 3.33 & 0.60 & 2.25 & 4.41 & 6.66 & 2.25 & 4.47 & 6.72 & 1.34 & 0.89\\ 2.86 & 0.70 & 2.93 & 4.19 & 7.12 & 2.93 & 4.30 & 7.23 & 2.56 & 1.52\\ \hline \end{tabular} \end{table} \normalsize {\renewcommand{\arraystretch}{1.40} \begin{table}[H] \ContinuedFloat \centering \caption{Performance analysis of systems with station asymmetry (continued).}\label{T:6.4} \begin{tabular}{|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}|} \hline \multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{i1} = {}2.50, \rho_{1} = {}0.80, \mu_{s_{ij}}^{-1} = {}1/2.00$, medium-low setup times, station 1 bottleneck}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 4.00 & 0.50 & 3.75 & 1.36 & 5.11 & 3.75 & 1.40 & 5.15 & 3.07 & 0.83\\ 3.33 & 0.60 & 3.75 & 1.68 & 5.43 & 3.75 & 1.74 & 5.49 & 3.51 & 1.11\\ 2.86 & 0.70 & 3.75 & 2.19 & 5.94 & 3.75 & 2.32 & 6.07 & 5.52 & 2.11\\ \hline \multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{i2} = {}2.50, \rho_{2} = {}0.80, \mu_{s_{ij}}^{-1} = {}1/2.00$, medium-low setup times, station 2 bottleneck}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 4.00 & 0.50 & 1.50 & 3.80 & 5.30 & 1.50 & 3.88 & 5.38 & 2.14 & 1.54\\ 3.33 & 0.60 & 1.88 & 3.67 & 5.55 & 1.88 & 3.84 & 5.72 & 4.45 & 2.99\\ 2.86 & 0.70 & 2.50 & 3.47 & 5.97 & 2.50 & 3.73 & 6.23 & 6.89 & 4.13\\ \hline \hline \multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{i1} = {}2.50, \rho_{1} = {}0.80, \mu_{s_{ij}}^{-1} = {}1/5.00$, low setup times, station 1 bottleneck}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & 
$W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 4.00 & 0.50 & 2.70 & 0.82 & 3.52 & 2.70 & 0.85 & 3.55 & 2.98 & 0.71\\ 3.33 & 0.60 & 2.70 & 1.08 & 3.78 & 2.70 & 1.13 & 3.83 & 4.05 & 1.20\\ 2.86 & 0.70 & 2.70 & 1.51 & 4.21 & 2.70 & 1.60 & 4.30 & 5.50 & 2.05\\ \hline \multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{i2} = {}2.50, \rho_{2} = {}0.80, \mu_{s_{ij}}^{-1} = {}1/5.00$, low setup times, station 2 bottleneck}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&\\[-1em] $\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\ \hline 4.00 & 0.50 & 0.89 & 2.51 & 3.40 & 0.89 & 2.70 & 3.59 & 7.01 & 5.27\\ 3.33 & 0.60 & 1.20 & 2.46 & 3.66 & 1.20 & 2.68 & 3.88 & 8.14 & 5.62\\ 2.86 & 0.70 & 1.69 & 2.39 & 4.08 & 1.69 & 2.64 & 4.33 & 9.38 & 5.72\\ \hline \end{tabular} \end{table} \normalsize Table \ref{T:6.4} shows that the error in waiting times estimate using our proposed approach is less than 3\% for high and high-medium setup time settings, and is less than 10\% for medium-low and low setup time settings. We also observe that the error in estimation of waiting times is considerably low when we have bottleneck in upstream versus when the bottleneck is in downstream operations. One important thing to notice about the system behavior is that when system parameters such as arrival rate and setup times are kept constant, the system waiting times $W_{i}$ is higher when the downstream station is a bottleneck as compared to when the bottleneck is in upstream station.\\ The major drawback of the simple decomposition approach is its inability to distinguish between bottleneck stations. In the arena simulation model and our proposed approach, we observe that system waiting times $W_{i}$ is higher when the downstream station is a bottleneck as compared to when the bottleneck is in upstream station.\\ \textbf{Product Asymmetry Because of Different Processing Rates}$\colon$ In this experiment setting, we analyze the impact of product asymmetry. For this, we fix the service rates $\mu_{1j}$ of type 1 products at both the station and vary the service rates $\mu_{2j}$ of type 2 products such that $\mu_{1j} / \mu_{2j}$ varies between 0.40 to 0.80 in the units of 0.20. We do this for $\mu_{1j} = 2.50$. Note that in all cases, product 2 has faster service rate at both the stations. We list the results corresponding to $\mu_{s_{ij}} = {} \{1.00, 1.50, 2.00, 5.00\}$ in Table \ref{T:6.5}. 
{\renewcommand{\arraystretch}{1.40} \begin{table}[H] \footnotesize \centering \caption{Performance analysis of systems with product asymmetry.}\label{T:6.5} \resizebox{\columnwidth}{!} \begin{tabular}{|C{0.9cm}|C{0.9cm}||C{0.9cm}|C{0.9cm}|C{0.9cm}|C{0.9cm}||C{0.9cm}|C{0.9cm}|C{0.9cm}|C{0.9cm}||C{0.9cm}|C{0.9cm}|C{0.9cm}|C{0.9cm}|} \hline \multicolumn{14}{|c|}{$\lambda_{i} = {}1, \mu_{1j} ={} 2.50, \mu_{s_{ij}}^{-1} = {}1/1.00, \text{high setup times}$}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{4}{c||}{\textbf{Proposed Approach}} & \multicolumn{4}{c||}{\textbf{Simulation}} & \multicolumn{4}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&&&&&\\[-1em] $\mu_{i2}$ & $\rho_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $\Delta_{W_{12}}$ & $\Delta_{W_{22}}$ & $\Delta_{W_{1}}$ & $\Delta_{W_{2}}$\\ \hline 6.25 & 0.56 & 2.74 & 3.09 & 5.33 & 6.33 & 2.72 & 3.19 & 5.30 & 6.43 & -0.90 & 3.19 & -0.46 & 1.68\\ 4.17 & 0.64 & 3.11 & 3.39 & 6.15 & 7.05 & 3.13 & 3.53 & 6.18 & 7.18 & 0.85 & 3.84 & 0.38 & 1.86\\ 3.13 & 0.72 & 3.69 & 3.88 & 7.55 & 8.17 & 3.82 & 4.09 & 7.69 & 8.40 & 3.41 & 5.09 & 1.84 & 2.68\\ \hline \multicolumn{14}{|c|}{$\lambda_{i} = {}1, \mu_{1j} ={} 2.50, \mu_{s_{ij}}^{-1} = {}1/1.50, \text{high-medium setup times}$}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{4}{c||}{\textbf{Proposed Approach}} & \multicolumn{4}{c||}{\textbf{Simulation}} & \multicolumn{4}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&&&&&\\[-1em] $\mu_{i2}$ & $\rho_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $\Delta_{W_{12}}$ & $\Delta_{W_{22}}$ & $\Delta_{W_{1}}$ & $\Delta_{W_{2}}$\\ \hline 6.25 & 0.56 & 2.05 & 2.33 & 4.02 & 4.74 & 2.05 & 2.38 & 4.02 & 4.79 & 0.00 & 1.93 & -0.10 & 1.02\\ 4.17 & 0.64 & 2.31 & 2.58 & 4.64 & 5.34 & 2.37 & 2.66 & 4.71 & 5.43 & 2.49 & 3.01 & 1.46 & 1.66\\ 3.13 & 0.72 & 2.83 & 3.00 & 5.80 & 6.29 & 2.94 & 3.14 & 5.94 & 6.46 & 3.74 & 4.49 & 2.36 & 2.65\\ \hline \multicolumn{14}{|c|}{$\lambda_{i} = {}1, \mu_{1j} ={} 2.50, \mu_{s_{ij}}^{-1} = {}1/2.00, \text{medium-low setup times}$}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{4}{c||}{\textbf{Proposed Approach}} & \multicolumn{4}{c||}{\textbf{Simulation}} & \multicolumn{4}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&&&&&\\[-1em] $\mu_{i2}$ & $\rho_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $\Delta_{W_{12}}$ & $\Delta_{W_{22}}$ & $\Delta_{W_{1}}$ & $\Delta_{W_{2}}$\\ \hline 6.25 & 0.56 & 1.73 & 1.86 & 3.40 & 3.86 & 1.72 & 1.98 & 3.39 & 3.98 & -0.35 & 5.86 & -0.18 & 3.12\\ 4.17 & 0.64 & 1.93 & 2.19 & 3.91 & 4.51 & 2.00 & 2.34 & 3.98 & 4.66 & 3.40 & 6.37 & 1.71 & 3.22\\ 3.13 & 0.72 & 2.39 & 2.51 & 4.94 & 5.32 & 2.51 & 2.69 & 5.07 & 5.53 & 4.90 & 6.62 & 2.66 & 3.73\\ \hline \multicolumn{14}{|c|}{$\lambda_{i} = {}1, \mu_{1j} ={} 2.50, \mu_{s_{ij}}^{-1} = {}1/5.00, \text{low setup times}$}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{4}{c||}{\textbf{Proposed Approach}} & \multicolumn{4}{c||}{\textbf{Simulation}} & \multicolumn{4}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&&&&&\\[-1em] $\mu_{i2}$ & $\rho_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $\Delta_{W_{12}}$ & $\Delta_{W_{22}}$ & $\Delta_{W_{1}}$ & $\Delta_{W_{2}}$\\ \hline 6.25 & 0.56 & 1.11 & 1.12 & 2.23 & 2.37 & 1.12 & 1.23 & 2.24 & 2.48 & 0.89 & 8.97 & 0.47 & 4.40\\ 4.17 & 0.64 & 1.27 & 1.34 & 2.61 & 2.86 & 1.33 & 1.47 & 2.67 & 3.00 & 4.42 & 9.39 & 2.25 & 
4.75\\ 3.13 & 0.72 & 1.64 & 1.69 & 3.42 & 3.63 & 1.74 & 1.87 & 3.52 & 3.81 & 5.48 & 9.59 & 2.80 & 4.83\\ \hline \end{tabular}} \end{table} \normalsize
Table \ref{T:6.5} shows that the error in the waiting time estimates obtained using our proposed approach is less than 4\% for the high and high-medium setup time settings, and is less than 10\% for the medium-low and low setup time settings.\\
In Table \ref{T:6.5}, we observe that $W_{2}$ (for the product type having the faster service rate) is higher than $W_{1}$. A possible explanation is that, since the servers at both stations are faster at serving type 2 products, whenever they switch to serving type 1 products, the lower type 1 service rates keep them busy at those queues for a longer duration. As a consequence, the products of type 2 wait longer.\\
\textbf{Station Asymmetry Because of Different Setup Rates}$\colon$ In this experiment setting, we analyze the impact of setup times on system performance. We consider the case where the upstream station is a bottleneck in terms of setup times, and set $\mu_{s_{i1}}={}1.00$ and $\mu_{s_{i2}}={}5.00$. We also consider the case where the downstream station is a bottleneck in terms of setup times, and set $\mu_{s_{i1}}={}5.00$ and $\mu_{s_{i2}}={}1.00$. For both setup settings, we vary the service rates $\mu_{ij}$ between 2.50 and 4.00 so that $\rho_{j}$ varies from 0.50 to 0.80 in increments of 0.10. We set the arrival rate $\lambda_{i}$ to 1.00 for both product types at station 1. The results of this analysis are summarized in Table \ref{T:6.6}.\\
{\renewcommand{\arraystretch}{1.40}
\begin{table}[H]
\centering
\caption{Performance analysis of systems with setup variation across stations.}\label{T:6.6}
\begin{tabular}{|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.2cm}|C{1.2cm}||C{1.2cm}|C{1.0cm}|}
\hline
\multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{s_{i1}}^{-1} = {}1/1.00, \mu_{s_{i2}}^{-1} = {}1/5.00, \text{station 1 bottleneck}$}\\
\hline
\multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\
\hline
&&&&&&&&&\\[-1em]
$\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\
\hline
4.00 & 0.50 & 2.50 & 1.00 & 3.50 & 2.50 & 0.98 & 3.48 & -2.19 & -0.62 \\
3.33 & 0.60 & 3.00 & 1.30 & 4.30 & 3.00 & 1.27 & 4.27 & -2.12 & -0.63 \\
2.86 & 0.70 & 3.82 & 1.75 & 5.57 & 3.82 & 1.72 & 5.54 & -1.74 & -0.54 \\
2.50 & 0.80 & 5.52 & 2.59 & 8.11 & 5.52 & 2.63 & 8.15 & 1.52 & 0.49 \\
\hline
\hline
\multicolumn{10}{|c|}{$\lambda_{i} = {}1, \mu_{s_{i1}}^{-1} = {}1/5.00, \mu_{s_{i2}}^{-1} = {}1/1.00, \text{station 2 bottleneck}$}\\
\hline
\multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{3}{c||}{\textbf{Proposed Approach}} & \multicolumn{3}{c||}{\textbf{Simulation}} & \multicolumn{2}{c|}{\textbf{Error \%}}\\
\hline
&&&&&&&&&\\[-1em]
$\mu_{ij}$ & $\rho_{ij}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $W_{i1}$ & $W_{i2}$ & $W_{i}$ & $\Delta_{W_{i2}}$ & $\Delta_{W_{i}}$\\
\hline
4.00 & 0.50 & 0.90 & 2.43 & 3.33 & 0.90 & 2.51 & 3.41 & 3.19 & 2.35 \\
3.33 & 0.60 & 1.20 & 2.85 & 4.05 & 1.20 & 3.02 & 4.22 & 5.63 & 4.03 \\
2.86 & 0.70 & 1.69 & 3.50 & 5.19 & 1.69 & 3.79 & 5.48 & 7.65 & 5.29 \\
2.50 & 0.80 & 2.70 & 4.86 & 7.56 & 2.70 & 5.33 & 8.03 & 8.82 & 5.85 \\
\hline
\end{tabular}
\end{table}
\normalsize
Table \ref{T:6.6} shows that the error in the waiting time estimates
using our proposed approach is less than 3\% when we have bottleneck at upstream stations, and is less than 10\% when we have bottleneck at downstream stations. The error values and rates show similar trend when we had station asymmetry because of different processing rates in Table \ref{T:6.4} .\\ Note that when system parameters such as arrival rate and setup times are kept constant, the system waiting times $W_{i}$ is higher when the upstream station is bottleneck in terms of setup times as compared to when the downstream station is bottleneck, when the other systems parameters are kept constant. This is opposite to the results that we observed in Table \ref{T:6.4}, when the stations where bottleneck with respect to processing times.\\ \textbf{Product Asymmetry Because of Different Setup Rates}$\colon$ Last, we compare the system performance under the settings of product asymmetry in terms of setup times. For this, we consider two settings of service rates$\colon\mu_{ij}={}2.50$ and $\mu_{ij}={}4.00$. For each of the two settings, we fix the setup rates $\mu_{s_{1j}}$ of type 1 products at both the station and vary the setup rates $\mu_{s_{2j}}$ of type 2 products such that $\mu_{s_{1j}} / \mu_{s_{2j}}$ varies between 0.40 to 0.80 in the units of 0.20. Note that in all cases, product 2 has faster setup rate at both the stations. We list the results corresponding to $\mu_{ij} = {} 2.50$ and $\mu_{ij} = {} 4.00$ in Table \ref{T:6.7}.\\ {\renewcommand{\arraystretch}{1.40} \begin{table}[H] \footnotesize \centering \caption{Performance analysis of systems with setup variation across products.}\label{T:6.7} \resizebox{\columnwidth}{!} \begin{tabular}{|C{0.9cm}|C{0.9cm}||C{0.9cm}|C{0.9cm}|C{0.9cm}|C{0.9cm}||C{0.9cm}|C{0.9cm}|C{0.9cm}|C{0.9cm}||C{0.9cm}|C{0.9cm}|C{0.9cm}|C{0.9cm}|} \hline \multicolumn{14}{|c|}{$\lambda_{i} = {}1, \mu_{ij}^{-1} = {}1/2.50, \text{high service times}$}\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{4}{c||}{\textbf{Proposed Approach}} & \multicolumn{4}{c||}{\textbf{Simulation}} & \multicolumn{4}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&&&&&\\[-1em] $\mu_{s_{1j}}$ & $\mu_{s_{2j}}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $\Delta_{W_{12}}$ & $\Delta_{W_{22}}$ & $\Delta_{W_{1}}$ & $\Delta_{W_{2}}$\\ \hline 1.00 & 2.50 & 3.92 & 3.96 & 8.40 & 8.54 & 4.20 & 4.27 & 8.68 & 8.85 & 6.67 & 7.26 & 3.23 & 3.50 \\ 1.00 & 1.67 & 4.15 & 4.18 & 8.92 & 9.04 & 4.49 & 4.53 & 9.26 & 9.38 & 7.57 & 7.73 & 3.70 & 3.62 \\ 1.00 & 1.25 & 4.60 & 4.62 & 9.75 & 9.78 & 4.82 & 4.82 & 9.97 & 9.99 & 4.56 & 4.15 & 2.21 & 2.00 \\ \hline \multicolumn{14}{|c|}{$\lambda_{i} = {}1, \mu_{ij}^{-1} = {}1/4.00, \text{low service times}$ }\\ \hline \multicolumn{2}{|c||}{\textbf{Input}} & \multicolumn{4}{c||}{\textbf{Proposed Approach}} & \multicolumn{4}{c||}{\textbf{Simulation}} & \multicolumn{4}{c|}{\textbf{Error \%}}\\ \hline &&&&&&&&&&&&&\\[-1em] $\mu_{s_{1j}}$ & $\mu_{s_{2j}}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $W_{12}$ & $W_{22}$ & $W_{1}$ & $W_{2}$ & $\Delta_{W_{12}}$ & $\Delta_{W_{22}}$ & $\Delta_{W_{1}}$ & $\Delta_{W_{2}}$\\ \hline 1.00 & 2.50 & 1.97 & 2.06 & 3.87 & 4.08 & 1.95 & 2.08 & 3.85 & 4.10 & -1.17 & 0.83 & -0.52 & 0.49 \\ 1.00 & 1.67 & 2.15 & 2.22 & 4.24 & 4.39 & 2.13 & 2.22 & 4.22 & 4.39 & -0.81 & 0.11 & -0.47 & 0.00 \\ 1.00 & 1.25 & 2.36 & 2.39 & 4.65 & 4.71 & 2.34 & 2.38 & 4.63 & 4.70 & -0.79 & -0.40 & -0.43 & -0.21 \\ \hline \end{tabular}} \end{table} \normalsize Table \ref{T:6.7} shows that the error in waiting times 
estimates obtained using our proposed approach is less than 8\% for the high service time setting, and is less than 2\% for the low service time setting.\\
In Table \ref{T:6.7}, we observe that $W_{2}$ (for the product type having the faster setup rate) is higher than $W_{1}$. A possible explanation is that, since the servers at both stations are faster at performing setups for type 2 products, whenever they switch to set up for and serve type 1 products, the lower type 1 setup rates keep them busy at those queues for a longer duration. As a consequence, the products of type 2 wait longer. This observation is similar to the behavior of the system observed in Table \ref{T:6.5}.\\
Table \ref{T:6.8} summarizes the performance of the decomposition approach, showing the average errors, standard deviations, and quantiles of the error \% $\left(\Delta_{W_{i2}}\right)$ and the error \% $\left(\Delta_{W_{i}}\right)$. Overall, we find that the average error is around 4\% for $W_{i2}$ and around 2\% for $W_{i}$, while the errors for the majority of the cases are less than 6\%. We believe that these errors are in general satisfactory in view of the complexity of the system under consideration.\\
{\renewcommand{\arraystretch}{1.40}
\begin{table}[h!]
\centering
\caption{Summary of error analysis.}\label{T:6.8}
\begin{tabular}{|L{4.0cm}|C{4.0cm}|C{4.0cm}|}
\hline
&&\\[-1em]
\textbf{Statistics} & \textbf{Error \% $\left(\Delta_{W_{i2}}\right)$} & \textbf{Error \% $\left(\Delta_{W_{i}}\right)$}\\
\hline
Average error & 3.5 & 1.8 \\
SD error & 3.0 & 1.7 \\
$50^{th}$ quantile & 3.0 & 1.3 \\
$75^{th}$ quantile & 5.3 & 2.5 \\
\hline
\end{tabular}
\end{table}
\section{Conclusions}\label{Conclusions6}
In this paper, we develop a decomposition-based approach to analyze a tandem network of polling queues with two products and two stations in order to determine the system throughput, average buffer levels, and average waiting times. Under Markovian assumptions on arrival and service times, we obtain exact values of the performance measures at station 1 and use a partially-collapsible state-space approach to obtain reasonably accurate approximations of the performance measures at station 2. This approach allows us to analyze the system with better computational efficiency. Numerical studies are conducted to test the accuracy of the decomposition method. Overall, the average error is around 4\% for the waiting time estimates at station 2 and around 2\% for the system waiting time estimates, while the errors for the majority of the cases are less than 6\%.\\
We also investigate the effects of two different types of bottlenecks in the system related to product and station asymmetry, and the system performance differs in the two cases. In the setting with station asymmetry with respect to service rates, we notice that the system waiting time $W_{i}$ is higher when the downstream station is the bottleneck than when the upstream station is the bottleneck. However, this is not the case when there is station asymmetry with respect to setup times, where we observe the opposite behavior. Additionally, in both cases of product asymmetry, i.e., service rates and setup rates, we observed that $W_{2}$ (for the product type having the faster service or setup rate) is higher than $W_{1}$.
A simple decomposition approach that analyzes the two polling stations independently does not capture these interactions between the polling stations and gives inferior estimates of the performance measures.\\
The analysis in this paper can be extended to larger networks of polling queues with multiple products by using product aggregation. The analysis can also be used as a building block for networks with more than two stations. Exploring these generalizations is part of our ongoing research.
\bibliographystyle{nonumber}
\section{Introduction}
Bound states composed of two heavy quarks, such as the $(c\bar c),\, (b\bar b)$ and $(\bar b c)$ mesons and the $(ccq),\, (bbq)$ and $(bcq)$ baryons ($q$ denotes a light quark), are of interest because they can be produced with sizable rates at the current high energy hadron or $e^+e^-$ colliders. The Standard Model predictions of the production rates of these bound states can therefore be confronted with experimental data. The direct production of heavy mesons like heavy quarkonia and $(\bar b c)$ bound states can provide very interesting tests for perturbative QCD. The production of $J/\psi$ and $\psi'$ \cite{CDF} at the Tevatron has already raised a lot of theoretical interest in explaining the excess of the experimental data above the lowest order perturbative QCD calculation \cite{stirling}, especially at the large transverse momentum $(p_T)$ region. The ideas of heavy quark and gluon fragmentation \cite{gswave,charm,bcfrags,gpwave,qpwave,bcfrags_others} have been successfully applied to explain the experimental data of the prompt $J/\psi$ production from CDF within a factor of five \cite{BDFM,CG,RS,CWT}. Various attempts \cite{close,chowise,psipi,psipi2} using the same fragmentation ideas have also been made to resolve the $\psi^\prime$ surplus problem observed at CDF. Recently, the preliminary CDF results \cite{papadimi} also showed that the production rates of the $1S$, $2S$, and $3S$ $\Upsilon$ states are in excess of the leading order calculation. While the $\Upsilon(1S)$ and $\Upsilon(2S)$ results can be partly explained by including the fragmentation contribution, the $\Upsilon(3S)$ result showed an excess of about an order of magnitude over the QCD prediction even with the fragmentation contribution included \cite{papadimi}. One subtlety is that the relevant $p_T$ for the fragmentation contribution to dominate should be larger in the bottomonium system than in the charmonium system, such that fragmentation is only valid for $p_T \buildrel > \over {\sim} (1\mbox{--}2)\, m_b$ in the former case. Besides, one has to worry about the very small $p_T$ region because, unlike the charmonium system, the experimental triggering conditions on the muon pair coming from the bottomonium leptonic decay allow experimentalists to measure the transverse momentum of the bottomonium all the way down to about 0--1 GeV \cite{papadimi}. In order to fully understand the $p_T$ spectrum of bottomonium production, different production mechanisms have to be brought into the picture to explain the production rates in different $p_T$ regions.
The $(\bar b c)$ meson system, which is {\it intermediate} between the $J/\psi$ and $\Upsilon$ families, is also an interesting physical system to study. The mass spectrum of the $(\bar b c)$ mesons can be predicted reliably from quarkonium potential models \cite{eitqui,baganetal,kiselevetal} without introducing any new parameters, and their decay constants can be computed using QCD spectrum sum rules \cite{baganetal,kiselevetal}. We will adopt the Particle Data Group \cite{PDG} conventions, denoting the $1S$ pseudoscalar ($^1S_0$) and vector meson ($^3S_1$) $(\bar b c)$ states by $B_c$ and $B_c^*$, respectively. Higher radially and orbitally excited states are labeled by the standard spectroscopic notation: $n\,^{2S+1}L_J$, where the integer $n$ is the principal quantum number, and $L$, $S$, and $J=L+S$ are respectively the orbital angular momentum, total spin, and total angular momentum of the bound state.
In the $LS$ coupling scheme, for each principal quantum number $n$, the spin-singlet and the spin-triplet S-wave ($L=0$) states are denoted by $^1S_0$ and $^3S_1$, respectively. Those for the P-wave ($L=1$) and D-wave ($L=2$) states are denoted by $^1P_1$, $^3P_J$ ($J=0,1,2$), and $^1D_2$, $^3D_J$ ($J=1,2,3$), respectively. According to the results of the potential model calculation \cite{eitqui}, the first two sets ($n=1$ and $n=2$) of S-wave states, the first ($n=1$) and probably the entire second set ($n=2$) of P-wave states, and the first set ($n=1$) of D-wave states lie below the $BD$ flavor threshold. Since QCD interactions are diagonal in flavors, the annihilation channel of excited $(\bar b c)$ mesons can only occur through weak gauge boson ($W$) exchange and is therefore suppressed relative to the electromagnetic and hadronic transitions to other lower lying states. The excited states below the $BD$ threshold will cascade down into the ground state $B_c$ via emission of photons and/or pions, while the other states above the $BD$ threshold will decay rapidly into a pair of $B$ and $D$ mesons. Inclusive production of the $B_c$ meson therefore includes the production of the $n=1$ and $n=2$ S-wave and P-wave states, and the $n=1$ D-wave states. The production of the S-wave ($^1S_0$ and $^3S_1$) states was first computed exactly to leading order in Ref.~\cite{bc_ee} at $e^+e^-$ machines, in particular at the $Z$ resonance. Later, it was realized \cite{bcfrags,bcfrags_others} that the complicated formulas in these complete calculations can be simplified by a factorization approach. The dominant contribution in the leading order calculation can be factorized into a short-distance piece, which describes the partonic process of the decay of $Z$ into a high energy $b \bar b$ pair, and a fragmentation piece describing how the $\bar b$ antiquark splits into the two S-wave states. The corresponding fragmentation functions $D_{\bar b \to B_c}(z)$ and $D_{\bar b \to B_c^*}(z)$, which are independent of the short-distance piece, were shown to be calculable by perturbative QCD at the heavy quark mass scale \cite{bcfrags}. Recently, the production of the S-wave states has also been computed at hadronic colliders like the Tevatron and the Large Hadron Collider (LHC), both by a complete ${\cal O}(\alpha_s^4)$ calculation \cite{changchen,slab,bere,mase,klr} and by using the simpler fragmentation approach \cite{cheung,induceglue}. We note that, unlike $J/\psi$ production in hadronic collisions, in which the major contributions come from (1) $g\to \chi_{cJ}$ fragmentation followed by the decay $\chi_{cJ} \to J/\psi + \gamma$ \cite{BDFM,CG,RS,CWT}, and (2) gluon fragmentation into a color-octet $(c \bar c)$ $^3S_1$ state which subsequently evolves nonperturbatively into $J/\psi$ \cite{psipi,psipi2}, the fragmentation diagrams for producing $(\bar b c)$ mesons form a subset of the whole set of ${\cal O}(\alpha_s^4)$ diagrams, which are the {\it leading-order} diagrams for producing $(\bar b c)$ mesons in hadronic collisions. It is therefore not clear whether the production of $(\bar b c)$ mesons in hadronic collisions is dominated by parton fragmentation. However, detailed calculations by Chang {\it et al.} \cite{changchen}, Slabospitsky \cite{slab}, and Kolodziej {\it et al.} \cite{klr} indicated that the fragmentation approach is valid for S-wave production in the large transverse momentum region. We will discuss this further in the closing section.
In this paper, we study the hadronic production of $(\bar bc)$ mesons. We compute the production rates of the S-wave and P-wave $(\bar b c)$ mesons at the Tevatron using the fragmentation approach. Intuitively, the dominant production mechanism of the $(\bar b c)$ mesons in the large transverse momentum region must arise from the direct fragmentation of the heavy $\bar b$ antiquark. The relevant question is whether experiments can probe the transverse momentum region where fragmentation dominates. Unfortunately, answering this question also requires a complete ${\cal O}(\alpha_s^4)$ calculation for the production of the P-wave states, which is not available at the moment. Here we assume that the fragmentation approach also works for the production of the P-wave $(\bar b c)$ mesons in the transverse momentum range that is being probed experimentally at the Tevatron. In this work, we do not include the contributions from the D-wave states because the corresponding fragmentation functions are very small \cite{dwave}. Although the D-wave states are of great interest in their own right, they contribute only about 2\% \cite{dwave} to the inclusive production of $B_c$. In the near future, the production of $(\bar bc)$ mesons at the Tevatron, like that of other heavy quarkonia, may therefore provide another interesting test for perturbative QCD. Although we will only show our results for the positively charged states $(\bar b c)$, all the results presented in this work also hold for the negatively charged states $(b \bar c)$. The organization of this paper is as follows. In the next section, we discuss in detail the general procedures to calculate the production cross sections using the fragmentation approach. In Sec. III we present the transverse momentum spectra and the integrated cross sections for the production of the S-wave and P-wave $(\bar b c)$ states, as well as the inclusive production rate for the $B_c$ meson. Discussion and conclusions are given in Sec. IV. For completeness, we also collect all the S-wave and P-wave fragmentation functions at the heavy-quark mass scale in the Appendix. Before leaving this preamble, we note that a preliminary result from CDF has provided a hint of the existence of the $B_c$ by looking at the production rate of $J/\psi+\pi$ in the mass bin of 6.1--6.4 GeV \cite{private}.
\section{Inclusive production cross sections in the fragmentation approach}
Theoretical calculations of production cross sections in high energy hadronic collisions are based on the idea of factorization. Factorization divides an inclusive or exclusive hadronic production process into short-distance pieces and long-distance pieces. The short-distance pieces are perturbatively calculable to any desired accuracy in QCD, while the long-distance pieces are in general not calculable within perturbation theory but can be parameterized as phenomenological functions, which can be determined by experiments. The factorization used here for the production of $(\bar b c)$ mesons divides the process into the production of a high energy parton (a $\bar b$ antiquark or a gluon) and the fragmentation of this parton into various $(\bar b c)$ states. The novel feature in our approach, which is due to a recent theoretical development \cite{bcfrags,qpwave}, is that the relevant fragmentation functions at the heavy quark mass scale can be calculated in perturbative QCD to any desired accuracy.
This is easily understood from the fact that the fragmentation of a $\bar b$ antiquark into a $(\bar b c)$ meson involves the creation of a $c \bar c$ pair out of the vacuum. The natural scale for this particular hadronization is of order of the mass of the quark pair being created. In our case, this scale is of order $m_c$, which is considerably larger than $\Lambda_{\rm QCD}$. One can therefore calculate reliably the fragmentation function as an series expansion in the strong coupling constant $\alpha_s$ using perturbative QCD. The production of high energy partons also involves the factorization into the parton distribution functions inside the hadrons and the parton-parton hard scattering. Let $H$ denotes any $(\bar b c)$ meson states. The differential cross section $d\sigma/dp_T$ versus the transverse momentum $p_T$ of $H$ is given by \begin{eqnarray} \frac{d \sigma}{d p_T}(p\bar p \to H(p_T) X) & = & \sum_{ij} \int dx_1 dx_2 dz f_{i/p}(x_1,\mu) f_{j/\bar p}(x_2,\mu) \left [ \frac{d \hat \sigma}{dp_T} (ij \to \bar b(p_T/z)X,\,\mu) \nonumber \right. \\ && \left. \times D_{\bar b \to H} (z,\mu) + \frac{d\hat \sigma}{dp_T} (ij\to g(p_T/z)X,\mu)\; D_{g\to H} (z,\mu) \right ] \; . \label{*} \end{eqnarray} The physical interpretation is as follows: a heavy $\bar b$ antiquark or a gluon is produced in a hard process with a transverse momentum $p_T/z$ and then it fragments into $H$ carrying a longitudinal momentum fraction $z$. We assume that $H$ is moving in the same direction as the fragmenting parton. In the above equation, $f_{i/p(\bar p)}(x,\mu)$'s are the parton distribution functions, $d \hat \sigma$'s represent the subprocess cross sections, and $D_{i\to H}(z,\mu)$'s are the parton fragmentation functions at the scale $\mu$. For production of the $\bar b$ antiquark, we include the subprocesses $gg\to b\bar b$, $g \bar b \to g\bar b$, and $q \bar q \to b\bar b$; while for the production of the gluon $g$, we include the subprocesses $gg\to gg$, $q\bar q\to gg$, and $g q(\bar q)\to g q(\bar q)$. In Eq.~(\ref{*}), the factorization scale $\mu$ occurs in the parton distribution functions, the subprocess cross sections, and the fragmentation functions. In general, we can choose three different scales for these three entities. For simplicity and ease of estimating the uncertainties due to changes in scale, we choose a common scale $\mu$ for all three of them. The physical production rates should be independent of choices of the scale $\mu$, because $\mu$ is just an artificial entity introduced to factorize the whole process into different parts in the renormalization procedure. However, this independence of scale can only be achieved if both the production of the high energy partons and the fragmentation functions are calculated to all orders in $\alpha_s$. So far, only the next-to-leading order $\hat \sigma$'s and the leading order fragmentation functions are available, so the production cross sections do depend on the choice of $\mu$ to a certain degree. We will estimate the dependence on $\mu$ by varying the scale $\mu=(0.5-2)\mu_R$, where $\mu_R$ is our primary choice of scale \begin{equation} \label{scale} \mu_R = \sqrt{p_T^2({\rm parton}) + m^2_b} \;. \end{equation} This choice of scale, which is of order $p_T({\rm parton})$, avoids the large logarithms in the short-distance part $\hat \sigma$'s. However, we have to sum up the logarithms of order $\mu_R/m_b$ in the fragmentation functions. 
But this can be implemented by evolving the Altarelli-Parisi equations for the fragmentation functions. \subsection{Evolution of Fragmentation Functions} The Altarelli-Parisi evolution equations for the fragmentation functions are \begin{equation} \label{Db} \mu \frac{\partial}{\partial \mu} D_{\bar b\to H}(z,\mu) = \int_z^1 \frac{dy}{y} P_{\bar b\to \bar b}(z/y,\mu)\; D_{\bar b \to H}(y,\mu) + \int_z^1 \frac{dy}{y} P_{\bar b\to g}(z/y,\mu)\; D_{g \to H}(y,\mu) \,, \end{equation} \begin{equation} \label{Dg} \mu \frac{\partial}{\partial \mu} D_{g\to H}(z,\mu) = \int_z^1 \frac{dy}{y} P_{g \to \bar b}(z/y,\mu)\; D_{\bar b \to H}(y,\mu) + \int_z^1 \frac{dy}{y} P_{g \to g}(z/y,\mu)\; D_{g \to H}(y,\mu) \,, \end{equation} where $H$ denotes any $(\bar b c)$ states, and $P_{i\to j}$ are the usual Altarelli-Parisi splitting functions. The leading order expressions for $P_{i\to j}$ can be found in the Appendix. The boundary conditions for solving the above Altarelli-Parisi equations are the fragmentation functions $D_{\bar b \to H}(z,\mu_0)$ and $D_{g\to H}(z,\mu_0)$ that we can calculate by perturbative QCD at the initial scale $\mu_0$, which is of the order of the $b$-quark mass. At present, all the S-wave \cite{bcfrags} and P-wave \cite{qpwave} fragmentation functions for $\bar b\to (\bar b c)$ have been calculated to leading order in $\alpha_s$. They are all collected in the Appendix. The initial scale $\mu_0$ for $D_{\bar b \to H}(z,\mu_0)$ is chosen to be $\mu_0=m_b + 2m_c$, which is the minimum virtuality of the fragmenting $\bar b$ antiquark \cite{bcfrags}. On the other hand, since the initial gluon fragmentation function $D_{g\to H}(z,\mu_0)$ is suppressed by one extra power of $\alpha_s$ relative to $D_{\bar b\to H}(z,\mu_0)$, we simply choose the initial gluon fragmentation function to be $D_{g\to H}(z,\mu_0)=0$ for $\mu_0 \le 2(m_b+m_c)$ --- the minimum virtuality of the fragmenting gluon \cite{induceglue}. We can also examine the relative importance of these fragmentation functions. The initial $D_{\bar b\to H}(z,\mu_0)$ is of order $\alpha_s^2$, whereas the initial $D_{g\to H}(z,\mu_0)$ is of order $\alpha_s^3$ and has been set to be zero for $\mu_0 \leq 2(m_b + m_c)$ as discussed above. But when the scale is evolved up to a higher scale $\mu$, $D_{\bar b\to H}(z,\mu)$ is still of order $\alpha_s^2$, while the induced $D_{g\to H}(z,\mu)$ is of order $\alpha_s^3 \log(\mu/\mu_0)$. At a sufficiently large scale $\mu$ the logarithmic enhancement can offset the extra suppression factor of $\alpha_s$. Thus the Altarelli-Parisi induced gluon fragmentation functions can be as important as the $\bar b$ antiquark fragmentation, even though the initial gluon fragmentation functions are suppressed. While these Altarelli-Parisi induced gluon fragmentation functions play only a moderate role at the Tevatron, they will play a more significant role at the LHC \cite{induceglue}. To obtain the fragmentation functions at an arbitrary scale greater than $\mu_0$, we numerically integrate the Altarelli-Parisi evolution equations (\ref{Db})--(\ref{Dg}) with the boundary conditions described in the above paragraph. Since the initial light-quark fragmentation functions $D_{q \to H}(z,\mu_0)$ are of order $\alpha_s^4$, one can set them to be zero as well for $\mu_0 \leq 2(m_b + m_c)$. One may ask if the light-quark fragmentation functions can be induced in the same manner as in the gluon case by Altarelli-Parisi evolution. 
Since the initial light-quark fragmentation functions $D_{q \to H}(z,\mu_0)$ are of order $\alpha_s^4$, one can set them to zero as well for $\mu_0 \leq 2(m_b + m_c)$. One may ask whether the light-quark fragmentation functions can be induced by Altarelli-Parisi evolution in the same manner as in the gluon case. Since both the Altarelli-Parisi splitting functions $P_{q\to q}$ and $P_{q\to g}$ are total plus-functions, the induced light-quark fragmentation functions $D_{q \to H}(z)$ can only be total plus-functions or vanish identically. We do not anticipate that these induced light-quark fragmentation functions, if nontrivial, will play any significant role in our analysis. \section{Numerical results} Leading order QCD formulas are employed for the parton-level scattering cross sections, and CTEQ(2M) \cite{cteq} is used for the initial parton distributions. The inputs to the initial fragmentation functions are the heavy quark masses $m_b$ and $m_c$, and the nonperturbative parameters associated with the wavefunctions of the bound states. For the S-wave states there is only one nonperturbative parameter, which is the radial wavefunction $R_{nS}(0)$ at the origin. However, for the P-wave fragmentation functions we have two nonperturbative parameters $H_1$ and $H_8^\prime$ associated with the color-singlet and the color-octet mechanisms, respectively \cite{bbl}. Two of the P-wave states ($^1P_1$ and $^3P_1$) are mixed to form the two physical states, denoted by $|1^+ \rangle$ and $|1^{+\prime}\rangle$. Further details of the mixings can be found in Refs.~\cite{qpwave,eitqui}. The above input parameters for the fragmentation functions are summarized in Table~\ref{table1}. As mentioned in the previous section, we have set the scale $\mu$ in the parton distribution functions, subprocess cross sections, and the fragmentation functions to be the same. We will later vary $\mu$ between $0.5 \mu_R$ and $2\mu_R$, where $\mu_R$ is given in Eq.~(\ref{scale}), to study the dependence on the choice of scale. For the strong coupling constant $\alpha_s(\mu)$ entering the subprocess cross sections, we employ the following simple expression \begin{equation} \alpha_s (\mu) = \frac{\alpha_s(M_Z)}{1+ \;\frac{33-2n_f}{6\pi}\; \alpha_s(M_Z) \ln(\frac{\mu}{M_Z}) }\;, \end{equation} where $n_f$ is the number of active flavors at the scale $\mu$ and $\alpha_s(M_Z)=0.118$. In order to simulate the detector coverage at the Tevatron, we impose the following acceptance cuts on the transverse momentum and rapidity of the $(\bar b c)$ state $H$: \begin{equation} p_T(H) > 6\;{\rm GeV} \qquad {\rm and} \qquad |y(H)| < 1 \;. \end{equation} The numerical results of the $p_T$ spectra for the $(\bar b c)$ state $H$ with various spin-orbital quantum numbers are shown in Fig.~\ref{fig1} and Fig.~\ref{fig2} for the cases of principal quantum number $n=1$ and $n=2$, respectively. The integrated cross sections versus $p^{\rm min}_{T}(H)$ are also shown in Figs.~\ref{fig3} and \ref{fig4} for the cases of $n=1$ and $n=2$, respectively. Now we can predict the inclusive production rate of the $B_c$ meson. As the annihilation channel is suppressed relative to the electromagnetic/hadronic transitions, all the $(\bar b c)$ excited states below the $BD$ threshold will decay eventually into the ground state $B_c$ via emission of photons or pions. Since the energies of these emitted photons and/or pions are limited by the small mass differences between the initial and final $(\bar b c)$ states, the transverse momenta of the $(\bar bc)$ mesons will not be altered appreciably during the cascades. Therefore, we can simply add up the $p_T$ spectra (Fig.~\ref{fig1} and Fig.~\ref{fig2}) of all the states to represent the $p_T$ spectrum of the inclusive $B_c$ production.
Similarly, we can add up the integrated cross sections (Fig.~\ref{fig3} and Fig.~\ref{fig4}) to represent the integrated cross section for the inclusive $B_c$ production. Thus, we can obtain the inclusive production rate of $B_c$ as a function of $p_T^{\rm min}(B_c)$. Table~\ref{table2} gives the inclusive cross sections for the $B_c$ meson at the Tevatron as a function of $p_T^{\rm min}(B_c)$, including all the contributions from the $n=1$ and $n=2$ S-wave and P-wave states. These cross sections should almost represent the inclusive production of $B_c$ by fragmentation, because the contributions from the D-wave states are expected to be minuscule. The results for $\mu=\mu_R/2,\,\mu_R$, and $2\mu_R$ are also shown. The variation of the integrated cross sections with the scale $\mu$ is always within a factor of two, and only about 20\% for $p_T>10$~GeV. We will discuss the scale dependence further in the next section. At the end of Run Ib at the Tevatron, the total accumulated luminosities can be up to 100--150 pb$^{-1}$ or more. With $p_T>6$~GeV, there are about $5 \times 10^5$ $B_c^+$ mesons. The lifetime of the $B_c$ meson has been estimated to be of order 1--2 picoseconds \cite{eitqui}, which is long enough to leave a displaced vertex in a silicon vertex detector. Besides, $B_c$ decays into $J/\psi+X$ very often, where $X$ can be a $\pi^+$, $\rho^+$, or $\ell^+ \nu_l$, and $J/\psi$ can be detected easily through its leptonic decay modes. The inclusive branching ratio of $B_c\to J/\psi +X$ is about 10\% \cite{mangano}. When $X$ is $e^+\nu_e$ or $\mu^+\nu_\mu$, we will obtain the striking signature of three charged leptons coming from a common secondary vertex. The combined branching ratio of $B_c\to J/\psi \ell^+\nu_\ell \to \ell'^+ \ell'^- \ell^+ \nu_\ell\; (\ell,\ell'=e,\mu)$ is about 0.2\%. This implies that there will be of order $10^3$ such distinct events for 100 pb$^{-1}$ luminosity at the Tevatron. Even taking into account the imperfect detection efficiencies, there should be enough events for confirmation. However, this mode does not afford the full reconstruction of the $B_c$. If $X$ is a hadronic state, {\it e.g.}, pions, the events can be fully reconstructed and the $B_c$ meson mass can be measured. The process $B_c\to J/\psi+ \pi^+ \to \ell^+\ell^- \pi^+$ is likely to be the discovery mode for $B_c$. Its combined branching ratio is about 0.03\%, which implies about 300 such distinct events at the Tevatron with a luminosity of 100 pb$^{-1}$ (the simple counting behind these event-rate estimates is sketched below). After the next fixed target runs at the Tevatron, the Main Injector will be installed in 1996--1997 according to the present plan \cite{talk}. The Main Injector will give a significant boost in the luminosity while the center-of-mass energy stays the same. The upgraded luminosity is estimated to be about ten times larger than its present value. This enables Run II to accumulate a total luminosity of 1--2 fb$^{-1}$, which implies that about $10^7 - 10^8$ $B_c$ mesons will be produced. With the Main Injector installed, it might be possible to produce the D-wave $(\bar b c)$ states with sizable rates. The Tevatron will continue running until the next generation of hadronic colliders, {\it e.g.}, the LHC. The present design of the LHC is at a center-of-mass energy of 14 TeV and the yearly luminosity is of order 100 fb$^{-1}$. In Table~\ref{table3}, we show the inclusive cross sections for the $B_c$ meson at the LHC as a function of $p_T^{\rm min}(B_c)$ including the contributions from $n=1$ and $n=2$ S-wave and P-wave states.
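The counting behind the Tevatron event-rate estimates quoted above is elementary, and the same arithmetic applies to the LHC yields discussed next. The sketch below is ours and assumes, in particular, that the quoted $5\times 10^5$ mesons refer to $B_c^+$ only, so that both charge states are counted before applying the combined branching ratios.
\begin{verbatim}
# Event-rate counting for ~100 pb^-1 at the Tevatron (illustrative only).
n_bc_plus    = 5e5             # B_c^+ with p_T > 6 GeV, as quoted in the text
n_bc         = 2 * n_bc_plus   # assume B_c^- is produced at the same rate
br_trilepton = 2e-3            # B_c -> J/psi l nu -> three charged leptons
br_jpsi_pi   = 3e-4            # B_c -> J/psi pi+ -> l+ l- pi+

print("three-lepton events:", round(n_bc * br_trilepton))   # ~2000, i.e. of order 10^3
print("J/psi pi+ events:   ", round(n_bc * br_jpsi_pi))     # ~300
\end{verbatim}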
With the assumed 100 fb$^{-1}$ luminosity there are about $3\times 10^9$ $B_c$ mesons with $p_T> 10$~GeV and $|y(B_c)|<2.5$. With such a high luminosity at the LHC, one expects sizable numbers of the various D-wave $(\bar b c)$ states to be produced as well. The LHC will then be a copious source of $(\bar b c)$ mesons, so that their properties, {\it e.g.}, spectroscopy and decays, can be thoroughly studied. In addition, mixing and CP violation studies become possible. For example, one can use $B_c$ to tag the flavor of $B_s$ in the decay $B_c^+ \to B_s^0 \ell^+ \nu_l$ for the studies of $B_s^0 - \bar{B}_s^0$ mixing. Also, CP violation in the $B_c$ system can be studied by looking at the difference in the partial decay widths of $B_c^+ \to X$ and $B_c^- \to \bar X$. \section{Discussions and conclusions} First, we briefly discuss the various sources of uncertainties in our calculation. One uncertainty comes from the use of the naive Altarelli-Parisi evolution equations. As pointed out in Refs.~\cite{BDFM,BCFY}, the naive Altarelli-Parisi equations in Eqs.~(\ref{Db})--(\ref{Dg}) do not respect the phase space constraints. Inhomogeneous evolution equations were then advocated to remedy these problems. The major effect is to correct the unphysical blow-up of the evolved gluon fragmentation functions when $z$ gets too close to 0. The corrected gluon fragmentation functions, instead of having the unphysical blow-up at $z=0$, turn to zero smoothly below a certain threshold value of $z$. Despite the dramatic changes of the evolved gluon fragmentation functions for small $z$ values, this effect will not show up easily in our calculation because the small $z$ region is very likely excluded by the transverse momentum cut imposed on the $(\bar bc)$ mesons. Therefore, in this paper we keep using the homogeneous Altarelli-Parisi equations in Eqs.~(\ref{Db})--(\ref{Dg}) to evolve our fragmentation functions. A second source of uncertainty is due to the choice of the factorization scale $\mu$. We show in Fig.~\ref{fig5} the dependence of the differential cross sections on the factorization scale $\mu$ by plotting the results for the various choices of $\mu =\mu_R/2, \mu_R$, and $2\mu_R$. For clarity we only show the curves for the $1\,^1S_0$ and $1\,^3P_0$ states in Fig.~\ref{fig5}. The behaviors for other states are similar. Note that in the $\mu=\mu_R/2,\,\mu_R$, and $2\mu_R$ curves, the running scale used in the strong coupling constant $\alpha_s$ (which enters the $d \hat \sigma$'s) and in the parton distribution functions $f_i(x)$ is equal to $\mu$, while the running scale used in the fragmentation functions is set to ${\rm max}(\mu,\,\mu_0)$, where $\mu_0$ is the prescribed initial scale for the fragmentation functions. Figure~\ref{fig5} shows that the change in the factorization scale $\mu$ gives different results for the differential cross sections. The $\mu=2\mu_R$ curves show that the differential cross sections increase (decrease) only slightly at the low (high) $p_T$ region. Although the $\mu=\mu_R/2$ curves demonstrate larger changes in the differential cross sections, the variations are always within a factor of two. The integrated cross sections at various $p^{\rm min}_{T}(B_c)$, as already shown in Table~\ref{table2}, also indicate less sensitivity to the scale as one increases $p^{\rm min}_{T}(B_c)$. The variations of the differential cross sections with the scale $\mu$ indicate the size of the next-to-leading order (NLO) corrections.
Only if all the NLO corrections are calculated for each piece contained in Eq.~(\ref{*}), namely the parton distribution functions, the parton-parton hard scattering cross sections, and the fragmentation functions, can these variations be reduced substantially. Since the perturbative QCD fragmentation functions are only calculated to leading order, an NLO calculation of the perturbative QCD fragmentation functions would provide an improvement to our calculation. However, due to the rather weak dependence on the scale in our leading order calculation, our results should be rather stable under higher order perturbative corrections. Other uncertainties come from the input parameters to the boundary conditions of the fragmentation functions. These are the heavy quark masses $m_b$ and $m_c$, and the nonperturbative parameters describing the bound states. Slight changes in $m_c$ and $m_b$ could possibly lead to appreciable changes in the fragmentation functions, as indicated by the $m^3$ and $m^5$ dependence, respectively, in the denominators of the S-wave and P-wave fragmentation functions. (Note that $H_1/m \approx 9 \vert R^\prime(0) \vert^2/(32\pi m^5)$.) However, in the numerators the wavefunctions at the origin $\vert R(0) \vert^2$ and $\vert R^\prime(0) \vert^2$ also scale like $m^3$ and $m^5$, respectively. Therefore the dependence on the heavy quark masses should be mild in the fragmentation functions. The color-octet parameters $H_8^\prime$ are associated with the additional color-octet contributions and are not well determined. However, we do not expect that they will play any significant role in the present context and only refer our readers to Ref.~\cite{qpwave}, where some discussions of these parameters can be found. There was a controversy in the recent literature \cite{changchen,slab,bere,mase,klr} concerning the importance of parton fragmentation in the production of the $B_c\;(B_c^*)$ meson at hadronic colliders. The controversy arises because the Feynman diagrams responsible for the $\bar b$ antiquark fragmentation form a subset of the whole set of Feynman diagrams, which contribute at the order of $\alpha_s^4$. Thus, there is a competition between the fragmentation contribution and the non-fragmentation contribution. Some authors have referred to the latter contribution as recombination. So far, five independent groups \cite{changchen,slab,bere,mase,klr} have presented such a complete ${\cal O}(\alpha_s^4)$ calculation. Chang {\it et al.} \cite{changchen} and Slabospitsky \cite{slab} agreed that the fragmentation contribution dominates at the large transverse momentum region. However, we could not find in their work the precise value of $p_T$ at which the fragmentation contribution begins to dominate. On the other hand, Berezhnoy {\it et al.} \cite{bere} claimed that fragmentation never dominates in any kinematical region of $p_T$ and that the recombination diagrams can never be ignored. Independently, Masetti and Sartogo \cite{mase} have recently obtained results claimed to be consistent with Berezhnoy {\it et al.} \cite{bere}. There are certainly discrepancies among these calculations. Most recently, Kolodziej {\it et al.} \cite{klr} performed yet another independent calculation using two different methods --- the usual trace and the modern helicity amplitude techniques. Both methods agreed with each other numerically to very high accuracy, according to the authors.
Kolodziej {\it et al.} \cite{klr} also did a very thorough comparison with previous exact calculations by comparing numerically their matrix elements with those of other groups \cite{changchen,slab,bere}, but could not find agreement using identical input parameters.\footnote{In their most recently revised versions, both groups of Chang {\it et al.} (the second entry of Ref.~\cite{changchen}) and Berezhnoy {\it et al.} \cite{bere} confirmed the results of Kolodziej {\it et al.} \cite{klr}. Thanks to an anonymous referee for pointing out the situation.} Furthermore, Kolodziej {\it et al.} \cite{klr} did a comparison of the production of $B_c$ mesons between the exact calculation and the fragmentation approximation, and found that the fragmentation approach is indeed valid for $p_T \ge 10$ GeV. Because of the conclusions of Ref.~\cite{klr}, we believe that the fragmentation contribution should begin to dominate the production of $(\bar bc)$ mesons at $p_T \stackrel{>}{\sim} 10$ GeV. Also, we would like to emphasize that the detector coverage and performance do require a minimum transverse momentum cut and a rapidity cut on the $B_c$ meson. Thus, the non-fragmentation (recombination) contribution, which is substantial at low $p_T$, becomes less important once the lowest $p_T$ range is excluded. The main result of this paper is the inclusive cross section of the ground state $B_c$ at the Tevatron. We have included in this work all the contributions from the direct $\bar b$ fragmentation and induced gluon fragmentation, as well as the cascade contributions from all the excited S-wave and P-wave states below the $BD$ threshold. The previous result presented by one of us in Ref.~\cite{cheung} included only the direct $\bar b$ fragmentation into S-wave states, since the P-wave fragmentation functions were not available at that time. Later it was pointed out in Ref.~\cite{induceglue} that the induced gluon fragmentation contribution is nonnegligible. The induced gluon fragmentation contribution is about 20--30\% (30--40\%) of the direct $\bar b$ fragmentation contribution at the Tevatron (LHC) energies. The two sets of P-wave states included in this work contribute at the level of 30\%. Furthermore, we have employed here a more up-to-date set of parton distribution functions than those used in Refs.~\cite{cheung,induceglue}. As a result, the cross sections increase roughly by 25\%, mainly due to the increase in the gluon luminosities. When all these aforementioned differences are taken into account, the net result is about 70--80\% (a factor of two) larger than the previous result presented in Ref.~\cite{cheung} at the Tevatron (LHC). To recap, in this paper we have performed calculations of the transverse momentum spectra and integrated cross sections for the S- and P-wave $(\bar b c)$ mesons. We have also predicted the inclusive production rate for the $B_c$ meson, and found that there should be enough signature events to confirm the existence of $B_c$ at the Tevatron, whereas the LHC will be a copious source of $B_c$. Nevertheless, one should keep in mind that what we have calculated in this paper represents only the contribution from fragmentation, while there should also be other non-fragmentation contributions, especially at the low $p_T$ region ($p_T \stackrel{<}{\sim} 10$ GeV). At least, our results represent a lower bound on the production rate for $B_c$.
In the near future, the production of $B_c$ at the Tevatron will provide another interesting test of perturbative QCD, and we expect a very exciting experimental program of $B_c$ physics at the LHC. \section*{Acknowledgement} This work was supported in part by the United States Department of Energy under Grant Numbers DE-FG03-93ER40757 and DE-FG03-91ER40674. \newpage
\section{Introduction} \label{intro} Understanding the origin of life is one of the main challenges of modern science. It is believed that some basic prebiotic chemistry could have developed in space, likely transferring prebiotic molecules to the solar nebula and later on to Earth. For example, comets exhibit a wide variety of complex organic molecules (or COMs) that are commonly detected in the ISM \citep[see, e.g.,][]{biver15}. Recently, the spacecraft Rosetta found evidence for the presence of several COMs of prebiotic interest on the surface of the comet 67P/Churyumov-Gerasimenko, using the COSAC mass spectrometer \citep[e.g. glycolaldehyde, CH$_2$OHCHO, or formamide, NH$_2$CHO;][]{goesmann15}, and in the coma of the comet using the ROSINA instrument \citep[with the detection of the amino acid glycine, and phosphorus;][]{altwegg16}. Among these species, the COSAC mass spectrometer suggested the presence of methyl isocyanate (CH$_3$NCO) with a relatively high abundance compared to other COMs \citep[][]{goesmann15}. CH$_3$NCO is the simplest isocyanate, which, along with NH$_2$CHO, contains C, N, and O atoms, and could play a key role in the synthesis of amino acid chains known as peptides \citep{pascal05}. CH$_3$NCO was subsequently detected in massive hot molecular cores such as SgrB2(N) \citep{halfen15,belloche17} and Orion KL \citep{cernicharo16}. However, CH$_3$NCO had not yet been reported in solar-type protostars. IRAS 16293$-$2422 (hereafter IRAS16293) is located in the $\rho$ Ophiuchi cloud complex at a distance of 120 pc \citep{loinard08}, and it is considered an excellent template for astrochemical studies in low-mass protostars \citep[e.g.,][]{jorgensen11,jorgensen16,lykke16}. IRAS16293 is composed of sources A and B, separated in the plane of the sky by $\sim$5$\arcsec$ ($\sim$600 AU), and whose masses are $\sim$0.5 M$_{\odot}$ \citep{looney00}. Their emission exhibits line profiles with linewidths of up to 8 km s$^{-1}$ for IRAS16293 A and $<$ 2 km s$^{-1}$ for IRAS16293 B. The narrow emission of IRAS16293 B, along with its rich COM chemistry, makes this object the perfect target to search for new COMs. In this letter we report the detection of CH$_3$NCO towards IRAS 16293-2422 B at frequencies $\le$250 GHz using the Atacama Large Millimeter Array (ALMA). Our results are consistent with those presented by \citet{ligterink17} using CH$_3$NCO transitions with frequencies $\ge$300 GHz. \section{Observations} \label{observations} The analysis was carried out using the ALMA data from our own project (\#2013.1.00352.S), and all other public datasets in Bands 3, 4, and 6 available in the ALMA archive as of January 2017 (\#2011.0.00007.SV, \#2012.1.00712.S, and \#2013.1.00061.S). We note that we have excluded from our analysis other ALMA public datasets in Bands 7 and 8, i) to prevent any dust optical depth problems \citep[the dust continuum emission of IRAS16293 B is known to be very optically thick at frequencies $>$300 GHz, affecting the line intensities of the molecular emission; see][]{zapata13,jorgensen16}; and ii) to limit the level of line confusion, which allows the correct subtraction of the continuum emission by selecting a suitable number of line-free channels in the observed spectra \citep[see e.g.][]{pineda12}. All data matching our criteria were downloaded and re-calibrated using standard ALMA calibration scripts and the Common Astronomy Software Applications package\footnote{https://casa.nrao.edu}.
The angular resolution of all datasets was sufficient to resolve source B from source A (with angular resolutions below 1.5$\arcsec$), and therefore the emission lines arising from source B are narrow, with linewidths $<$2$\,$km$\,$s$^{-1}$. Continuum subtraction was performed in the uv-plane before imaging using line-free channels. Our dataset covers a total bandwidth of $\sim$6 GHz split in 26 spectral windows spread between 89.5 and 266.5 GHz, with synthesised beam sizes ranging from 0.57$^{\prime\prime}\times$0.28$^{\prime\prime}$ to 1.42$^{\prime\prime}\times$1.23$^{\prime\prime}$. The velocity resolution falls between 0.14 and 0.30 km s$^{-1}$. For the analysis, a spectrum was extracted from each datacube using a circular support with size $\sim$1.6$^{\prime\prime}$ centered at the position of IRAS16293 B ($RA_{\rm J2000}$ = 16$h$ 32$m$ 22.61$s$, $DEC_{\rm J2000}$ = -24$^{\circ}$ 28$^{\prime}$ 32.44$^{\prime\prime}$). We note that the molecular emission from IRAS16293 B for the species considered in this study (e.g. CH$_3$NCO, NH$_2$CHO and HN$^{13}$CO) is compact and lies below 1.5$\arcsec$ \citep[see Figure 2 below and][]{coutens16}. Thus, although the ALMA datasets were obtained with different array configurations and uv coverage, we are confident that our extracted spectra contain all the emission from the hot corino and the analysed lines do not suffer from missing flux. \section{Results} \label{results} \subsection{Detection of CH$_3$NCO} The rotational spectrum of CH$_3$NCO, with the A and E torsional states, has been studied by \citet{koput86} (from 8 to 40 GHz), and more recently by \citet{halfen15} (from 68 to 105 GHz) and \citet[][from 40 to 363 GHz]{cernicharo16}. The identification of the lines was performed using the software MADCUBAIJ\footnote{Madrid Data Cube Analysis on ImageJ is software developed at the Center of Astrobiology (Madrid, INTA-CSIC) to visualise and analyse single spectra and datacubes \citep{rivilla16a,rivilla16b}}, using the information from the Jet Propulsion Laboratory \citep[JPL;][]{pick98} and the Cologne Database for Molecular Spectroscopy spectral catalogs \citep[CDMS;][]{mull05}. Using MADCUBAIJ, we identified a total of 22 transitions of CH$_3$NCO, 8 of which are unblended, with upper level energies ranging from 175 to 233 K (see Table \ref{table-unblended}). The remaining 14 lines appear contaminated by emission from other species. The CH$_3$NCO lines peak at a radial velocity of $v_{LSR}$ = 2.7 km s$^{-1}$ and have linewidths of $\sim$1.1 km s$^{-1}$ (Fig. \ref{figure-unblended}), similar to those from other molecules in IRAS16293 B \citep[][]{jorgensen11}. \begin{table} \tabcolsep 1.2pt \begin{center} \caption{Detected CH$_3$NCO unblended lines in IRAS16293 B.} \label{table-unblended} \begin{tabular}{c c c c c} \hline Frequency & Transition & logA$_{ul}$ & E$_{\rm up}$ & Area \\ (GHz) & (J,K$_{a}$,K$_{c}$,m) & (s$^{-1}$) & (K) & (Jy km s$^{-1}$) \\ \hline 157.258419 & (18,2,0,3)$-$(17,2,0,3) & -3.75 & 210 & 0.028 $\pm$ 0.009 \\ 157.259087 & (18,2,0,-3)$-$(17,2,0,-3) & -3.75 & 210 & 0.028 $\pm$ 0.009 \\ 232.342227 & (27,2,0,2)$-$(26,2,0,2) & -3.22 & 234 & 0.12 $\pm$ 0.04 \\ 232.411044 & (27,1,0,1)$-$(26,1,0,1) & -3.22 & 175 & 0.16 $\pm$ 0.06 \\ 240.302835 & (28,0,0,1)$-$(27,0,0,1) & -3.17 & 181 & 0.18 $\pm$ 0.06 \\ 250.313498 & (29,3,27,0)$-$(28,3,26,0) & -3.13 & 235 & 0.16 $\pm$ 0.05 \\ 250.323521 & (29,3,26,0)$-$(28,3,25,0) & -3.13 & 235 & 0.16 $\pm$ 0.05 \\
250.676140 & (29,0,29,0)$-$(28,0,28,0) & -3.13 & 181 & 0.21 $\pm$ 0.07 \\ \hline \end{tabular} \end{center} \end{table} MADCUBAIJ produces synthetic spectra assuming Local Thermodynamic Equilibrium (LTE) conditions. The comparison between the observed and the synthetic spectrum for the unblended transitions can be used to derive the excitation temperature and total column density that best match the observations. We assumed a linewidth of 1.1 km s$^{-1}$, and the source size was constrained by the continuum emission to 0.5$^{\prime\prime}$ (see Figure \ref{mapa}), which agrees with the source size assumed in previous works \citep{jorgensen16,coutens16,lykke16}. The observed spectra and the corresponding LTE fitted synthetic spectrum for the 8 unblended lines detected are shown in Fig.~\ref{figure-unblended}. All CH$_3$NCO transitions were found to be optically thin ($\tau <$ 0.08), and thus our analysis is not affected by optical depth effects. In addition, a careful check of the synthetic spectrum was performed to confirm that no detectable transitions were missing from our observational data. \begin{figure*} \centering \includegraphics[scale=0.28]{f1.jpg} \caption{CH$_3$NCO unblended lines measured toward IRAS16293 B with ALMA (solid black). Transitions are shown in every panel, while their rest frequencies are reported in Table \ref{table-unblended}. The synthetic LTE spectrum generated by MADCUBAIJ is overplotted in red.} \label{figure-unblended} \end{figure*} The detected CH$_3$NCO transitions are well reproduced by an excitation temperature of $T_{ex}$=110$\pm$19 K. This $T_{ex}$ is similar to that found for other COMs such as acetaldehyde or propanal in IRAS16293 B \citep[][]{lykke16}. The derived column density is $N$(CH$_{3}$NCO)=(4.0$\pm$0.3)$\times$10$^{15}$ cm$^{-2}$, which agrees with the column density reported in \citet{ligterink17} assuming the same source size and excitation temperature for the transitions with E$_{up}>$300 K detected at frequencies $\ge$320 GHz. The spatial distribution of CH$_3$NCO is shown in Fig. \ref{mapa} and it is coincident with the continuum emission. The measured deconvolved size is $\sim$0.5$\arcsec$, consistent with the assumed source size. In order to estimate the CH$_3$NCO abundance, we have derived the H$_2$ column density by using the continuum flux measured at 232 GHz (1.4$\pm$0.05 Jy within a deconvolved size of 0.55$\arcsec\times$0.47$\arcsec$), and by assuming optically thin dust, a dust opacity of 0.009 cm$^{2}$ g$^{-1}$ \citep[thin ices in a H$_2$ density of 10$^{6}$ cm$^{-3}$; see][]{ossenkopf94} and a gas-to-dust mass ratio of 100. The estimated H$_2$ column density for T$_{dust}$=T$_{ex}$=110 K (at these high densities, dust and gas are thermally coupled) is $N$(H$_{2}$)=2.8$\times$10$^{25}$ cm$^{-2}$, consistent with that estimated by \citet{jorgensen16} at higher frequencies. We however caution that this value should be considered as a lower limit since dust may be optically thick even at these low frequencies. The derived abundance of CH$_{3}$NCO is $\chi$(CH$_{3}$NCO)=(1.4$\pm$0.1)$\times$10$^{-10}$ and it should be considered as an upper limit (the simple arithmetic behind this estimate is sketched below).
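For reference, the following is a minimal sketch of this estimate (ours; it is not the analysis pipeline used above). The mean molecular weight per H$_2$ of 2.8 and the interpretation of the quoted opacity as being per gram of gas (i.e.\ with the gas-to-dust ratio of 100 already folded in) are our own assumptions.
\begin{verbatim}
# H2 column density from optically thin dust emission, and CH3NCO abundance.
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # cgs units
m_H     = 1.6726e-24                       # g
mu_H2   = 2.8                              # assumed mean molecular weight per H2

def planck(nu, T):
    # Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def gaussian_solid_angle(theta_maj, theta_min):
    # solid angle of a Gaussian source, sizes in arcsec
    to_rad = np.pi / (180.0 * 3600.0)
    return np.pi / (4 * np.log(2)) * (theta_maj * to_rad) * (theta_min * to_rad)

S_nu   = 1.4e-23    # 1.4 Jy at 232 GHz, in erg s^-1 cm^-2 Hz^-1
nu     = 232e9      # Hz
T_dust = 110.0      # K, taken equal to T_ex
kappa  = 0.009      # cm^2 per gram of gas (thin ices; Ossenkopf & Henning)
omega  = gaussian_solid_angle(0.55, 0.47)

N_H2 = S_nu / (omega * kappa * planck(nu, T_dust) * mu_H2 * m_H)
print("N(H2)     ~ %.1e cm^-2" % N_H2)        # ~2.8e25 cm^-2
print("X(CH3NCO) ~ %.1e" % (4.0e15 / N_H2))   # ~1.4e-10
\end{verbatim}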
Transitions corresponding to two isomers of methyl isocyanate, CH$_3$CNO and CH$_3$OCN, were not detected in our dataset, and 3$\sigma$ upper limits of 2.7$\times$10$^{13}$ cm$^{-2}$ and 5.1$\times$10$^{14}$ cm$^{-2}$, respectively, were extracted assuming the same linewidth and excitation temperature. These upper limits lead to column density ratios of CH$_3$NCO/CH$_3$CNO$\ge$150 and CH$_3$NCO/CH$_3$OCN$\ge$8. \subsection{Chemically-related species: HNCO and NH$_2$CHO} In Orion KL, CH$_3$NCO shows the same spatial distribution as HNCO and NH$_2$CHO \citep{cernicharo16} and therefore they are thought to be chemically related. Several transitions of HNCO and NH$_2$CHO, and of some of their isotopologues, are also covered and detected in our dataset. The HNCO and NH$_2$CHO lines are optically thick \citep{coutens16} and their column densities have been inferred using the HNC$^{18}$O and NH$_2^{13}$CHO isotopologues. Five unresolved transitions of HNC$^{18}$O are found at 250 GHz with E$_{up}$=122 K. For a fixed excitation temperature of $T_{ex}$=110 K (the $T_{ex}$ derived for CH$_3$NCO; see Section 3.1), we obtain a column density of $N$(HNC$^{18}$O)=(9.7$\pm$3.8)$\times$10$^{13}$ cm$^{-2}$. By assuming an isotopic ratio $^{16}$O/$^{18}$O = 500 (Wilson \& Rood 1994), the derived total column density of HNCO is $N$(HNCO)=(4.9$\pm$1.9)$\times$10$^{16}$ cm$^{-2}$, which yields an abundance of (1.8$\pm$0.7)$\times$10$^{-9}$. For NH$_2^{13}$CHO, three lines are detected at 156.957 GHz, 157.097 GHz, and 239.628 GHz, with E$_{up}$ = 58$-$98 K. Their emission is fitted with an excitation temperature of $T_{ex}$=75 K and a column density of $N$(NH$_2^{13}$CHO)=(7.6$\pm$3.7)$\times$10$^{14}$ cm$^{-2}$. The derived $T_{ex}$ is slightly lower than that obtained for CH$_3$NCO, possibly due to the lower values of E$_{up}$ covered by the NH$_2^{13}$CHO lines compared to those of CH$_3$NCO. We note, however, that both species show the same spatial extent \citep[see Figure 2 and][]{coutens16} and therefore they likely trace the same gas. By assuming an isotopic ratio $^{12}$C/$^{13}$C=68 \citep{milam05}, the derived total column density is $N$(NH$_2$CHO)=(5.2$\pm$2.5)$\times$10$^{16}$ cm$^{-2}$, which gives an abundance of (1.9$\pm$0.9)$\times$10$^{-9}$. As for CH$_3$NCO, these abundances should be considered as upper limits. \subsection{Comparison with other sources} The abundance of (1.4$\pm$0.1)$\times$10$^{-10}$ measured for CH$_3$NCO toward IRAS16293 B is about an order of magnitude lower than that found in SgrB2(N) \citep[1.7$\times$10$^{-9}$ and 1.0$\times$10$^{-9}$ for the two $V_{LSR}$ components; see][]{cernicharo16}. In Table \ref{comparacion}, we compare the abundance ratios CH$_3$NCO/HNCO and CH$_3$NCO/NH$_2$CHO measured in IRAS16293 B with those from the three sources where CH$_3$NCO has also been detected \citep[e.g. SgrB2(N), Orion KL, and 67P/Churyumov-Gerasimenko;][]{goesmann15,halfen15,belloche17,cernicharo16}. Since we have estimated the column densities considering the same emitting region, the derived ratios are likely independent of the assumed source size and the derived H$_2$ column density. \begin{figure} \centering \includegraphics[scale=0.5]{f2.jpg} \caption{Integrated intensity maps of two representative CH$_3$NCO unblended lines observed toward IRAS16293 B. Black contours indicate 50\% and 90\% of the peak line emission, while white contours indicate 20\% and 80\% of the continuum peak emission at 232 GHz. The rest frequency and E$_{\rm up}$ of the transitions are shown in every panel (see also Table \ref{table-unblended}). Beam sizes are shown in the bottom right corner.} \label{mapa} \end{figure} From Table \ref{comparacion}, we find that the CH$_3$NCO/HNCO column density ratio in IRAS16293 B is of the same order as those measured in SgrB2(N) and Orion KL.
However, it is a factor of $\sim$50 lower than in comet 67P/Churyumov-Gerasimenko. We note, however, that the COSAC detections are tentative and therefore the abundance ratios in column 7 of Table \ref{comparacion} should be taken with caution. The CH$_3$NCO/NH$_2$CHO column density ratio in IRAS16293 B is similar to that observed in SgrB2(N), while it is a factor of 20--70 lower than those measured in Orion KL, and a factor of 10 lower than that in comet 67P/Churyumov-Gerasimenko. In Section 4, we explore the formation routes for CH$_3$NCO and compare the measured ratios with those predicted by chemical modelling. \section{Chemical modelling} \label{model} For the chemical modelling of CH$_3$NCO, HNCO, and NH$_2$CHO in IRAS16293 B, we have used the gas-grain chemical code UCL\_CHEM recently re-written by Holdship et al. (submitted)\footnote{UCL\_CHEM can be downloaded at https://uclchem.github.io/.}. UCL\_CHEM's chemical network contains 310 species (100 of them are also on the grain surface) and 3097 reactions. Gas phase reactions are taken from the UMIST database \citep{mcelroy2013}, while dust grain surface processes include thermal desorption \citep[as in][]{viti2004} and non-thermal desorption processes such as direct UV desorption, cosmic-ray-induced UV photon desorption, direct cosmic ray desorption, and desorption via the H$_2$ formation mechanism \citep[see][]{roberts2007}. Recently, diffusion following the formalism of \citet{hasegawa1992} and chemical reactive desorption \citep[following the experimentally-derived formula of][]{minissale2016} have also been included in UCL\_CHEM (Qu\'enard et al. in prep.). To model the chemistry of CH$_3$NCO, HNCO and NH$_2$CHO, we expanded UCL\_CHEM's chemical network by including recently proposed gas phase and grain surface formation routes. For the gas phase formation of CH$_3$NCO, \citet{halfen15} proposed the following reactions: \begin{eqnarray} \mathrm{HNCO/HOCN + CH_3} &\longrightarrow& \mathrm{CH_3NCO + H}\\ \mathrm{HNCO/HOCN + CH_5^+} &\longrightarrow& \mathrm{CH_3NCOH^+ + H_2}\\ \mathrm{CH_3NCOH^+ + e^-} &\longrightarrow& \mathrm{CH_3NCO + H} \end{eqnarray} \noindent Note that no gas-phase destruction route was proposed in their study. We have also included the reaction \begin{eqnarray} \mathrm{CH_3NCOH^+ + e^-} &\longrightarrow& \mathrm{CH_3 + HOCN}, \end{eqnarray} \noindent to account for the fact that CH$_3$NCOH$^+$ may fragment into smaller products. For the grain surface formation, \citet{belloche17} and \citet{cernicharo16} proposed that CH$_3$NCO could be formed through the grain surface reactions: \begin{eqnarray} \mathrm{CH_3 + OCN} &\longrightarrow& \mathrm{CH_3NCO}\\ \mathrm{CH_3 + HNCO} &\longrightarrow& \mathrm{CH_3NCO + H}. \end{eqnarray} These reactions have been found to be efficient experimentally \citep{ligterink17}. Furthermore, one of the possible formation routes of N-methylformamide (CH$_3$NHCHO) may involve successive addition of hydrogen atoms to CH$_3$NCO: \begin{eqnarray} \mathrm{CH_3NCO + H} &\longrightarrow& \mathrm{CH_3NHCO}\label{hadd1}\\ \mathrm{CH_3NHCO + H} &\longrightarrow& \mathrm{CH_3NHCHO}. \end{eqnarray} \noindent where reaction (\ref{hadd1}) has an activation energy of $\sim$2500\,K \citep{belloche17}. The HNCO network (both in the gas and on the grain surface), together with that of its isomers HCNO and HOCN \citep[see][]{quan2010}, has also been included in UCL\_CHEM. The physical conditions and the chemical composition of the IRAS16293 B hot corino were modelled using a three-phase model.
In Phase 0, we followed the evolution of the chemistry in a diffuse cloud (size of $\sim$0.6\,pc and $A_V=2$\,mag) by assuming a constant density of $n_\textrm{H}=10^{2}$\,cm$^{-3}$ and a temperature of $T_{kin}=10$\,K for $\sim$10$^6$ yrs. We assume an interstellar radiation field of $G_0=1$ Habing and the standard cosmic ray ionisation rate of $1.3\times10^{-17}$\,s$^{-1}$. The elemental abundances considered in this model are taken from \citet[][model EA1]{wakelam2008}. In Phase 1 of our model, we follow the chemistry during the pre-stellar core phase assuming a constant temperature of $T_{kin}=10$\,K while we increase the core's gas density from 10$^{2}\,$cm$^{-3}$ to $5\times10^{8}$\,cm$^{-3}$ (this value is consistent with that measured in IRAS16293 B). In Phase 2, the chemical evolution of the hot corino is modelled by assuming a constant H$_2$ gas density ($5\times10^{8}$\,cm$^{-3}$) while gradually increasing the gas temperature from 10$\,$K to 110$\,$K during the first 10$^5$ yrs. After that, the temperature is kept constant and the chemistry is followed for $10^6$\,yrs. The best fit to our observations is found for a dynamical age of 4$\times$10$^4$ yrs, with a predicted abundance for CH$_3$NCO of [CH$_3$NCO]=$6.0\times10^{-10}$ (with respect to H$_2$), i.e. a factor of 4 higher than that measured in IRAS16293 B (1.4$\times$10$^{-10}$; Section 3.1). We note that these time-scales are consistent with those estimated for this source \citep[see, e.g.,][]{bottinelli14,majumdar16}. In this model, HNCO (the parent molecule of CH$_3$NCO) is formed on the surface of dust grains; and once the temperature reaches $\sim$100 K, HNCO is thermally desorbed and incorporated into the gas phase, allowing the gas-phase formation of CH$_3$NCO to proceed \citep[see reactions above and][]{halfen15}. We note that CH$_3$NCO is also formed on grain surfaces in our model. However, this mechanism on its own is not sufficient to explain the observed abundances of this molecule in IRAS16293 B. Therefore, our modelling shows that formation both in the ices and in the gas phase is required to explain the observed abundance of CH$_3$NCO in IRAS16293 B. We note that, while the HNCO abundance predicted by our model (3.8$\times$10$^{-9}$) also agrees well with that observed in IRAS16293 B (1.8$\times$10$^{-9}$), the abundance of NH$_2$CHO is underproduced by a factor of 10. As a result, while the CH$_3$NCO/HNCO abundance ratio is well reproduced by our model, the CH$_3$NCO/NH$_2$CHO ratio differs from that observed by a factor of $\sim$40 (see Table \ref{comparacion}). We also note that our model is consistent with the upper limits of CH$_3$NCO measured in cold cores such as L1544 \citep[$\leq$2-6$\times$10$^{-12}$;][Qu\'enard et al., in prep]{izaskun16}. \begin{table} \tabcolsep 1.2pt \caption[]{Comparison of the CH$_3$NCO/HNCO and CH$_3$NCO/NH$_2$CHO ratios measured in IRAS16293 B, SgrB2(N), Orion KL and comet 67P. Our modelling results for IRAS16293 B (T$_{gas}$=110$\,$K, time=4$\times$10$^4$ yrs) are also shown.} \label{comparacion} \begin{center} \begin{tabular}{ c c c c c c c} \hline & \multicolumn{5}{c}{Protostars}& Comet \\ \cline{2-6} Molecular & \multicolumn{2}{c}{Low-mass} & \multicolumn{3}{c}{High-mass} & \\ ratio & \multicolumn{2}{c}{IRAS16293 B} & SgrB2(N) & \multicolumn{2}{c}{Orion KL}&67P\\ & obs.
& model & & A & B & \\ \hline CH$_3$NCO/HNCO & 0.08 & 0.16 & 0.11 & 0.02 & 0.06 & 4.33 \\ CH$_3$NCO/NH$_2$CHO & 0.08 & 3.53 & 0.06 & 1.75 & 5.71 & 0.72 \\ \hline \end{tabular} \end{center} \end{table} We carried out an additional test including the isomers of CH$_3$NCO, for which upper limits have been measured (see Section$\,$\ref{results}). We have assumed that CH$_3$OCN and CH$_3$CNO experience the same reactions as CH$_3$NCO at the same rates, although this assumption is highly uncertain given the lack of experimental data. The abundance of CH$_3$NCO changes only by a factor of 1.1, but CH$_3$OCN and CH$_3$CNO are overproduced by factors of $\geq$10--100. This means that their associated reaction rates need to be lowered by several orders of magnitude to match the observed upper limits. The full chemical network of CH$_3$NCO and its isomers will be discussed in detail in Qu\'enard et al. (in prep.). \section*{Acknowledgments} This Letter makes use of the following ALMA data: ADS/JAO.ALMA\#2011.0.00007.SV, \#2012.1.00712.S, \#2013.1.00061.S, and \#2013.1.00352.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in co-operation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This research was partially financed by the Spanish MINECO under project AYA2014-60585-P, by the Italian Ministero dell’ Istruzione, Università e Ricerca, through the grant Progetti Premiali 2012 – iALMA (CUP C52I13000140001), and by the Gothenburg Centre for Advanced Studies in Science and Technology, where the re-calibration and re-imaging of all the ALMA Archive data on IRAS 16293-2422 was carried out as part of the GoCAS program “Origins of Habitable Planets”. R.M.-D. benefited from an FPI grant from Spanish MINECO. I.J.-S. acknowledges the financial support from an STFC Ernest Rutherford Fellowship and Grant (projects ST/L004801/2 and ST/M004139/2). J.M.-P. acknowledges partial support by the MINECO under grants FIS2012-39162-C06-01, ESP2013-47809-C03-01, and ESP2015-65597-C4-1. We thank an anonymous referee and Dr. Wing-Fai Thi for providing useful comments on the manuscript.
\section{Introduction} In this paper, we study the Cauchy problem for the $L^{2}$-critical nonlinear Schr\"{o}dinger equations: \begin{equation}\label{NLSa} \begin{cases} iu_{t} + \Delta{u} +|u|^{\frac{4}{d}}u + ia(x)u =0, (t,x) \in [0,\infty[\times \mathbb{R}^{d}, d=1,2,3,4. \\ u(0)= u_{0} \in H^1(\mathbb{R}^{d}) \end{cases} \end{equation} with a real inhomogeneous damping term $a \in C^1(\mathbb{R}^{d},\mathbb{R})\cap W^{1,\infty}(\mathbb{R}^{d},\mathbb{R})$ and initial data $u(0)= u_{0} \in H^1(\mathbb{R}^{d})$. Equation (\ref{NLSa}) arises in several areas of nonlinear optics and plasma physics. The inhomogeneous damping term corresponds to an electromagnetic wave absorbed by an inhomogeneous medium (cf. \cite{Barontini,Brazhnyi}).\\ It is known that the Cauchy problem for (\ref{NLSa}) is locally well-posed in $H^1(\mathbb{R}^{d})$ (see Kato \cite{Kato} and also Cazenave \cite{Cazenave}): For any $u_{0} \in H^{1}(\mathbb{R}^{d})$, there exist $T \in (0,\infty]$ and a unique solution $u(t)$ of (\ref{NLSa}) with $u(0)=u_{0}$ such that $u \in C([0,T);H^1(\mathbb{R}^{d}))$. Moreover, $T$ is the maximal existence time of the solution $u(t)$ in the sense that if $T < \infty$ then $\displaystyle{ \lim_{t\rightarrow T}{\|u(t)\|_{H^1(\mathbb{R}^{d})}}}=\infty$.\\ Dias and Figueira \cite{Dias} studied the supercritical case ($|u|^{p}u$ with $p > \frac{4}{d}$) and showed, using the virial method, that blow-up in finite time can occur. In \cite{Correia}, Correia studied the equation in dimension one and proved the existence of blow-up phenomena in the energy space $H^{1}$.\\ Let us notice that for $a=0$ (\ref{NLSa}) becomes the $L^2$-critical nonlinear Schr\"{o}dinger equation:\\ \begin{equation}\label{NLS} \begin{cases} iu_{t} + \Delta u + |u|^{\frac{4}{d}}u = 0\\ u(0)=u_{0} \in H^{1}(\mathbb{R}^{d}) \end{cases} \end{equation} Special solutions play a fundamental role in the description of the dynamics of (\ref{NLS}). They are the solitary waves of the form $u(t, x) =\exp(it)Q(x)$, where $Q$ solves: \begin{equation}\label{ellip} \Delta Q + Q|Q|^{\frac{4}{d}} = Q. \end{equation} Let $u$ be a solution of (\ref{NLSa}); we define the following quantities:\\ $L^2$ norm: $\left\|u(t)\right\|_{L^2}.$\\ Energy: $E(u(t)) = \frac{1}{2}\|\nabla u\|_{L^2}^{2} - \frac{d}{4 + 2d}\|u\|_{L^{\frac{4}{d}+2}}^{\frac{4}{d}+2}.$\\ Kinetic momentum: $P(u(t))=\Im(\displaystyle{\int} \nabla u \overline{u}(t,x)dx).$\\ It is easy to prove that if $u$ is a solution of (\ref{NLSa}) then: \begin{equation}\label{mass a} \displaystyle{\frac{d}{dt}\|u(t)\|^2_{L^2}= -\int a(x)\vert u(t,x)\vert^2dx}, t \in [0,T), \end{equation} \begin{equation}\label{derivee de lenergie} \frac{d}{dt}E(u(t))=-\int a(x)\vert\nabla u(t,x)\vert^2dx + \int a(x)\vert u(t,x)\vert^{\frac{4}{d}+2}dx-\Re\int(\nabla u\cdot\nabla a)\overline{u}\,dx \end{equation} and \begin{equation}\label{moment} \frac{d}{dt}P(u(t))=-2\int a(x)\Im (\nabla u\overline{u})dx, t \in [0,T). \end{equation} \begin{remark} Remark that if $a(x) >0$ for all $x \in \mathbb{R}^{d}$, then $\|u(t)\|_{L^2} < \|u_0\|_{L^2}$ for all $t \in [0,T[$. \end{remark} Let us now state our results: \begin{theorem}\label{theoremessentiel} Let $u_0 \in H^{1}(\mathbb{R}^{d})$ with $d=1,2,3,4$. Then: \begin{enumerate} \item If $a(x) > 0$ and $\|u_0\|_{L^2} \leq \|Q\|_{L^2}$, then the corresponding solution of (\ref{NLSa}) is global in $H^1$.
\item If $a(x)$ has an arbitrary sign and if there exists an initial datum $u_0 \in H^{1}$ with $\|u_0\|_{L^2} < \|Q\|_{L^2}$ such that the corresponding solution blows up at finite time $T$, then $T > \frac{1}{\|a\|_{\infty}}\log(\frac{\|Q\|_{L^2}}{\|u_0\|_{L^2}})$. \end{enumerate} \end{theorem} \section{$L^2$-concentration} In this section, we prove Theorem \ref{theoremessentiel} by extending the proof of the $L^2$-concentration phenomenon, proved by Ohta and Todorova \cite{ohta} in the radial case, to the non-radial case.\\ Hmidi and Keraani showed in \cite{Hmidi} the $L^2$-concentration for the equation (\ref{NLS}) without the hypothesis of radiality, using the following theorem: \begin{theorem}\label{limsup} Let $(v_{n})_{n}$ be a bounded family of $H^1(\mathbb{R}^{d})$, such that: \begin{equation} \limsup_{n \rightarrow +\infty}\left\|\nabla v_{n}\right\|_{L^2(\mathbb{R}^{d})} \leq M \quad \text{and} \quad \limsup_{n \rightarrow +\infty}\left\|v_{n}\right\|_{L^{\frac{4}{d} + 2}} \geq m. \end{equation} Then, there exists $(x_{n})_{n} \subset \mathbb{R}^{d}$ such that: \begin{equation} v_{n}(\cdot + x_{n}) \rightharpoonup V \quad \text{weakly}, \nonumber \end{equation} with $\left\|V\right\|_{L^2(\mathbb{R}^{d})} \geq (\frac{d}{d+4})^{\frac{d}{4}}\frac{m^{\frac{d}{2}+1}}{M^{\frac{d}{2}}}\left\|Q\right\|_{L^2(\mathbb{R}^{d})}$. \end{theorem} Now we have the following theorem: \begin{theorem}\label{nonradiale} Assume that $u_{0} \in H^{1}(\mathbb{R}^{d})$, and suppose that the solution of (\ref{NLSa}) with $u(0)=u_{0}$ blows up in finite time $T \in (0,+\infty)$. Then, for any function $w(t)$ satisfying $w(t)\left\|\nabla u(t)\right\|_{L^2(\mathbb{R}^{d})} \rightarrow \infty$ as $t \rightarrow T$, there exists $x(t) \in \mathbb{R}^{d}$ such that, up to a subsequence, \begin{equation} \displaystyle{\limsup_{t \rightarrow T}\left\|u(t)\right\|_{L^2(\left|x - x(t)\right| < w(t))} \geq \left\|Q\right\|_{L^2(\mathbb{R}^{d})}.}\nonumber \end{equation} \end{theorem} To show this theorem we shall need the following lemma: \begin{lemma}\label{lemma ohta} Let $T \in (0,+\infty)$, and assume that a function $F : [0, T )\longrightarrow(0,+\infty)$ is continuous, and $\lim_{t \rightarrow T} F(t) = +\infty$. Then, there exists a sequence $(t_{k})_{k}$ such that $t_{k}\rightarrow T$ and \begin{equation}\label{Fsur son int} \displaystyle{\lim_{k \rightarrow \infty}\frac{\displaystyle{\int}_{0}^{t_{k}}F(\tau)d\tau}{F(t_{k})} = 0.} \end{equation} \end{lemma} For the proof see \cite{ohta}.\\ \textbf{Proof of Theorem \ref{nonradiale}}:\\ Suppose that there exists an initial datum $u_0$ in $H^{1}$ with $\|u_0\|_{L^2} \leq \|Q\|_{L^2}$ such that the corresponding solution blows up at finite time $T$.\\ By the energy identity $(\ref{derivee de lenergie})$, we have \begin{equation}\label{energie} \displaystyle{E(u(t)) = E(u_{0}) - \int_{0}^{t}H(u(\tau))d\tau, \quad t \in [0,T[,} \end{equation} where $H(u(t)) =\displaystyle{ -\int a(x)\vert \nabla u(t,x)\vert^2dx + \int a(x)\vert u(t,x)\vert^{\frac{4}{d}+2}dx-\Re\int(\nabla u\cdot\nabla a)\overline{u}\,dx}$.
Let us recall the Gagliardo-Nirenberg inequality: \begin{equation}\label{gagliardo} \left\|u\right\|_{L^{2 + \frac{4}{d}}}^{2 + \frac{4}{d}} \leq C \|\nabla u\|_{L^2}^2\|u\|_{L^2}^{\frac{4}{d}}. \end{equation} Note that (\ref{mass a}) implies $\vert \frac{d}{dt}\|u(t)\|^2_{L^2}\vert \leq \|a\|_{L^\infty}\|u(t)\|^2_{L^2}$, so that Gr\"onwall's inequality gives \begin{equation}\label{majorationmasse} \|u_0\|_{L^2}e^{-\|a\|_{L^\infty}t}\leq \|u\|_{L^2} \leq \|u_0\|_{L^2}e^{\|a\|_{L^\infty}t}. \end{equation} Now using (\ref{gagliardo}) and (\ref{majorationmasse}) we obtain that: \begin{align} \left|H(u(t))\right| & \leq \|a\|_{L^{\infty}}\left\|\nabla u(t)\right\|_{L^2(\mathbb{R}^{d})}^{2} + \|a\|_{L^{\infty}}\left\|u(t)\right\|_{L^{2 + \frac{4}{d}}}^{2 + \frac{4}{d}} + \|\nabla a\|_{L^\infty}\|u\|_{L^2}\|\nabla u\|_{L^2}\nonumber\\ &\leq \|a\|_{L^{\infty}}\left\|\nabla u(t)\right\|_{L^2(\mathbb{R}^{d})}^{2} + C\|a\|_{L^{\infty}}e^{\|a\|_{L^\infty}t}\|\nabla u\|_{L^2}^2\|u_0\|_{L^2}^{\frac{4}{d}} \nonumber\\&+ e^{\|a\|_{L^\infty}t}\|\nabla a\|_{L^\infty}\|u_0\|_{L^2}\|\nabla u\|_{L^2} \nonumber \end{align} for all $t \in [0,T[$. Then $$\left|H(u(t))\right|\leq C(\|a\|_{W^{1,\infty}},\|u_0\|_{L^2})e^{\|a\|_{L^\infty}t}\|\nabla u\|^2_{L^2}. $$ Moreover, we have $\displaystyle{\lim_{t \rightarrow T}\left\|\nabla u(t)\right\|_{L^2(\mathbb{R}^{d})}} = +\infty$, thus by Lemma \ref{lemma ohta}, there exists a sequence $(t_{k})_{k}$ such that $t_{k} \rightarrow T$ and \begin{equation}\label{ksurnabla} \displaystyle{\lim_{k \rightarrow \infty}\frac{\displaystyle{\int}_{0}^{t_{k}}H(u(\tau))d\tau}{\left\|\nabla u(t_k)\right\|_{L^2(\mathbb{R}^{d})}^{2}} = 0.} \end{equation} Let $$\rho(t) = \frac{\left\|\nabla Q\right\|_{L^2(\mathbb{R}^{d})}}{\left\|\nabla u(t)\right\|_{L^2(\mathbb{R}^{d})}} \quad \text{and} \quad v(t,x)=\rho^{\frac{d}{2}}u(t,\rho x),$$ and set $\rho_{k} = \rho(t_{k})$, $v_{k} = v(t_{k},\cdot)$. The family $(v_{k})_{k}$ satisfies $$\left\|v_{k}\right\|_{L^2(\mathbb{R}^{d})} \leq e^{\|a\|_{L^\infty}T}\left\|u_{0}\right\|_{L^2(\mathbb{R}^{d})}\quad \text{and} \quad \left\|\nabla v_{k}\right\|_{L^2(\mathbb{R}^{d})} = \left\|\nabla Q\right\|_{L^2(\mathbb{R}^{d})}.$$ \bigskip By (\ref{energie}) and (\ref{ksurnabla}), we have \begin{equation}\label{Edevk} \displaystyle{E(v_{k}) = \rho^2_{k}E(u_{0}) -\rho^2_{k}\int_{0}^{t_{k}}H(u(\tau))d\tau \rightarrow 0,} \end{equation} which yields \begin{equation}\label{vk ver Q} \displaystyle{\left\|v_{k}\right\|_{L^{\frac{4}{d} + 2}}^{\frac{4}{d} + 2} \rightarrow \frac{d + 2}{d}\left\|\nabla Q\right\|_{L^2(\mathbb{R}^{d})}^{2}.} \end{equation} The family $(v_{k})_{k}$ satisfies the hypotheses of Theorem \ref{limsup} with \\ $$m^{\frac{4}{d} + 2} = \frac{d+2}{d}\left\|\nabla Q\right\|_{L^2(\mathbb{R}^{d})}^{2} \quad \text{and} \quad M = \left\|\nabla Q\right\|_{L^2(\mathbb{R}^{d})},$$ \bigskip thus there exists a family $(x_{k})_{k} \subset \mathbb{R}^{d}$ and a profile $V \in H^{1}(\mathbb{R}^{d})$ with $\left\|V\right\|_{L^2(\mathbb{R}^{d})} \geq \left\|Q\right\|_{L^2(\mathbb{R}^{d})}$, such that, \begin{equation}\label{convergencefaible} \displaystyle{\rho^{\frac{d}{2}}_{k}u(t_{k}, \rho_{k}\cdot + x_{k}) \rightharpoonup V \in H^{1} \quad \text{weakly}.} \end{equation} Using (\ref{convergencefaible}), for all $A \geq 0$, \begin{equation} \displaystyle{\liminf_{k\to +\infty}\int_{B(0,A)}\rho_{k}^{d}|u(t_{k},\rho_{k}x+x_{k})|^{2}dx\geq \int_{B(0,A)}|V|^{2}dx.}\nonumber \end{equation} Since $\lim_{k\to +\infty}\frac{w(t_{k})}{\rho_{k}}=+\infty$, we have $\frac{w(t_{k})}{\rho_{k}}> A$, i.e.\ $\rho_{k}A < w(t_{k})$, for $k$ large enough.
This gives immediately: \begin{align} \displaystyle{\liminf_{k\to +\infty}\sup_{y\in\mathbb{R}^{d}}\int_{|x-y|\leq w(t_{k})}|u(t_{k},x)|^{2}dx\geq \int_{|x|\leq A}|V|^{2}dx.}\nonumber \end{align} This is true for all $A > 0$, thus: \begin{equation}\label{L2phenomena} \displaystyle{\liminf_{t\to T}\sup_{y\in\mathbb{R}^{d}}\int_{|x-y|\leq w(t)}|u(t,x)|^{2}dx\geq \int Q^{2} dx.} \end{equation} Finally, we obtain:\\ \begin{enumerate} \item If $a(x) >0$, then the $L^2$ norm is strictly decreasing; with (\ref{L2phenomena}) in hand, we obtain that $\displaystyle{\|u_0\|_{L^2}^2 > \|Q\|_{L^2}^2}$.\\ \item If the sign of $a$ is arbitrary, (\ref{L2phenomena}) gives that $e^{2\|a\|_{L^\infty}T}\|u_0\|_{L^2}^2 \geq \|Q\|^2_{L^2}$. \end{enumerate} \textbf{Proof of Theorem \ref{theoremessentiel}:}\\ Now, if $a(x) >0$ for all $x \in \mathbb{R}^{d}$ and $u_0$ is an initial datum such that $\|u_0\|_{L^2} \leq \|Q\|_{L^2}$, we obtain a contradiction; this means that the solution is global in $H^{1}$, which proves part 1 of the theorem.\\ If $a$ has an arbitrary sign, and if $u$ blows up at finite time $T$ with initial datum $u_0$ such that $\|u_0\|_{L^2} < \|Q\|_{L^2}$, we obtain that $T > \frac{1}{\|a\|_{L^\infty}}\log(\frac{\|Q\|_{L^2}}{\|u_0\|_{L^2}})$, which proves the second part of the theorem.
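For the reader's convenience, we record the elementary step that links (\ref{majorationmasse}) and (\ref{L2phenomena}) to the lower bound on $T$; this remark merely restates the computation above. \begin{remark} Since the local mass on the left-hand side of (\ref{L2phenomena}) is bounded by the total mass, which is controlled by (\ref{majorationmasse}), we have $$\|Q\|_{L^2}^2 \leq \liminf_{t\to T}\sup_{y\in\mathbb{R}^{d}}\int_{|x-y|\leq w(t)}|u(t,x)|^{2}dx \leq \sup_{t\in[0,T)}\|u(t)\|_{L^2}^2 \leq e^{2\|a\|_{L^\infty}T}\|u_0\|_{L^2}^2,$$ and taking logarithms gives $T \geq \frac{1}{\|a\|_{L^\infty}}\log\big(\frac{\|Q\|_{L^2}}{\|u_0\|_{L^2}}\big)$. \end{remark}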
\section{Introduction} Upcoming redshift and weak lensing surveys, such as Dark Energy Survey (DES)~\cite{des}, Euclid~\cite{euclid} and Large Synoptic Survey Telescope (LSST)~\cite{lsst,Ivezic:2008fe}, combined with the cosmic microwave background measurements from Planck~\cite{planck} and other cosmological probes, will accurately trace the growth of cosmic structures through multiple epochs. They will offer the opportunity to test General Relativity (GR) by examining the relations between the distribution of matter, the gravitational potential and the lensing potential on cosmological scales. Such tests may yield clues to the physics causing cosmic acceleration or, at the very least, extend the range of scales over which Einstein's gravity has been validated by experiment. To test GR, one can either constrain particular alternative gravity models, such as the Dvali-Gabadadze-Porrati (DGP) braneworld model~\cite{Dvali:2000hr} or $f(R)$~\cite{Capozziello:2003tk,Carroll:2003wy}, or work within more general parametrized frameworks that cover many theories at once and minimize the risk of missing potential hints of modified gravity in the data. Over the past several years significant effort went into developing such frameworks and understanding requirements for their consistency~\cite{Linder:2007hg,Zhang:2007nk,Hu:2007pj,Bertschinger:2008zb,Daniel:2008et,Skordis:2008vt,Song:2010rm,Bean:2010zq,Daniel:2010ky,Pogosian:2010tj,Baker:2011jy,Battye:2012eu,Sawicki:2012re,Baker:2012zs,Gubitosi:2012hu,Amendola:2012ky,Bloomfield:2012ff}. Often, departures from the standard cosmological model (LCDM) are quantified in terms of arbitrary functions of time and, sometimes, scale. These functions cannot be fit to data without further assumptions about their form. In this paper we motivate a parametrization that contains five unknown functions of time only and is general enough to cover most viable models of modified gravity and dark energy proposed so far. Importantly, these functions are expected to be slowly varying, hence the effective number of degrees of freedom that are fit to data can be small. One can avoid assuming a parametric form for the five functions and use instead a smoothness prior similarly to how it was applied to reconstruction of the dark energy equation of state $w$ in~\cite{Crittenden:2005wj,Crittenden:2011aa,Zhao:2012aw}. Observables describing large scale structure are calculated using cosmological perturbation theory in Fourier space. The relevant variables are the two scalar metric degrees of freedom, \eg, $\Phi$ and $\Psi$ in the Newtonian gauge, along with the matter density contrast $\delta$ and the matter velocity perturbation $v$. One needs four equations to solve for the evolution of these four variables, assuming that baryons and dark matter obey the same equations at late times. Two equations are provided by the covariant conservation of matter energy-momentum. The other two equations are supposed to be provided by a theory of gravity which prescribes how the metric responds to the matter stress-energy. 
Formally, one can always complete the system of equations by introducing two functions $\mu(a,k)$ and $\gamma(a,k)$, defined via\footnote{These definitions assume that anisotropic stress of matter is negligible at the epochs of interest, although it can be included, if necessary, as it was done in~\cite{Bean:2010zq,Hojjati:2011ix}.} \begin{equation} k^2 \Psi =- 4 \pi\mu G a^2 \rho \Delta \ , \quad \Phi = \gamma \Psi \ , \label{eq:mugamma} \end{equation} where $a$ is the scale factor and $\Delta=\delta+3aHv/k$. They are defined in a way that recovers the Poisson and the anisotropy equations of LCDM when $\mu=\gamma=1$. There are other choices in the literature for the pair of functions relating ($\Phi$, $\Psi$) to ($\delta$, $v$) that are equivalent to $\mu$ and $\gamma$. A common alternative choice is to use $\Sigma$, defined as \begin{equation}\label{Poisson_Weyl_Modified} k^2\left(\Phi+\Psi\right)=- 8\pi G a^2 \Sigma(a,k) \rho\Delta\ , \end{equation} in combination with $\mu$. As shown in~\cite{Zhao:2008bn,Pogosian:2010tj,Bean:2010zq,Hojjati:2011ix}, once the two functions are given, one has a consistent set of equations that can be incorporated~\cite{Zhao:2008bn,Hojjati:2011ix,mgcamb,Dossett:2011tn} into standard Boltzmann codes, such as CAMB~\cite{Lewis:1999bs}, to calculate the observables. In principle, everything that observations can tell us about cosmic structure on linear scales can be stored as a measurement of $\mu$ and $\gamma$ and, if necessary, projected onto constraints on specific models. But what form should one adopt for these functions to fit them to data? In \cite{Zhao:2009fn,Hojjati:2011xd,Hall:2012wd}, a principal component analysis (PCA) was performed to forecast the best constrained eigenmodes of $\mu(a,k)$ and $\gamma(a,k)$ for different future surveys, finding that they will measure amplitudes of tens (if not hundreds) of them with good accuracy. This is encouraging, but it is not clear how many of these constrainable eigenmodes are physically interesting. PCA alone does not really answer the question of what parameters one should be fitting to data. Another concern, which is the main motivation for this work, is that an arbitrary relation between two quantities in Fourier space, such as those in Eq.~(\ref{eq:mugamma}), does not, in general, imply a local relation between them or their derivatives in real space. Clearly, the $k$-dependence of $\mu(a,k)$ and $\gamma(a,k)$ cannot be completely arbitrary if equations of motion are obtained from variational principle. In this work, we investigate the physically acceptable forms of $\mu(a,k)$ and $\gamma(a,k)$ based on considerations of locality and general covariance. We show that under rather general conditions, and under the quasi-static approximation (QSA), they should always have a form of ratios of polynomials in $k$. Furthermore, the numerator of $\mu$ is set by the denominator of $\gamma$. The coefficients inside the polynomials are functions of the background quantities and can be expected to be slowly varying functions. Technically, the number of these time-dependent coefficients is infinite if one allows for a completely arbitrary modification of GR. However, in models with purely scalar extra degrees of freedom, the polynomials are even in $k$ and, furthermore, in many viable models considered so far in the literature, the polynomials are even and of second degree, hence the number of time-dependent coefficients is reduced to five. 
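To make this counting concrete, the following is a schematic sketch (ours; the explicit construction and normalization are derived later in the paper, so the particular labelling of the five functions below is only an assumed convention): $\gamma$ and $\mu$ written as ratios of even, second-degree polynomials in $k$, with the numerator of $\mu$ equal to the denominator of $\gamma$.
\begin{verbatim}
# Illustrative parametrization with five functions of time p1(a)...p5(a),
# evaluated here at a fixed a (so p1..p5 are numbers).  The shared polynomial
# is normalized to have a unit constant term -- an assumed convention.
def gamma(k, p1, p2, p3):
    # gamma(a,k) = (p1 + p2 k^2) / (1 + p3 k^2)
    return (p1 + p2 * k**2) / (1.0 + p3 * k**2)

def mu(k, p3, p4, p5):
    # mu(a,k) = (1 + p3 k^2) / (p4 + p5 k^2); numerator = denominator of gamma
    return (1.0 + p3 * k**2) / (p4 + p5 * k**2)

# e.g. p1 = p4 = 1 and p2 = p3 = p5 = 0 recovers the LCDM limit mu = gamma = 1
\end{verbatim}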
While this parametrization is motivated by the QSA, it allows for departures from LCDM on near-horizon scales. Baker \etal. \cite{Baker:2012zs} have recently investigated the form of exact equations of motion for a large variety of modified gravity models. They also noted that, under the QSA, the equations reduce to algebraic relations with even powers of $k$ and proposed constraining the time dependence of the background-dependent coefficients. Instead, we consider coefficients appearing in $\mu$ and $\gamma$, which reduces the number of free functions in most interesting cases to five and makes it easy to use them in existing modified Boltzmann codes, such as MGCAMB \cite{Zhao:2008bn,Hojjati:2011ix,mgcamb}. Amendola \etal. \cite{Amendola:2012ky} recently adopted an equivalent five-function parametrization to investigate limits of observability of modified gravity on linear scales. Their choice was motivated by results of De Felice \etal. \cite{DeFelice:2011hq}, who calculated $\mu$ and $\gamma$ in the QSA for the Horndeski \cite{Horndeski:1974wa} class of the most general second-order scalar-tensor theories. We arrive at the same form as a particular case of a more general and much simpler derivation. Working with five arbitrary functions of time may seem like a daunting task, but it is much easier than constraining two functions of scale and time. Furthermore, the functions are known to be slowly varying, which can be used as a strong theoretical prior. We outline the practical application to forecasts and fits in Sec.~\ref{sec:app}. Our parametrization is useful if one wants to look for departures from LCDM without assuming a particular model. Clearly, the number of functions can be smaller if one restricts the range of possibilities. For instance, to describe linear perturbations in Brans-Dicke models it is sufficient to provide only two functions of the background~\cite{Brax:2011aw,Brax:2012gr}. The remainder of the paper is organized as follows. In Sec.~\ref{sec:general}, we show that, under the QSA, $\mu$ and $\gamma$ are ratios of polynomials in $k$, and the numerator of $\mu$ is given by the denominator of $\gamma$. In Sec.~\ref{sec:viable} we point out that for a very broad class of viable modified gravity models, the polynomials are even and second order in $k$ and, therefore, one needs to specify only five functions of time. In Sec.~\ref{sec:QS}, we examine the assumptions made by the QSA and discuss the extent of their applicability. In Sec.~\ref{sec:app}, we outline the procedure for reconstructing $\mu$ and $\gamma$ from data using a smoothness prior applied to the five functions. We conclude with a discussion in Sec.~\ref{sec:summary}. \section{$\mu$ and $\gamma$ in modified gravity models} \label{sec:mugamma} \subsection{The most general case} \label{sec:general} Consider a broad class of theories in $(3+1)$ dimensions with the action defined in terms of a Lagrangian density that contains an arbitrary function of geometric invariants $R$, $R_{\alpha\beta} R^{\alpha\beta}$, $R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}$, $\Delta R$, $R^{\alpha\beta}\nabla_\alpha\nabla_\beta R,\dotsc$ as well as any number of scalar degrees of freedom $\phi_i$, $i=1,\dotsc,N$ (including the longitudinal components of vector or tensor degrees of freedom), which can be non-minimally coupled to the metric and each other. This embraces dark energy models as well as modified gravity theories, including effective $(3+1)$ dimensional descriptions of higher dimensional theories.
For the moment, let us not worry about the existence of ghosts or other unphysical properties that such theories may have. At this point, we only require invariance of the action under general coordinate transformations. Let us also make an important assumption that there exists a frame in which all particles are minimally coupled to the metric, so that the matter stress-energy is covariantly conserved. For simplicity, we neglect radiation and the differences between CDM and baryons. Let us now consider the form of equations for linear scalar perturbations in the Newtonian gauge, where the relevant degrees of freedom of the metric sector are the potentials $\Phi$ and $\Psi$ defined as \begin{equation} ds^2=-(1+2\Psi)a^2d\tau^2+(1-2\Phi)a^2d{\bf x}^2 \ . \end{equation} Varying the action with respect to the metric tensor gives four Einstein equations. The time-time and the time-space components can be combined to form the Poisson equation, which, to linear order in the perturbations in Fourier space, will have the following general form: \begin{equation} {\hat A} \Psi + {\hat B} \Phi + {\hat C}^i \delta \phi_i = -4 \pi G a^2 \rho \Delta \ , \label{eq:poisson} \end{equation} where we assume summation over repeated indices and where ${\hat A}$, ${\hat B}$ and ${\hat C}^i$ are linear operators that contain functions of the background, time derivatives and/or powers of $k$. For instance, for any local theory of gravity, one generally has \begin{equation} {\hat A} = \sum_{n,m}a_{nm}k^{n}\partial^m_0 \ , \label{eq:ahat} \end{equation} where $\partial^m_0$ denotes the time ($\tau$) derivative of $m$th order, the highest values of $n$ and $m$ are determined by the order of metric derivatives contained in the action, and the coefficients $a_{nm}$ are functions of time. Note, that in models with only scalar extra degrees of freedom, ${\hat A}$ will contain only {\it even} powers of $k$. This is because odd powers can only come from a contraction of spatial derivatives of perturbations with spatial indices of the background dependent coefficients that should vanish for isotropic backgrounds such as FRW. Operators ${\hat B}$ and ${\hat C}^i$ have the same form with corresponding coefficients $b_{nm}$ and $c_{nm}^i$. Similarly, the traceless space-space Einstein equation can generally be written as \begin{equation} {\hat D} \Psi + {\hat E} \Phi + {\hat F}^i \delta \phi_i = 0 \ , \label{eq:ij} \end{equation} where the zero on the right hand side is due to vanishing of the matter anisotropic stress, and the operators ${\hat D}$, ${\hat E}$ and ${\hat F}^i$ have the same form as ${\hat A}$ in Eq.~(\ref{eq:ahat}). In addition, varying the action with respect to each scalar field $\phi_i$, and linearizing in the perturbations, will provide equations for the corresponding perturbation $\delta\phi_i$, that can generally be written as \begin{equation} {\hat H}_i \Psi + {\hat K}_i \Phi + {\hat L}_i^{\ j} \delta \phi_j = 0 \ , \label{eq:phi} \end{equation} where the operators ${\hat H}_i$, ${\hat K}_i$ and ${\hat L}_i^{\ j}$ also have the form given by Eq.~(\ref{eq:ahat}), with correspondingly renamed coefficients. Our aim is to find the form of the functions $\mu(a,k)$ and $\gamma(a,k)$ defined in Eq.~(\ref{eq:mugamma}). Because of the time-derivatives in Eqs.~(\ref{eq:poisson}), (\ref{eq:ij}) and (\ref{eq:phi}), it is impossible to write $\mu(a,k)$ and $\gamma(a,k)$ in a closed form without solving for the evolution of the perturbations first. 
To make progress, let us take the quasi static approximation (QSA) in which we neglect all time derivatives of $\Phi$, $\Psi$ and $\delta \phi_i$, and delegate justifying this approximation to Section \ref{sec:QS}. In the QSA, the operators ${\hat A}$, $\hat{B}$, $\hat{C}^i$, $\hat{D}$, $\hat{E}$, $\hat{F}^i$, $\hat{H}_i$, $\hat{K}_i$, $\hat{L}_i^{\ j}$ become functions, specifically polynomials in $k$; we indicate them with the same letters, removing the hats, \eg, we have \begin{align} A = \sum_{n}a_{n0}k^{n} \ \label{} \end{align} and similarly for the other functions. As a result, Eqs~(\ref{eq:poisson}),~(\ref{eq:ij}) and~(\ref{eq:phi}) reduce to a system of linear algebraic equations which we can use to extract $\mu$ and $\gamma$. Defining $R_i$ via \begin{equation} \delta \phi_i = R_i \Psi \end{equation} and substituting it, along with $\Phi=\gamma \Psi$, into Eqs.~(\ref{eq:ij}) and (\ref{eq:phi}), we find \begin{eqnarray} \label{eq:gamma-R1} D + E \gamma + F^i R_i &=& 0 \ , \\ H_i + K_i \gamma + L_i^{\ j} R_j &=& 0 \ . \label{eq:gamma-R2} \end{eqnarray} It is convenient to write the solution of these equations in the matrix form, \begin{align} \begin{bmatrix} \gamma \\ R \end{bmatrix} = - \begin{bmatrix} E & F \\ K & L \end{bmatrix} ^{-1} \begin{bmatrix} D \\ H \end{bmatrix} , \label{} \end{align} where we introduced a row vector $F$, a column vector $K$ and a square matrix $L$. We can express the inverse of any matrix $M$ as the ratio of its classical adjoint to its determinant, $M^{-1}=\adj{M}/\det{M}$. After some algebra we obtain \begin{alignat}{2} &\det \begin{bmatrix} E & F \\ K & L \end{bmatrix} &&=(E-FL^{-1}K)\det{L}\nonumber\\&&&=E\det{(L-KE^{-1}F)}\ , \label{det}\\ &\adj \begin{bmatrix} E & F \\ K & L \end{bmatrix} &&= \begin{bmatrix} \det{L} & -F\adj{L} \\ -(\adj{L})K & E\adj{(L-KE^{-1}F)} \end{bmatrix} \ . \label{adj} \end{alignat} Since $D$, $E$, $F$, $H$, $K$, $L$ are polynomials, the quantities in \eqref{det} and \eqref{adj} are polynomials as well, and, consequently, $\gamma$ and $R$ are fractions of polynomials. Furthermore, in their irreducible form, the denominators of $\gamma$ and $R$ are the same, \begin{equation}\label{gamma_and_R_pol} \gamma=\frac{N_\gamma}{Q} \ ,\,\,\,\, R=\frac{N_R}{Q} \end{equation} where \begin{align} N_\gamma&=-D\det{L}+F(\adj{L})H\ ,\\ N_R&=D(\adj{L})K-E\,{\rm adj}\,(L-KE^{-1}F)H\ ,\\ Q&=E \det{(L-K E^{-1}F)} \ . \label{} \end{align} Also, since the Poisson equation in the QSA has the form \begin{equation} A + B \gamma + C^i R_i = {-4 \pi G a^2 \rho \Delta \over \Psi} = \frac{k^2}{\mu}\ , \end{equation} it follows that $\mu$ can be written as an irreducible fraction of polynomials with its numerator uniquely determined by the denominator of $\gamma$ and $R$, \ie \begin{equation}\label{mu_pol} \mu=\frac{k^2 Q}{AQ+B\,N_\gamma+CN_R}\ . \end{equation} As mentioned earlier, in models with pure scalar degrees of freedom, the polynomial functions will contain only even powers of $k$. Furthermore, the $k^0$ terms in the denominator of $\mu$ are negligible in the QSA because the magnitude of the corresponding coefficients in the Poisson equation is dependent either on $H$, time-derivatives of $\phi_i$ or the first derivative of the scalar field potential(s), all of which are small in the QSA. This means that the $k^2$ factors in the numerator and denominator of $\mu$ will cancel. 
Thus, starting from a general covariant action that contains an arbitrary function of geometric invariants and any number of scalar degrees of freedom (DOF), we derived in a model-independent way that, under the QSA, the functions $\gamma$ and $\mu$ are ratios of polynomials in $k$, with the numerator of $\mu$ equal to the denominator of $\gamma$. \subsection{The subset of viable models} \label{sec:viable} A parametrization that anticipates a completely arbitrary modification of gravity is impractical, as one cannot fit an infinite number of unknown functions to data. Furthermore, there are good theoretical reasons not to allow for arbitrarily high-order derivatives or tensorial modes in the equations of motion because of the appearance of ghost degrees of freedom. Let us therefore consider a more representative class of viable modified gravities described by a Lagrangian that contains only one scalar DOF obeying second-order equations of motion. In this case, Eqs.~(\ref{eq:poisson}), (\ref{eq:ij}) and (\ref{eq:phi}) in the QSA reduce to: \begin{eqnarray} &&A k^2 \Psi + B k^2 \Phi + C k^2 \delta \phi = -4 \pi G a^2 \rho \Delta\ , \\ &&D \Psi + E \Phi + F \delta \phi = 0\ , \\ &&H k^2 \Psi + K k^2 \Phi + (L_0 + L_1 k^2) \delta \phi = 0 \ , \end{eqnarray} where we have made the $k$ dependence explicit, so that $A$, $B$, $\dotsc$, $L_1$ are time-dependent coefficients, and we include the $L_0$ coefficient which represents the mass squared of the scalar field. Following the same steps as in Sec.~\ref{sec:general} we find in this case that $\mu$ and $\gamma$ are ratios of {\em even} polynomials of {\em second degree} and, as in the general case, the numerator of $\mu$ is the same as the denominator of $\gamma$, i.e. \begin{eqnarray}\label{gamma_pol_2nd} \gamma&=&\frac{-DL_0+\left(FH-DL_1\right)k^2}{EL_0+\left(EL_1-KF\right)k^2}\ ,\\ \label{mu_pol_2nd} \mu&=&\left[EL_0+(EL_1-KF)k^2\right]\Bigl\{(AE-BD)L_0\nonumber\\ && \quad\quad +\bigl[F(BH -AK)+C(KD-EH)\nonumber\\ && \quad\quad +(AE-BD)L_1\bigr]k^2\Bigr\}^{-1}. \end{eqnarray} The above expressions have the same forms as analogous expressions derived in~\cite{DeFelice:2011hq} for general Horndeski theories \cite{Horndeski:1974wa}. Indeed, although we arrived at~(\ref{gamma_pol_2nd}) and (\ref{mu_pol_2nd}) from general arguments, the subset of viable models to which we are restricting coincides with the models included in the Horndeski class, which contains most of the viable theories of dark energy and modified gravity. The class of theories with a single scalar DOF with a second-order equation of motion includes models with actions that contain a function $f(R,\mathcal{G})$ of the Ricci scalar and the Gauss-Bonnet term, provided the determinant of the Hessian is zero, i.e. $f_{RR}f_{\mathcal{G}\mathcal{G}}-f^2_{R\mathcal{G}}=0$. Restricting to the Lovelock invariants~\cite{Lovelock:1971yv} $R$ and $\mathcal{G}$ guarantees that no spurious spin-2 ghosts are introduced~\cite{DeFelice:2010aj}, while having a null determinant of the Hessian further ensures that superluminal modes for scalar perturbations are avoided~\cite{DeFelice:2010hb}. Finally, Horndeski theories also include dark energy models such as quintessence and k-essence, as well as the covariant Galileon and the 4-dimensional effective DGP model in the decoupling limit~\cite{Nicolis:2008in}.
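As a cross-check of the algebra, the following minimal symbolic sketch (ours, not part of the paper; written in Python with sympy, using the symbols $A,B,\dotsc,L_1$ for the time-dependent coefficients introduced above) solves the quasi-static system and confirms the forms of $\gamma$ and $\mu$ in Eqs.~(\ref{gamma_pol_2nd}) and (\ref{mu_pol_2nd}):
\begin{verbatim}
import sympy as sp

# time-dependent coefficients of the single-scalar quasi-static system
A, B, C, D, E, F, H, K, L0, L1 = sp.symbols('A B C D E F H K L0 L1')
k = sp.symbols('k', positive=True)
gamma, R = sp.symbols('gamma R')   # gamma = Phi/Psi,  R = delta_phi/Psi

# traceless ij equation and scalar-field equation, divided through by Psi
eq_ij  = sp.Eq(D + E*gamma + F*R, 0)
eq_phi = sp.Eq(H*k**2 + K*k**2*gamma + (L0 + L1*k**2)*R, 0)
sol = sp.solve([eq_ij, eq_phi], [gamma, R], dict=True)[0]

gamma_expected = (-D*L0 + (F*H - D*L1)*k**2) / (E*L0 + (E*L1 - K*F)*k**2)
assert sp.cancel(sol[gamma] - gamma_expected) == 0

# Poisson equation: k^2/mu = k^2 (A + B*gamma + C*R)  =>  mu = 1/(A + B*gamma + C*R)
mu = sp.cancel(1/(A + B*sol[gamma] + C*sol[R]))
mu_expected = (E*L0 + (E*L1 - K*F)*k**2) / (
    (A*E - B*D)*L0
    + (F*(B*H - A*K) + C*(K*D - E*H) + (A*E - B*D)*L1)*k**2)
assert sp.cancel(mu - mu_expected) == 0
print("gamma and mu reproduce the ratios of even second-degree polynomials above")
\end{verbatim}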
Without loss of generality, we can rewrite~(\ref{gamma_pol_2nd}) and (\ref{mu_pol_2nd}) in a more compact way by introducing $5$ functions of the background $\{p_i(a)\}$: \begin{eqnarray} \label{eq:gamma} \gamma &=& {p_1(a)+p_2(a) k^2 \over 1+p_3(a) k^2}\ , \\ \mu &=& {1 + p_3(a) k^2 \over p_4(a) + p_5(a) k^2} \ . \label{eq:mu} \end{eqnarray} Thus, in the QSA, one can express the perturbed equations of motion of a very large class of viable modified gravity models in terms of only 5 functions of time. Note that even though this ansatz was derived using the QSA, it allows for near- and super-horizon modifications of gravity: $\gamma(a,k\rightarrow 0)=p_1(a) \ne 1$. Also note that, while $\mu$ can also deviate from unity on super-horizon scales ($\mu \rightarrow p_4^{-1}(a)$), this should not affect any of the observables and the super-horizon perturbations will evolve consistently with the background expansion~\cite{Pogosian:2010tj}. Analogous compact forms (\ref{eq:gamma}) and (\ref{eq:mu}) were used in~\cite{Amendola:2012ky}, based on results in \cite{DeFelice:2011hq}, to discuss prospects of constraining Horndeski theories~\cite{Horndeski:1974wa}. We arrive at the same forms starting from simpler and more general arguments that do not require considering the details of the Horndeski action. Finally, the form of $\gamma$ and $\mu$ in~(\ref{eq:gamma}) and (\ref{eq:mu}) resembles the parametrization introduced by Bertschinger and Zukin (BZ) in~\cite{Bertschinger:2008zb}, however, there are some important differences. In the BZ parametrization, $\mu$ and $\gamma$ tend to $1$ for $k\rightarrow 0$, effectively reducing the theory to GR in that limit. Furthermore, BZ does not set the numerator of $\mu$ equal to the denominator of $\gamma$, which we find instead to be a general feature. Finally, they fix the time dependence of the coefficients in the $k^2$ expansion to a power law, while we leave it general. From a theoretical point of view, not all time dependencies can be described as power laws; however, the differences might be undetectable depending, in part, on the range of redshifts probed by the experiment. \section{The quasi-static approximation} \label{sec:QS} As mentioned earlier, closed form expressions for $\mu(a,k)$ and $\gamma(a,k)$ may not exist in a general gravity theory unless one adopts the QSA, in which the relations between the metric potentials and the matter perturbation become algebraic. Without the QSA, one needs to solve the differential equations in order to determine $\mu$ and $\gamma$, making them dependent on the initial conditions. There are {\it two} different assumptions involved in what is commonly known as the QSA: (1) the relative smallness of the time derivatives of metric perturbations compared to their space derivatives, and (2) the sub-horizon approximation, $k/aH \gg 1$. In LCDM, the second assumption automatically implies the first --- the perturbed quantities evolve on time scales comparable to the expansion rate, thus their time derivatives become comparable to space derivatives only for perturbations on near-Hubble scales. In alternative gravity models with additional degrees of freedom, the two assumptions need not imply each other, so let us separately consider their effects on the applicability of our parametrization. \subsection{Neglecting time-derivatives} In scalar-tensor models of modified gravity, there can be rapid oscillations of the metric potentials on top of their slow evolution which can make their time-derivatives large. 
A general solution typically includes a homogeneous oscillatory mode as well as a particular solution, which can also oscillate, induced by the coupling to matter. The initial amplitude of the former is a free parameter that typically needs to be fine-tuned to a small value in order to have a consistent cosmology at early times and avoid problems such as particle overproduction~\cite{Starobinsky:2007hu}. Motivated by this, we propose a theoretical prior in which the amplitude of the homogeneous mode is very small initially. Subsequently, it is controlled by functions of the background and can grow only slowly, never becoming large enough to affect observables. The amplitude of the oscillations in the particular solution is not a free parameter, but in general the terms that set the amplitude and the frequency of such oscillations are proportional to the strength of the coupling and the range of the extra scalar degree of freedom. Because of the generally tight constraints on fifth forces, one typically finds that the oscillations are undetectable. To illustrate the point, let us consider models in which the Einstein-Hilbert part of the action is given by $R+f(R)$, and the Poisson equation has the form~\cite{Pogosian:2007sw} \begin{eqnarray} \label{Poisson_Psi_f(R)} k^2\Psi&+&k^2\frac{\delta f_R}{2F}+\frac{3}{2}\left[\left(\dot{{\cal H}}-{\cal H}^2\right)\frac{\delta f_R}{F}+\left(\dot{\Phi}+{\cal H}\Psi\right)\frac{\dot{F}}{F}\right]\nonumber\\ &&=-4\pi Ga^2\frac{\rho}{F}\Delta \,, \end{eqnarray} where $f_R\equiv df/dR \equiv F-1$ is the ``scalaron'' degree of freedom, ${\cal H}=a^{-1}da/d\tau$, and the non-quasi-static terms are collected inside the square brackets. On sub-horizon scales (when $k \gg aH$), the first term inside the square brackets is negligible compared to the $k^2 \delta f_R$ term. But on large scales it is {\it still} smaller than other terms because $\delta f_R=f_{RR}\delta R$, and $f_{RR}$ must be small. The latter is required for the Chameleon mechanism~\cite{Khoury:2003aq} to screen the fifth force inside our solar system --- the values of $f_R$ and $f_{RR}$ must be small~\cite{Hu:2007nk}. The second term inside the square brackets is large only if $\dot{\Phi}$ is large. However, the evolution of $\Phi$ in the heavy scalaron (small $f_{RR}$) limit is practically the same as in GR, except for additional oscillations with a tiny amplitude set by $f_{RR}$~\cite{Hojjati:2012rf}. Even if the oscillations had a larger amplitude, they would be difficult to detect because of their high frequency set by $f^{-1}_{RR}$. Furthermore, there are no oscillations in the lensing potential $\Phi+\Psi$, hence there can be no signal in the Integrated Sachs-Wolfe effect that constrains the near- and super-horizon evolution of perturbations. We are not aware of a theory in which oscillations in extra DOFs are observable for the range of parameters that has not already been ruled out. Thus, it is reasonable to {\it adopt a theoretical prior} that ignores rapid time-variations of gravitational degrees of freedom {\it until} we find an example of a viable theory in which they are observable. Finding such an example may warrant an appropriate extension of Eqs.~(\ref{eq:gamma}) and (\ref{eq:mu}). \subsection{The sub-horizon approximation} By ignoring the time derivatives in the modified equations we are neglecting not just the rapid oscillations in metric perturbations but also the slowly varying signatures of modified gravity.
This, in the absence of additional information about the model, can only be justified in the $k/aH \gg 1$ limit. Before addressing the significance of near-horizon modifications of gravity, let us make an important point: the {\it implementation} of our parametrization in the equations of motion does not assume the QSA. In the LCDM limit, when $\mu=\gamma=1$, we recover the {\em exact} equations of GR, while the parametrization allows for departures from $\mu=\gamma=1$ on all scales. Also, we do not suggest that one should ignore the relativistic effects when calculating the observables~\cite{Yoo:2009au,Yoo:2010ni,Challinor:2011bk,Bonvin:2011bg,Bruni:2011ta,Jeong:2011as,Yoo:2013tc}. The implementation of near-horizon and other relativistic effects~\cite{Challinor:2011bk} in Boltzmann codes like CAMB is unaffected by the use of the $(\gamma,\mu)$ parametrization --- only the Einstein equations are modified, while the Boltzmann equations and the expressions for the observable quantities remain the same as before. The two relevant questions about the validity of taking the QSA limit in deriving the form of our parametrization are: (1) how observable can departures from the LCDM prediction be on near-horizon scales in viable modified gravity models? and (2) to what extent would our parametrization bias such a potentially observable signature? As far as we know, there is no example of a theoretically motivated model that is not ruled out and in which departures from LCDM on near-horizon scales have been shown to be detectable~\cite{Lombriser:2013aj}, although there is clearly more room for investigation. Part of the reason is that cosmic variance limits the statistical significance of any inference on large scales. The multi-tracer technique proposed in~\cite{Seljak:2008xr,Seljak:2009af,Yoo:2012se} can remove this limitation to some extent. However, in order for the modified gravity signal on large scales to be detectable, it has to be sufficiently pronounced while still keeping the model in agreement with other constraints. While one can design specific models providing such an example~\cite{Lombriser:2013aj}, they cannot be considered representative. The second question --- the extent to which our parametrization would bias a potentially important signature on near-horizon scales --- can only be answered by considering particular solutions of specific models. But first one has to find a viable model in which such signatures are observable at all. A conservative way to use the parametrization in Eqs.~(\ref{eq:gamma}) and (\ref{eq:mu}) would be to separately fit to a subset of data corresponding to clustering on sub-horizon scales. Then, if a departure from LCDM is seen, one would have a clear idea about which scales contribute the most to the signal, and whether it is appropriate to interpret it under the QSA. \section{Practical application to forecasts, constraints and reconstructions} \label{sec:app} It is impossible to constrain a function without assuming something about its form. One possibility is to pick a particular functional form, such as a power-law dependence, similar to how it is done in the BZ parametrization~\cite{Bertschinger:2008zb}. Since the five functions in Eqs.~(\ref{eq:gamma}) and (\ref{eq:mu}) are expected to be slowly varying, this need not be a bad approximation if the data only probes a limited range of redshifts.
But it is unlikely that a single power law will capture the evolution over a wide range of epochs, which is what we can expect from surveys like Euclid, SKA or LSST in combination with CMB and other data. Let us instead explore a way to constrain $\mu$ and $\gamma$ in a non-parametric way that still takes into account their smoothness. Recent Refs. \cite{Crittenden:2011aa,Zhao:2012aw} proposed a transparent Bayesian framework for constraining the dark energy equation of state $w(a)$ based on adopting an explicit smoothness prior. The prior is defined via a correlation function that correlates values of $w$ at neighbouring points in $a$. This framework can be applied to any unknown function (or functions) expected to be smooth from theoretical considerations. Let us outline how it can be applied to $\{p_i(a)\}$ in Eqs.~(\ref{eq:gamma}) and (\ref{eq:mu}) for the purpose of forecasting future constraints on $\gamma$ and $\mu$, as well as for fitting to real data. \subsection{Application to forecasting} As a starting point, one can discretize the functions into finite numbers of bins. The binning can be implemented using a smooth function, such as a hyperbolic tangent, to avoid infinite derivatives at the edges, and the number of bins can always be taken to be sufficiently large to achieve convergence\footnote{One of the advantages of the smoothness prior approach of~\cite{Crittenden:2005wj,Crittenden:2011aa,Zhao:2012aw} is that it eliminates the dependence on binning.}. The binned values of $\{p_i(a)\}$ can be substituted into Eqs.~(\ref{eq:gamma}) and (\ref{eq:mu}) to find $\mu$ and $\gamma$ which are used as input in a modified Boltzmann code such as MGCAMB~\cite{mgcamb}. Along with providing other cosmological parameters, this is sufficient for calculating all types of cosmological observables. In the simplest approach to forecasting, one assumes a Gaussian shape of the parameter likelihood surface with a peak corresponding to a fiducial model, and proceeds to calculate the Fisher matrix from derivatives of observables with respect to model parameters. One then inverts it to obtain an estimate of the total covariance matrix. Because of the large number of highly correlated parameters, considering a constraint on any single bin is meaningless. One can instead use the principal component analysis (PCA)~\cite{Huterer:2002hy} to see which independent linear combinations of bins will be best constrained by a given experiment. This is accomplished by diagonalizing the corner of the covariance matrix corresponding to the bins of the five functions. What can one do with the information obtained from the PCA forecast for $\{p_i(a)\}$? There will be a strong degeneracy between the five functions, which means one should look at independent linear combinations of bins of {\it all five} functions. The number of such well-constrained {\it combined} eigenmodes should give us a measure of how many physically relevant independent degrees of freedom one can measure about $\mu$ and $\gamma$. In the absence of a theoretical prior, all eigenmodes of $\{p_i(a)\}$ carry some information. How should one decide which modes are informative and which are not? This is where one can use the knowledge about the slowly varying nature of the functions and introduce a smoothness prior~\cite{Crittenden:2005wj,Crittenden:2011aa,Zhao:2012aw}. 
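Before turning to the prior itself, here is a minimal sketch of the smooth (tanh) binning mentioned above; it is only illustrative, and the function name, bin edges, values and smoothing width $s$ are ours rather than anything prescribed in the text:
\begin{verbatim}
import numpy as np

def tanh_binned(a, edges, values, s=0.02):
    """Smooth (tanh) version of a piecewise-constant function of the scale factor.

    edges  : array of length N+1 with bin edges a_1 < ... < a_{N+1}
    values : array of length N with the value of the function in each bin
    s      : smoothing width controlling how sharply neighbouring bins are joined
    """
    a = np.asarray(a, dtype=float)
    p = np.full_like(a, values[0])
    # add a smoothed step at each interior bin edge
    for j in range(1, len(values)):
        p += 0.5 * (values[j] - values[j - 1]) * (1.0 + np.tanh((a - edges[j]) / s))
    return p

# example: one of the p_i(a) binned into 4 bins between a = 0.2 and a = 1
edges  = np.linspace(0.2, 1.0, 5)
p1_bin = np.array([1.00, 1.02, 1.05, 1.03])   # illustrative numbers only
a_grid = np.linspace(0.2, 1.0, 200)
p1_smooth = tanh_binned(a_grid, edges, p1_bin)
\end{verbatim}
Any similar smooth interpolation would serve the same purpose.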
The prior comes in the form of a non-diagonal matrix $C^{\rm prior}$, the inverse of which one adds to the inverse of the original covariance matrix, and which introduces additional correlations between bins of the five functions. One then considers an eigenmode to be informative if it is unaffected by the prior, \ie if it is the same before and after the addition of the prior covariance. To be more explicit, let us assume that each function $p_i(a)$ is binned into $N$ bins in the scale factor, $a_1,\dotsc,a_N$, and label the bins so that they form a single vector ${\tilde p}_\alpha$ with $\alpha=1,\dotsc,5N$. For example, one can define \begin{align} {\tilde p}_1=p_1(a_1), \ {\tilde p}_2=p_1(a_2),\dotsc,{\tilde p}_{N}=p_1(a_N),\nonumber\\ \ {\tilde p}_{N+1}=p_2(a_1),\dotsc,{\tilde p}_{5N}=p_5(a_N). \label{} \end{align} Let $C^{\rm data}_{\alpha \beta}$ be the corner of the covariance matrix corresponding to $\{{\tilde p}_\alpha \}$ that we have obtained earlier by inverting the total Fisher matrix. According to Bayes' theorem, the posterior probability distribution for the parameters $\{{\tilde p}_\alpha \}$ is a product of the likelihood and the prior probability. For Gaussian probabilities, this implies that the net covariance matrix (i.e. corresponding to the posterior probability) is the inverse of the sum of $\left[C^{\rm data}\right]^{-1}$ and $\left[C^{\rm prior}\right]^{-1}$. To construct the latter, one can follow the prescription in~\cite{Crittenden:2005wj,Crittenden:2011aa} and start by specifying a correlation function \begin{equation} \langle (p_i(a) - p^{\rm lcdm}_i) (p_i(a') - p^{\rm lcdm}_i) \rangle \equiv \xi^{(i)}(|a-a'|) \end{equation} where $p^{\rm lcdm}_i$ are the constant values of $p_i(a)$ in LCDM ($p_1=p_4=1$ and $p_2=p_3=p_5=0$), and where the form of the functions $\xi^{(i)}(|a-a'|)$ is chosen so that \begin{eqnarray} \nonumber \xi^{(i)}(\Delta a) \rightarrow \xi^{(i)}(0) \ &{\rm when}& \ \ \Delta a \equiv |a-a'| \ll a_c \\ \xi^{(i)}(\Delta a) \rightarrow 0 \ &{\rm when}& \ \ \Delta a \gg a_c \ , \nonumber \end{eqnarray} where $a_c$ is the correlation length and $\xi^{(i)}(0)$ is a positive constant\footnote{In certain cases, it may be more appropriate to specify the correlation between points in $\log a$ rather than in $a$.}. One has to specify the functional form of $\xi^{(i)}(\Delta a)$ and we refer the reader to~\cite{Crittenden:2011aa} for an extended discussion of different choices. The choice does not make a big difference in practice. Using $\xi^{(i)}(\Delta a)$, one can calculate the prior covariance matrix for the binned $p_i(a_j)$ via \begin{equation} C^{(i)}_{jk} = {1\over (\delta a)^2} \int_{a_j}^{a_j+\delta a} da \int_{a_k}^{a_k+\delta a} da' \xi^{(i)}(|a-a'|) \ , \end{equation} where $\delta a$ is the width of a bin in $a$. One can define such an $N\times N$ matrix for each of the five functions and combine them to form a block diagonal $5N \times 5N$ matrix $C^{\rm prior}$ for the parameters ${\tilde p}_\alpha$ such that \begin{equation} C^{\rm prior}= {\rm diag}[C^{(1)},\dotsc,C^{(5)} ] \ . \end{equation} This prior assumes that the five functions are independent of each other, which is true in the most general case but not in many specific models. For example, in $f(R)$ only one function is independent, while in more general Brans-Dicke models there are two. Such additional restrictions can be implemented, if desired, by adjusting the form of $C^{\rm prior}$.
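To make the construction concrete, the sketch below is ours and purely illustrative: the particular correlation function (chosen only to have the limits quoted above), the numerical values, and the randomly generated stand-in for $C^{\rm data}$ are assumptions, not quantities from the text. It assembles the block-diagonal $C^{\rm prior}$ and combines it with a data covariance into the posterior covariance:
\begin{verbatim}
import numpy as np

N, nfunc = 20, 5                   # bins per function, number of functions p_i
a_edges = np.linspace(0.2, 1.0, N + 1)
da = a_edges[1] - a_edges[0]
a_c, xi0 = 0.2, 0.01               # correlation length and xi(0); illustrative

def xi(delta_a):
    # a simple choice with the required limits: xi -> xi0 for |a-a'| << a_c
    # and xi -> 0 for |a-a'| >> a_c
    return xi0 / (1.0 + (delta_a / a_c) ** 2)

# C^(i)_{jk} = (1/da^2) int_{a_j}^{a_j+da} da int_{a_k}^{a_k+da} da' xi(|a-a'|),
# evaluated here with a simple midpoint rule on a sub-grid of each bin
def block_C(nsub=8):
    sub = (np.arange(nsub) + 0.5) / nsub * da
    C = np.zeros((N, N))
    for j in range(N):
        for kbin in range(N):
            aj = a_edges[j] + sub[:, None]
            ak = a_edges[kbin] + sub[None, :]
            C[j, kbin] = xi(np.abs(aj - ak)).mean()
    return C

C_block = block_C()
# block-diagonal prior for the 5N parameters (assumes independent p_i)
C_prior = np.kron(np.eye(nfunc), C_block)

# combine with a (here randomly generated, positive-definite) stand-in for C_data
rng = np.random.default_rng(0)
Mrand = rng.standard_normal((nfunc * N, nfunc * N))
C_data = Mrand @ Mrand.T / (nfunc * N) + 0.1 * np.eye(nfunc * N)
C_post = np.linalg.inv(np.linalg.inv(C_data) + np.linalg.inv(C_prior))
\end{verbatim}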
Having constructed the prior matrix, one then compares the eigenmodes of $C^{\rm data}$ to the eigenmodes of the inverse of $\left[C^{\rm data}\right]^{-1}+\left[C^{\rm prior}\right]^{-1}$. The eigenmodes that are common to both, \ie those that survive, can be considered to be informative {\it with respect to that prior}. Due to the nature of the prior, one expects that slowly varying eigenmodes that are best constrained by data will have a higher chance to survive, while the high frequency modes will be suppressed. Naturally, the outcome of this comparison depends on the parameters of the prior, $a_c$ and $\xi^{(i)}(0)$. Their choice should, in principle, come from our theoretical prejudice. In practice, they can be tuned so that eigenmodes with variations on time scales comparable to Hubble time (or another time-scale that is theoretically motivated) survive. Thus, a PCA forecast is a key step for tuning the prior that can later be used in fitting to data, as we discuss next. \subsection{Fitting to real data} In a Fisher forecast, there is no limitation on making the number of bins $N$ as large as needed to achieve a convergence of well-constrained eigenmodes to the continuous limit. But this is not the case when fitting to real data, which involves searching for the maximum of a multi-parameter likelihood surface. The search stalls if the parameter space contains flat directions corresponding to nearly degenerate combinations of parameters, which is guaranteed to be the case when the number of bins is large. Fitting only the few best constrained eigenmodes amounts to a rather strong assumption that the amplitudes of the poorly constrained modes are known to be exactly zero, which amounts to adopting a strong, yet somewhat obscure, prior. Instead, in~\cite{Crittenden:2011aa,Zhao:2012aw} it was suggested to use the explicit smoothness prior described in the preceding subsection to aid the convergence of Monte Carlo Markov chains (MCMC). This is achieved in practice by adding a term \begin{equation} \chi^2_{\rm prior}=({\bf {\tilde p}}-{\bf {\tilde p}}^{\rm lcdm})^T [C^{\rm prior}]^{-1} ({\bf {\tilde p}}-{\bf {\tilde p}}^{\rm lcdm}) \end{equation} to the $\chi^2_{\rm data}$ in MCMC. The number of bins that one fits to data need not be very large. One typically needs a couple of bins per effective correlation length set by the prior. It remains to be shown for specific experiments how strong the prior needs to be in order for MCMC to converge. Once MCMC has converged, one can quantify the statistical significance of the detection of a departure from LCDM from the improvement in the $\chi^2_{\rm data}$. One can also compute the evidence for the best fit model and the Bayes' factors, since the prior probability is explicitly known. An explicit illustration of such a calculation for the case of $w(a)$ can be found in~\cite{Zhao:2012aw}. We do not expect the reconstructed shapes of the individual functions $p_i(a)$ to be highly informative because observables will only constrain their combinations and degeneracies between parameters $\{{\tilde p}_\alpha \}$ will make marginalized errors on them large. One could instead use Eqs.~(\ref{eq:gamma}) and (\ref{eq:mu}) to visualize reconstructions of $\mu$ and $\gamma$, or $\mu$ and $\Sigma$, as surfaces in the $(a,k)$ space. 
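Finally, once a set of smooth $p_i(a)$ is in hand, Eqs.~(\ref{eq:gamma}) and (\ref{eq:mu}), together with $\Sigma=\mu(1+\gamma)/2$, give $\mu$, $\gamma$ and $\Sigma$ on any $(a,k)$ grid for visualization. The toy $p_i(a)$ in the sketch below are arbitrary slowly varying choices of ours, not reconstructions:
\begin{verbatim}
import numpy as np

# toy, slowly varying p_i(a); in a real reconstruction these would come
# from the binned/smoothed parameters constrained in the MCMC
p = {
    1: lambda a: 1.0 + 0.05 * (1 - a),    # gamma in the k -> 0 limit
    2: lambda a: 0.02 * (1 - a),          # carries dimensions of k^-2; toy numbers
    3: lambda a: 0.01 * (1 - a),
    4: lambda a: 1.0 - 0.03 * (1 - a),    # 1/mu in the k -> 0 limit
    5: lambda a: 0.01 * (1 - a),
}

a = np.linspace(0.2, 1.0, 100)[:, None]   # scale factor grid
k = np.logspace(-4, -1, 120)[None, :]     # wavenumber grid

gamma = (p[1](a) + p[2](a) * k**2) / (1.0 + p[3](a) * k**2)
mu    = (1.0 + p[3](a) * k**2) / (p[4](a) + p[5](a) * k**2)
Sigma = 0.5 * mu * (1.0 + gamma)          # lensing combination
\end{verbatim}
These arrays are all that is needed to plot the reconstructions as surfaces in the $(a,k)$ plane.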
\section{Summary} \label{sec:summary} In this paper we have motivated a parametric form for the modified growth functions $\mu$ and $\gamma$ that fixes their scale dependence to a ratio of polynomials in $k$ and have shown that generally the denominator of $\gamma$ is equal to the numerator of $\mu$. We arrive at this form by taking the quasi-static approximation (QSA) in the equations for scalar perturbations derived from a covariant action that allows for modifications of gravity and any number of scalar degrees of freedom. We examine the impact of assuming the QSA in our derivation and conclude that, until a viable counterexample is found, the non-quasi-static effects of modified gravity can be assumed to be negligible. We nevertheless note that the final form of our parametrization allows for a detection of some non-quasi-static signatures, but not necessarily for the most general ones in that regime. We further argue that for most of the viable modifications of gravity discussed in the literature, the polynomials in $k$ are even and of second degree, effectively reducing the number of time-dependent coefficients to five. Since these coefficients are functions of the background variables only, they can be safely assumed to be slowly varying. This justifies using an explicit smoothness prior on their shape when fitting them to data, similarly to the Bayesian framework developed for the reconstruction of $w(a)$ in~\cite{Crittenden:2011aa,Zhao:2012aw} . A forecast of future reconstructions of $\mu$ and $\gamma$ based on this approach is currently in progress~\cite{in-progress}. \acknowledgments We benefited from discussions and previous collaborations with Rob Crittenden, Kazuya Koyama and Gong-Bo Zhao, as well as useful interactions with Tessa Baker, Antonio de Felice, Pedro Ferreira, and Constantinos Skordis. AS is supported by the SISSA Excellence Grant and acknowledges the use of DAMTP/SISSA collaboration grant. LP is supported by an NSERC Discovery grant.
\chapter{SUMMARY AND CONCLUSIONS \label{cha:Summary}} The non-supersymmetric space $\mathbb{C}/\mathbb{Z}_d$ is the exemplary model for the study of tachyon condensation in string theory. By studying topological defects between these non-compact orbifolds we have found defects which encode the bulk RG flow that drives the process of tachyon condensation. Besides the bulk RG flow, the algebraic language of the defects found here provides a simplified way to tackle the boundary RG flow. The latter is a topic not yet well understood in string theory in general and our work gives more evidence to the adequacy of defects in this area. The discussion of describing the boundary RG flow in terms of defects was first exploited in \cite{brunner07a} for the superconformal models $\mathcal{M}_{d-2}/\mathbb{Z}_d$ but their treatment needed spacetime supersymmetry to avoid dealing with regularization. Our work shows that one can do away with such an assumption without having to invoke a regularization scheme when working with the non-compact orbifolds. In \cite{hori00a}, Hori and Vafa argued that the twisted chiral sector of LG orbifolds is independent of the concrete superpotential term. Starting from there, one would first expect that in the cases of $\mathbb{C}/\mathbb{Z}_d$ and a LG with superpotential $W=X^d$, the spectrum of the twisted sectors (i.e., the $(a,c)$-rings) of both theories agree. Hence, one can map the perturbations of the two models to each other. The work presented in this dissertation shows that the previous conclusion can be taken further, namely that the flows between the flat models follow a similar pattern to the ones with superpotential. In the bosonic, compact orbifolds at $c=1$ we have provided a fuller picture of possible defects for these theories which now include the twisted part of the spectrum. By mapping out what are the possible defects in this important class of 2D theories other questions can be tackled such as the RG flow in these models. Our work shows that the topological defects form a closed algebra. By computing this fusion algebra, we have explicitly shown the way in which the symmetry breaks from the quantum groups upon orbifolding. In this dissertation we have explored the spectrum and applications of conformal and topological defects in $\mathbb{C}/\mathbb{Z}_d$ and $S^1_R/\mathbb{Z}_2$ orbifold theories. We have constructed topological defects which implement the action of the RG flow between $\mathbb{C}/\mathbb{Z}_n$ theories. The language we have employed to describe the RG flow defects is the natural description for such objects in the framework of Landau-Ginzburg models and their orbifolds. As we reviewed in section \ref{BtypeSec}, this description involves factorizing the superpotentials of the given theories over different polynomial rings. The language of matrix factorizations for boundaries and defects has been shown to carry over to the case of a zero superpotential. The matrix factorizations we used in this case were obtained by setting $p_0=0$ in those given in \cite{brunner07a}. This is a very natural choice since it relates matrix factorizations in the $\mathbb{C}/\mathbb{Z}_d$ models to another method of characterizing D-branes. Indeed, a common description of D-branes in geometric spaces (when there is no superpotential) is via chain complexes of vector bundles, with a differential $d$ built from the BRST operator $Q$ \cite{aspinwall}. 
On the other hand, out of the matrix factorizations associated with the D-branes in the Landau-Ginzburg models one obtains 2-periodic twisted complexes by taking the differentials to be the factorizing maps $p_1$ and $p_0$. Therefore with $p_0=0$, $W\rightarrow 0$ produces an ordinary complex which coincides with the above description for the D-branes. It would be interesting to make this connection precise in a more general context\footnote{We thank Ilka Brunner for emphasizing this connection to us.}. We have put forth two different ways of checking that the defects we posit here indeed enforce the RG flow between the non-compact orbifolds. One method uses the chiral rings of the theories at hand, and their deformations. The other method is a geometrical description of A-branes which are the equivalent representation of B-type boundary conditions in the mirror theory. Both methods keep track of the RG flow and show that the endpoints are $\mathbb{C}/\mathbb{Z}_n$ orbifolds. The defects $P^{(m,\underline n)}$ of subsection \ref{rgflowDefects} are shown to be appropriate interfaces between any two such orbifolds. By studying the fusion rules we showed that we can use these defects to tackle the question of the boundary RG flow when the theory has a nontrivial worldsheet boundary. In this note we provided evidence that the defects $P^{(m,\underline n)}$ successfully map the boundary conditions associated with the IR theory $\mathbb{C}/\mathbb{Z}_n$ to those of the UV theory $\mathbb{C}/\mathbb{Z}_{n'}$, $n'<n$. We established this correspondence by working with the mirror theory of the non-compact orbifolds. In this picture, we can compare the action of the RG flow defects on the B-type D-branes with the action of the RG flow on the dual A-type D-branes, i.e., A-branes. In comparing with the work of \cite{brunner07a}, we have shown that the RG flows between the $\mathbb{C}/\mathbb{Z}_d$ models follow a similar pattern to that of the LG orbifolds with a superpotential turned on. Although we checked that RG flow defects properly describe the bulk-induced boundary RG flow by going to the mirror description in subsection \ref{comparison}, a similar comparison can be done between the result of the fusion rules and the flow of the deformed relation of the chiral ring given in equation (\ref{deformed}). This can be done by considering the quotient relation of the chiral ring in equation (\ref{deformed}) as a branched covering of the complex plane. Such a description would provide an equivalent geometrical formalism to that of the deformed A-branes, so that an analysis could be done along the lines of the one done in subsection \ref{comparison} for the A-branes. A different approach to building conformal defects in these non-compact orbifolds is via the unfolding procedure employed here for the compact orbifold case. In this method one constructs the boundary states corresponding to D-branes in the target space $\mathbb{C}/\mathbb{Z}_n \times \mathbb{C}/\mathbb{Z}_{n'}$. These states can be mapped to defects between the theories $\mathbb{C}/\mathbb{Z}_{n}$ and $\mathbb{C}/\mathbb{Z}_{n'}$ via the unfolding procedure. An interesting question would be to find an equivalent description of the RG flow defects presented here in terms of a representation as Hilbert space operators. For the compact orbifold $S^1/\mathbb{Z}_2$ we have identified the spectrum of possible conformal defects which glue together these theories.
The first step was to obtain solutions for the boundary states of the product theory $(S^1/\mathbb{Z}_2)^2$; the second was to unfold these elements into defects. Although not exhaustive, this work and that of \cite{bachas07, fuchs07} taken together map out the possible defects between the $c=1$, 2D CFTs. This makes this family of field theories the one whose defects are best classified. In our construction of D-branes for the product theory $(S^1/\mathbb{Z}_2)^2$ we found that the boundary states come in three varieties: untwisted, partially twisted, and twisted, depending on whether twisted fields are used in none, one, or both of the directions of the orbifold. These same varieties translate to the defects upon unfolding. The untwisted defects presented here are the $\mathbb{Z}_2 \times \mathbb{Z}_2$-symmetrized versions of defects for the circle theory in \cite{bachas07}. The analysis of the untwisted D-branes and defects guides our analysis for the fully twisted ones. The obvious next step is to explore the full algebra between the defects and all possible boundary conditions cataloged in this note. Given the large number of these objects upon orbifolding the circle, we have left this step out of our project. Furthermore, defects have been shown to feel a Casimir force between themselves, and between a defect and a boundary \cite{Brunner:2010xm, bachas02}. It would be interesting to study this attribute in the presence of the twisted and partially twisted defects presented here. Lastly, now that defects have been written for both branches of the $c=1$, 2D CFTs, and that we have an understanding of twisted degrees of freedom on defects, it would be very interesting to explore defects gluing $S^1$ and $S^1/\mathbb{Z}_2$ theories. \chapter{\uppercase{Boundary theory of $S^1/\mathbb{Z}_2 \times S^1/\mathbb{Z}_2$} } In this chapter we explore possible D-brane constructions for the $c=2$ free bosonic theory with target space $S^1/\mathbb{Z}_2 \times S^1/\mathbb{Z}_2$. The $S^1/\mathbb{Z}_2$ orbifold constitutes another archetypal model in string theory rich enough to test ideas and find applications. Indeed, almost half of all $c=1$, 2D CFTs can be realized as instances of this model \cite{Ginsparg:1988ui}. To obtain the orbifold construction we start with a free bosonic field $X$ on a circle, \begin{align} X \sim X + 2\pi R, \end{align} and perform the identification by the $\mathbb{Z}_2$ action \begin{align} \rho: X(z,\bar z) \rightarrow{} -X(z,\bar z). \end{align} The D-branes found here are described as elements of the boundary CFT. We start with a review of the boundary CFT formalism in the case of the $S^1/\mathbb{Z}_2$ orbifold as developed in \cite{affleck}. Then we move on to find the allowed boundary states in $S^1/\mathbb{Z}_2 \times S^1/\mathbb{Z}_2$. In chapter 4, these D-branes are mapped to defects between the single orbifold theories at different radii. \section{Review of the boundary states in the circle orbifold}\label{review} In this section we review the boundary conformal field theory (BCFT) for the bosonic free theory following \cite{affleck}. The action is \begin{equation}\label{freeTheory} S=\frac{1}{2\pi}\displaystyle\int dzd\bar z \ \partial X\bar\partial X, \end{equation} with $X: \mathbb{C}\rightarrow {S}^1_R/\mathbb{Z}_2$ where $R$ is the radius of the unorbifolded circle. There are two types of solutions to the above variational problem: one is the field $X$, which satisfies $X(e^{i2\pi}z,e^{-i2\pi}\bar z) = X(z,\bar z)$ and is thus called ``untwisted''.
The other solution is ``twisted'' with respect to the $\mathbb{Z}_2$ action, $Y(e^{i2\pi}z,e^{-i2\pi}\bar z) = -Y(z,\bar z)$. The field $X$ has the Fourier expansion \cite{affleck} \begin{equation}\label{xExpansion} X(\tau,\sigma)=\hat x_0 +\frac{\widehat N}{2 R}\tau + \widehat M R\sigma+\sum_{n=1}^\infty\frac{i}{2\sqrt{n}} (a_n e^{-in(\tau+\sigma)}+\tilde a _n e^{-in(\tau-\sigma)}- h.c.), \end{equation} where $\hat x_0$ is the zero-mode operator; $a_n$ and $\tilde a_n$ are lowering operators and $a_n^\dag = a_{-n}$, $\tilde a_n ^\dag = \tilde a_{-n}$ are the corresponding raising operators; and $\widehat N$ and $\widehat M$ are the momentum and winding operators. These operators satisfy \begin{equation}\label{aalgebra2} [a_n,a_m] = [\tilde a_n, \tilde a_m] = \delta_{m+n} \ , \ \ \ \left[\hat x_0, \widehat N \right] = iR\ , \ \ \ \left[\widehat{\tilde x}_0, \widehat M\right] = -\frac{i}{R}. \end{equation} In the above, $\widehat{\tilde x}_0$ is the variable conjugate to the winding number operator. The Hamiltonian in the untwisted sector is given by $H_c=L_0+\widetilde L_0$ and it has the mode expansion \begin{equation}\label{hamiltonian} H_c =\frac{\widehat N^2}{4R^2}+\widehat M^2R^2+\sum_{n>0}n(a_n^\dag a_n +\tilde a_n^\dag \tilde a_n)-\frac{1}{12}. \end{equation} In the twisted sector the boson $Y$ has the mode expansion \begin{equation}\label{yExpansion} Y(\tau,\sigma)=\hat y_0+\sum_{n>0}\frac{i}{2\sqrt{(n-\frac{1}{2})}}\left( b_ne^{-i(n-\frac{1}{2})(\tau+\sigma)}+\tilde b_ne^{-i(n-\frac{1}{2})(\tau-\sigma)}- h.c. \right), \end{equation} where the $b_n$ modes satisfy the same canonical commutations as the $a_n$ modes, and $y_0\in\left\{0,\pi R\right\}$. The last condition means that the twisted field is restricted to the endpoints of the orbifold, i.e., the fixed points of the $\mathbb{Z}_2$ action. The respective Hamiltonian $H_t$ is given by \begin{equation}\label{ht2} H_t = \sum_n\left( \left( n-\frac{1}{2}\right) b_n^\dag b_n + \left( n-\frac{1}{2}\right) \tilde b_n^\dag\tilde b_n\right)+\frac{1}{24}. \end{equation} Given a 2D CFT on a subspace $\Sigma \subset \mathbb{C}$ with non-trivial boundary $\partial \Sigma\neq \emptyset$, the following condition must hold along the boundary \begin{equation}\label{TbarT} T=\xbar T, \end{equation} which is a requirement for the conformal Ward identity to hold in the presence of boundaries \cite{cardy}. The Hilbert space of a BCFT contains elements which are consistent with Eq. (\ref{TbarT}); that is, there is a boundary CFT whose elements $|v\rangle\rangle$ satisfy \begin{equation}\label{VirasorDefining} (L_n - \widetilde L_{-n})|v\rangle\rangle=0. \end{equation} The operator in the above equation follows by taking the Fourier expansion of both sides of $T=\xbar T$. For the free theory of Eq. (\ref{freeTheory}), boundary states solving Eq. (\ref{VirasorDefining}) can be obtained as solutions to the systems of equations \begin{equation}\label{Ndefining} (a_n +\tilde a_{-n})|k,w\rangle\rangle_N =0 , \end{equation} \begin{equation}\label{Ddefining} (a_n -\tilde a_{-n})|k,w\rangle\rangle_D=0, \end{equation} where $n\in \mathbb{Z}$. The $(k,w)$ labels of the boundary states are momentum and winding eigenvalues furnishing the elements of the direct sum of the bulk $u(1)^2$ representations \begin{equation} \mathcal{H}(R)=\bigoplus_{k,w\in\mathbb{Z}} \mathcal{H}_{q_{k,w}(R)} \otimes \widetilde{ \mathcal{H}}_{\tilde q_{k,w}(R)}, \end{equation} where the charges $(q,\tilde q)$ are the eigenvalues of $(a_0,\tilde a_0)$.
That is, \begin{equation}\label{alpha0} a_0 |k, w\rangle\rangle = \left(\frac{k}{R} -\frac{Rw}{2}\right) |k, w\rangle\rangle, \end{equation} \begin{equation}\label{baralpha0} \tilde a_0 |k, w\rangle\rangle = \left(\frac{k}{R} + \frac{Rw}{2}\right) |k, w\rangle\rangle. \end{equation} The solutions to the system in Eq. (\ref{Ndefining}) correspond to Neumann boundary states while those for Eq. (\ref{Ddefining}) correspond to Dirichlet ones. Each state encodes the corresponding type of boundary conditions. The Cardy-consistent boundary states for the orbifolded theory which are invariant under the action of $\mathbb{Z}_2$ are built as symmetric combinations of the boundary states of the circle theory. We refer to these invariant states as ``untwisted''. The Neumann untwisted state is given by \begin{equation}\label{orbifoldn} |N_O(\tilde x_0)\rangle\rangle = \frac{1}{\sqrt{2}}\left(|N(\tilde x_0)\rangle\rangle+|N(-\tilde x_0)\rangle\rangle \right), \end{equation} where \begin{equation}\label{generaln} |N(\tilde x_0) \rangle\rangle:=\sqrt{R}\sum_{w\in\mathbb{Z}}e^{iwR\tilde x_0}\exp\left(-\sum_{n=1} ^\infty \frac{1}{n} a_{-n}\tilde a_{-n} \right) |0,w\rangle, \end{equation} is the Neumann boundary state for the $S_R^1$ theory. The invariant Dirichlet counterpart is \begin{equation}\label{orbifoldd} |D_O(x_0)\rangle\rangle = \frac{1}{\sqrt{2}}\left(|D(x_0)\rangle\rangle+|D(-x_0)\rangle\rangle \right), \end{equation} with the circle Dirichlet expression being \begin{equation}\label{generald} |D(x_0)\rangle\rangle:=\frac{1}{\sqrt{2R}}\sum_{k\in\mathbb{Z}} e^{ikx_0/R}\exp\left(\sum_{n=1} ^\infty \frac{1}{n} a_{-n}\tilde a_{-n} \right) |k,0\rangle. \end{equation} The choice of coefficients $e^{ikx_0/R}/\sqrt{2R}$ in the Dirichlet solution follows from the requirement that the state is Cardy-consistent with itself and the Neumann state. That is, the amplitudes among these states transform into partition functions under a modular $S$-transformation. This requirement fixes the given coefficients for the Neumann state as well. The two vectors $|D(x_0)\rangle\rangle$ and $|N(\tilde x_0)\rangle\rangle$ encode the Dirichlet and Neumann boundary conditions of the free compact field $X$ \cite{bachas07}. This characteristic can be seen via the following two relationships. \begin{equation} X(0,\sigma)|D(x_0)\rangle\rangle=x_0|D(x_0)\rangle\rangle , \end{equation} \begin{equation} \partial_\tau X(0,\sigma)|N(\tilde x_0)\rangle\rangle=0. \end{equation} In the twisted sector, there are two systems of equations similar to those in Eq. (\ref{Ndefining}) and Eq. (\ref{Ddefining}) but with the $b_n$ oscillator modes: \begin{equation}\label{NdefiningT} (b_n +\tilde b_{-n})|v\rangle\rangle_N =0 , \end{equation} \begin{equation}\label{DdefiningT} (b_n -\tilde b_{-n})|v\rangle\rangle_D=0. \end{equation} The Dirichlet solution is given by \begin{equation}\label{twistedD} |D_{O}(y_0),T\rangle\rangle =e^{\sum_{n>0} b^\dag _n \tilde b_n ^\dag}|y_0,T\rangle, \end{equation} with the discrete parameter $y_0\in\left\{0,\pi R\right\}$ taking values at the fixed points of the $\mathbb{Z}_2$ action. This state satisfies the twisted Dirichlet condition \begin{equation}\label{dirichletequation} Y(0,\sigma)|D_O(y_0),T\rangle\rangle=y_0|D_O(y_0),T\rangle\rangle. \end{equation} The solution to the Neumann-type system of Eq.
(\ref{NdefiningT}) is \begin{equation} |N_O(\tilde y_0),T\rangle\rangle=e^{-\sum_{n>0} b^\dag _n \tilde b_n ^\dag}\frac{1}{\sqrt{2}} (|0,T\rangle+e^{i 2R\tilde y_0}|\pi R,T\rangle), \end{equation} where $\tilde y_0\in \left\{0, \frac{\pi}{2R}\right\}$ is the variable T-dual to $y_0$. The states $|N_O(\tilde y_0),T\rangle\rangle$ and $|D_O(y_0),T\rangle\rangle$ are not Cardy-consistent. Instead, in the twisted sector boundary states come as elements of $\mathcal{H}_{\text{circle}}\oplus \mathcal{H}_{\text{twisted}}$, where $\mathcal{H}_{\text{circle}}$ is the boundary Hilbert space of the circle theory; and $ \mathcal{H}_{\text{twisted}}$ is the Hilbert space of states which satisfy the systems of equations of Eq. (\ref{DdefiningT}) and Eq. (\ref{NdefiningT}). The consistent boundary states were first written down in \cite{affleck}: \begin{equation}\label{Dgenerator} |D_O^\pm(y_0)\rangle\rangle= \frac{1}{\sqrt{2}}( |D(y_0)\rangle\rangle e_c \pm 2^{1/4} |D_O(y_0),T\rangle\rangle e_t), \end{equation} \begin{equation}\label{Ngenerator} |N_O^\pm(\tilde y_0)\rangle\rangle= \frac{1}{\sqrt{2}}( |N (\tilde y_0)\rangle\rangle e_c \pm 2^{1/4} |N_O(\tilde y_0),T\rangle\rangle e_t), \end{equation} where $e_c$ and $e_t$ are the left and right generators of the direct sum $\mathcal{H}_{\text{circle}}\oplus \mathcal{H}_{\text{twisted}}$. That is, we write a generic element $a\in \mathcal{H}_{\text{circle}}\oplus \mathcal{H}_{\text{twisted}}$ as $a =a^c e_c + a^t e_t$. We will omit the generators except in places where they help to clarify the computations. With this notation, the boundary states in Eq. (\ref{Dgenerator}) and Eq. (\ref{Ngenerator}) satisfy the following equations \begin{equation} ((a_n-\tilde a_{-n})e_c\otimes e_c^*+ (b_n-\tilde b_{-n})e_t\otimes e_t^*)|D_O^\pm(y_0)\rangle\rangle=0, \end{equation} \begin{equation} ((a_n+\tilde a_{-n})e_c\otimes e_c^*+ (b_n+\tilde b_{-n})e_t\otimes e_t^*)|N_O^\pm(\tilde y_0)\rangle\rangle=0. \end{equation} \section{D-branes for $(S^1/\mathbb{Z}_2)^2$} In this section we proceed to find possible D-branes for the free bosonic theory with target space $(S^1_R/\mathbb{Z}_2)^2$. We parametrize the target space by two bosons $(Z^1,Z^2)\in S^1_{R_1}/\mathbb{Z}_2\times S^1_{R_2}/\mathbb{Z}_2$ where each $Z^i$ stands for the untwisted field $X^i$ or the twisted $Y^i$. To obtain more general D-branes we allow for a target-space rotation by an angle $\phi$ and we denote the rotated target space by $({}^R Z^1,{}^RZ^2)$. This target-space transformation leaves the conformal requirement $T= \xbar T$ at the boundary invariant in the case of the free boson. We proceed below by following the same procedure to find D-branes, but now in the product theory. By solving the equations defining conformal boundary states we find families of D-branes describing possible boundary conditions for open strings. First, we present the untwisted sector composed of the D-branes which remain after projecting out those in the $\mathbb{S}^1_{R_1}\times \mathbb{S}^1_{R_2}$ theory which are not $\mathbb{Z}_2 \times \mathbb{Z}_2$-invariant. Then we present the twisted D-branes, which contain those arising as tensor products of twisted boundary states. We are mainly interested in finding solutions for the rotated D-branes, i.e., D-branes for the target space $({}^R Z^1,{}^RZ^2)$.
\subsection{Untwisted boundary states}\label{UntwistedBraneSection} The untwisted boundary states for the general rotated D-branes are obtained as solutions to the equations \begin{align} ({}^R\underline a_n\pm {}^R\tilde{ \underline a}_{-n}) |v\rangle\rangle&=0, \label{rotated1}\\ ({}^R\underline a_n\pm \Omega {}^R\tilde{ \underline a}_{-n}) |v\rangle\rangle&=0,\label{rotated2} \end{align} where $\Omega =\operatorname{diag}(1,-1)$, and ${}^R\underline a_n:= R(\phi) \underline a_n$ with $\underline a_n:=(a^1_n,a^2_n)^t$. The angle $\phi$ is given by \begin{equation} \phi=\tan^{-1}\left( \frac{k_2R_2}{k_1R_1}\right), \end{equation} where $k_1$, $k_2$ are coprime integers. As in the $S^1/\mathbb{Z}_2$ case in \cite{affleck}, we directly construct untwisted D-branes in the product quotient theory by symmetrizing the boundary states of the $S^1_{R_1} \times S^1_{R_2}$ theory. The boundary theory of $S^1_{R_1} \times S^1_{R_2}$ is given in \cite{bachas07} and we use their notation here. The most general untwisted D-branes fall into two large classes. The first is a D1-brane wrapping one direction of the orbifold $k_1$ times and the other direction $k_2$ times; it satisfies Eq. (\ref{rotated2}). Up to a normalization factor $C$, the boundary state for such a D-brane is \begin{align}\label{d1orbifold} |D1_O(\alpha,\beta)\rangle\rangle_\phi = C & \left(|D1(\alpha,\beta)\rangle\rangle_\phi + |D1(-\alpha,-\beta)\rangle\rangle_\phi \right. \nonumber \\ &\left.+|D1(\alpha,-\beta)\rangle\rangle_\phi +|D1(-\alpha,\beta)\rangle\rangle_\phi \right), \end{align} where $|D1(\alpha,\beta)\rangle\rangle_\phi$ was found in \cite{bachas07} and is given by \begin{equation}\label{d1bachas} |D1(\alpha,\beta)\rangle\rangle_\phi :=\prod_{n>0}e^{-\Omega_\phi^{ij}a^{i\dag}_n\tilde a_n^{j\dag}}\left(g^{+} \sum_{M,N}e^{iN\alpha-iM\beta}|k_2N,k_1M\rangle_1\otimes|-k_1N,k_2M\rangle _2\right), \end{equation} where \begin{equation}\label{omegaPhi} \Omega_\phi=R^t(\phi)\Omega R(\phi)=\left( \begin{array}{cc} \cos(2\phi) & \sin(2\phi)\\ \sin(2\phi)&-\cos(2\phi) \end{array}\right), \ \ \ \ g^{+}=\sqrt{\frac{k_1k_2}{\sin2\phi}}. \end{equation} To fix the overall constant we note that at $\phi = n\pi$ the above state reduces to \begin{equation}\label{d101} |D1(\alpha,\beta)\rangle\rangle_{n\pi} =|N(\alpha)^{k_1}\rangle\rangle\otimes|D(\beta)^{k_1}\rangle\rangle , \end{equation} where the superscript $k_1$ denotes $k_1$ copies of the state. At this angle, the symmetrized boundary state in Eq. (\ref{d1orbifold}) becomes \begin{equation}\label{rightd1} |D1_O(\alpha,\beta)\rangle\rangle_{n\pi}= 2C |N_O(\alpha) ^{k_1}\rangle\rangle \otimes |D_O(\beta)^{k_1} \rangle\rangle , \end{equation} which is the orbifolded version of the previous state if $C=1/2$. Simplifying the sum in Eq. (\ref{d1orbifold}) we can write the $ |D1_O(\alpha,\beta)\rangle\rangle_\phi$ state as \begin{align}\label{d1orbifoldcosine} |D1_O(\alpha,\beta)\rangle\rangle_\phi=2g^{+} \prod_{n>0}e^{(-\Omega_\phi^{ij} a^i_n\tilde a_n^j)^\dag} \sum_{M,N}(&\cos(N\alpha)\cos(M\beta) \nonumber\\ &|k_2N,k_1M\rangle_1\otimes|-k_1N,k_2M \rangle _2). \end{align} The second general type of untwisted D-brane is a bound system of $k_1$ D2-branes and $k_2$ D0-branes. Such a state is the $\mathbb{Z}_2 \times\mathbb{Z}_2$-symmetric solution to Eq. (\ref{rotated2}) and can also be obtained by T-dualizing the right-movers in $|D1_O(\alpha,\beta)\rangle\rangle_\phi$.
This D2/D0 state is given by \begin{align}\label{d2d0orbifoldcosine} |D2/D0_O(\alpha,\beta)\rangle\rangle_\theta:= 2 g^{(-)}\prod_{n>0}e^{-\widetilde \Omega_\theta ^{ij} a^{i\dag}_n \tilde a_n^{j\dag}}\sum_{M,N}&(\cos(N\alpha)\cos(M\beta)\nonumber\\ & |k_1M,k_2N\rangle_1\otimes|-k_1N,k_2M\rangle _2), \end{align} with \begin{equation} \widetilde \Omega_\theta: =\Omega_\phi\left( \begin{array}{cc} -1 & 0\\ 0& 1 \end{array}\right)\Big |_{\phi = \theta}\ , \ \ \ \ g^{(-)}=\sqrt{\frac{k_1k_2}{\sin2\theta}}, \end{equation} where \begin{equation}\label{thetatheta} \theta=\tan^{-1}\left(\frac{2k_2R_1R_2}{k_1}\right) \end{equation} is the T-dualized rotation angle. At values of $\theta$ which are multiples of $\pi/2$ we obtain the orbifolded versions of the D2- and D0-branes of \cite{bachas07}, \begin{align} |D2/D0_O(\alpha,\beta)\rangle\rangle_{n\pi}&=|D_O(\alpha)^{k_1}\rangle\rangle_1\otimes |D_O(\beta)^{k_1}\rangle\rangle_2=:|D0_O^{k_1}\rangle\rangle,\\ |D2/D0_O(\alpha,\beta)\rangle\rangle_{(2n+1)\pi/2}& =|N_O(\alpha)^{k_2}\rangle\rangle_1\otimes |N_O(\beta)^{k_2}\rangle\rangle_2=:|D2_O^{k_2}\rangle\rangle. \end{align} \subsection{Twisted D-branes} Aside from the $\mathbb{Z}_2\times \mathbb{Z}_2$-symmetric boundary states obtained as projections from the $T^2$ boundary theory, the orbifold carries twisted D-branes. Before considering the general rotated case, we write down boundary states which are tensor products of boundary states of the single $S^1/\mathbb{Z}_2$ theory. We will use these states as a guide to find the general types in the next section. The non-rotated elements are \begin{equation}\label{diagonaldd} |DD^{\pm\pm}(y_0^1,y_0^2)\rangle\rangle:=|D_O^\pm(y_0^1)\rangle\rangle_1\otimes |D_O^\pm(y_0^2)\rangle\rangle_2, \end{equation} \begin{equation}\label{diagonalnn} |NN^{\pm\pm}(\tilde y_0^1,\tilde y_0^2)\rangle\rangle:=|N_O^\pm(\tilde y_0^1)\rangle\rangle_1\otimes |N_O^\pm(\tilde y_0^2)\rangle\rangle_2, \end{equation} \begin{equation}\label{diagonaldn} |DN^{\pm\pm}(y_0^1,\tilde y_0^2)\rangle\rangle:=|D_O^\pm(y_0^1)\rangle\rangle_1\otimes |N_O^\pm(\tilde y_0^2)\rangle\rangle_2, \end{equation} \begin{equation}\label{diagonalnd} |ND^{\pm\pm}(\tilde y_0^1, y_0^2)\rangle\rangle:=|N_O^\pm(\tilde y_0^1)\rangle\rangle_1\otimes |D_O^\pm( y_0^2)\rangle\rangle_2. \end{equation} The first two states above are solutions to the equations \begin{equation}\label{twistedConditionDiag} ((\underline a_n\pm\tilde{ \underline a}_{-n}) + (\underline b_n\pm\tilde{ \underline b}_{-n})) |V\rangle\rangle=0, \end{equation} while the latter two can be obtained as solutions to \begin{equation}\label{TconditionNoDiag} ((\underline a_n\pm \Omega \ \tilde{ \underline a}_{-n}) + (\underline b_n \pm\Omega \ \tilde{\underline b}_{-n})) |V\rangle\rangle=0, \end{equation} where $\Omega =\operatorname{diag}(1,-1)$, $\underline a_n:= (a^1_n,a^2_n)^t$, $\underline b_n:= (b^1_n,b^2_n)^t$. In the above equations, identity operators acting on the left or right factors of the tensor product are implicit, as are the generators $e_c^i$ and $e_t^i$ over which the modules $\mathcal{H}_{\text{circle}}$ and $\mathcal{H}_{\text{twisted}}$ are built for each factor. In order to find the boundary states for the twisted D-branes in the rotated case, it helps to write out the above solutions in a compact manner.
In analogy with $\underline a_n:=(a^1_n,a^2_n)^t$ and $\underline b_n:=(b^1_n,b^2_n)^t$, it helps to define two new 2-vectors of oscillators, \begin{equation}\label{oscillatorvectors} \ \ \underline c_n:=(a^1_n,b^2_n)^t \ \ , \ \ \underline d_n:=(b^1_n,a^2_n)^t, \end{equation} and similarly for the antiholomorphic oscillators. We will label the set of all such pairs of oscillators by $\underline s_n$. Let us start with the DN and ND tensors; inserting the expressions for the single boundary states given in Eq. (\ref{Dgenerator}) and Eq. (\ref{Ngenerator}) we obtain \begin{equation}\label{dnNoRotated} |DN^{\pm\pm}(y_0,\tilde y_0)\rangle\rangle = B^{(+)}[V_{DN}^{\pm\pm}(y_0,\tilde y_0)], \end{equation} \begin{equation}\label{ndNoRotated} |ND^{\pm\pm}(y_0,\tilde y_0)\rangle\rangle = B^{(-)}[V_{ND}^{\pm\pm}(y_0,\tilde y_0)], \end{equation} where the operators are \begin{equation} B^{(\pm)}:=\prod_{n>0}\left( e^{\pm \underline a_n^{\dag t}\Omega \tilde{ \underline a}_n^\dag} + e^{\pm \underline c_n^{\dag t}\Omega \tilde{ \underline c}_n^\dag} + e^{\pm \underline d_n^{\dag t}\Omega \tilde{ \underline d}_n ^\dag} + e^{\pm \underline b_n^{\dag t}\Omega \tilde{ \underline b}_n^\dag}\right), \end{equation} and the lattice sums are \begin{equation} \begin{split} V_{DN}^{\pm\pm}=& \left(2^{-1/2}\frac{1}{\sqrt{2R_1}}\sum_M e^{iMy_0/R_1}|M,0\rangle_1 \pm 2^{-1/4} |y_0,T\rangle_1\right)\\ &\otimes \left( 2^{-1/2} \sqrt{R_2}\sum_N e^{iNR_2\tilde y_0}|0,N\rangle_2 \pm 2^{-1/4}|\tilde y_0,T\rangle_2 \right), \end{split} \end{equation} \begin{equation} \begin{split} V_{ND}^{\pm\pm}=&\left( 2^{-1/2} \sqrt{R_1}\sum_N e^{iNR_1\tilde y_0}|0,N\rangle_1 \pm 2^{-1/4}|\tilde y_0,T\rangle_1 \right) \\ &\otimes \left(2^{-1/2}\frac{1}{\sqrt{2R_2}}\sum_M e^{iMy_0/R_2}|M,0\rangle_2 \pm 2^{-1/4} |y_0,T\rangle_2\right). \end{split} \end{equation} Aside from the fully twisted solutions in Eqs. (\ref{diagonaldd})-(\ref{diagonalnd}), in the twisted sector we can also find boundary states which are untwisted in one direction and twisted along the other direction of the D-brane. We call such boundary states ``partially twisted''. The NN or DD combinations are \begin{equation}\label{partialT1} |A1^{\pm}(y_0,x_0)\rangle\rangle := |D^{\pm}(y_0)\rangle\rangle_1 \otimes |D_O(x_0)\rangle\rangle_2, \end{equation} \begin{equation} |A4^{\pm}(\tilde y_0,\tilde x_0)\rangle\rangle := |N^{\pm}(\tilde y_0)\rangle\rangle_1 \otimes |N_O(\tilde x_0)\rangle\rangle_2, \end{equation} \begin{equation} |B1^{\pm}(x_0,y_0)\rangle\rangle := |D_O(x_0)\rangle\rangle_1 \otimes |D^{\pm}(y_0)\rangle\rangle_2, \end{equation} \begin{equation}\label{partialT4} |B4^{\pm}(\tilde x_0, \tilde y_0)\rangle\rangle := |N_O(\tilde x_0)\rangle\rangle_1 \otimes |N^{\pm}(\tilde y _0)\rangle\rangle_2, \end{equation} which are solutions to the system of equations \begin{equation}\label{PartialNNDD} ((\underline a_n\pm\tilde{ \underline a}_{-n}) +P^i (\underline b_n\pm \tilde{ \underline b}_{-n})) |V\rangle\rangle=0, \end{equation} where the $P^i$ are projectors defined on the twisted sector by $P^i \underline b_n = b_n^i$.
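To illustrate the notation, the state $|A1^{\pm}(y_0,x_0)\rangle\rangle$ of Eq. (\ref{partialT1}) corresponds to $i=1$ with the Dirichlet sign choice of Eq. (\ref{DdefiningT}); written in components (this is simply our unpacking of Eq. (\ref{PartialNNDD})), it satisfies \begin{equation} \big[(a^1_n-\tilde a^1_{-n}) + (b^1_n-\tilde b^1_{-n})\big]\,|A1^{\pm}(y_0,x_0)\rangle\rangle=0 , \ \ \ \ (a^2_n-\tilde a^2_{-n})\,|A1^{\pm}(y_0,x_0)\rangle\rangle=0 , \end{equation} so the first direction carries both untwisted and twisted Dirichlet conditions, while the second, untwisted direction carries only the untwisted one.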
There are also the DN and ND tensor products, \begin{equation} |A2^{\pm}(y_0,\tilde x_0)\rangle\rangle := |D^{\pm}(y_0)\rangle\rangle_1 \otimes |N_O(\tilde x_0)\rangle\rangle_2, \end{equation} \begin{equation} |A3^{\pm}(\tilde y_0,x_0)\rangle\rangle := |N^{\pm}(\tilde y_0)\rangle\rangle_1 \otimes |D_O(x_0)\rangle\rangle_2, \end{equation} \begin{equation} |B2^{\pm}(x_0,\tilde y_0)\rangle\rangle := |D_O(x_0)\rangle\rangle_1 \otimes |N^{\pm}(\tilde y_0)\rangle\rangle_2 , \end{equation} \begin{equation} |B3^{\pm}(\tilde x_0,y_0)\rangle\rangle := |N_O(\tilde x_0)\rangle\rangle_1 \otimes |D^{\pm}(y_0)\rangle\rangle_2 , \end{equation} which solve the defining equation \begin{equation} ((\underline a_n\pm\Omega\ \tilde{ \underline a}_{-n}) +P^i (\underline b_n\pm \Omega \ \tilde{ \underline b}_{-n})) |V\rangle\rangle=0. \end{equation} As in the case of the fully twisted boundary states, it is helpful to write out some of the above states in a form that will guide us when solving the general rotated case. Inserting the expressions for the single states from the $S^1/\mathbb{Z}_2$ boundary theory we obtain, \begin{align} |A2^{\pm}(y_0,\tilde x_0)\rangle\rangle &= \prod_{n>0}\left( e^{\underline a_n^{\dag t}\Omega \tilde{ \underline a}_n^\dag } + e^{\underline d_n^{\dag t}\Omega \tilde{ \underline d}_n ^\dag} \right) V_{A2}^{\pm}(y_0,\tilde x_0),\label{A2nonRotated}\\ |A3^{\pm}(\tilde y_0,x_0)\rangle\rangle &= \prod_{n>0}\left( e^{-\underline a_n^{\dag t}\Omega \tilde{ \underline a}_n ^\dag} + e^{-\underline d_n^{\dag t}\Omega \tilde{ \underline d}_n ^\dag} \right) V_{A3}^{\pm}(\tilde y_0, x_0),\\ |B3^{\pm}(\tilde x_0,y_0)\rangle\rangle &= \prod_{n>0}\left( e^{-\underline a_n^{\dag t}\Omega \tilde{ \underline a}_n ^\dag} + e^{-\underline c_n^{\dag t}\Omega \tilde{ \underline c}_n ^\dag} \right) V_{B3}^{\pm}(\tilde x_0,y_0),\\ |B4^{\pm}(\tilde x_0,\tilde y_0)\rangle\rangle &= \prod_{n>0}\left( e^{-\underline a_n^{\dag t} \tilde{ \underline a}_n ^\dag} + e^{-\underline c_n^{\dag t} \tilde{ \underline c}_n ^\dag} \right) V_{B4}^{\pm}(\tilde x_0,\tilde y_0), \end{align} where $ V_{A2}^{\pm}(y_0,\tilde x_0)$, $ V_{A3}^{\pm}(\tilde y_0, x_0)$, $V_{B3}^{\pm}(\tilde x_0,y_0)$, and $V_{B4}^{\pm}(\tilde x_0,\tilde y_0)$ contain the vacuum expressions coming from $|D^{\pm}(y_0)\rangle\rangle_i$, $|N^{\pm}(\tilde y_0)\rangle\rangle_i$, $|D_O(x_0)\rangle\rangle_i$, and $|N_O(\tilde x_0)\rangle\rangle_i$. \subsection{Rotated twisted D-branes} In this section we look for solutions to the boundary defining equations for the rotated target space. We restrict to the twisted sector because the untwisted rotated D-branes were already found in Subsection \ref{UntwistedBraneSection} as projections of their counterparts in the $T^2$ theory developed in \cite{bachas07}. For the rotated case we have the oscillators \begin{equation} {}^R \underline a_n:= R(\phi)\underline a_n\ , \ \ \ {}^R \underline b_n:= R(\phi)\underline b_n, \end{equation} where $R(\phi)$ is the same rotation matrix which acts on the target-space coordinates. The twisted states $|DD^{\pm\pm}(y_0^1,y_0^2)\rangle\rangle$ and $|NN^{\pm\pm}(\tilde y_0^1,\tilde y_0^2)\rangle\rangle$ are $R(\phi)$-invariant while the DN and ND are not.
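To see this explicitly, note that (a short check, using only ${}^R\underline a_n= R(\phi)\underline a_n$, ${}^R\underline b_n= R(\phi)\underline b_n$ and the orthogonality of $R(\phi)$) \begin{equation} ({}^R\underline a_n\pm{}^R\tilde{\underline a}_{-n}) + ({}^R\underline b_n\pm{}^R\tilde{\underline b}_{-n}) = R(\phi)\left[(\underline a_n\pm\tilde{\underline a}_{-n}) + (\underline b_n\pm\tilde{\underline b}_{-n})\right], \end{equation} so the DD and NN conditions of Eq. (\ref{twistedConditionDiag}) are unchanged by the rotation, whereas \begin{equation} {}^R\underline a_n\pm \Omega\, {}^R\tilde{\underline a}_{-n} = R(\phi)\left(\underline a_n\pm \Omega_\phi\, \tilde{\underline a}_{-n}\right), \end{equation} so for the DN and ND conditions the gluing matrix $\Omega$ is effectively rotated into $\Omega_\phi=R^t(\phi)\Omega R(\phi)$ of Eq. (\ref{omegaPhi}).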
In the rotated frame, the latter two are the solutions to the defining equations \begin{equation}\label{RtwistedDef} (({}^R\underline a_n\pm \Omega {}^R\tilde{ \underline a}_{-n}) + ({}^R\underline b_n\pm \Omega {}^R\tilde{ \underline b}_{-n})) |V\rangle\rangle_\phi=0. \end{equation} Noting that the rotation $R$ preserves the oscillator algebras and the Virasoro algebra of the modes of the energy-momentum tensor, we observe that the solutions to Eq. (\ref{RtwistedDef}) have the same form as the solutions to the non-rotated defining equations of Eq. (\ref{TconditionNoDiag}). That is, the rotated versions for $\phi \neq n\pi$ of $|DN^{\pm\pm}(y_0^1,\tilde y_0^2)\rangle\rangle$ and $|ND^{\pm\pm}(\tilde y_0^1, y_0^2)\rangle\rangle$ are given by \begin{equation}\label{twistedDNlong} |DN^{\pm\pm}(y_0^1,\tilde y_0^2)\rangle\rangle_\phi = B_\phi^{(+)}[\widetilde V_{DN}^{\pm\pm}(y_0^1,\tilde y_0^2)], \end{equation} \begin{equation}\label{twistedNDlong} |ND^{\pm\pm}(\tilde y_0^1, y_0^2)\rangle\rangle_\phi = B_\phi^{(-)}[\widetilde V_{ND}^{\pm\pm}(\tilde y_0^1, y_0^2)], \end{equation} where \begin{equation}\label{boundaryOps} B^{(\pm)}_\phi:=\prod_{n>0}\left( e^{\pm \underline a_n^{\dag t}\Omega_\phi \tilde{ \underline a}_n^\dag} + e^{\pm \underline c_n^{\dag t}\Omega_\phi \tilde{ \underline c}_n^\dag} + e^{\pm \underline d_n^{\dag t}\Omega_\phi \tilde{ \underline d}_n ^\dag} + e^{\pm \underline b_n^{\dag t}\Omega_\phi \tilde{ \underline b}_n^\dag}\right), \end{equation} and $\widetilde V_{DN}^{\pm\pm}(y_0^1,\tilde y_0^2)$ and $\widetilde V_{ND}^{\pm\pm}(\tilde y_0^1, y_0^2)$ are the vacuum expressions determined to be \begin{equation}\label{VacTwistDN} \widetilde V_{DN}^{\pm\pm}(y_0^1,\tilde y_0^2) =\left(2^{-1/2}|0\rangle_1 \pm 2^{-1/4}|y_0^1,T\rangle_1\right)\otimes \left( 2^{-1/2} |0\rangle_2 \pm 2^{-1/4}|\tilde y_0^2,T\rangle_2 \right), \end{equation} \begin{equation} \widetilde V_{ND}^{\pm\pm}(\tilde y_0^1, y_0^2)=\left(2^{-1/2}|0\rangle_1 \pm 2^{-1/4}|\tilde y_0^1,T\rangle_1\right)\otimes \left( 2^{-1/2} |0\rangle_2 \pm 2^{-1/4}| y_0^2,T\rangle_2 \right). \end{equation} In order to obtain the two vacuum expressions above we use the $n=0$ case of Eq. (\ref{RtwistedDef}). Here we focus on the DN case, but similar steps lead to the ND expression as well. Since $a_0$ commutes with the higher modes we have \begin{equation} (\underline a_0+ \Omega_\phi \ \tilde{ \underline a}_{0})\widetilde V_{DN}^{\pm\pm}(y_0^1,\tilde y_0^2)=0. \end{equation} We note that the DN vacuum expression a priori would be a (possibly infinite) linear combination of states $|\underline k; \underline w;\pm\pm\rangle$ which have the schematic shape (up to coefficients) \begin{equation} \begin{split} |\underline k; \underline w;\pm\pm\rangle \sim &|k_1,w_1\rangle_1 |k_2,w_2\rangle_2 e_1^c\otimes e^2_c \pm |k_1,w_1\rangle_1 |\tilde y_0,T\rangle_2 e_1^c\otimes e^2_t\\ & \pm | y_0,T\rangle_1|k_2,w_2\rangle_2 e_1^t\otimes e^2_c + | y_0,T\rangle_1 |\tilde y_0,T\rangle_2 e_1^t\otimes e^2_t. \end{split} \end{equation} Then the $\underline k$ and $\underline w$ labels are fixed by \begin{equation} (\underline a_0+ \Omega_\phi \ \tilde{ \underline a}_{0})|\underline k; \underline w;\pm\pm\rangle =0.
\end{equation} Writing out the above equation we obtain \begin{equation} \begin{split} 0=&\left(\left(\begin{array}{c} \frac{k_1}{2R_1}+w_1R_1 \\ \frac{k_2}{2R_2}+w_2R_2 \end{array}\right) + \Omega_\phi \left( \begin{array}{c} \frac{k_1}{2R_1}-w_1R_1 \\ \frac{k_2}{2R_2}-w_2R_2 \end{array} \right)\right)|\underline k; \underline w\rangle e_1^c\otimes e^2_c \\ &\pm \left(\begin{array}{c} (\frac{k_1}{2R_1}+w_1R_1) +\Omega_\phi^{11}( \frac{k_1}{2R_1}-w_1R_1 )\\ \Omega_\phi^{21}(\frac{k_1}{2R_1}-w_1R_1 )\end{array}\right) |k_1,w_1\rangle_1\otimes |\tilde y_0,T\rangle_2 e_1^c\otimes e^2_t\\ &\pm \left(\begin{array}{c} \Omega_\phi^{12}(\frac{k_2}{2R_2}-w_2R_2) \\ (\frac{k_2}{2R_2}+w_2R_2 )+\Omega_\phi^{22}(\frac{k_2}{2R_2}-w_2R_2 )\end{array}\right) | y_0,T\rangle_1\otimes|k_2,w_2\rangle_2 e^1_t\otimes e^2_c. \end{split} \end{equation} The unique solution to the above system of equations is \begin{equation} |\underline k; \underline w;\pm\pm\rangle=|\underline 0; \underline 0;\pm\pm\rangle, \end{equation} which gives us the vacuum part of our solutions. Observe that there is a discontinuity in these boundary states as $\phi \rightarrow n\pi$. For $\phi=n\pi$, we have $R(\phi)=(-1)^n 1_{2\times 2}$ so the defining equations in Eq. (\ref{RtwistedDef}) are left invariant. Therefore the solutions at these angles are the non-rotated D-branes, even though these angles nominally belong to the $\phi$-families given by $|DN^{\pm\pm}(y_0^1,\tilde y_0^2)\rangle\rangle_\phi$ and $|ND^{\pm\pm}(\tilde y_0^1, y_0^2)\rangle\rangle_\phi$. The non-rotated twisted D-branes are thus isolated points (singletons) in the space of twisted solutions. The specific coefficients in Eq. (\ref{VacTwistDN}) are chosen to match the overall coefficients of the non-rotated twisted solutions. This ansatz makes the boundary state Cardy consistent with itself. To see this we need to compute the amplitude \begin{equation}\label{ampScheme} A_{DN,\phi}:= {}_\phi\langle \langle DN^{\pm\pm}(y_0^1,\tilde y_0^2)| e^{-\pi T H} |DN^{\pm\pm}(y_0^1,\tilde y_0^2)\rangle\rangle_\phi, \end{equation} and perform a modular $S$-transformation. The $H$ in the above expression stands for the total Hamiltonian of the bulk theory, which is a sum of the twisted- and untwisted-sector Hamiltonians of the left and right factors of the tensor product. This amplitude is the sum of four components, \begin{equation} A_{DN,\phi}= \frac{1}{4} A_{cc}+ \frac{1}{2} A_{tt}+ \frac{1}{2^{3/2}} A_{ct}+ \frac{1}{2^{3/2}} A_{tc}. \end{equation} The untwisted term is given by \begin{equation}\label{untwistedAmp2} \begin{split} A_{cc}&=\prod_{n>0} \langle 0| e^{ \underline a_n^{ t}\Omega \tilde{ \underline a}_n} e^{-\pi T H} e^{ \underline a_n^{\dag t}\Omega \tilde{ \underline a}_n^\dag} |0\rangle\\ &=\prod_{n>0} \langle 0| e^{ a_n^1 \tilde a^1_n}e^{- a_n^2 \tilde a^2_n} e^{-\pi T H_{c,1}} e^{-\pi T H_{c,2}} e^{ a_n^{\dag 1} \tilde a^{\dag 1}_n}e^{- a_n^{2\dag} \tilde a^{\dag 2}_n}|0\rangle\\ &=\prod_{n>0} \langle 0| e^{ a_n^1 \tilde a^1_n} e^{-\pi T H_{c,1}} e^{ a_n^{\dag 1} \tilde a^{\dag 1}_n} |0\rangle \langle 0| e^{- a_n^2 \tilde a^2_n} e^{-\pi T H_{c,2}} e^{- a_n^{2\dag} \tilde a^{\dag 2}_n}|0\rangle \end{split} \end{equation} where we used $\langle v_1\otimes w_1 | v_2\otimes w_2 \rangle = \langle v_1|v_2\rangle \langle w_1 |w_2\rangle$. $H_{c,i}$ are the untwisted Hamiltonian expressions from Eq. (\ref{hamiltonian}).
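For reference, each factor above can be evaluated elementarily (a sketch, assuming unit-normalized modes $[a_n,a_m^{\dag}]=\delta_{nm}$ and standard mode energies, so that $e^{-\pi T H_{c,i}}$ weights the state $(a_n^{\dag}\tilde a_n^{\dag})^k|0\rangle$ by $q^{nk}$ with $q=e^{-2\pi T}$): \begin{equation} \langle 0| e^{\pm a_n\tilde a_n}\, e^{-\pi T H_{c,i}}\, e^{\pm a_n^{\dag}\tilde a_n^{\dag}}|0\rangle =\sum_{k\geq 0} q^{nk}=\frac{1}{1-q^{n}} , \end{equation} so each direction contributes $q^{-1/24}\prod_{n>0}(1-q^{n})^{-1}=1/\eta(q)$ once the zero-point energy is included, consistent with Eq. (\ref{AccFinal}) below.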
Formally, this amplitude should be computed with the rotated oscillators ${}^R\underline a_n$ and the correspondingly rotated Hamiltonian ${}^R H$, but we again use the fact that the oscillator algebra is left invariant by the $R(\phi)$ action. So the untwisted contribution is \begin{equation}\label{AccFinal} A_{cc}=\frac{1}{\eta(q)^2} \ , \ \ q := e^{-2\pi T}. \end{equation} The twisted term is \begin{equation}\label{twistedAmp2} \begin{split} A_{tt} =\prod_{n>0} \langle 0| e^{ b_n^1 \tilde{ b}_n^1} e^{-\pi T H_{t,1}} e^{ b_n^{\dag 1} \tilde{ b}_n^{\dag1} }|0\rangle \langle 0| e^{ - b_n^2 \tilde{ b}_n^2} e^{-\pi T H_{t,2}} e^{- b_n^{\dag 2} \tilde{b}_n^{2 \dag}} |0\rangle, \end{split} \end{equation} where $H_{t,i}$ are the Hamiltonians in the twisted sectors, given in Eq. (\ref{ht2}). This term provides the following contribution, \begin{equation}\label{twistedAmp3} \begin{split} A_{tt} = \left(q^{\frac{1}{48}} \prod_{n>0} \frac{1}{1-q^{{(n-\frac{1}{2})}}}\right)^2. \end{split} \end{equation} The cross terms $A_{ct}$ and $A_{tc}$ are the partially twisted amplitudes \begin{equation} \begin{split} A_{ct}:=&\prod_{n>0} \langle 0| e^{ \underline c_n^{ t}\Omega_\phi \tilde{ \underline c}_n}e^{-\pi T H} e^{ \underline c_n^{\dag t}\Omega_\phi \tilde{ \underline c}_n^\dag}| 0\rangle \ , \ \ A_{tc}:=\prod_{n>0} \langle 0| e^{ \underline d_n^{ t}\Omega_\phi \tilde{ \underline d}_n} e^{-\pi T H} e^{ \underline d_n^{\dag t}\Omega_\phi \tilde{ \underline d}_n^\dag} |0\rangle. \end{split} \end{equation} These two terms give the same contribution, \begin{equation}\label{Act} A_{ct}=A_{tc} =\left(q^{\frac{1}{48}} \prod_{n>0} \frac{1}{1-q^{n-\frac{1}{2}}}\right) \frac{1}{\eta(q)} = A_{tt}^{1/2} A_{cc}^{1/2}. \end{equation} Combining all the terms, the amplitude $A_{DN,\phi}$ of Eq. (\ref{ampScheme}) is given by the total expression \begin{equation}\label{treeLevel} A_{DN,\phi}(q) = \frac{1}{4}\frac{1}{\eta(q)^2} + \frac{1}{2} \left(q^{\frac{1}{48}} \prod_{n>0} \frac{1}{1-q^{{(n-\frac{1}{2})}}}\right)^2 + \frac{1}{2^{1/2}} \left(q^{\frac{1}{48}} \prod_{n>0} \frac{1}{1-q^{n-\frac{1}{2}}}\right) \frac{1}{\eta(q)}, \end{equation} with $q := e^{-2\pi T}$. Now, using a modular $S$-transformation, we map the tree-channel amplitude $A_{DN,\phi}$ to a loop-channel partition function. Under the transformation $T\rightarrow 1/T=:t$ the first summand of Eq. (\ref{treeLevel}) transforms as \begin{equation}\label{sqrtAccTilde} A_{cc}(\tilde q)= \frac{1}{t \ \eta\left(\tilde q \right)^2} \ , \ \ \ \tilde q := e^{-2\pi t}, \end{equation} where we are abusing notation by writing $A_{cc}(\tilde q)$. To transform the twisted term $A_{tt}$ we use the identity found in \cite{affleck} \begin{equation}\label{IdentityAffleck} q ^{1/48}\prod_{n>0} \frac{1}{1- q^{n-1/2}} = \frac{\theta_2 ( q^{1/2}) }{2\eta( q)}, \end{equation} which we use to rewrite Eq. (\ref{twistedAmp3}) as \begin{equation} A_{tt}^{1/2} = \frac{\theta_2 ( q^{1/2}) }{2\eta( q)} . \end{equation} Using the modular $S$-transformation of the theta function $\theta_2(e^{2\pi i \tau})$, \begin{equation} \theta_4 \left(-\frac{1}{\tau}\right) = \sqrt{-i \tau} \theta_2 (\tau), \end{equation} we obtain the transformed twisted term \begin{equation} A_{tt}(\tilde q) =\frac{1}{2}\left( \frac{\theta_4(\tilde q^2)}{ \eta(\tilde q)}\right)^2.
\end{equation} Inserting the $\tilde q$ expressions into the full amplitude, we obtain \begin{equation} A_{DN,\phi}(\tilde q) = \frac{1}{4}\frac{1}{ t\ \eta\left(\tilde q\right)^2} + \frac{1}{4} \left( \frac{\theta_4(\tilde q^2)}{\ \eta(\tilde q)}\right)^2 + \frac{1}{2} \frac{\theta_4(\tilde q^2)}{\sqrt{ t} \ \eta\left(\tilde q\right)^2}, \end{equation} which shows self-consistency. That is, the modular $S$-transformation maps the $DN^{\pm\pm}-DN^{\pm\pm}$ amplitude to a partition function with a unique vacuum. Given that the $DN^{\pm\pm}$ state would single out the vacuum term in any other boundary state, a similar computation shows that these rotated twisted states are not consistent with the untwisted states we already built. Therefore, twisted boundary states are allowed as long as they are not rotated. In the next chapter, we use the spectrum we have found for D-branes in the product orbifold to obtain possible defects between the $S^1/\mathbb{Z}_2$ theories. The structure of the D-branes listed here is reminiscent of that in the torus theory, except that we have catalogued the twisted degrees of freedom as well. The D-branes found here are given as Cardy-consistent elements of the boundary CFT. It is important to note that the boundary states presented here are more than just a means of obtaining defects. There has been a prolific amount of work on D-branes on $\mathbb{C}^2/\mathbb{Z}_n$ orbifolds \cite{Douglas:1996sw}, but far fewer examples in theories with curved or compact orbifold backgrounds \cite{Brunner:1999}, such as the solutions we have presented in this work. \chapter{\uppercase{Defects between $S^1/\mathbb{Z}_2$ theories}} In this chapter we answer the question of how to glue together two $S^1_R/\mathbb{Z}_2$ bosonic theories by cataloguing the possible defects between such models. These defects are obtained from the D-branes in the product theory $({S}^1_{R}/\mathbb{Z}_2)^2$ via the unfolding map. Unfolding, as developed in \cite{bachas07}, is the inverse of the ``folding trick'' used in 2D field theories. Given two CFTs separated by a 1-dimensional interface, the total theory $\text{CFT}_1\oplus \text{CFT}_2$ is equivalent to the theory $\text{CFT}_1\otimes \xbar{\text{CFT}}_2$ defined on the worldsheet folded such that the interface is mapped to a boundary for the tensor theory. $\xbar{\text{CFT}}_2$ refers to ${\text{CFT}}_2$ but with the left- and right-movers interchanged. Such a correspondence is referred to as the folding trick. The inverse of this procedure involves starting with a larger theory on a worldsheet with boundary. Under the assumption that the theory has two non-interacting sectors in the interior of the worldsheet, the boundary can be mapped to a defect separating the two sectors, which are now the full theories on either side of the interface. That is, unfolding is the linear map \begin{equation} \mathcal{H}^{(1\otimes 2)}_{\partial \Sigma} \xrightarrow{\text{unfolding}} \Hom(\mathcal{H}^{(2)},\mathcal{H}^{(1)}) \cong \mathcal{H}^{(2)*}\otimes\mathcal{H}^{(1)}, \end{equation} from the boundary states in the product theory to the space of linear maps between the Hilbert spaces on each side of the defect. To unfold we define mirror fields by taking $\tau\rightarrow-\tau$ in the expressions for $X$ and $Y$ in Eq. (\ref{xExpansion}) and Eq. (\ref{yExpansion}).
The results are again solutions to the variational problem, now with the replacement \begin{equation}\label{unfoldoscillators} (\widehat N ,a_n,\widetilde a_n)\rightarrow (-\widehat N,-\widetilde a_n^\dag,-a_n^\dag), \end{equation} in the mode expansions. Using the above mapping we take the $(S^1/\mathbb{Z}_2)^2$ boundary states \begin{equation} |B\rangle\rangle =\sum B_{\lambda_1,\widetilde \lambda_1,\lambda_2,\widetilde \lambda_2}|\lambda_1,\widetilde \lambda_1\rangle|\lambda_2,\widetilde \lambda_2\rangle , \end{equation} to oriented defects given by \begin{equation} I^{1\leftarrow 2}= \sum B_{\lambda_1,\widetilde \lambda_1,\lambda_2,\widetilde \lambda_2} |\lambda_1,\widetilde \lambda_1\rangle \langle\widetilde \lambda_2,\lambda_2|. \end{equation} \section{Untwisted defects} There are two main classes of untwisted defects, coming from the rotated classes of boundary states $|D1_O(\alpha,\beta)\rangle\rangle_\phi$ and $|D2/D0_O(\alpha,\beta)\rangle\rangle_\theta$ in the untwisted sector. Unfolding the former, given in Eq. (\ref{d1orbifoldcosine}), we obtain the conformal defect \begin{equation}\label{evendefect} E^{(R_1\leftarrow R_2)}_{(k_1,k_2)}(\alpha,\beta):= \mathcal{E}_{(k_1,k_2)}(\alpha,\beta) \prod_{n>0}\exp ( -\Omega_\phi ^{11}a^{\dag 1}_n\tilde a_n^{\dag 1} + \Omega_\phi^{12}a^{\dag 1}_n a_n^{2} -\Omega_\phi ^ {22}a^{ 2}_n\tilde a_n^{ 2} + \Omega_\phi ^{21}\tilde a^{ 2}_n\tilde a_n^{\dag 1}), \end{equation} where \begin{equation}\label{vacuumLatSum} \mathcal{E}_{(k_1,k_2)}(\alpha,\beta)= 2 g^{+}\sum_{M,N}\cos(N\alpha)\cos(M\beta)|k_2N,k_1M\rangle_1\langle k_1 N, k_2 M|_2. \end{equation} The D2/D0-brane given in Eq. (\ref{d2d0orbifoldcosine}) unfolds to \begin{equation}\label{oddDefect} O^{(R_1\leftarrow R_2)}_{(k_1,k_2)}(\alpha,\beta):=\mathcal{O}_{(k_1,k_2)}(\alpha,\beta) \prod_{n>0}\exp ( - \widetilde \Omega_\theta^{11}a^{\dag 1}_n\tilde a_n^{\dag 1}+ \widetilde \Omega_\theta^{12}a^{\dag 1}_n a_n^{2} - \widetilde \Omega_\theta^{22}a^{ 2}_n\tilde a_n^{ 2} + \widetilde \Omega_\theta^{21}\tilde a^{ 2}_n\tilde a_n^{\dag 1}), \end{equation} where \begin{equation} \mathcal{O}_{(k_1,k_2)}(\alpha,\beta)= 2 g^{(-)}\sum_{M,N}\cos(N\alpha)\cos(M\beta) |k_1M,k_2N\rangle_1\langle k_1N, k_2M| _2. \end{equation} The two defects above are the $\mathbb{Z}_2\times\mathbb{Z}_2$-symmetrized versions of their counterparts in the circle theory of \cite{bachas07}. At different values of the parameter $\phi$ we obtain defects which fall somewhere in the spectrum between totally reflective and totally transmissive. A defect is called totally reflective when no information can flow across the interface, which means the theories flanking the defect decouple entirely. A defect is totally transmissive when it commutes with insertions of the free fields. The totally reflective defects in the untwisted sector appear when $\phi$ or $\theta$ are multiples of $\pi/2$.
At these values we obtain the following four varieties of totally reflective defects, \begin{equation} E^{(R_1\leftarrow R_2)}_{(0,k_2)}(\alpha,\beta)= |D_O(\alpha)^{k_2}\rangle\rangle \langle\langle N_O(\beta)^{k_2}|, \end{equation} \begin{equation} E^{(R_1\leftarrow R_2)}_{(k_1,0)}(\alpha,\beta)=|N_O(\alpha)^{k_1}\rangle\rangle \langle\langle D_O(\beta)^{k_1}|, \end{equation} \begin{equation} O^{(R_1\leftarrow R_2)}_{(0,k_2)}(\alpha,\beta)= |N_O(\alpha)^{k_2}\rangle\rangle \langle\langle N_O(\beta)^{k_2}|, \end{equation} \begin{equation} O^{(R_1\leftarrow R_2)}_{(k_1,0)}(\alpha,\beta)=|D_O(\alpha)^{k_1}\rangle\rangle \langle\langle D_O(\beta)^{k_1}|. \end{equation} To obtain defects which are totally transmissive we set the rotation angles to odd multiples of $\pi/4$. At these values, the gluing matrices are \begin{equation} \Omega_{(2m+1)\pi/4}=\left( \begin{array}{cc} 0 & (-)^m\\ (-)^m &0 \end{array}\right) \ \ \ \ , \ \ \ \ \widetilde \Omega_{(2m+1)\pi/4}=\left( \begin{array}{cc} 0 & (-)^m\\ -(-)^m&0 \end{array}\right). \end{equation} Using $\Omega_{(2m+1)\pi/4}$ and $\widetilde \Omega_{(2m+1)\pi/4}$ as above we obtain the totally transmissive untwisted defects. From the defect class in Eq. (\ref{evendefect}) we get \begin{equation}\label{tranUntwisted} E^{(R_1\leftarrow R_2)}_{(p_1,p_2)}(\alpha,\beta)= \mathcal{E}_{(p_1,p_2)}(\alpha,\beta) \prod_{n>0}e^{(-)^m( a^{\dag 1}_n a_n^{2} + \tilde a^{ 2}_n\tilde a_n^{\dag 1})} , \end{equation} where \begin{equation} \mathcal{E}_{(p_1,p_2)}(\alpha,\beta)= 2 g^{+}\sum_{M,N}\cos(N\alpha)\cos(M\beta)|p_2N,p_1M\rangle_1\langle p_1 N, p_2 M|_2, \end{equation} and the integers $(p_1,p_2)$ satisfy \begin{equation}\label{evenP} (-1)^m\frac{p_1}{p_2}=\frac{R_2}{R_1}. \end{equation} From the defect class in Eq. (\ref{oddDefect}) we obtain \begin{equation}\label{OddTransmissive} O^{(R_1\leftarrow R_2)}_{(q_1,q_2)}(\alpha,\beta):=\mathcal{O}_{(q_1,q_2)}(\alpha,\beta) \prod_{n>0}e^{(-)^m( a^{\dag 1}_n a_n^{2}-\tilde a^{ 2}_n\tilde a_n^{\dag 1})}, \end{equation} where \begin{equation} \mathcal{O}_{(q_1,q_2)}(\alpha,\beta)= 2 g^{(-)}\sum_{M,N}\cos(N\alpha)\cos(M\beta) |q_1M,q_2N\rangle_1\langle q_1 N, q_2 M| _2. \end{equation} At the level of the oscillator modes, a defect is totally transmissive if it commutes with the oscillators up to a phase factor. One can check that for the defect $E^{(R_1\leftarrow R_2)}$ in Eq. (\ref{tranUntwisted}) the following holds, \begin{equation}\label{trans1} E\ a_m^{2\dag} = (-)^l a_m^{1\dag}E \ , \ \ \ \ E\ a_m^{2} =(-)^{l}a_m^1 E, \end{equation} \begin{equation}\label{trans3} E\ \widetilde a_m^{2\dag} =(\pm) (-)^{l}\widetilde a_m^{1\dag} E \ , \ \ \ E\ \widetilde a_m^{2} =(\pm) (-)^{l}\widetilde a_m^{1} E, \end{equation} \begin{equation}\label{zeroCommute} E\ \widehat N^2 = \frac{R_2}{R_1}\widehat N^1 E \ , \ \ \ \ E \widehat M^2 = \frac{R_1}{R_2}\widehat M^1 E, \end{equation} which shows that indeed the defect is totally transmissive. Furthermore, it follows that the defect also satisfies $L_n^1 E = E\ L_n^2$ and $\bar L_n^1 E= E \ \bar L_n^2$, which means that it is topological. Similar relations hold for the defect $O^{(R_1\leftarrow R_2)}$ in Eq. (\ref{OddTransmissive}). \section{Fully twisted defects} Now we apply the unfolding map to the rotated D-branes in the twisted sector, thus obtaining defects which are twisted. We first focus on those defects coming from the fully twisted boundary states $|DN^{\pm\pm}(y_0^1,\tilde y_0^2)\rangle\rangle_\phi$ and $|ND^{\pm\pm}(\tilde y_0^1, y_0^2)\rangle\rangle_\phi$.
In the next subsection we list those defects arising from the unfolding of the partially twisted D-branes. The defects corresponding to the isolated D-branes $|DN^{\pm\pm}(y_0^1,\tilde y_0^2)\rangle\rangle$ and $|ND^{\pm\pm}(\tilde y_0^1, y_0^2)\rangle\rangle$ (in equations (\ref{dnNoRotated}) and (\ref{ndNoRotated})) are, respectively, \begin{equation}\label{LDN} L_{DN_{\pm\pm},0}^{(R_1\leftarrow R_2)}(y_0^1,\tilde y_0^2):=\mathcal{V}^{\pm\pm}_{DN,0}(y_0^1,\tilde y_0^2)\prod_{n>0}\sum_{s\in\left\{a,b,c,d\right\}} e^{ s^{\dag 1}_n\tilde s_n^{\dag 1} - s^{ 2}_n\tilde s_n^{ 2} }, \end{equation} \begin{equation} L_{ND_{\pm\pm},0}^{(R_1\leftarrow R_2)}(y_0^1,\tilde y_0^2) := \mathcal{V}^{\pm\pm}_{ND,0}(\tilde y_0^1, y_0^2)\prod_{n>0}\sum_{s\in\left\{a,b,c,d\right\}} e^{ - s^{\dag 1}_n\tilde s_n^{\dag 1} + s^{ 2}_n\tilde s_n^{ 2} }. \end{equation} where, \begin{equation} \begin{split} \mathcal{V}^{\pm\pm}_{DN,0}(y_0^1,\tilde y_0^2) =& \frac{1}{2^{3/2}}\sqrt{\frac{R_2}{R_1}}\sum_{M,N} e^{iMy^1_0/R_1 +iNR_2 \tilde y^2_0}|M,0\rangle_1 \langle 0,-N|_2 \pm\pm \frac{1}{2^{1/2}} |y^1_0,T\rangle_1 \langle \tilde y^2_0,T |_2\\ &\pm \frac{1}{2^{5/4}}\frac{1}{\sqrt{R_1}}\sum_M e^{iMy^1_0/R_1}|M,0\rangle_1 \langle \tilde y^2_0,T |_2\\ &\pm \frac{1}{2^{3/4}} \sqrt{R_2}\sum_N e^{iNR_2\tilde y^2_0} |y^1_0,T\rangle_1 \langle 0,-N |_2, \end{split} \end{equation} \begin{equation} \begin{split} \mathcal{V}^{\pm\pm}_{ND,0}(\tilde y_0^1, y_0^2) =& \frac{1}{2^{3/2}}\sqrt{\frac{R_1}{R_2}}\sum_{M,N} e^{iNR_1 \tilde y^1_0+ iMy^2_0/R_2}|0,N\rangle_1 \langle M,0|_2 \pm\pm \frac{1}{2^{1/2}} |\tilde y^1_0,T\rangle_1 \langle y^2_0,T |_2 \\ & \pm \frac{1}{2^{3/4}}\sqrt{R_1}\sum_N e^{iN\tilde y^1_0 R_1}|0,N\rangle_1 \langle y^2_0,T |_2 \\ &\pm \frac{1}{2^{5/4}}\frac{1}{\sqrt{R_2}}\sum_M e^{iMy^2_0/R_2}| \tilde y^1_0,T\rangle_1 \langle M,0 |_2 . \end{split} \end{equation} We note that differently from the untwisted sector in the product theory, there are no twisted boundary states which represent a bound system of D2- and D0-branes. 
So the $R(\phi)$-invariant DD and NN boundary states in Eq. (\ref{diagonaldd}) and Eq. (\ref{diagonalnn}) have to be unfolded separately, giving the two additional defects \begin{equation}\label{phiInvariant1} L_{DD_{\pm\pm}}^{(R_1\leftarrow R_2)}(y_0^1, y_0^2):= \mathcal{V}^{\pm\pm}_{DD}(y_0^1, y_0^2)\prod_{n>0}\sum_{s\in\left\{a,b,c,d\right\}} e^{s_n^{\dag 1}\tilde s_n^{\dag 1}+\tilde s_n^{ 2} s_n^{ 2}}, \end{equation} \begin{equation}\label{phiInvariant2} L_{NN_{\pm\pm}}^{(R_1\leftarrow R_2)}(\tilde y_0^1, \tilde y_0^2):= \mathcal{V}^{\pm\pm}_{NN}(\tilde y_0^1, \tilde y_0^2)\prod_{n>0}\sum_{s\in\left\{a,b,c,d\right\}} e^{-s_n^{\dag 1}\tilde s_n^{\dag 1}-\tilde s_n^{ 2} s_n^{ 2}}, \end{equation} where \begin{equation} \begin{split} \mathcal{V}^{\pm\pm}_{DD}(y_0^1, y_0^2):= & \frac{1}{4\sqrt{R_1R_2}}\sum_{M_1,M_2} e^{iM_1y_0^1/R_1+iM_2y_0^2/R_2 }|M_1,0\rangle_1 \langle M_2,0|_2 \pm \pm\frac{1}{ \sqrt{2}} |y_0^1 ,T\rangle_1 \langle y_0^2, T|_2 \\ &\pm \frac{1}{2^{5/4}\sqrt{R_1}}\sum_{M_1} e^{iM_1y_0^1/R_1}|M_1,0\rangle_1\langle y_0^2, T|_2\\ &\pm \frac{1}{2^{5/4}\sqrt{R_2}}\sum_{M_2} e^{iM_2y_0^2/R_2} |y_0^1, T\rangle_1 \langle M_2,0|_2 \end{split} \end{equation} \begin{equation} \begin{split} \mathcal{V}^{\pm\pm}_{NN}(\tilde y_0^1, \tilde y_0^2):= & \frac{\sqrt{R_1 R_2}}{2}\sum_{N_1, N_2} e^{iN_1\tilde y_0^1R_1+iN_2\tilde y_0^2 R_2 }|0, N_1\rangle_1 \langle 0, - N_2 |_2 \pm \pm \frac{1}{\sqrt{2}} |\tilde y_0^1 ,T\rangle_1 \langle \tilde y_0^2 ,T|_2\\ &\pm \frac{\sqrt{R_1}}{2^{3/4}}\sum_{N_1} e^{iN_1\tilde y_0^1 R_1}|0,N_1\rangle_1 \langle \tilde y_0^2 ,T| _2\\ &\pm \frac{\sqrt{R_2}}{2^{3/4}}\sum_{N_2} e^{iN_2\tilde y_0^2 R_2}|\tilde y_0^1 ,T\rangle_1 \langle 0, -N_2|_2 . \end{split} \end{equation} The fully twisted sector contains elements which are totally reflective. The defects $L_{DD_{\pm\pm}}^{(R_1\leftarrow R_2)}$ and $L_{NN_{\pm\pm}}^{(R_1\leftarrow R_2)}$ given by equations (\ref{phiInvariant1}) and (\ref{phiInvariant2}) are totally reflective, and so are $L_{DN_{\pm\pm},0}^{(R_1\leftarrow R_2)}$ and $L_{ND_{\pm\pm},0}^{(R_1\leftarrow R_2)}$. \section{Partially twisted defects} We call ``partially twisted defects'' those defects obtained by unfolding the partially twisted boundary states $|Ai^{\pm}(y_0,x_0)\rangle\rangle$ and $|Bi^{\pm}(x_0,y_0)\rangle\rangle$.
The defects corresponding to those D-branes which are $R(\phi)$-invariant are given below, \begin{equation}\label{pt1} L_{A1_{\pm}}^{(R_1\leftarrow R_2)}(y_0, x_0):= \mathcal{V}^{\pm}_{A1}(y_0, x_0)\prod_{n>0}\sum_{s\in\left\{a,d\right\}} e^{s_n^{\dag 1}\tilde s_n^{\dag 1}+\tilde s_n^{ 2} s_n^{ 2}}, \end{equation} \begin{equation} L_{A4_{\pm}}^{(R_1\leftarrow R_2)}(\tilde y_0,\tilde x_0):= \mathcal{V}^{\pm}_{A4}(\tilde y_0, \tilde x_0)\prod_{n>0}\sum_{s\in\left\{a,d\right\}} e^{-s_n^{\dag 1}\tilde s_n^{\dag 1}-\tilde s_n^{ 2} s_n^{ 2}}, \end{equation} \begin{equation} L_{B1_{\pm}}^{(R_1\leftarrow R_2)}( x_0, y_0):= \mathcal{V}^{\pm}_{B1} (x_0, y_0)\prod_{n>0}\sum_{s\in\left\{a,c\right\}} e^{s_n^{\dag 1}\tilde s_n^{\dag 1}+\tilde s_n^{ 2} s_n^{ 2}}, \end{equation} \begin{equation}\label{pt4} L_{B4_{\pm}}^{(R_1\leftarrow R_2)}(\tilde x_0, \tilde y_0):= \mathcal{V}^{\pm}_{B4} (\tilde x_0, \tilde y_0)\prod_{n>0}\sum_{s\in\left\{a,c\right\}} e^{-s_n^{\dag 1}\tilde s_n^{\dag 1}-\tilde s_n^{ 2} s_n^{ 2}}, \end{equation} where \begin{equation} \begin{split} \mathcal{V}^{\pm}_{A1}(y_0, x_0) = &\frac{1}{2\sqrt{R_1 R_2}}\sum_{M_1, M_2} e^{iM_1 y_0/R_1}\cos\left(\frac{M_2x_0}{R_2}\right) |M_1 ,0\rangle_1 \langle M_2,0|_2 \\ &\pm \frac{1}{ 2^{1/4} \sqrt{R_2}}\sum_{M_2} \cos\left(\frac{M_2x_0}{R_2}\right)|y_0,T\rangle_1 \langle M_2, 0|_2, \end{split} \end{equation} \begin{equation} \begin{split} \mathcal{V}^{\pm}_{A4}(\tilde y_0, \tilde x_0) =&\sqrt{R_1 R_2}\sum_{N_1, N_2} e^{iN_1R_1\tilde y_0}\cos\left(N_2 R_2\tilde x_0\right) |0,N_1\rangle_1\langle 0 , -N_2|_2\\ & \pm 2^{1/4}\sqrt{R_2}\sum_{N_2} \cos\left(N_2 R_2\tilde x_0\right) |\tilde y_0,T\rangle_1 \langle 0,-N_2|_2, \end{split} \end{equation} \begin{equation} \begin{split} \mathcal{V}^{\pm}_{B1}(x_0, y_0)= & \frac{1}{2 \sqrt{R_1 R_2 }}\sum_{M_1, M_2} e^{iM_2y_0/R_2} \cos\left(\frac{M_1 x_0}{R_1}\right)| M_1,0\rangle_1 \langle M_2,0 |_2 \\ &\pm \frac{1}{2^{1/4}\sqrt{R_1}}\sum_{M_1} \cos\left(\frac{M_1 x_0}{R_1}\right)| M_1,0\rangle_1\langle y_0, T |_2, \end{split} \end{equation} \begin{equation} \begin{split} \mathcal{V}^{\pm}_{B4}(\tilde x_0, \tilde y_0) = &\sqrt{ R_1 R_2}\sum_{N_1, N_2} e^{iN_2 R_2\tilde y_0}\cos\left(N_1 R_1\tilde x_0\right)|0,N_1 \rangle_1 \langle 0, -N_2|_2\\ &\pm 2^{1/4} \sqrt{ R_1}\sum_{N_1} \cos\left(N_1 R_1\tilde x_0\right)|0,N_1 \rangle_1 \langle \tilde y_0, T|_2 . \end{split} \end{equation} \section{Fusion algebra} Whether topological or not, defects can be added; in the context of the unfolding map, this follows because boundary states can be added. More importantly, defects can also be fused together to obtain a new defect. This operation is well defined without any need for regularization when working with topological defects. For two generic defects $D$ and $D'$, their fusion may be defined as \begin{align}\label{interfaceFuse} D*D' = \lim_{\epsilon \rightarrow{0}} e^{2\pi d/\epsilon} D \ e^{-\epsilon H} D', \end{align} where $H$ is the Hamiltonian of the bulk theory wedged between the two defects, and $e^{2\pi d/\epsilon}$ is a self-energy counter-term with a free parameter $d$ which must be fixed \cite{bachas07}. In this section we restrict to those defects which are topological. Recalling that topological defects commute with elements of the Virasoro algebra, we have $\left[ D, H\right] = 0$ since $H = L_0 + \xbar L_0$. Therefore we pay no penalty by moving $e^{-\epsilon H}$ outside the product on the right-hand side of Eq. (\ref{interfaceFuse}) and taking the limit $\epsilon \rightarrow{0}$.
Since no regularization is required in this case, we will set $d=0$ in the products we compute below. That is, the fusion products for the topological cases considered here are given by the composition of the operators representing such defects. \section{Fusion between totally transmissive defects with $R_2=R_1=R$} We first consider the totally transmissive defects which are untwisted. At generic radius $R$, the class of defects ${O}^{(R\leftarrow R)}_{(q_1,q_2)}$ does not contribute any defects, since the condition \begin{equation}\label{blah} (-1)^m\frac{q_1}{q_2}=2R_1 R_2 \end{equation} coming from Eq. (\ref{thetatheta}) is not satisfied unless we are at the self-dual radius, $R_1 = R_2 = R_*$. There are two families of defects coming from ${E}^{(R\leftarrow R)}_{(p_1,p_2)}$, since Eq. (\ref{evenP}) is satisfied only by integers $(p_1,p_2)$ such that $p_1 = (-)^m p_2=p$. But as with the $S^1$ defects, only those defects with $p=1$ are invertible. To check this, we only have to work with the vacuum sums since the higher terms annihilate the vacua $|N,M\rangle$: \begin{equation} \begin{split} E^{(R\leftarrow R)}_{(p,p)}(\alpha,\beta)|N',M'\rangle&= 2 g^{+}\sum_{M,N}\cos(N\alpha)\cos(M\beta)|pN,pM\rangle \langle p N, p M| N',M'\rangle\\ &= 2 g^{+}\sum_{M,N}\cos(N\alpha)\cos(M\beta)|pN,pM\rangle \delta_{pN,N'} \delta_{pM, M'}, \end{split} \end{equation} but this expression will be zero for any $N'\neq 0 \mod p$ or $M'\neq 0 \mod p$. Hence the only defects $E^{(R\leftarrow R)}_{(p,p)}$ which are not many-to-one are those with $p=1$. The same argument holds for $E^{(R\leftarrow R)}_{(p,-p)}$. Recalling that the normalization factor $g^{+}$ for the untwisted defects is \begin{equation} g^{+}_{(p_1,p_2)}=\sqrt{\frac{p_1p_2}{\sin2\phi}}, \end{equation} we see that at $\phi=(2m+1)\pi/4$, we have \begin{equation} g^{+}_{(1,1)}=g^{+}_{(1,-1)}=1. \end{equation} At a generic radius $R$ only two of our defects are topological. These are the defects $E^{(R\leftarrow R)}_{(1,1)}$ and $E^{(R\leftarrow R)}_{(1, -1)}$, where the general expression is given in Eq. (\ref{tranUntwisted}). Their fusion obeys \begin{equation}\label{fusionEven++} E_{(1,1)}(\alpha,\beta)E_{(1,1)}(\alpha ',\beta ') = E_{(1,-1)}(\alpha,\beta)E_{(1,-1)}(\alpha ',\beta '), \end{equation} \begin{equation}\label{fusionEven+-} E_{(1,1)}(\alpha,\beta)E_{(1,-1)}(\alpha ',\beta ') = E_{(1,-1)}(\alpha,\beta)E_{(1,1)}(\alpha ',\beta ') , \end{equation} as we show below.
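Before going through the operator computation, the vacuum (zero-mode) part of these fusion rules can be checked numerically. The following sketch is purely illustrative and is not part of the derivation: the lattice truncation, the random sampling, and the normalization $g^{+}=1$ are our assumptions.
\begin{verbatim}
import math, random

# Check, on diagonal vacuum matrix elements, that
#   E_(1,1)(a,b) E_(1,1)(a',b') = 1/2 [ E_(1,1)(a-a',b-b') + E_(1,1)(a-a',b+b')
#                                      + E_(1,1)(a+a',b-b') + E_(1,1)(a+a',b+b') ],
# where <N,M| E_(1,1)(a,b) |N,M> = 2 cos(N a) cos(M b) with g+ = 1.
# Oscillator modes are treated analytically in the text.

def E_vac(a, b, N, M):
    return 2.0 * math.cos(N * a) * math.cos(M * b)

random.seed(1)
for _ in range(200):
    a, b, ap, bp = (random.uniform(0.0, 2.0 * math.pi) for _ in range(4))
    for N in range(-4, 5):
        for M in range(-4, 5):
            lhs = E_vac(a, b, N, M) * E_vac(ap, bp, N, M)
            rhs = 0.5 * (E_vac(a - ap, b - bp, N, M) + E_vac(a - ap, b + bp, N, M)
                         + E_vac(a + ap, b - bp, N, M) + E_vac(a + ap, b + bp, N, M))
            assert abs(lhs - rhs) < 1e-9
\end{verbatim}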
\begin{equation} \begin{split} &E_{(1,-1)}(\alpha,\beta)E_{(1,-1)}(\alpha ',\beta ')\\ &=\left(\mathcal{E}_{(1,-1)}(\alpha,\beta)\prod_{n>0} e ^{-a_n^\dag a_n - \tilde a _n^\dag \tilde a_n}\right)\left(\mathcal{E}_{(1,-1)}(\alpha',\beta') \prod_{n>0}e ^{-a_n^\dag a_n - \tilde a _n^\dag \tilde a_n}\right)\\ &=\prod_{n>0}\sum_{s_n,t_n} \sum_{i_n,j_n} \frac{(-)^{s_n}}{s_n!}\frac{(-)^{t_n}}{t_n!} \frac{(-)^{i_n}}{i_n!}\frac{(-)^{j_n}}{j_n!}\\ & \ \ \ \ \ (a_n^\dag)^{s_n} (\tilde a_n^\dag)^{t_n }\mathcal{E}_{(1,-1)}(\alpha,\beta) (a_n)^{s_n} (\tilde a_n)^{t_n } (a_n^\dag)^{i_n} (\tilde a_n^\dag)^{j_n }\mathcal{E}_{(1,-1)}(\alpha',\beta') (a_n)^{i_n} (\tilde a_n)^{j_n } \\ &=\prod_{n>0}\sum_{s_n,t_n} \frac{1}{s_n!}\frac{1}{t_n!} (a_n^\dag)^{s_n} (\tilde a_n^\dag)^{t_n }\mathcal{E}_{(1,-1)}(\alpha,\beta)\mathcal{E}_{(1,-1)}(\alpha',\beta') (a_n)^{s_n} (\tilde a_n)^{t_n }\\ &=\left( \mathcal{E}_{(1,-1)}(\alpha,\beta)\mathcal{E}_{(1,-1)}(\alpha',\beta') \right)\prod_{n>0} e ^{a_n^\dag a_n + \tilde a _n^\dag \tilde a_n}, \end{split} \end{equation} In the above we used \begin{equation} \langle 0|a_p ^{m_p} a_{-p}^{m_p'}|0\rangle = m_p! \delta_{m_p,m_p'}, \end{equation} which holds for the canonical algebra. Now we observe that \begin{equation} \begin{split} &\mathcal{E}_{(1,-1)}(\alpha,\beta)\mathcal{E}_{(1,-1)}(\alpha',\beta') \\ &=4 \sum_{M,N} \sum_{M',N'} \cos(N \alpha)\cos (M\beta) \cos(N '\alpha') \cos (M' \beta ')\\ & \ \ \ \ \ \ \ \ \ \ \ |-N,M\rangle\langle N,-M| -N',M'\rangle\langle N',-M'|\\ &=4 \sum_{M,N} \cos(N \alpha)\cos (M\beta) \cos(N \alpha') \cos (M\beta ') |-N,M\rangle\langle -N,M|\\ &=4 \sum_{M,N} \cos(N \alpha)\cos (M\beta) \cos(N \alpha') \cos (M\beta ') |N,M\rangle\langle N,M|. \end{split} \end{equation} Using the trigonometric identity \begin{align} 2\cos x \cos y = \cos(x+y) + \cos(x-y), \end{align} we rewrite the vacuum expression as \begin{align} & \mathcal{E}_{(1,-1)}(\alpha,\beta)\mathcal{E}_{(1,-1)}(\alpha',\beta') \nonumber\\ &= \sum_{M,N} \left( \cos[N(\alpha- \alpha')] \cos[M(\beta- \beta')] + \cos[N(\alpha- \alpha')] \cos[M(\beta + \beta')]\right. \nonumber \\ &\left. + \cos[N(\alpha + \alpha')] \cos[M(\beta - \beta')] + \cos[N(\alpha + \alpha')] \cos[M(\beta + \beta')]\right)|N,M\rangle\langle N,M|\nonumber\\ & = \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha-\alpha',\beta-\beta') + \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha-\alpha',\beta+ \beta')\nonumber \\ & + \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha + \alpha',\beta-\beta') + \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha + \alpha',\beta + \beta'). \end{align} So we obtain, \begin{align} E_{(1,-1)}(\alpha,\beta)E_{(1,-1)}(\alpha ',\beta ') &= \frac{1}{2}{E}_{(1,1)}(\alpha-\alpha',\beta-\beta') + \frac{1}{2}{E}_{(1,1)}(\alpha-\alpha',\beta+ \beta') \nonumber\\ & + \frac{1}{2}{E}_{(1,1)}(\alpha + \alpha',\beta-\beta') + \frac{1}{2}{E}_{(1,1)}(\alpha + \alpha',\beta + \beta'). \end{align} To compute $ E_{(1,1)}(\alpha,\beta)E_{(1,1)}(\alpha ',\beta ')$ we proceed in the same way, \begin{align}\label{EEplusPlus} E_{(1,1)}(\alpha,\beta)E_{(1,1)}(\alpha ',\beta ') & =\left(\mathcal{E}_{(1,1)}(\alpha,\beta)\prod_{n>0} e ^{a_n^\dag a_n + \tilde a _n^\dag \tilde a_n}\right)\left(\mathcal{E}_{(1,1)}(\alpha',\beta') \prod_{n>0}e ^{a_n^\dag a_n + \tilde a _n^\dag \tilde a_n}\right)\nonumber\\ &=\left(\mathcal{E}_{(1,1)}(\alpha,\beta) \mathcal{E}_{(1,1)}(\alpha',\beta') \right)\prod_{n>0} e ^{a_n^\dag a_n + \tilde a _n^\dag \tilde a_n}.
\end{align} The vacuum part of the product is obtained via \begin{align} & \mathcal{E}_{(1,1)}(\alpha,\beta)\mathcal{E}_{(1,1)}(\alpha',\beta')\nonumber\\ &= 4 \sum_{M,N} \sum_{M',N'} \cos(N \alpha)\cos (M\beta) \cos(N '\alpha') \cos (M' \beta ') |N,M\rangle\langle N,M| N',M'\rangle\langle N',M'|\nonumber\\ &=4 \sum_{M,N} \cos(N \alpha)\cos (M\beta) \cos(N \alpha') \cos (M\beta ') |N,M\rangle\langle N,M|\nonumber\\ & = \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha-\alpha',\beta-\beta') + \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha-\alpha',\beta+ \beta')\nonumber \\ & + \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha + \alpha',\beta-\beta') + \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha + \alpha',\beta + \beta'). \end{align} Inserting the above into Eq. (\ref{EEplusPlus}) we get \begin{align} E_{(1,1)}(\alpha,\beta)E_{(1,1)}(\alpha ',\beta ') &= \frac{1}{2}{E}_{(1,1)}(\alpha-\alpha',\beta-\beta') + \frac{1}{2}{E}_{(1,1)}(\alpha-\alpha',\beta+ \beta')\nonumber \\ & + \frac{1}{2}{E}_{(1,1)}(\alpha + \alpha',\beta-\beta') + \frac{1}{2}{E}_{(1,1)}(\alpha + \alpha',\beta + \beta'). \end{align} The other two combinations are, \begin{align} E_{(1,1)}(\alpha,\beta)E_{(1,-1)}(\alpha ',\beta ')& =\frac{1}{2}{E}_{(1,-1)}(\alpha-\alpha',\beta-\beta') + \frac{1}{2}{E}_{(1,-1)}(\alpha-\alpha',\beta+ \beta') \nonumber\\ & + \frac{1}{2}{E}_{(1,-1)}(\alpha + \alpha',\beta-\beta') + \frac{1}{2}{E}_{(1,-1)}(\alpha + \alpha',\beta + \beta'), \end{align} and \begin{align} {E}_{(1,-1)}(\alpha,\beta){E}_{(1,1)}(\alpha',\beta') &= \frac{1}{2}{E}_{(1,-1)}(\alpha-\alpha',\beta-\beta') + \frac{1}{2}{E}_{(1,-1)}(\alpha-\alpha',\beta+ \beta') \nonumber\\ & + \frac{1}{2}{E}_{(1,-1)}(\alpha + \alpha',\beta-\beta') + \frac{1}{2}{E}_{(1,-1)}(\alpha + \alpha',\beta + \beta'). \end{align} We see that the topological defects at a generic radius $R$ form an algebra. The fusion products built out of $E_{(1,1)}$ and $E_{(1,-1)}$ in Eq. (\ref{fusionEven++}) and Eq. (\ref{fusionEven+-}) are expected if we note that their counterparts in the circle theory form the symmetry group $U(1)^2 \rtimes\mathbb{Z}_2$. Since we mod out by the action of the $\mathbb{Z}_2$ subgroup, the twisting disappears, giving $E_{(1,1)}E_{(1,1)}=E_{(1,-1)}E_{(1,-1)}$ and $E_{(1,-1)}E_{(1,1)}=E_{(1,1)}E_{(1,-1)}$. \section{At the self-dual radius $R=R_*$} At the self-dual radius we obtain two new defects which satisfy the condition in Eq. (\ref{blah}) and are invertible: $O_{(1,1)}^{(R_*\leftarrow R_*)}$ and $O_{(1,-1)}^{(R_*\leftarrow R_*)}$ for $m$ even or odd, respectively. The enlarged set of topological defects forms an enhanced algebra. This algebra is related to the $(U(1) \ltimes \mathbb{Z}_2)^2$ symmetry of the circle theory at the self-dual radius, again with the $\mathbb{Z}_2$-twisting broken. To compute $O_{(1,1)}(\alpha,\beta) O_{(1,1)}(\alpha',\beta')$ we use Eq.
(\ref{OddTransmissive}) with $m=0$, \begin{equation}\label{EEplusplus} \begin{split} &O_{(1,1)}(\alpha,\beta) O_{(1,1)}(\alpha',\beta')=\left(\mathcal{O}_{(1,1)}(\alpha,\beta) \prod_{n>0}e^{ a^{\dag }_n a_n^{}-\tilde a^{ }_n\tilde a_n^{\dag }}\right)\left(\mathcal{O}_{(1,1)}(\alpha',\beta') \prod_{n>0}e^{ a^{\dag }_n a_n^{}-\tilde a^{ }_n\tilde a_n^{\dag }}\right)\\ &=\prod_{n>0}\sum_{s_n,t_n} \sum_{i_n,j_n} \frac{1}{s_n!}\frac{(-)^{t_n}}{t_n!} \frac{1}{i_n!}\frac{(-)^{j_n}}{j_n!}\\ & (a_n^\dag)^{s_n} (\tilde a_n^\dag)^{t_n }\mathcal{O}_{(1,1)}(\alpha,\beta) (a_n)^{s_n} (\tilde a_n)^{t_n } (a_n^\dag)^{i_n} (\tilde a_n^\dag)^{j_n }\mathcal{O}_{(1,1)}(\alpha',\beta') (a_n)^{i_n} (\tilde a_n)^{j_n } \\ &=\prod_{n>0}\sum_{s_n,t_n} \frac{1}{s_n!}\frac{1}{t_n!} (a_n^\dag)^{s_n} (\tilde a_n^\dag)^{t_n }\mathcal{O}_{(1,1)}(\alpha,\beta)\mathcal{O}_{(1,1)}(\alpha',\beta') (a_n)^{s_n} (\tilde a_n)^{t_n }\\ &=\left( \mathcal{O}_{(1,1)}(\alpha,\beta) \mathcal{O}_{(1,1)}(\alpha',\beta') \right)\prod_{n>0} e ^{a_n^\dag a_n + \tilde a _n^\dag \tilde a_n}. \end{split} \end{equation} The vacuum part is given by, \begin{equation} \begin{split} &\mathcal{O}_{(1,1)}(\alpha,\beta)\mathcal{O}_{(1,1)}(\alpha',\beta') \\ &= 4 \sum_{M,N} \sum_{M',N'} \cos(N \alpha)\cos (M\beta) \cos(N '\alpha') \cos (M' \beta ') |M,N\rangle\langle N,M| M',N'\rangle\langle N',M'|\\ &=4\sum_{M,N} \cos(N \alpha)\cos (M\beta) \cos(M\alpha') \cos (N\beta ') |M,N\rangle\langle M,N|\\ & =4\sum_{M,N} \cos(M \alpha)\cos (N\beta) \cos(N\alpha') \cos (M\beta ') |N,M\rangle\langle N,M|\\ & =4\sum_{M,N} \cos (N\beta) \cos(N\alpha') \cos(M \alpha) \cos (M\beta ') |N,M\rangle\langle N,M|\\ & = \frac{1}{2}\mathcal{E}_{(1,1)}(\beta-\alpha',\alpha-\beta') + \frac{1}{2}\mathcal{E}_{(1,1)}(\beta-\alpha',\alpha + \beta') \\ & + \frac{1}{2}\mathcal{E}_{(1,1)}(\beta + \alpha',\alpha-\beta') + \frac{1}{2}\mathcal{E}_{(1,1)}(\beta + \alpha',\alpha + \beta'). \end{split} \end{equation} Inserting the above into Eq. (\ref{EEplusplus}) we obtain \begin{align} O_{(1,1)}(\alpha,\beta) O_{(1,1)}(\alpha',\beta')& = \frac{1}{2} {E}_{(1,1)}(\beta-\alpha',\alpha-\beta') + \frac{1}{2} {E}_{(1,1)}(\beta-\alpha',\alpha + \beta') \nonumber\\ & + \frac{1}{2} {E}_{(1,1)}(\beta + \alpha',\alpha-\beta') + \frac{1}{2} {E}_{(1,1)}(\beta + \alpha',\alpha + \beta'). \end{align} To compute $O_{(1,-1)}(\alpha,\beta) O_{(1,-1)}(\alpha',\beta')$ we use Eq. (\ref{OddTransmissive}) with $m=1$, \begin{equation}\label{OO} \begin{split} &O_{(1,-1)}(\alpha,\beta) O_{(1,-1)}(\alpha',\beta')\\ &=\left(\mathcal{O}_{(1,-1)}(\alpha,\beta) \prod_{n>0}e^{- a^{\dag }_n a_n^{}+ \tilde a^{ }_n\tilde a_n^{\dag }}\right)\left(\mathcal{O}_{(1,-1)}(\alpha',\beta') \prod_{n>0}e^{ - a^{\dag }_n a_n^{}+ \tilde a^{ }_n\tilde a_n^{\dag }}\right)\\ &=\left( \mathcal{O}_{(1,-1)}(\alpha,\beta) \mathcal{O}_{(1,-1)}(\alpha',\beta') \right)\prod_{n>0} e ^{a_n^\dag a_n + \tilde a _n^\dag \tilde a_n}.
\end{split} \end{equation} The vacuum part is given by, \begin{equation} \begin{split} &\mathcal{O}_{(1,-1)}(\alpha,\beta)\mathcal{O}_{(1,-1)}(\alpha',\beta') \\ &= 4 \sum_{M,N} \sum_{M',N'} \cos(N \alpha)\cos (M\beta) \cos(N '\alpha') \cos (M' \beta ') |M,-N\rangle\langle N,-M| M',-N'\rangle\langle N',-M'|\\ &=4\sum_{M,N} \cos(N \alpha)\cos (M\beta) \cos(M\alpha') \cos (N\beta ') |M,-N\rangle\langle M,-N|\\ &=4\sum_{M,N} \cos(N \alpha)\cos (M\beta) \cos(M\alpha') \cos (N\beta ') |M,N\rangle\langle M,N|\\ &= \sum_{M,N} \left( \cos[N(\alpha - \beta')] \cos[M(\beta- \alpha')] + \cos[N(\alpha - \beta')] \cos[M(\beta + \alpha')]\right. \\ &\left. + \cos[N(\alpha + \beta')] \cos[M(\beta - \alpha')] + \cos[N(\alpha + \beta')] \cos[M(\beta + \alpha')]\right)|N,M\rangle\langle N,M|\nonumber\\ & = \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha-\beta',\beta-\alpha') + \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha-\beta',\beta+ \alpha') \\ & + \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha + \beta',\beta-\alpha') + \frac{1}{2}\mathcal{E}_{(1,1)}(\alpha + \beta',\beta + \alpha'). \end{split} \end{equation} Inserting the above into Eq. (\ref{OO}) we get \begin{align} O_{(1,-1)}(\alpha,\beta) O_{(1,-1)}(\alpha',\beta') & = \frac{1}{2}{E}_{(1,1)}(\alpha-\beta',\beta-\alpha') + \frac{1}{2}E_{(1,1)}(\alpha-\beta',\beta+ \alpha')\nonumber \\ & + \frac{1}{2}{E}_{(1,1)}(\alpha + \beta',\beta-\alpha') + \frac{1}{2}{E}_{(1,1)}(\alpha + \beta',\beta + \alpha'). \end{align} Using a similar procedure we obtain the other two odd-odd fusion products at the self-dual radius. \begin{align} O_{(1,1)}(\alpha,\beta) O_{(1,-1)}(\alpha',\beta') & = \frac{1}{2}{E}_{(1,-1)}(\beta -\alpha',\alpha-\beta') + \frac{1}{2}{E}_{(1,-1)}(\beta -\alpha',\alpha + \beta')\nonumber \\ & + \frac{1}{2}{E}_{(1,-1)}(\beta + \alpha',\alpha -\beta') + \frac{1}{2}{E}_{(1,-1)}(\beta + \alpha',\alpha + \beta'). \end{align} \begin{align} O_{(1,-1)}(\alpha,\beta) O_{(1,1)}(\alpha',\beta') & = \frac{1}{2}{E}_{(1,-1)}(\beta -\alpha',\alpha-\beta') + \frac{1}{2}{E}_{(1,-1)}(\beta -\alpha',\alpha + \beta')\nonumber \\ & + \frac{1}{2}{E}_{(1,-1)}(\beta + \alpha',\alpha -\beta') + \frac{1}{2}{E}_{(1,-1)}(\beta + \alpha',\alpha + \beta'). \end{align} We also have fusion products between the odd and even defects as given below. \begin{align} E_{(1,1)}(\alpha,\beta)O_{(1,1)}(\alpha',\beta ') &= \frac{1}{2}{O}_{(1,1)}(\beta+\alpha',\alpha+ \beta') + \frac{1}{2}{O}_{(1,1)}(\beta - \alpha',\alpha+ \beta')\nonumber \\ & + \frac{1}{2}{O}_{(1,1)}(\beta + \alpha',\alpha-\beta') + \frac{1}{2}{O}_{(1,1)}(\beta - \alpha',\alpha - \beta'). \end{align} \begin{align} {O}_{(1,1)}(\alpha,\beta){E}_{(1,1)}(\alpha',\beta') & = \frac{1}{2}{O}_{(1,1)}(\alpha+\alpha',\beta + \beta') + \frac{1}{2}{O}_{(1,1)}(\alpha + \alpha',\beta - \beta')\nonumber \\ & + \frac{1}{2}{O}_{(1,1)}(\alpha - \alpha',\beta + \beta') + \frac{1}{2}{O}_{(1,1)}(\alpha - \alpha',\beta - \beta'). \end{align} \begin{align} E_{(1,1)}(\alpha,\beta)O_{(1,-1)}(\alpha',\beta ') &= \frac{1}{2}{O}_{(1,-1)}(\beta+\alpha',\alpha+ \beta') + \frac{1}{2}{O}_{(1,-1)}(\beta - \alpha',\alpha+ \beta')\nonumber \\ & + \frac{1}{2}{O}_{(1,-1)}(\beta + \alpha',\alpha-\beta') + \frac{1}{2}{O}_{(1,-1)}(\beta - \alpha',\alpha - \beta'). 
\end{align} \begin{align} {O}_{(1,-1)}(\alpha,\beta){E}_{(1,1)}(\alpha',\beta')&= \frac{1}{2}{O}_{(1,1)}(\alpha+\alpha',\beta + \beta') + \frac{1}{2}{O}_{(1,1)}(\alpha + \alpha',\beta - \beta')\nonumber \\ & + \frac{1}{2}{O}_{(1,1)}(\alpha - \alpha',\beta + \beta') + \frac{1}{2}{O}_{(1,1)}(\alpha - \alpha',\beta - \beta'). \end{align} \begin{align} E_{(1,-1)}(\alpha,\beta)O_{(1,1)}(\alpha',\beta ') &= \frac{1}{2}{O}_{(1,-1)}(\beta+\alpha',\alpha+ \beta') + \frac{1}{2}{O}_{(1,-1)}(\beta - \alpha',\alpha+ \beta')\nonumber \\ & + \frac{1}{2}{O}_{(1,-1)}(\beta + \alpha',\alpha-\beta') + \frac{1}{2}{O}_{(1,-1)}(\beta - \alpha',\alpha - \beta'). \end{align} \begin{align} {O}_{(1,1)}(\alpha,\beta){E}_{(1,-1)}(\alpha',\beta')& = \frac{1}{2}{O}_{(1,-1)}(\alpha+\alpha',\beta+ \beta') + \frac{1}{2}{O}_{(1,-1)}(\alpha + \alpha',\beta - \beta')\nonumber \\ & + \frac{1}{2}{O}_{(1,-1)}(\alpha - \alpha',\beta+ \beta') + \frac{1}{2}{O}_{(1,-1)}(\alpha - \alpha',\beta - \beta'). \end{align} \begin{align} E_{(1,-1)}(\alpha,\beta)O_{(1,-1)}(\alpha',\beta ') &= \frac{1}{2}{O}_{(1,1)}(\beta+\alpha',\alpha+ \beta') + \frac{1}{2}{O}_{(1,1)}(\beta - \alpha',\alpha+ \beta')\nonumber \\ & + \frac{1}{2}{O}_{(1,1)}(\beta + \alpha',\alpha-\beta') + \frac{1}{2}{O}_{(1,1)}(\beta - \alpha',\alpha - \beta'). \end{align} \begin{align} {O}_{(1,-1)}(\alpha,\beta){E}_{(1,-1)}(\alpha',\beta') & = \frac{1}{2}{O}_{(1,1)}(\alpha+\alpha',\beta+ \beta') + \frac{1}{2}{O}_{(1,1)}(\alpha + \alpha',\beta - \beta') \nonumber\\ & + \frac{1}{2}{O}_{(1,1)}(\alpha - \alpha',\beta+\beta') + \frac{1}{2}{O}_{(1,1)}(\alpha - \alpha',\beta - \beta'). \end{align} \chapter[DEFECTS BETWEEN $\mathbb{C}/\mathbb{Z}_n$ ORBIFOLDS]{DEFECTS BETWEEN $\mathbb{C}/\mathbb{Z}_n$ ORBIFOLDS \footnote{Reprinted with permission from ``Defects and boundary RG flows in $\mathbb{C}/\mathbb{Z}_d$'' by M. Becker, Y. Cabrera, and D. Robbins, 2017, \emph{JHEP} 2017 : 7, Copyright [2017] by the authors.}} This chapter develops a description of topological defects for the supersymmetric string with target space $\mathbb{C}/\mathbb{Z}_n$. This language is used to find defects which successfully encode the RG flow between these theories. \section{$\mathcal{N}=(2,2)$ supersymmetry on $\mathbb{R}^2$} In this section we review the $\mathcal{N}=(2,2)$ supersymmetric string in 1+1 dimensions with complex-valued free fields, as well as general aspects of the $(2,2)$ algebra. We follow the conventions of \cite{hori02} but restrict to the definitions and concepts necessary for this work. The action for the RNS supersymmetric model is \begin{equation}\label{rns} S= \displaystyle \int_{\mathbb{R}^2} d^2x \left(|\partial_0 \phi|^2 - |\partial_1 \phi|^2 + i\bar \psi_-(\partial_0 +\partial_1)\psi_- + i\bar \psi_+(\partial_0 -\partial_1)\psi_+\right), \end{equation} where $\phi$ is a scalar and the fields $\psi_\pm$ form a Dirac fermion. The bar symbol means complex conjugation in this context. The action is left invariant under the following supersymmetry transformations \begin{equation} \delta \phi = \epsilon_+\psi_--\epsilon_-\psi_+ \ \ , \ \ \delta\psi_\pm= \pm i \bar \epsilon _\mp (\partial_0\pm\partial_1)\phi, \end{equation} \begin{equation} \delta \bar\phi = -\bar\epsilon_+\bar \psi_-+ \bar\epsilon_-\bar \psi_+ \ \ , \ \ \delta\bar \psi_\pm= \mp i \epsilon _\mp (\partial_0\pm\partial_1)\bar\phi, \end{equation} where the $\epsilon$ parameters are fermionic.
The above symmetries give rise to four conserved supercharges, $Q_\pm$ and $\xbar Q_\pm$, two of each worldsheet chirality. In terms of the bulk fields, these charges are given by \begin{equation} Q_\pm = \displaystyle \int dx^1 (\partial_0 \pm \partial_1)\bar\phi \psi_\pm, \end{equation} \begin{equation} \xbar Q_\pm= \displaystyle\int dx^1\bar \psi_\pm (\partial_0 \pm \partial_1)\phi . \end{equation} The RNS model in Eq. (\ref{rns}) also exhibits two additional $U(1)$ symmetries given by the following transformations \begin{equation}\label{u1vComponent} e^{i\alpha F_V}:(\phi,\psi_\pm,\bar\psi_\pm) \rightarrow (\phi,e^{-i\alpha}\psi_\pm,e^{i\alpha}\bar\psi_\pm) , \end{equation} \begin{equation}\label{u1aComponent} e^{i\alpha F_A}:(\phi,\psi_\pm,\bar\psi_\pm) \rightarrow (\phi,e^{\pm i\alpha}\psi_\pm,e^{\mp i\alpha}\bar\psi_\pm). \end{equation} The first is called \emph{vector R-rotation} and the second is called \emph{axial R-rotation}; the respective conserved charges are \begin{equation} F_V= \displaystyle \int dx^1(\bar \psi_-\psi_-+\bar \psi_+ \psi_+) \ \ , \ \ F_A= \displaystyle \int dx^1(-\bar \psi_-\psi_-+\bar \psi_+ \psi_+). \end{equation} Under the usual canonical quantization relations \begin{equation} [\phi(x^1),\partial_0 \bar \phi(y^1)]=\delta(x^1-y^1) \ \ \ , \ \ \ \left\{ \psi_\pm(x^1),\bar\psi_\pm(y^1) \right\}=\delta(x^1-y^1), \end{equation} the supersymmetry charges together with the Hamiltonian $H$, the momentum generator $P$, and the angular momentum generator $M$ satisfy the following algebra \begin{equation} \left\{ Q_\pm, \xbar Q_\pm\right\} = H\pm P, \ \ \ [iM, Q_\pm] =\mp Q_\pm, \ \ \ [iM, \xbar Q_\pm] =\mp \xbar Q_\pm, \end{equation} \begin{equation} \left\{ Q_\pm, Q_\pm\right\}=\left\{ \xbar Q_\pm, \xbar Q_\pm\right\}=0, \end{equation} \begin{equation} \left\{ Q_+, Q_-\right\}=\left\{ \xbar Q_+, \xbar Q_-\right\}=0, \end{equation} \begin{equation} \left\{ \xbar Q_+, Q_-\right\}=\left\{ Q_+, \xbar Q_-\right\}=0. \end{equation} Including the $U(1)_V$ and $U(1)_A$ R-generators the algebra extends by means of the relations, \begin{equation} [iF_V,Q_\pm]=-iQ_\pm, \ \ \ [iF_V,\xbar Q_\pm]=i\xbar Q_\pm, \end{equation} \begin{equation} [iF_A,Q_\pm]=\mp iQ_\pm, \ \ \ [iF_A,\xbar Q_\pm]=\pm i\xbar Q_\pm. \end{equation} \section{Superspace formalism and the Landau-Ginzburg model} This section constitutes a review of the superspace approach to supersymmetric theories, including that of Eq. (\ref{rns}) for the RNS string. The superspace formalism is obtained via the introduction of four new fermionic variables alongside the bosonic spacetime coordinates. The new worldsheet manifold, which is called ``superspace'', gives rise to functionals which are manifestly invariant under the $\mathcal{N}=(2,2)$ supersymmetry. The local chart of superspace is given by the coordinates $(x^0,x^1,\theta^\pm,\bar \theta^\pm)$ where the $\theta^\pm$ and $\bar \theta^\pm$ are fermionic coordinates, that is, \begin{equation} \theta^a \theta^b = -\theta^b\theta^a, \ \ \ \bar\theta^a \bar \theta^b = -\bar \theta^b\bar \theta^a, \ \ \ \theta^a \bar \theta^b = -\bar \theta^b\theta^a. \end{equation} These fermionic coordinates are complex, with the barred and unbarred pairs being complex conjugates of each other. In superspace, the supercharges have the following representation as differential operators, \begin{equation} Q_\pm = \frac{\partial}{\partial \theta^\pm}+ i\bar \theta^\pm \partial_\pm, \end{equation} \begin{equation} \xbar Q_\pm = -\frac{\partial}{\partial\bar \theta^\pm}- i \theta^\pm \partial_\pm.
\end{equation} In the above expressions, $\partial_\pm$ are spacetime derivatives with respect to the lightcone coordinates $x^\pm:=x^0\pm x^1$, \begin{equation} \partial_\pm =\frac{1}{2}(\partial_0\pm \partial_1). \end{equation} Since the pure fermionic derivatives $\partial/\partial \theta^\pm$ and $\partial/\partial \bar \theta^\pm$ do not anticommute with the supersymmetry operators, in superspace one uses the following covariant superderivatives, \begin{equation} D_\pm:=\frac{\partial}{\partial \theta^\pm}- i\bar \theta^\pm \partial_\pm, \end{equation} \begin{equation} \xbar D_\pm:=-\frac{\partial}{\partial \bar\theta^\pm}+i \theta^\pm \partial_\pm. \end{equation} The superderivatives and supersymmetry generators obey the anticommutation rules, \begin{equation} \left\{ D_\pm,\xbar D_\pm\right\}=2i\partial_\pm , \ \ \ \left\{ Q_\pm,\xbar Q_\pm\right\}=-2i\partial_\pm, \end{equation} \begin{equation} \left\{ Q_\pm, Q_\pm\right\}=\left\{\xbar Q_\pm, \xbar Q_\pm\right\}=0, \end{equation} \begin{equation} \left\{ D_\pm, Q_\pm\right\}=\left\{ \xbar D_\pm, \xbar Q_\pm\right\}=\left\{ D_\pm, \xbar Q_\pm\right\}=0. \end{equation} The general $\mathcal{N}=(2,2)$ variation is then given by \begin{equation}\label{fullvar} \delta_{\epsilon,\bar \epsilon}=\epsilon_+Q_- -\epsilon_-Q_+ -\bar \epsilon_+\xbar Q_-+\bar\epsilon_-\xbar Q_+, \end{equation} where the infinitesimal parameters $\epsilon_\pm$ and $\bar \epsilon_\pm$ are fermionic. The most general elements in the representation of the supersymmetry algebra are called \emph{superfields}. That is, a field $\mathcal{F}$ on superspace is called a \emph{superfield} if it transforms as $\mathcal{F}+\delta_{\epsilon,\bar\epsilon}\mathcal{F}$ under the action of the supersymmetry algebra. A superfield $\mathcal{F}$ is fermionic if $\left\{\mathcal{F},\theta^a\right\}=0$ and bosonic if $[\mathcal{F},\theta^a]=0$. In our work, all bulk superfields will be bosonic while those restricted to the boundary will be of either type. Instead of generic superfields, we will restrict to those which are \emph{chiral}, \emph{antichiral}, or \emph{twisted chiral superfields}. We call a superfield $X$ \emph{chiral} if \begin{equation}\label{chiraldef} \xbar D_\pm X=0, \end{equation} and a superfield $\widetilde X$ twisted chiral if \begin{equation}\label{twisteddef} \xbar D_+\widetilde X=D_-\widetilde X=0. \end{equation} The solutions to Eq. (\ref{chiraldef}) and Eq. (\ref{twisteddef}) are \begin{equation}\label{chiralexpansion} X=\phi(y)+\theta^+\psi_+( y)+\theta^-\psi_-( y)+\theta^+ \theta^-F( y), \end{equation} \begin{equation}\label{twistexpansion} \widetilde X=\tilde\phi(\tilde y)+\theta^+\bar\chi_+(\tilde y)+\bar\theta^-\chi_-(\tilde y)+\theta^+\bar \theta^-E(\tilde y), \end{equation} respectively. In these expansions we used the usual notation where $\tilde y^\pm=x^\pm \mp i \theta^\pm \bar \theta^\pm$ and $ y^\pm=x^\pm - i \theta^\pm \bar \theta^\pm$. The two $U(1)$ symmetries whose action on the component fields is given in Eq. (\ref{u1vComponent}) and Eq. (\ref{u1aComponent}) are carried over to superspace via the following actions on superfields \begin{equation}\label{u1v} e^{i\alpha F_V}:\mathcal{F}(x^\mu,\theta^\pm,\bar\theta^\pm) \rightarrow e^{i\alpha q_V}\mathcal{F} (x^\mu,e^{-i\alpha}\theta^\pm,e^{i\alpha}\bar\theta^\pm) , \end{equation} \begin{equation}\label{u1a} e^{i\beta F_A}: \mathcal{F}(x^\mu,\theta^\pm,\bar\theta^\pm) \rightarrow e^{i\beta q_A} \mathcal{F} (x^\mu,e^{\mp i\beta}\theta^\pm,e^{\pm i\beta}\bar\theta^\pm).
\end{equation} Since we will be working directly with supersymmetric actions, here we list all possible supersymmetry invariant functionals which can be constructed in superspace. For the superfields $\mathcal{F}_i$, chiral superfields $X_i$, and twisted chiral superfields $\widetilde X_i$ as above, one can construct the following $\mathcal{N}=(2,2)$-invariant functionals \begin{equation} S_D[\mathcal{F}_i]=\displaystyle \int d^2x d^4\theta \ K(\mathcal{F}_i)= \displaystyle \int d^2x d\theta^+ d\theta^- d\bar\theta^- d\bar\theta ^+K(\mathcal{F}_i)\ , \end{equation} \begin{equation}\label{Fcomplex} S_F[X_i]=\displaystyle \int d^2x d^2\theta \ W(X_i)= \displaystyle \int d^2x d\theta^+ d\theta^- W(X_i)\big |_{\bar \theta ^\pm=0}\ , \end{equation} \begin{equation} \widetilde S_F[\widetilde X_i]=\displaystyle \int d^2x d^2\tilde\theta \ \widetilde W(\widetilde X_i)= \displaystyle \int d^2x d\bar \theta^- d\theta^+ \widetilde W(\widetilde X_i)\big |_{\bar \theta ^+=\theta^-=0} \ , \end{equation} where $K=K(\mathcal{F}_i)$ is a smooth real-valued function, and $W=W(X_i) $ and $\widetilde W = \widetilde W(\widetilde X_i)$ are holomorphic functions. In the above functionals, the measures are $d^4\theta :=d\theta^+d\theta^-d\bar\theta^-d\bar \theta^+$ and $d^2\theta:=d\theta^-d\theta^+$. The function $K$ is usually called the K\"ahler potential, and $W$ the superpotential. In this work we are mainly concerned with LG models which are defined for chiral fields $X_i$ by the action \begin{equation}\label{LGdef} S[X_i,\xbar X_{\bar i}] := S_D[X_i,\xbar X_{\bar i}]+\real S_F[X_i]. \end{equation} We restrict to a LG model with a single chiral field $X$ and the K\"ahler potential \begin{equation}\label{kahler} K(X,\xbar X) = \xbar X X. \end{equation} A basic result that we will exploit later in this work is that the RNS string as given in Eq. (\ref{rns}) corresponds to a LG model with vanishing superpotential. That is, in the superspace formalism the RNS action is given by \begin{equation}\label{LGzero} S_D=\displaystyle \int d^2xd^4\theta\ \xbar X X, \end{equation} where $X$ is a chiral superfield. To obtain this result one uses a Taylor expansion of the chiral field over the $\theta$ variable, \begin{equation} \begin{split} X&=\phi(y^\pm)+\theta^a\psi_a(y^\pm)+\theta^+\theta^- F(y^\pm)\\ &=\phi(x^\pm) - i\theta^+ \bar \theta^+\partial_+ \phi(x^\pm) - i\theta^- \bar \theta^-\partial_- \phi(x^\pm) - \theta^+\theta^- \bar \theta^- \bar \theta^+ \partial_+\partial_- \phi(x^\pm)\\ &\ \ +\theta^+\psi_+ (x^\pm) +\theta^-\psi_- (x^\pm) - i\theta^+\theta^-\bar \theta ^-\partial_- \psi_+ (x^\pm) - i\theta^-\theta^+\bar \theta^+\partial_+\psi_- (x^\pm) +\theta^+\theta^- F(x^\pm). \end{split} \end{equation} Inserting the above series expansion into the LG model in Eq. (\ref{LGzero}) and integrating out the fermionic variables, one obtains \begin{equation} S= \displaystyle \int_{\mathbb{R}^2} d^2x \left(|\partial_0 \phi|^2 - |\partial_1 \phi|^2 + i\bar \psi_-(\partial_0 +\partial_1)\psi_- + i\bar \psi_+(\partial_0 -\partial_1)\psi_+ +|F|^2\right), \end{equation} which is the action of the RNS string up to the last term $|F|^2$. Noting that $F=0$ is the equation of motion of the field $F$, we see that indeed we have recovered the supersymmetric RNS action of Eq. (\ref{rns}). We will utilize this result to work with the supersymmetric $\mathbb{C}/\mathbb{Z}_n$ models in terms of the LG formalism. Under the action of the axial $R$-symmetry defined in Eq. (\ref{u1a}), the LG functional in Eq.
(\ref{LGzero}) is invariant if the chiral field has zero $U(1)_A$ weight. This invariance also holds in the presence of a monomial superpotential, that is, \begin{equation} S_F = \displaystyle\int d^2x d^2\theta \ X^k. \end{equation} Furthermore, the LG action is invariant under the vector $R$-symmetry if $X$ is assigned $U(1)_V$ weight $2/k$, so that the $F$-term is invariant. The action given in Eq. (\ref{LGzero}) can be generalized to admit a set of chiral superfields $X_i$, $i=1,\dots, n$, taking values in a manifold $M$. The K\"ahler potential can be generalized to any differentiable real-valued function and we assume that \begin{equation} g_{i\bar j}:=\partial_i \partial_{\bar j}K(X_i,\xbar X_{\bar i}), \end{equation} is positive-definite, so that it defines a K\"ahler metric on $M$. In this case the Lagrangian density is given by \cite{hori00} \begin{equation} \begin{split} \mathcal{L}= &-g_{i\bar j} \partial_\mu \phi^i \partial^\mu \bar \phi^{\bar j} +i g_{i\bar j}\bar \psi_-^{\bar j}(D_0+D_1)\psi_-^i +i g_{i\bar j}\bar \psi_+^{\bar j}(D_0-D_1)\psi_+^i\\ &+R_{i\bar j k\bar l} \psi_+^i \psi_-^k \bar \psi_- ^{\bar j} \bar\psi_+ ^{\bar l} + g_{i\bar j} (F^i - \Gamma^i _{jk}\psi^j_+ \psi^k_-) (\bar F^{\bar j} -\bar{\Gamma}^{\bar j} _{\bar k \bar l}\bar\psi^{\bar k}_- \bar \psi^{\bar l}_+), \end{split} \end{equation} where $R_{i\bar j k\bar l}$ is the Riemann curvature of the K\"ahler metric and $D_\mu\psi^i_\pm:= \partial _\mu\psi^i_\pm +\partial_\mu\phi^j \Gamma^i_{jk}\psi^k_\pm$. Under the inclusion of the real-valued F-term $\real S_F[X_i]$, where $S_F$ is given in Eq. (\ref{Fcomplex}), the equations of motion of the auxiliary fields $F^i$ and $\xbar F^{\bar i}$ are given by \begin{equation} F^i = \Gamma^i _{jk}\psi^j_+\psi^k _- - g^{i\bar l} \partial _{\bar l}\xbar W, \end{equation} \begin{equation} \xbar F^{\bar i} = \bar \Gamma^{\bar i} _{\bar j\bar k}\bar \psi^{\bar j}_+\bar \psi^{\bar k} _- - g^{\bar i l} \partial _{ l} W. \end{equation} Using the above two equations, the action in components for the LG model on the K\"ahler manifold $M$ has the form \begin{align} S=&\displaystyle \int_{\mathbb{R}^2} d^2x ( -g_{i\bar j} \partial_\mu \phi^i \partial^\mu \bar \phi^{\bar j} +i g_{i\bar j}\bar \psi_-^{\bar j}(D_0+D_1)\psi_-^i +i g_{i\bar j}\bar \psi_+^{\bar j}(D_0-D_1)\psi_+^i\nonumber\\ &+R_{i\bar j k\bar l} \psi_+^i \psi_-^k \bar \psi_- ^{\bar j} \bar\psi_+ ^{\bar l} -\frac{1}{4}g^{\bar i j}\partial_{\bar i} \xbar W\partial _j W - \frac{1}{2}D_i\partial_j W\psi_+^i\psi_-^j - \frac{1}{2}D_{\bar i} \partial_{\bar j} \xbar W\bar\psi_+^{\bar i}\bar \psi_-^{\bar j} ). \end{align} \section{Supersymmetry preserving boundaries} The previous section contains a treatment of sigma models on worldsheets without boundaries. In this section we review the same theory but in the presence of a nontrivial boundary, which gives rise to D-branes. D-branes are boundary conditions for open strings or, equivalently, sources for emission and absorption of closed strings. Such D-branes break the $\mathcal{N}=(2,2)$ supersymmetry down to a subalgebra containing half of the supercharges. The remaining supersymmetry is called \emph{A-type} or \emph{B-type} depending on which supercharges are preserved by the D-brane. Our work on defects focuses on the B-type, but we will also use the A-type extensively for comparisons, so this section reviews both cases. We start with the $(2,2)$ supersymmetry on the upper half-plane $\Sigma= \mathbb{R}\times [0,\infty)$ as our model theory.
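It is worth recording at the outset why a boundary forces a choice of boundary data at all. For the free flat model of Eq. (\ref{rns}) (a sketch, with signs and overall normalization depending on conventions), a general variation of the fields produces, after integrating by parts, the boundary contribution \begin{equation} \delta S\big|_{\partial\Sigma} \sim \displaystyle\int_{\partial\Sigma} dx^0 \left( \partial_1\phi\, \delta\bar\phi + \partial_1\bar\phi\, \delta\phi - i\bar\psi_-\delta\psi_- + i\bar\psi_+\delta\psi_+ \right), \end{equation} which must vanish for the variational problem to be well posed; the constraints written below for the general sigma model are of exactly this form.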
At the boundary the fields satisfy boundary conditions which usually relate the left- and right-moving modes. These conditions relate the left and right fermionic variables $\theta^\pm$ and $\bar \theta^\pm$ in one of two ways \cite{hori00} \begin{equation}\label{boundarytype} \begin{split} &(A) \ \ \ \ \theta^+ +\operatorname{e}^{i\alpha}\bar\theta^-=0, \ \ \bar\theta^+ +\operatorname{e}^{-i\alpha}\theta^-=0,\\ &(B) \ \ \ \ \theta^+ -\operatorname{e}^{i\beta}\theta^-=0, \ \ \bar \theta^+ -\operatorname{e}^{-i\beta}\bar\theta^-=0.\\ \end{split} \end{equation} In theories with the \emph{A-boundary} or \emph{B-boundary} the following supercharges are conserved, respectively, \begin{equation}\label{abgenerators} \begin{split} &(A) \ \ \ \ \xbar Q_A:=\xbar Q_+ + \operatorname{e}^{i\alpha}Q_-, \ \ Q_A:=Q_+ +\operatorname{e}^{-i\alpha}\xbar Q_-,\\ &(B) \ \ \ \ \xbar Q_B:=\xbar Q_+ +\operatorname{e}^{i\beta}\xbar Q_-, \ \ Q_B:= Q_+ +\operatorname{e}^{-i\beta} Q_-. \end{split} \end{equation} The theory with A-boundary and conserved charges $(Q_A,\xbar Q_A)$ is called \emph{A-type supersymmetric}; when considering $(Q_B,\xbar Q_B)$ and B-boundary, the theory is called \emph{B-type supersymmetric}. For simplicity we take $\alpha$ and $\beta$ to be zero in the above equations. The results can always be generalized to the non-zero case via the appropriate $U(1)$ rotation. \subsection{A-type supersymmetry} Specializing to the A-boundary means that the boundary conditions of our theory are preserved by the following operators \begin{align} \xbar D_A&:= \xbar D_++D_-=\frac{\partial}{\partial\bar \theta}+i\theta\partial_0 ,\\ D_A&:= D_++\xbar D_-=\frac{\partial}{\partial \theta}-i\bar\theta\partial_0 ,\\ \xbar Q_A&:= \xbar Q_++Q_-=-\frac{\partial}{\partial\bar \theta}-i\theta\partial_0, \\ Q_A&:= Q_++\xbar Q_-=\frac{\partial}{\partial \theta}+i\bar\theta\partial_0. \end{align} So the general A-type variation \index{A-type supersymmetry} is given by \begin{equation}\label{avariation} \delta_A=\epsilon \bar Q_A -\bar \epsilon Q_A. \end{equation} Observe that the full $(2,2)$ variation preserves A-type boundary conditions if $\epsilon_+=\bar \epsilon_-=:\epsilon$ and $\bar \epsilon_+= \epsilon_-=:\bar \epsilon$. The A-type variation of the twisted F-term $S_{\widetilde F}$ is given by \begin{equation}\label{varytwistedf} \delta_AS_{\widetilde F}=2i\epsilon\displaystyle\int_{\partial \Sigma}dtd\theta\ \widetilde W(\widetilde X)\big |_{\bar\theta=0}. \end{equation} A similar expression holds for the antiholomorphic part, with everything conjugated. To obtain this result we start with the twisted F-term \begin{equation} S_{\widetilde F}=\displaystyle\int_\Sigma d^2xd\bar\theta^-d\theta^+\ \widetilde W(\widetilde X)\big |_{\bar\theta^+=\theta^-=0}, \end{equation} whose argument is a twisted chiral field $\widetilde X$: $\xbar D_+\widetilde X= D_- \widetilde X=0$ \cite{hori00}.
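Note that, since $\widetilde W$ is holomorphic, $\widetilde W(\widetilde X)$ is itself twisted chiral; this is a one-line chain-rule computation, \begin{equation} \xbar D_+\widetilde W(\widetilde X)=\widetilde W'(\widetilde X)\,\xbar D_+\widetilde X=0 , \qquad D_-\widetilde W(\widetilde X)=\widetilde W'(\widetilde X)\, D_-\widetilde X=0 , \end{equation} a fact that is used repeatedly in the manipulations below.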
We have the variation, \begin{equation}\label{avariationsteps} \begin{split} \delta_A S_{\widetilde F}&=\displaystyle\int_\Sigma d^2xd\bar\theta^-d\theta^+\ \delta_A \widetilde W(\widetilde X)\big |_{\bar\theta^+=\theta^-=0}\\ &=\displaystyle\int_\Sigma d^2xd\bar\theta^-d\theta^+\ \left( [\epsilon(\xbar Q_++Q_-)-\bar \epsilon (Q_++\xbar Q_- )] \widetilde W(\widetilde X)\right)\big |_{\bar\theta^+=\theta^-=0}\\ &=\displaystyle\int_\Sigma d^2xd\bar\theta^-d\theta^+\ 2i \epsilon\left( \bar\theta^-\partial _- \widetilde W -\theta^+\partial _+\widetilde W\right)\big |_{\bar\theta^+=\theta^-=0}, \end{split} \end{equation} where we used the following results \begin{eqnarray} &&Q_+\widetilde W\big |_{\bar\theta^+=\theta^-=0} =\frac{\partial}{\partial \theta^+}\widetilde W \big |_{\bar\theta^+=\theta^-=0}\\ &&\xbar Q_-\widetilde W\big |_{\bar\theta^+=\theta^-=0}=-\frac{\partial}{\partial \bar\theta^-}\widetilde W \big |_{\bar\theta^+=\theta^-=0}\\ && \xbar Q_+\widetilde W\big |_{\bar\theta^+=\theta^-=0}=-2i\theta^+\partial_+\widetilde W \big |_{\bar\theta^+=\theta^-=0}\\ && Q_-\widetilde W\big |_{\bar\theta^+=\theta^-=0}=2i\bar\theta^-\partial_-\widetilde W \big |_{\bar\theta^+=\theta^-=0}. \end{eqnarray} To obtain the last two equations we used \begin{align} \xbar Q_+=\xbar D_+-2i\theta^+\partial_+\ , \ \ \ Q_-= D_-+2i\bar\theta^-\partial_-, \end{align} and the fact that $\widetilde W(\widetilde X)$ itself is twisted chiral since holomorphic functions of $\widetilde X$ are also twisted chiral. Now we use the expansion of a twisted chiral field, \begin{equation} \widetilde \Phi=\tilde\phi(\tilde y)+\theta^+\bar\chi_+(\tilde y)+\bar\theta^-\chi_-(\tilde y)+\theta^+\bar \theta^-E(\tilde y), \end{equation} where $\tilde y^\pm=x^\pm \mp i \theta^\pm \bar \theta^\pm$. Inserting this expansion into Eq. (\ref{avariationsteps}), we get \begin{equation} \delta_A S_{\widetilde F}=2i\epsilon\displaystyle \int_{\partial \Sigma}dt \ (\bar \chi_+(t)-\chi_-(t))= 2i\epsilon\displaystyle \int_{\partial \Sigma}dtd\theta \ \widetilde W\big |_{\bar \theta=0}. \end{equation} The last equality follows by using the same twisted chiral expansion for the twisted superpotential, and restricting to the A-boundary. Note that the A-type variation of the F-term is not as nice: \begin{equation} \delta_A S_F=-2i\displaystyle \int_{\partial \Sigma} dt (\bar \epsilon \psi_+(t)+ \epsilon \psi_-(t)). \end{equation} Thus, we will work with B-type supersymmetry when dealing with LG models and with A-type supersymmetry for twisted LG functionals. With the above observation in mind, we find the necessary and sufficient requirements for A-supersymmetry. The characterization will be a geometrical description of D-branes which in this case are called A-branes. We obtain the A-type classification following the work of \cite{Hori0005}. \index{A-branes} We consider the supersymmetric sigma model with superpotential $W$ on $\Sigma =\mathbb{R}\times[0,\infty)$ with variables $(x^0,x^1)$ and with an $n$-dimensional target space $M$ which we assume to be a K\"ahler manifold. The action in components is \begin{equation}\label{sigma} \begin{split} S=&\displaystyle \int_\Sigma d^2x \left\{ -g_{i\bar j}\partial^\mu\phi^i\partial_\mu \xbar \phi ^{\bar j}+\frac{i}{2}g_{i\bar j} \xbar \psi_-^{\bar j}(\overleftrightarrow D_0+\overleftrightarrow D_1) \psi_-^{i} + \frac{i}{2}g_{i\bar j} \xbar \psi_+^{\bar j}(\overleftrightarrow D_0-\overleftrightarrow D_1) \psi_+^{i} \right. \\ & \left.
-\frac{1}{4} g^{\bar j i}\partial_{\bar j}\xbar W \partial_i W -\frac{1}{2}(D_i\partial_j W)\psi_+^i\psi_-^j-\frac{1}{2}(D_{\bar i}\partial_{\bar j}\xbar W)\xbar \psi_-^{\bar i}\xbar \psi_+^{\bar j}+R_{i\bar k j \bar l}\psi_+^{ i}\psi_-^{ j}\xbar \psi_-^{\bar k}\xbar \psi_+^{\bar l}\right\}, \end{split} \end{equation} where $\xbar \psi^{\bar j}\overleftrightarrow D_\mu \psi^i :=\xbar \psi^{\bar j} D_\mu \psi^i - ( D_\mu \xbar \psi^{\bar j}) \psi^i$ and $D_\mu \psi^i :=\partial_\mu \psi^i+\partial_\mu\phi^j\Gamma^i_{jk}\psi^k$. Under a general variation one obtains the Euler-Lagrange equations for the fields $X=(\phi^I,\psi_\pm ^I)$ plus boundary conditions needed for the vanishing of the boundary integral. These constraints on the boundary $\partial \Sigma$ are \begin{equation}\label{bdyconstraint1} g_{IJ}\delta\phi^I\partial_1\phi^J=0 \end{equation} \begin{equation}\label{bdyconstraint2} g_{IJ}(\psi_-^I\delta\psi_-^J-\psi_+^I\delta\psi_+^J)=0 \end{equation} Since $\phi:\partial\Sigma \hookrightarrow\gamma$, the vector $\delta\phi^I$ is tangent to $\gamma$. Hence, by the constraint (\ref{bdyconstraint1}) $\partial\phi^I$ is normal to $\gamma$. Under the general supersymmetry action, the action varies as \begin{equation}\label{sigmavar} \begin{split} \delta &S = \frac{1}{2}\displaystyle \int_{\partial\Sigma} dx^0 \left\{\epsilon_+(-g_{i\bar j} (\partial_0+\partial_1)\xbar\phi^{\bar j}\psi_-^i+\frac{i}{2}\xbar\psi_+^{\bar i}\partial_{\bar i}\xbar W) +\epsilon_-(-g_{i\bar j} (\partial_0-\partial_1)\xbar\phi^{\bar j}\psi_+^i \right. \\ &\left. -\frac{i}{2}\xbar\psi_-^{\bar i}\partial_{\bar i}\xbar W) + \xbar \epsilon_+ (g_{i\bar j} \xbar \psi_-^{\bar j}(\partial_0+\partial_1)\phi^{ i}+\frac{i}{2}\psi_+^{ i}\partial_{ i} W) + \xbar \epsilon_- (g_{i\bar j} \xbar \psi_+^{\bar j}(\partial_0-\partial_1)\phi^{ i}-\frac{i}{2}\psi_-^{ i}\partial_{ i} W) \right\}. \end{split} \end{equation} Before stating the requirements for a D-brane to be an A-brane we restrict to the conditions for invariance of the theory under the $\mathcal{N}=1$ subalgebra. The $\mathcal{N}=1$ subalgebra is obtained by taking $\epsilon_\pm=\pm i\epsilon$ with $\epsilon\in\mathbb{R}$. One can see that under this restriction one has both A- and B-supersymmetry. To see this algebra, one takes the general A-type generator $\delta_A=\epsilon_+(Q_-+\xbar Q_+)-\bar \epsilon _+(\xbar Q_-+Q _+)$ and sets $\epsilon_+=i\epsilon$, $\bar \epsilon_+=\epsilon_+^*$. In this case, we have $\delta^*=i\epsilon(Q+\xbar Q)$. The anticommutator is then \begin{equation} \frac{1}{(i\epsilon)^2}\left\{\delta^*,\delta^*\right\}=4H. \end{equation} Let $\gamma\subset M$ be the submanifold containing the image of $\partial \Sigma$. The supersymmetric sigma model (\ref{sigma}) is invariant under the $\mathcal{N}=1$ subalgebra of A-supersymmetry if $\psi^J_{-}+\psi^J_{+}$ and $\psi^J_{-}-\psi^J_{+}$ are tangent and normal to $\gamma$ respectively; and $\image W$ is constant along $\gamma$. To obtain this result, one first restricts $\bar\epsilon_\pm$ and $\epsilon_\pm$ to this case in the boundary contribution (\ref{sigmavar}). 
Then one can rewrite the integral using the two results below: \begin{equation} \begin{split} &\xbar\psi_+^{\bar i}\partial_{\bar i}\xbar W - \psi_-^{ i}\partial_{ i} W+\xbar\psi_-^{\bar i}\partial_{\bar i}\xbar W-\psi_+^{ i}\partial_{ i} W\\ &=(\xbar \psi _+^{\bar i}+\xbar \psi _-^{\bar i})\partial_{\bar i}\xbar W- ( \psi _+^{ i}+ \psi _-^{ i})\partial_{ i} W\\ &=-( \psi _+^{ I}+ \psi _-^{ I})\partial_{ I}(W-\xbar W), \end{split} \end{equation} and \begin{equation} \begin{split} &-g_{i\bar j} (\partial_0+\partial_1)\xbar\phi^{\bar j}\psi_-^i + g_{i\bar j}\xbar\psi_+ ^{\bar j} (\partial_0-\partial_1)\phi^{ i} +g_{i\bar j} (\partial_0-\partial_1)\xbar\phi^{\bar j}\psi_+^i - g_{i\bar j} \xbar\psi_-^{\bar j} (\partial_0+\partial_1)\phi^{ i}\\ &= -g_{IJ}\partial_0\phi^I(\psi_-^J-\psi_+^J)-g_{IJ}\partial_1\phi^I(\psi^J_-+\psi_+^J). \end{split} \end{equation} Denoting by $\delta^*S$ the variation of the action under the $\mathcal{N}=1$ generators, and using the steps above, we have \begin{align}\label{n1var} \delta^*S&=\frac{i\epsilon}{2}\displaystyle \int_{\partial\Sigma}dx^0 \ \left\{ -g_{IJ}\partial_0\phi^I(\psi_-^J-\psi_+^J)-g_{IJ}\partial_1\phi^I(\psi^J_-+\psi_+^J)\right. \nonumber\\ &\left. -\frac{i}{2}( \psi _+^{ I}+ \psi _-^{ I})\partial_{ I}(W-\xbar W)\right\}. \end{align} We see that $\psi^J_-+\psi_+^J$ is the $\mathcal{N}=1$ variation of $\phi^J$ and hence tangent to $\gamma$. Therefore the second term vanishes because $\partial_1\phi^I$ is normal. Then the rest of the integral vanishes if $\psi^J_--\psi_+^J$ is normal to $\gamma$ and $\image W$ is constant along $\gamma$. This is because the vector $\partial_0\phi^I$ is tangent to $\gamma$ and each term vanishes independently since $\partial_0\phi^I$ and $\psi^J_-+\psi_+^J$ are generally independent. Note that not only is the supersymmetric sigma model action invariant under the $\mathcal{N}=1$ supersymmetry provided the above boundary conditions hold, but the boundary conditions themselves are also invariant, that is, \begin{equation} \delta^* (\psi^J_-+\psi_+^J) =-2\epsilon \partial_0\phi^J, \end{equation} \begin{equation} \delta^* (\psi^J_--\psi_+^J) =-2\epsilon \partial_1\phi^J+2\epsilon g^{JI}\partial_I\image W, \end{equation} in the background $\psi^I_\pm=0$. Thus the boundary conditions are invariant when $\image W$ is locally constant on $\gamma$. Now we proceed to the general case in which the full A-supersymmetry is preserved, in order to describe A-branes geometrically. We first state the result for the case $e^{i\alpha}=1$, which is the trivial phase in the A-type generators in Eq. (\ref{abgenerators}). A D-brane wrapped on $\gamma$ preserves A-supersymmetry iff $\gamma$ is a Lagrangian submanifold of $M$ with respect to the K\"ahler form, and $W(\gamma)\subset \mathbb{C}$ is a straight line parallel to the real axis which is invariant under the gradient flow of $\real W$. The general supersymmetry transformation of the bosons is $\delta \phi^i=\epsilon_+\psi_-^i-\epsilon_-\psi^i_+$ so the A-type transformation is \begin{equation} \delta \phi^i=\epsilon_+\psi_-^i-\bar\epsilon_+\psi^i_+. \end{equation} We decompose $\epsilon_+$ into its real and imaginary parts $\epsilon_+=\epsilon_1+i\epsilon_2$. The equation above is then \begin{equation}\label{reime} \delta \phi^i=\epsilon_1(\psi_-^i-\psi^i_+)+i\epsilon_2 (\psi^i_-+\psi_+^i).
\end{equation} Observing that $\phi$ parametrizes $\gamma$ and that the vector $\delta\phi^i$ is tangent to $\gamma$ we see that $\epsilon(\psi_-^i-\psi^i_+)$ and $i\epsilon(\psi_-^i+\psi^i_+)$ are the holomorphic components of vectors tangent to $\gamma$, where $\epsilon\in\mathbb{R}$. Yet from Eq. (\ref{n1var}) in the $\mathcal{N}=1$ case we see that $i\epsilon(\psi_-^i-\psi^i_+)$ and $i\epsilon(\psi_-^i+\psi^i_+)$ are normal and tangent to $\gamma$ respectively when $\epsilon\in\mathbb{R}$. Therefore the $\mathcal{N}=2$ supersymmetry requires that the map $f: T_X M \rightarrow T_X M$ which multiplies the holomorphic component of vectors by $i=\sqrt{-1}$ interchanges tangent and normal vectors to $\gamma\subset M$. This means that $T_X\gamma \cong (T_X\gamma)^\perp$ in $T_X M$ which implies $ \dim\gamma=1/2\dim M$. The target manifold $M$ is a symplectic manifold $(M,\omega)$ where $\omega$ is the K\"ahler form \begin{equation} \omega(A,B):=g(JA,B), \end{equation} for $A, B \in T_X M$, where $J:=i\operatorname{diag}(1_{n},-1_n)$ is the complex structure \index{complex structure} on $M$ compatible with $g$, i.e. $g(JA,JB)=g(A,B)$. To see that $\omega$ is a symplectic 2-form: \begin{equation} \begin{split} \omega(A,B)&=g(JA,B)=g_{m\bar n}(iA^mB^{\bar n}-iA^{\bar n}B^m)\\ &=-g_{m\bar n}(iB^mA^{\bar n} - iB^{\bar n}A^m)\\ &=-\omega(B,A). \end{split} \end{equation} Thus we can write $\omega=ig_{m\bar n}dz^m\wedge d\bar z^{\bar n}$ which means $d\omega=0$ because \begin{equation} \begin{split} d\omega&=i\partial_lg_{m\bar n}dz^m\wedge dz^l\wedge d\bar z^{\bar n}+i\partial_{\bar o}g_{m\bar n}dz^m\wedge d\bar z^{\bar n}\wedge d\bar z^{\bar o}\\ &=i\partial_l \partial_m\partial_{\bar n} K(X,\xbar X) dz^m\wedge dz^l\wedge d\bar z^{\bar n} + i\partial_{\bar o} \partial_m\partial_{\bar n} K(X,\xbar X) dz^m\wedge d\bar z^{\bar n}\wedge d\bar z^{\bar o}\\ &=0. \end{split} \end{equation} The K\"ahler form is non-degenerate by definition. Hence $\omega$ is indeed symplectic. Now it is easy to show that $\omega$ vanishes on $\gamma$. Writing $A=(A^n,A^{\bar n})$ for $A\in T_X M$, we have $JA =(iA^n, -iA^{\bar n})$. This is the same map $f$ which multiplies the holomorphic component by $i$, since $V^{\bar n}=V^{n*}$ for all $V \in T_X M$, so that $f(A)=\tilde A =(\tilde A^n, \tilde A^{\bar n})=(iA^n, -iA^{\bar n})$. Let $A,B \in T_X \gamma$; then $\omega(A,B)=g(f(A),B)=0$ since $f:T_X\gamma\rightarrow (T_X\gamma)^\perp$. Thus $\gamma$ is a Lagrangian submanifold of $M$. Now we proceed to show that $\image W$ is constant on $\gamma$. As noted above the vector $\epsilon(\psi^i_--\psi^i_+)$ is tangent to $\gamma$ and since we require A-type boundary invariance, the variation $\delta_A[\epsilon(\psi^i_--\psi^i_+)]$ is tangent to $\gamma$ as well. In particular for $\psi^i_\pm =0$, we have \begin{equation}\label{parallelvec} \delta_A[\epsilon(\psi^i_--\psi^i_+)]=2i\epsilon\epsilon_1 (\partial_0\phi^i)+ 2i\epsilon\epsilon_2(i\partial_1\phi^i)+ 2i\epsilon\epsilon_2F^i. \end{equation} The coefficients $i\epsilon\epsilon_i$ are real: $(i\epsilon\epsilon_i)^\dag=-i\epsilon^\dag\epsilon_i^\dag=-i\epsilon\epsilon_i=i\epsilon_i\epsilon$ where we used $\epsilon, \epsilon_i \in \mathbb{R}$. As stated, $\partial_0\phi^i$ and $\partial_1\phi^i$ are tangent and normal to $\gamma$ respectively. Multiplying the holomorphic component $\partial_1\phi^i$ of the latter vector, that is of $(\partial_1\phi^i, \partial_1\bar\phi^{\bar i})$, by $i$ makes it an element of the tangent space as well. Then by Eq. (\ref{parallelvec}) $F^i$ is tangent to $\gamma$ as well.
In the subspace defined by $\psi^i_\pm=0$ we have \begin{equation} F^i=-\frac{1}{2}g^{i\bar j}\partial_{\bar j}\xbar W. \end{equation} This means that the gradient of $\real W$ \begin{equation} \operatorname{grad}(\real W)=g^{i\bar j}\partial_{\bar j}(W+\xbar W)\partial_i+g^{\bar j i}\partial_{i}(W+\xbar W)\partial_{\bar j}, \end{equation} is tangent to $\gamma$. Note that $\operatorname{grad}(\real W)=-iJ\operatorname{grad}(\image W)$ so $\operatorname{grad}(\image W)$ is normal to $\gamma$, as was also required in the $\mathcal{N}=1$ case. To see this result one writes explicitly, \begin{equation} J\operatorname{grad}(\image W)=i\begin{pmatrix} 1_n & 0_n \\ 0_n & -1_n \end{pmatrix} \begin{pmatrix} -g^{i\bar j}\partial_{\bar j} \xbar W \\ g^{\bar j i}\partial_{i} W \end{pmatrix} =-i\begin{pmatrix} g^{i\bar j}\partial_{\bar j} \xbar W \\ g^{\bar j i}\partial_{i} W \end{pmatrix} = -i \operatorname{grad}(\real W). \end{equation} Thus the flow of $\operatorname{grad}\real W$ is along $\gamma$ and $\image W$ is constant along the flow (one checks that $v^I\partial_I \image W =0$ where $v^i =g^{i\bar j}\partial_{\bar j}\xbar W$), hence $W(\gamma)$ is invariant under the flow of $\operatorname{grad}\real W$. The above generalizes to the case when we take $e^{i\alpha}\neq 1$. \subsubsection{Wave-front trajectories} For our purposes, the most relevant examples of D-branes preserving A-supersymmetry are those wrapped on the submanifold defined by the action of the gradient of $\real W$ on a non-degenerate critical point of the superpotential $W$. Since $\operatorname{grad}[\real W] (\image W)=0$, every point on this manifold has the same value of $\image W$ as the critical point. For definiteness, let $X_*$ be a critical point of $W$ of order one, and let $f_X(t)=f(t,X)$ be the global flow (also called the one-parameter group action) \index{global flow} generated by $\operatorname{grad}[\real W]$. In general a global flow is a continuous map $f:\mathbb{R}\times M \rightarrow M$ which satisfies $f(0,X)=X$, $f(t,f(s,X))=f(t+s,X)$. Here we are interested in the flow that satisfies \begin{equation}\label{flowdef} f'_X(t)=\operatorname{grad}[\real W]_{f_X(t)}. \end{equation} We define \begin{equation}\label{gammadef} \gamma_{X_*} :=\left\{ X\in M \big | \lim_{t\rightarrow -\infty} f_X(t)=X_*\right\}, \end{equation} and the claim is that D-branes wrapped on $\gamma_{X_*}$ are A-branes. So we have to check that this submanifold is Lagrangian and that its image in the $W$-plane is a ray parallel to the real axis. \begin{equation} \begin{split} \operatorname{grad}[\real W](\image W) &=g^{IJ}\partial_J(W+\xbar W)\partial_I (W-\xbar W)\\ &=g^{i\bar j}\partial_{\bar j}\xbar W\partial_i W - g^{\bar j i}\partial_iW \partial_{\bar j}\xbar W\\ &=|\partial W|^2-|\partial W|^2\\ &=0. \end{split} \end{equation} Therefore $\image W$ is constant along $\operatorname{grad}[\real W]$ and thus $W(\gamma_{X_*})$ is a ray starting at the critical value $w_*:= W(X_*)$. Now we need to show that $\gamma_{X_*}$ is middle dimensional. Recall that if $z_0$ is a critical point of $f:\mathbb{C}\rightarrow\mathbb{C}$ of order $m-1$, then there exists a change of coordinates near $z_0$ and $f(z_0)$ such that $f$ has the form $f(\xi)=\xi^m+f(z_0)$. Thus near $X_*$ we write \begin{equation}\label{wlocal} W=W(X_*)+\sum_{i=1}^n z_i^2+ o(z_i^3).
\end{equation} If the change of variables brings the metric into the standard form $ds^2=\delta_{a\bar b}dz^a\otimes d\bar z^{\bar b}$, then in the local coordinates near $0\in\mathbb{C}^n$ the flow equation (\ref{flowdef}) becomes \begin{equation} {z}_a'(t) =\delta^{a\bar b} \partial_{\bar b} \sum_{k=1}^{n}\bar z_k^2+\cdots, \end{equation} after inserting Eq. (\ref{wlocal}) into the flow equation (\ref{flowdef}). The dots are higher order terms which we can make arbitrarily small by considering smaller neighborhoods near $0\in\mathbb{C}^n$, which is equivalent to $t\rightarrow -\infty$. The solution is $z_a(t)= r_a e^t+\cdots$ with $r_a\in\mathbb{R}$, the reality being required in order to solve $z'(t)\sim\bar z(t)$. Thus, near $0$ (or equivalently in a small neighborhood of $X_*$) the submanifold $\gamma_{X_*}$ is an $n$-dimensional real manifold. Since the flow $f(t,X)$ defines $\gamma_{X_*}$ we see that $\dim\gamma_{X_*}=n=1/2\dim M$. If the metric were not in the standard form, we would obtain a different submanifold, but one that is still $n$-dimensional. Now we are left to show that the induced symplectic form vanishes on $\gamma_{X_*}$. The first step is to show that $\omega$ is invariant along the gradient of $\real W$. This holds if $\mathcal{L}_v \omega=0$, $v:= \operatorname{grad}[\real W]$. By Cartan's formula, $\mathcal{L}_v \omega=i_vd\omega+d i_v\omega$, where $i_v$ is the interior product. The K\"ahler form $\omega$ is closed so the first term does not contribute. The second term vanishes because $i_v\omega$ is exact: \begin{equation} \begin{split} i_v\omega&=i_vig_{i\bar j}dz^i\wedge d\bar z^{\bar j}=ig_{i\bar j}(v^i d\bar z^{\bar j}-\bar v^{\bar j}dz^i)=i(\delta^{\bar k}_{\bar j} \partial_{\bar k}\xbar W d\bar z^{\bar j}-\delta^{ k}_{ j} \partial_{ k} W d z^{ j})=id(\xbar W -W). \end{split} \end{equation} Now let $X\in\gamma_{X_*}$ and $v_1, v_2 \in T_X\gamma_{X_*}$. Considering $\omega _{f_X(t)}\in T_{f_X(t)}^*\gamma_{X_*}$, we write $\omega _X\in T_{X}^*\gamma_{X_*}$ as the pullback $\omega_X =(f^*_t \omega)_X:=f^*_t( \omega_{f_X(t)})$ since $\omega$ is invariant along the flow $f_t$ generated by the vector field $v$. Therefore, $\omega_X (v_1,v_2)= f^*_t( \omega_{f_X(t)})(v_1,v_2)= \omega_{f_X(t)}(f_{t*} v_1,f_{t*}v_2)$. In the limit $t\rightarrow -\infty$ the right-hand side is zero since the vectors $f_{t*}v_i\rightarrow 0$. To see this one takes the limit of $f_{t*}v_i(g)= v_i (g\circ f_t)$ where $g$ is any function on $M$. The function $g\circ f_t$ is a constant function in the limit. Thus $\omega_X (v_1,v_2)=0 $ since it is independent of the parameter $t$. Hence $\gamma_{X_*}$ is a Lagrangian submanifold of $(M,\omega)$. To summarize, we have shown that D-branes wrapped on $\gamma_{X_*}$ as defined in Eq. (\ref{gammadef}) are A-branes which are mapped to $W(X_*)+\mathbb{R}^{\geq 0}$, where $X_*$ is a critical point of $W$. \subsection{A-branes in Landau-Ginzburg models}\label{abraneslg} Below we provide the wave-front trajectory description for LG models with polynomial superpotentials. This application of the geometrical description of A-branes will be used later when following the action of the RG flow on boundary degrees of freedom. We first consider the case $W=X^{k+2}$ with $k$ a non-negative integer. Then $W$ has only one critical point $X_*=0$. As noted above we know that $\gamma_{0}$ (defined in (\ref{gammadef})) is the pre-image of the set $[0,\infty)\subset\mathbb{C}$.
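Indeed, as a quick check, writing $X=re^{i\vartheta}$ with $r\geq 0$ we have \begin{equation} W(X)=r^{k+2}e^{i(k+2)\vartheta}\in[0,\infty) \quad \Longleftrightarrow \quad (k+2)\vartheta\in 2\pi\mathbb{Z}, \end{equation} that is, $\vartheta=2\pi n/(k+2)$ for an integer $n$.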
Explicitly, A-branes wrap the submanifold \begin{equation} \gamma_{0}=\left\{ r \exp\left( \frac{2\pi ni}{k+2}\right) : r\in[0,\infty) \ , \ n \in \left\{ 0,\dots,k+1\right\} \right\}\subset\mathbb{C}. \end{equation} Using submanifolds of $\mathbb{C}$ which asymptote to $\gamma_0$, we can also describe the A-branes of LG theories with more general superpotentials of the type \begin{equation}\label{deformedlg} W_{ \lambda}(X)=X^{k+2}+\sum_{j=0}^{k-1} \lambda_j X^{j+2}. \end{equation} We have observed that a constant term does not contribute to the fermionic integral of the Lagrangian so it can be shifted away. A linear term does not introduce any new branch points. So we have the freedom to gauge it away and thus always translating one of the critical points to the origin. In the most general case, $\lambda_j\neq0$ for all $j$, and $W_{ \lambda}$ has $k+1$ non-degenerate critical points which are isolated. In this case we have $k+1$ possible Lagrangian submanifolds to wrap the A-branes, corresponding to each of the critical points. We assume that $\image w_i\neq \image w_j$ for $i\neq j$, where $w_j:=W_{ \lambda}(X_{*j})$ are the critical values. This assumption eliminates the possibility of having overlapping images in the $W$-plane of the submanifolds $\gamma_{i}$ corresponding to the $X_{*j}$ critical points. The A-branes of the deformed theory are curves asymptoting to $\mathcal{L}_{n_1}\cup \mathcal{L}_{n_2}$, $n_1\neq n_2$, where $\mathcal{L}_{n_j} \subset \gamma_{0}$ are slices corresponding to each value of $n_i \in\left\{ 0,\dots,k+1\right\}$. This claim follows by noting that for large $X$, $W_\lambda$ approaches the non-deformed $W$ since the leading term $X^{k+2}$ in $W_\lambda$ dominates. So $W^{-1}_\lambda$ is close to $W^{-1}$ in this regime. Now, let $X_{*j}$ be one of the critical points of the deformed potential. By assumption it is of order one so locally near $X_{*j}$ and its image, $W_{\lambda}$ is biholomorphically equivalent to a quadratic map. Thus the preimage of $w_j+\mathbb{R}^{\geq 0}$ near $w_j$ forms two wave-front trajectories starting at $X_{*j}$. As noted, these curves approach some $\mathcal{L}_{n_1}$ and $\mathcal{L}_{n_2}$. The curves intersect at the branch points only (consider $W_\lambda$ as a branched cover) which means $n_1\neq n_2$. For non-generic values of the $\lambda_j$, the branch points can be degenerate. Then the A-brane associated with one of these points, say $X_*$, will asymptote $\mathcal{L}_{n_1}\cup\cdots \cup \mathcal{L}_{n_{o(X_*)+1}}$, where $o(X_*)$ is the order the critical point $X_*$. Following the work of \cite{brunner07a} we can depict the A-brane description above for the Landau-Ginzburg models by compactifying the $X$-plane to the disk $D$. The resulting graph contains the critical points $X_{*i}$ in the interior of the disk; cyclically ordered preimages $B^1,$ $\dots,$ $B^{k+2}$ of $\infty\in W$-plane on the boundary of the disk $\partial D$; and $({o(X_{*i})+1})$-many segments $\gamma_i^a$ connecting the point $X_{*i}$ to that many of the $B^a$. We define $\Gamma_i := \cup_a \gamma_i^a$ and $\Gamma := \cup_i \Gamma_i$. We call the graph formed by $\Gamma$ and the boundary $\partial D$ the \emph{schematic representation} of the superpotential. The two graphs below are examples of schematic representations for A-branes in LG models with superpotentials $W=X^4$ and $W=X^4+\lambda X^3$. 
\setlength{\unitlength}{7mm} \begin{picture}(10,10)(-6,-6) \put(0,0){\circle{6}} \put(0,0){\line(0,5){3}} \put(0,0){\line(0,-5){3}} \put(0,0){\line(5,0){3}} \put(0,0){\line(-5,0){3}} \put(0,0){\circle*{.2}} \put(3,0){\circle*{.2}} \put(-3,0){\circle*{.2}} \put(0,3){\circle*{.2}} \put(0,-3){\circle*{.2}} \put(.3,.3){$X_*$} \put(3.3,0){$B^1$} \put(0,3.3){$B^2$} \put(-3.8,0){$B^3$} \put(0,-3.8){$B^4$} \put(-.5,-5){{ $W=X^4$}} \put(12,0){\circle{6}} \put(12,0){\line(0,5){3}} \put(12,0){\line(0,-5){3}} \put(12,0){\line(5,0){3}} \put(11,0){\line(-5,0){2}} \put(11,0){\line(1,2.9){1}} \put(10,.3){$X_{*2}$} \put(11,0){\circle*{.2}} \put(12,0){\circle*{.2}} \put(15,0){\circle*{.2}} \put(9,0){\circle*{.2}} \put(12,3){\circle*{.2}} \put(12,-3){\circle*{.2}} \put(12.3,.3){$X_{*1}$} \put(15.3,0){$B^1$} \put(12,3.3){$B^2$} \put(8.1,0){$B^3$} \put(12,-3.8){$B^4$} \put(11,-5){{ $W=X^4+\lambda X^3$}} \end{picture} A graphical representation $\Gamma$ has the following properties \cite{brunner07a}: all the preimages of a critical value $\omega\in \mathbb{C}$ are connected on $\Gamma$; $\Gamma \setminus\partial D$ is connected and simply connected; $\forall i\neq j, \Gamma_i \cap \Gamma_j$ contains at most one point; and it is non-empty only if it contains an element of the fiber $f^{-1}(\infty)$; $\Gamma_i \cap \Gamma_j\cap \Gamma_k = \emptyset$. \subsection{B-type supersymmetry} In this section we review B-type supersymmetry and boundaries which preserve it. The B-type boundary conditions on the fermionic variables at $\partial \Sigma$ is preserved by the operators \begin{align}\label{boperators} \xbar D_B&:= \xbar D_++\xbar D_-=-\frac{\partial}{\partial\bar \theta}+i\theta\partial_0, \\ D_B&:= D_++ D_-=\frac{\partial}{\partial \theta}-i\bar\theta\partial_0, \\ \xbar Q_B&:= \xbar Q_++\xbar Q_-=-\frac{\partial}{\partial\bar \theta}-i\theta\partial_0 ,\\ Q_B&:= Q_++ Q_-=\frac{\partial}{\partial \theta}+i\bar\theta\partial_0. \end{align} The general B-type variation is given by \begin{equation} \delta_B=\epsilon Q_B- \bar \epsilon\ \xbar Q_B. \end{equation} which is equivalent to the $(2,2)$ variation in Eq. (\ref{fullvar}) if in the latter we set $\epsilon_+=-\epsilon_-=:\epsilon$ and $\bar \epsilon_+ = - \bar \epsilon_-=:\bar \epsilon$. The B-type generators obey the relations $\left\{ Q_B,\xbar Q_B \right\} =-2i(\partial_+ +\partial_-)\ , \ Q_B^2=\xbar Q_B^2=0$. Under B-type supersymmetry the components of the original $(2,2)$ chiral field $X$, \begin{equation}\label{Btrans1} X=\phi(y^\pm)+\theta^a\psi_a(y^\pm)+\theta^+\theta^- F(y^\pm), \end{equation} transform as \begin{equation}\label{Btrans2} \delta_B \phi = \epsilon \eta \ , \ \ \ \ \ \ \ \ \delta _B\bar \phi = -\bar\epsilon \bar \eta \ , \end{equation} \begin{equation}\label{Btrans3} \delta_B \eta = -2 i \bar \epsilon \partial_0 \phi \ , \ \ \ \ \ \ \ \ \delta _B\bar \eta = 2 i \epsilon \partial_0 \bar \phi \ , \end{equation} \begin{equation} \delta_B \beta = 2i \bar \epsilon \partial_1 \phi + 2\epsilon \xbar W'(\phi) \ , \ \ \ \ \ \ \ \ \delta_B \bar \beta =- 2i \epsilon \partial_1 \bar \phi + 2\bar \epsilon W'(\phi), \end{equation} where we used the basis \begin{equation} \eta:=\psi_-+\psi_+ \ , \ \ \ \ \ \ \beta:=\psi_--\psi_+ . \end{equation} A consequence of the above B-type transformation is that the $(2,2)$ bulk chiral field $X$ rearranges into a bosonic superfield $\Phi$ and a fermionic superfield $\Theta$ under B-supersymmetry. 
These two fields have the $\theta$-expansions, \begin{equation} \Phi= \phi(y)+\theta \eta (y), \end{equation} \begin{equation} \Theta= \beta(y)-2\theta F(y)+2i\bar \theta [\partial_1\phi(y)+\theta\partial_1\beta(y)], \end{equation} where $y=x^0-i\theta\bar \theta$ is the boundary version of the bulk $y^\pm$ arguments. The consideration of boundaries not only reduces by half the amount of allowed supersymmetry but it also breaks the invariance of the D-term, even after restricting to B-type variations. This is due to the appearance of boundary contributions in the integral. Recall that in this work we are interested in a LG model with a single chiral field $X$ and action \begin{equation}\label{LGtotal} S[X]=\displaystyle \int d^2x d^4 \theta \ \xbar X X + \real \displaystyle \int d^2x d^2\theta \ W(X)| _{\bar \theta ^\pm =0}. \end{equation} If $W=0$, then the boundary contribution to $\delta_B S$ can be compensated by the addition of the boundary term \cite{enger05} \begin{equation} S_{D,\partial\Sigma}=\frac{1}{2} \displaystyle \int dx^0 (\bar\beta \eta-\bar \eta \beta). \end{equation} Like the D-term, the F-term also contains a boundary contribution when we vary $\delta_B S_F$. Unlike the D-term, this contribution cannot be compensated by an additional boundary term built from the bulk fields. Indeed, the B-type variation of the $F$-term is given by \cite{hori00} \begin{equation}\label{varyf} \delta S_{ F}=2i\bar \epsilon\displaystyle\int_{\partial \Sigma}dtd\theta\ W( X)\big |_{\bar \theta=0}-2i\epsilon\displaystyle\int_{\partial \Sigma}dtd\bar \theta\ \xbar W( \xbar X)\big |_{ \theta=0}. \end{equation} To obtain the above result we write $S_F= S_W+ S_{\bar W}$ and note that the B-type variation of \begin{equation} S_W:= \displaystyle \int _\Sigma d^2 x d^2\theta W(X)|_{\bar \theta^\pm =0}, \end{equation} is given by \begin{equation}\label{bvariationsteps} \begin{split} \delta S_{ W}&=\displaystyle\int_\Sigma d^2xd\theta^-d\theta^+\ \delta_B W( X)\big |_{\bar\theta^\pm=0}\\ &=\displaystyle\int_\Sigma d^2xd\theta^-d \theta^+\ \left( [\epsilon( Q_++Q_-)-\bar \epsilon (\xbar Q_++\xbar Q_- )] W( X)\right)\big |_{\bar\theta^\pm=0}\\ &=\displaystyle\int_\Sigma d^2xd\theta^-d\theta^+\ 2i \bar\epsilon\left( (\theta^-\partial _- +\theta^+\partial _+ )W\right)\big |_{\bar\theta^\pm=0}, \end{split} \end{equation} where we used \begin{eqnarray} &&Q_\pm W\big |_{\bar\theta^\pm=0} =\frac{\partial}{\partial \theta^\pm} W \big |_{\bar\theta^\pm=0}\ ,\\ &&\xbar Q_\pm W\big |_{\bar\theta^\pm=0}=-2i\theta^\pm\partial_\pm W \big |_{\bar\theta^\pm=0} . \end{eqnarray} The last equation follows from $W(X)$ being chiral so we can write $\xbar Q_\pm W=(\xbar D_\pm -2i\theta^\pm\partial_\pm) W$. The rest of the steps follow exactly as those used to show Eq. (\ref{varytwistedf}), using the chiral field expansion of $X$ as in Eq. (\ref{chiralexpansion}), \begin{equation} X=\phi(y)+\theta^+\psi_+( y)+\theta^-\psi_-( y)+\theta^+ \theta^-F( y). \end{equation} One finally obtains \begin{equation} \delta_B S_{ W}=2i\bar\epsilon\displaystyle\int_{\partial \Sigma}dtd\theta\ W( X)\big |_{\bar \theta=0}. \end{equation} The antiholomorphic part $S_{\bar W}$ follows in the same way by noting that \begin{align} \xbar{\delta_B W}(\xbar X)=-\delta_B \xbar{W}(\xbar X). \end{align} To understand why no combination of bulk fields can be utilized to construct a boundary term which compensates the variation of the F-term we use the component fields.
In components we see that the boundary contribution is \begin{equation}\label{componentFvary} \delta_B S_F= \displaystyle \int _{\partial \Sigma} dx^0 \ \left(\epsilon \bar \eta\xbar W' (\bar\phi) +\bar \epsilon \eta W'(\phi)\right). \end{equation} As noted above, there is no possible boundary term whose B-type variation would cancel $\delta_B S_F$. Such a boundary term would have to vary with a term like $\epsilon \bar \eta$, but from the right-hand side of all the component variations in Eqs. (\ref{Btrans1})--(\ref{Btrans3}) one sees that such a B-type variation is not possible. One approach to make the action B-type supersymmetric is to consider only chiral superfields with appropriate boundary conditions such that the right-hand side of Eq. (\ref{componentFvary}) vanishes. Such an approach is studied in \cite{warner95}. A more general approach that does not restrict the class of chiral superfields involves introducing new boundary degrees of freedom whose action variation cancels that of the bulk theory \cite{Orlov:2003yp, Lazaroiu:2003zi, kapustin02}. This second approach leads to matrix factorizations, which we will explore in the next section. \section{B-type boundaries and matrix factorizations}\label{BtypeSec} In this section we specialize to B-type supersymmetry and review the use of matrix factorizations to describe B-supersymmetric boundary conditions. As noted in the previous section, in order to ensure that the action for the LG model in Eq. (\ref{LGtotal}) remains invariant under B-type supersymmetry one introduces a boundary theory. To this end, one defines a boundary superfield $\Pi$ which is fermionic and not chiral. That is, \begin{equation}\label{bdyfield} \xbar D \Pi= E(X_{\partial}) \neq 0, \end{equation} where $X_\partial :=X|_{\partial\Sigma}$ is the boundary restriction of the bulk superfield. The $\theta$-expansion of $\Pi$ is \begin{equation} \Pi=\pi(y)+\theta l(y)- \bar \theta \left( E(\phi)+\theta \eta(y) E'(\phi)\right), \end{equation} where $E$ is the source function in Eq. (\ref{bdyfield}). The boundary action is given by a kinetic D-term and an F-term that couples the bulk fields (that is, their boundary restriction) to the boundary fields, \begin{equation}\label{bdyaction} S_{\partial\Sigma}=\displaystyle \int_{\partial \Sigma} dt d^2 \theta \ \xbar \Pi \Pi +\real \left( i\displaystyle \int_{\partial\Sigma} dt d\theta \ J \Pi\big |_{\bar \theta=0}\right), \end{equation} for some function $J:=J\left(X_{\partial}\right)$. A more general form for the boundary coupling of B-type topological Landau-Ginzburg models is discussed in \cite{Lazaroiu:2003zi} but we do not need it here. The B-supersymmetry variation of the component fields of $\Pi$ is given by \cite{enger05} \begin{equation} \delta_B \pi = \epsilon l -\bar \epsilon E(\phi) \ , \ \ \ \ \ \ \ \ \ \delta_B\bar \pi = \bar \epsilon \bar l - \epsilon \bar E(\bar \phi) , \end{equation} \begin{equation} \delta_B l = -2 i \bar \epsilon \partial_0 \pi +\bar \epsilon \eta E'(\phi) \ , \ \ \ \ \ \ \ \ \ \delta_B \bar l = -2 i \epsilon \partial_0 \bar \pi - \epsilon \bar \eta \bar E'(\bar \phi). \end{equation} Inserting the equation of motion $l = -i \bar J(\bar \phi)$, the action of the boundary component fields is given by \begin{equation} S_{\partial \Sigma}= \displaystyle \int dx^0 \left( 2i\bar \pi \partial_0 \pi - |J|^2 - |E|^2 +i \pi \eta J' + i \bar \pi \bar \eta \bar J' - \bar \pi \eta E' +\pi \bar \eta \bar E' \right).
\end{equation} Computing the B-type variation of the above action, one gets \begin{equation} \delta_B S_{\partial \Sigma} = -i \displaystyle \int dx^0 \left( \epsilon \bar \eta (\bar E \bar J)' +\bar \epsilon\eta (EJ)' \right). \end{equation} Comparing the above integral with the boundary contribution to $\delta_B S_F$ as given in Eq. (\ref{componentFvary}), we see that both terms cancel each other, giving a supersymmetric action, if and only if \begin{equation}\label{simpleFact} W = EJ, \end{equation} up to an additive scalar constant. Aside from giving the necessary and sufficient condition for B-type invariance, the above result is the cornerstone upon which the theory of matrix factorizations is built. Similar to the bulk theory, where there is a BRST operator $Q$ whose cohomology catalogs the physical fields, the boundary theory also has such an operator, which is labeled by $Q_\partial$. The operator $Q_\partial$ is the boundary contribution to the BRST operator for the theory defined on $\Sigma$. Using $l = -i \bar J(\bar \phi)$ the variations of the fermionic component fields are \begin{equation}\label{BvaryPi} \delta_B \pi = -i\epsilon\bar J(\bar\phi) -\bar \epsilon E(\phi) \ , \ \ \ \ \ \ \ \ \ \delta_B\bar \pi = i\bar \epsilon J(\phi) - \epsilon \bar E(\bar \phi), \end{equation} from which we obtain the relations \begin{equation}\label{classes} Q\pi =E(\phi) \ , \ \ \ \ \ \ \ \ Q\bar\pi =-iJ(\phi). \end{equation} The BRST cohomology for the theory with a boundary can be identified from the B-type variations of the component fields and the boundary fermions as in Eq. (\ref{BvaryPi}). The equivalence classes of the boundary cohomology depend on the boundary potentials $J$ and $E$ via Eq. (\ref{classes}). Therefore, the boundary spectrum of a LG model on a worldsheet with boundary is determined by $(J,E)$-factorizations of the superpotential $W$ as in Eq. (\ref{simpleFact}). The boundary contribution $Q_\partial$ to the BRST operator is determined from Eq. (\ref{classes}), from which it follows that \begin{equation} Q_\partial =\sum_i \left( J_i\pi_i +E_i\bar \pi_i \right). \end{equation} In the above we are allowing for a set $\left\{ \Pi_i\right\}_i$ of boundary superfields. Using the anticommutation relations of the fermions $\pi_i$, $\bar\pi_i$ one obtains that \begin{equation} Q_\partial ^2 =W. \end{equation} Thus, just as in the bulk, the physical fields at the boundary are those which are $Q_\partial$-closed. This observation follows from the above equation and by noting that $Q_\partial(X) = [Q_\partial, X]$ for a boundary superfield $X$. A representation $V$ of the Clifford algebra obeyed by the fermions $\left\{\pi_i,\bar\pi_i\right\}_{i=1,\dots,r}$ is $\mathbb{Z}_2$-graded as $V=V_0\oplus V_1$. Choosing such a representation, the boundary BRST operator $Q_\partial$ obtains the form \begin{equation} Q_\partial= \begin{pmatrix} 0 & p_1 \\ p_0 & 0 \end{pmatrix}, \end{equation} where the $p_i$ are $2^{r-1} \times 2^{r-1}$-matrices with polynomial entries in the chiral fields such that \begin{equation} p_1p_0 = p_0p_1 = W 1_{2^{r-1} \times 2^{r-1}}, \end{equation} which is an example of a matrix factorization. More generally, given a polynomial $W\in A:=\mathbb{C}[X_j]$, a \emph{matrix factorization} of $W$ is a pair $(p_0, p_1)$, $p_i \in M_{k}(A)$\index{a@$M_{k}(A)$, space of $k\times k$ matrices over ring $A$}, such that $p_0 p_1 = p_1 p_0 = W 1_k$.
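As a simple illustration, the polynomial $W=X^{d}\in\mathbb{C}[X]$ admits the rank-one factorizations \begin{equation} p_0=X^{m}, \qquad p_1=X^{d-m}, \qquad 0\leq m\leq d, \end{equation} since $p_0p_1=p_1p_0=X^{d}$; factorizations of exactly this form will reappear below when we discuss defects.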
One denotes matrix factorizations in the following way \cite{brunner07} \begin{equation}\label{mfactor} P=\left( P_0 \mathrel{\mathop{\rightleftarrows}^{\mathrm{p_0}}_{\mathrm{p_1}}} P_1\right), \ \ \ p_0 p_1 =W 1_{P_1}, \ \ p_1 p_0 =W 1_{P_0}. \end{equation} The \emph{rank} of matrix factorization $P$ is the rank of the maps $p_0$, $p_1$. The simplest example of a matrix factorization is the trivial matrix factorization of the form \begin{equation} P=\left( P_0 =\mathbb{C}[X_j] \mathrel{\mathop{\rightleftarrows}^{\mathrm{1}}_{\mathrm{W}}} \mathbb{C}[X_j]=P_1\right). \end{equation} As an example let $B$ be a boundary condition for a LG model with superpotential $W=X^d+Y^d$. Then a possible matrix factorization for such a boundary is given by the maps \begin{equation} p^0_{ij}=\begin{pmatrix} X^{d-i} & Y^j \\ -Y^{d-j} & X^i \end{pmatrix} \ , \ \ \ \ p^1_{ij}=\begin{pmatrix} X^{i} & -Y^j \\ Y^{d-j} & X^{d-i} \end{pmatrix} \end{equation} going between the rank-2 spaces $P_0\cong P_1\cong \mathbb{C}[X,Y]^2$. One checks that \begin{equation} p_0p_1 = p_1p_0 = (X^d+Y^d)1_{2\times 2}. \end{equation} \section{B-type defects in LG models} In the above section we saw that there is a correspondence between boundary data (i.e., or $Q_\partial$-cohomology classes) of a LG with boundary and matrix factorizations of the superpotential. Such a description applies also to defects between two LG models with the only difference that now the matrix factorization is of the polynomial $W=W_1(X)-W_2(Y)$, where $W_1$ and $W_2$ are the superpotentials of the LG models at either side of the defect \cite{brunner07}. That is, a defect $D$ located at $x^1=0$ on $\mathbb{R}^2$ which separates a LG with chiral superfield $X$ and superpotential $W_1(X)$ on the upper-half plane, from a LG with chiral superfield $Y$ and superpotential $W_2(Y)$ on the lower-half plane is characterized by the matrix factorization \begin{equation}\label{} P=\left( P_0 \mathrel{\mathop{\rightleftarrows}^{\mathrm{p_0}}_{\mathrm{p_1}}} P_1\right), \end{equation} where $P_1$ and $P_0$ are $\mathbb{C}[X,Y]$-modules with \begin{equation} p_0 p_1 =(W_1(X)-W_2(Y)) 1_{P_1}, \ \ \ \ p_1 p_0 =(W_1(X)-W_2(Y)) 1_{P_0}. \end{equation} The above description of defects between LG models is consistent with the ``folding trick'' prescription of \cite{affleck}. This procedure can be extended to the LG language as follows. Consider the defect $D=\left\{ (x,t) | x=0\right\}$ separating two Landau-Ginzburg models $\text{LG}_1(X_i,\xbar X_i)$ and $\text{LG}_2(Y_i,\xbar Y_i)$, where the arguments are the chiral and anti-chiral fields. The action functional is of the form \begin{equation} \text{LG}_1(Z_i,\xbar Z_i)=\displaystyle \int d^2x \int d^4\theta K(Z_i,\xbar Z_i) +\int d^2x \real \int d^2\theta W(Z_i), \end{equation} for each theory. $K$ is the K\"ahler potential and $W$ the superpotential. We do the folding by interchanging the left and right movers in the left half-plane theory. 
Taking the mirror along $x=0$ sends $x\rightarrow -x$, so the action for $\text{LG}_2(Y_i,\xbar Y_i)$ goes from \begin{equation} \text{LG}_2(Y_i,\xbar Y_i)=\displaystyle \int_{-\infty}^0dx\int_\mathbb{R} dt \int d^4\theta K_2(Y_i,\xbar Y_i) +\int_{-\infty}^0dx\int_\mathbb{R} dt \real \int d^2\theta W_2(Y_i) \end{equation} to \begin{equation} \xbar{\text{LG}}_2(Y_i,\xbar Y_i)=\displaystyle \int_0^\infty dx\int_\mathbb{R} dt \int d^4\theta K_2(Y_i,\xbar Y_i) -\int_0 ^\infty dx\int_\mathbb{R} dt \real \int d^2\theta W_2(Y_i), \end{equation} where we noted $d^4\theta$ is left invariant while $d^2\theta \rightarrow -d^2\theta$; and $K$ is real valued. Thus the folded theory is described by \begin{align} \text{LG}_1(X_i,\xbar X_i)\ \otimes\ \xbar{\text{LG}}\ {}_2 (Y_i,\xbar Y_i)=&\displaystyle \int d^2x \int d^4\theta (K_1(X_i,\xbar X_i)+K_2(Y_i,\xbar Y_i))\nonumber\\ &-\int d^2x \real \int d^2\theta (W_1(X_i)- W_2(Y_i)), \end{align} on the right half-plane. This description is consistent with the theory of matrix factorizations for boundaries in the Landau-Ginzburg set up. A boundary is described by a matrix factorization of the superpotential of the LG. Thus to describe the boundary of the folded theory we factorize the superpotential $W=W_1-W_2$ which is the prescribed factorization of the defect $D$ before the folding. As an example of a matrix factorization for a defect $D$ separating two LG models with $W_1=X^d$ and $W_2=Y^d$ consider $P_0=P_1= \mathbb{C}[X,Y]$ and the maps \begin{equation} p_0^I=\prod_{a\in I}(X-\eta^a Y) \ \ \ , \ \ \ p_1^I=\prod_{a\in \left\{0,\dots, d-1\right\} - I}(X-\eta^a Y) \ , \end{equation} where $I\subset \left\{0,\dots, d-1\right\} $, and $\eta$ primitive $d^{th}$ root of unity. \subsection{Fusion of defects} As described in the introduction, the usefulness of defects comes via the natural binary operation of fusion where defects $D_1$ and $D_2$ can be brought together to form a new defect $D_3$. This fusion of defects, denoted by $D_3=D_1*D_2$, is obtained through the tensor product of matrix factorizations \cite{brunner07}. To review this composition let us consider the upper-half plane $\Sigma$ with a defect located at $x^1=y$ which separates two LG models with superpotentials $W_1(X_i)$ and $W_2(Y_i)$. \setlength{\unitlength}{1.2cm} \begin{picture}(6,6)(-3,-3) \put(-2.5,-1){\vector(1,0){5}} \put(2.56,-1.15){$x^0$} \put(0,-1.0){\vector(0,1){3}} \put(-0.35,1.72){$x^1$} \multiput(-2.5,0.42)(0.4,0){13} {\line(1,0){0.2}} \put(-0.35,.27){$x^1=y$} \put(1.5,1.1){$W_1(X_i)$} \put(1.5,-.3){$W_2(Y_i)$} \put(-2.5,.5){$D$} \put(-2.2,1.5){$\Sigma$} \put(-2.5,-.9){$B(\partial\Sigma)$} \end{picture} To $D$ we associate a matrix factorization $P$ and to the boundary data for the lower LG we associate a matrix factorization $Q$. The exact form of each matrix factorization depends on the on the type of defect and the boundary data, respectively. This fusion is obtained in this case by letting $y \rightarrow 0$ which produces a new boundary condition $D*B=:B'$ at $\partial \Sigma$. Since the lower LG disappears the new boundary condition is for the upper LG. In terms of matrix factorizations, the resulting boundary condition is given by the tensor product of the matrix factorizations $P$ and $Q$. The definition of this tensor product is given below. 
Let $D$ be the defect above and $P$ the corresponding matrix factorization, \begin{equation}\label{defect} P: \ \ \ P_0 \mathrel{\mathop{\rightleftarrows}^{\mathrm{p_0}}_{\mathrm{p_1}}} P_1, \ \ \ p_0 p_1 =(W_1(X_i)-W_2(Y_i)) 1_{P_1}, \ \ p_1 p_0 =(W_1(X_i)-W_2(Y_i)) 1_{P_0}. \end{equation} Let $Q$ be the matrix factorization corresponding to the boundary condition $B$, \begin{equation}\label{boundary} Q: \ \ \ Q_0 \mathrel{\mathop{\rightleftarrows}^{\mathrm{q_0}}_{\mathrm{q_1}}} Q_1, \ \ \ q_0 q_1 =W_2(Y_i) 1_{Q_1}, \ \ q_1 q_0 =W_2(Y_i) 1_{Q_0}. \end{equation} Then the limit $y\rightarrow 0$ defines a new boundary condition $B'$ given by the \emph{tensor product matrix factorization}, which is defined as \begin{equation}\label{formaltensor} Q': \ \ \ Q'_0=\left( P_0\otimes_{\mathbb{C}[Y_i]} Q_0\right)\oplus \left( P_1\otimes_{\mathbb{C}[Y_i]} Q_1\right) \mathrel{\mathop{\rightleftarrows}^{\mathrm{q'_0}}_{\mathrm{q'_1}}} \left( P_1\otimes_{\mathbb{C}[Y_i]} Q_0\right)\oplus \left( P_0\otimes_{\mathbb{C}[Y_i]} Q_1\right)=Q'_1, \end{equation} where \begin{equation}\label{mapfusion} q'_0 = \begin{pmatrix} p_0\otimes 1_{Q_0} & 1_{P_1}\otimes q_1 \\ -1_{P_0}\otimes q_0 & p_1\otimes 1_{Q_1} \end{pmatrix}\ , \ \ \ \ q'_1 = \begin{pmatrix} p_1\otimes 1_{Q_0} & -1_{P_0}\otimes q_1 \\ 1_{P_1}\otimes q_0 & p_0\otimes 1_{Q_1} \end{pmatrix}. \end{equation} One can check that the matrix factorization $Q'=P\otimes Q$ resulting from this tensor product indeed factorizes $W_1(X_i)$. The matrix factorization $Q'$ is of infinite rank as a $\mathbb{C}[X_i]$-module; that is, the maps $q'_i$ have infinite rank. Since we started with finite-rank matrix factorizations, we would like the resulting tensor product to also be of finite rank. If the two initial factorizations are of finite rank, the infinite rank of the tensor product comes from trivial matrix factorizations which can be ``peeled off'' to obtain a reduced-rank matrix factorization. To obtain the reduced-rank matrix factorization resulting from equation (\ref{formaltensor}) more directly, one associates to each matrix factorization $P$ a 2-periodic $\mathbb{C}[X]/W$-resolution of the space $\coker p_1$, the cokernel of the map $p_1$. The problem of computing $Q'$, the matrix factorization corresponding to the tensor product of $P$ and $Q$, is then translated into finding $\coker q_1'$ in its reduced form. As noted in \cite{brunner07}, at the level of $\mathbb{C}[X]/W$-modules both $\coker q_1'$ and the space \begin{equation}\label{trick} V=\coker(p_1\otimes1_{Q_0}, 1_{P_0}\otimes q_1), \end{equation} have resolutions which are identical up to the last two steps. Therefore, if we can find the reduced form of $V$, we can identify the 2-periodic resolution corresponding to the matrix factorization $Q'$. It turns out that it is simpler to work out the reduced form of $V$ since its components are the known maps of the original two matrix factorizations. It is helpful to see the technique described above in an example. Consider the matrix factorizations \begin{equation}\label{Xdand0} P^{N}(X|Y)= \left( P_1= \mathbb{C}[X,Y] \mathrel{\mathop{\rightleftarrows}^{ X^N}_{X^{d-N}}} \mathbb{C}[X,Y]=P_0\right), \end{equation} and \begin{equation}\label{boundarylm} Q^{L,M}(Y)=\left( Q_1= \mathbb{C}[Y] \mathrel{\mathop{\rightleftarrows}^{ Y^M}_{0}} \mathbb{C}[Y]=Q_0\right). \end{equation} Using the formal expression for the tensor product given in Eq.
(\ref{formaltensor}) leads to a matrix factorization $Q'$ with the factorizing maps \begin{equation} q'_0 = \begin{pmatrix} X^{d-N} & Y^M \\ 0 & X^N \end{pmatrix}\ \ , \ \ \ \ q'_1 = \begin{pmatrix} X^N & -Y^M \\0 & X^{d-N} \end{pmatrix}. \end{equation} We want to show that $Q'$ is equivalent to a matrix factorization of finite rank. For this we treat the spaces as $\mathbb{C}[X]$-modules by using the matrix representation where $Y^i$ corresponds to the matrix with zeros in all the entries except in for those lying in the off-diagonal $(k+i,k)$ which are set to 1. In this representation $Y^0$ is the infinite identity matrix. The map $q_1'$ is then given by the matrix \begin{equation} \sbox0{$\begin{matrix}X^N& &\\ & \ddots & \\ & & \end{matrix}$} \sbox1{$\begin{matrix}0& & &\\ \vdots & & & \\ -1& & &\\ & & \ddots & & \end{matrix}$} \sbox2{$\begin{matrix}X^{d-N}& &\\ & \ddots & \\ & & \end{matrix}$} q_1 '=\left[ \begin{array}{c|c} \usebox{0}&\usebox{1}\\ \hline \vphantom{\usebox{0}}\makebox[\wd0]{\large $0$}&\usebox{2} \end{array} \right] \end{equation} Using elementary row and column operations the above matrix can be set equal to \begin{equation}\label{q1prime} \sbox0{$\begin{matrix}X^N& & &\\& \ddots & &\\& & 1 & \\& & &\ddots \end{matrix}$} \sbox1{$\begin{matrix}0& & &\\ \vdots & & & \\ -1& & &\\ & & \ddots & & \end{matrix}$} \sbox2{$\begin{matrix}X^d& &\\ & \ddots & \\ & & \end{matrix}$} q_1 '=\left[ \begin{array}{c|c} \usebox{0}& \makebox[\wd0]{\large $0$}\\ \hline \vphantom{\usebox{0}}\makebox[\wd0]{\large $0$}&\usebox{2} \end{array} \right], \end{equation} where the first $M$ entries of the diagonal are $X^N$. And similarly for $q'_0$, \begin{equation}\label{q0prime} \sbox0{$\begin{matrix}X^{d-N}& & &\\& \ddots & &\\& & X^d & \\& & &\ddots \end{matrix}$} \sbox1{$\begin{matrix}0& & &\\ \vdots & & & \\ 1& & &\\ & & \ddots & & \end{matrix}$} \sbox2{$\begin{matrix}1& &\\ & \ddots & \\ & & \end{matrix}$} q_0 '=\left[ \begin{array}{c|c} \usebox{0}& \makebox[\wd0]{\large $0$}\\ \hline \vphantom{\usebox{0}}\makebox[\wd0]{\large $0$}&\usebox{2} \end{array} \right], \end{equation} where the first $M$ entries of the diagonal are $X^{d-N}$. From Eq. (\ref{q0prime}) and Eq. (\ref{q1prime}) we see that the matrix factorization we have obtained is equivalent to a direct sum of a rank-$M$ matrix factorization with maps \begin{equation} q_1'= X^N 1_{M\times M} \ , \ \ \ \ \ q_0'= X^{d-N} 1_{M\times M}, \end{equation} plus an infinite direct sum of rank-1 trivial matrix factorizations. On the other hand, $P\otimes Q$ can be determined by $V = \coker m$ as in Eq. (\ref{trick}). Here $V= P_0\otimes Q_0 / \left\{p_1,q_1\right\}$. Since we want a $\mathbb{C}[X]$-module we use the $Y^M=0$ condition to treat $V$ as generated over the $M$ generators $Y^i$, $i=0,\dots, M-1$. So we have \begin{equation} V= {}_{\mathbb{C}[X]}\left\{Y^i \right\}_{i=0,\dots, M-1}/ \left\{ X^N=0\right\}, \end{equation} which we recognize as $V\cong \coker q_1'$ of the matrix factorization $Q'=Q^N_{\operatorname{rank} M}$ which denotes the rank-$M$ version of $Q^N$. So in the non-orbifolded case we obtain the simpler product $P^N*Q^M =Q^N_{\operatorname{rank} M} $. \section{Describing RG flows in $\mathbb{C}/\mathbb{Z}_d$ orbifolds using defects} In this section we describe a new way of dealing with the $\mathbb{C}/\mathbb{Z}_d$ orbifold in terms of defects. The language of matrix factorizations can be utilized to describe the RG flow between the $\mathbb{C}/\mathbb{Z}_d$ orbifolds. 
This can be done directly by considering the Lagrangian of the model as equivalent to that of a LG model with superpotential $W=0$. Constructing defects between $\mathbb{C}/\mathbb{Z}_m$ and $\mathbb{C}/\mathbb{Z}_n$ then becomes a problem of factorizing the zero polynomial. Since we are working with B-type supersymmetry, we need to use a perturbation which preserves this type. Such a perturbation of an $\mathcal{N}=(2,2)$ theory is generated by twisted chiral fields $\Psi$ inserted via the integral \begin{equation} \delta S = \displaystyle \int_\Sigma d^2x\, d\bar \theta^- d\theta^+ \ \Psi \big|_{\bar \theta^+ = \theta^- =0}. \end{equation} But the $\mathcal{N}=(2,2)$ supersymmetry dictates that the parameters of the superpotential and twisted superpotential remain decoupled under the RG flow \cite{hori02}. This means that the structure of the twisted chiral sectors is independent of the specific superpotential, and in particular of whether there is one at all. Therefore the spectra of the twisted chiral sectors of $\mathbb{C}/\mathbb{Z}_d$ and of the $\mathbb{Z}_d$-orbifolded LG with $W=X^d$ are equivalent, and their B-type preserving perturbations can be mapped to each other. With this observation, we set out to check that the defects presented in \cite{brunner07a}, which describe the RG flow generated by such perturbations over a subset of $\Sigma$, can be extended to the non-compact orbifolds and the RG flows between them. \subsection{$\mathbb{C}/\mathbb{Z}_d$ as an $\text{LG}/ \mathbb{Z}_d$ with $W=0$}\label{rgflowDefects} Superstring theory on the space $\mathbb{C}/\mathbb{Z}_d$ can be described by a chiral superfield \begin{equation} \Phi=\phi(y^{\pm})+\theta^\alpha\psi_\alpha(y^\pm)+ \theta^+\theta^- F(y^\pm), \end{equation} where $y^\pm=x^\pm-i\theta^\pm\bar\theta^\mp$. The action takes the form \begin{equation}\label{LGnoW} S=\displaystyle \int d^2 x d^4\theta \ \xbar \Phi \Phi +0, \end{equation} where we included the zero to emphasize that we have a LG model with superpotential $W=0$ in the D-term. In this way we can construct defects between different $\mathbb{C}/\mathbb{Z}_d$ orbifolds and describe them in terms of matrix factorizations. Indeed, we check that when two $\mathbb{C}/\mathbb{Z}_d$ theories are related by an RG flow, we can juxtapose them with a corresponding defect which maps the boundary conditions accordingly. Matrix factorizations of the zero polynomial work in exactly the same way as those of any other polynomial. As an example of this we consider the fusion of a defect between two orbifolded theories: the upper one with superpotential $W_1(X)=X^d$ and the lower one with $W_2(Y)$ the zero superpotential but orbifold group $\mathbb{Z}_{d'}$. The simplest such defect is given by \begin{equation}\label{defectNoW} D^{m,n,N}(X|Y)=\left( D_1= \mathbb{C}[X,Y][m,-n] \mathrel{\mathop{\rightleftarrows}^{ X^N}_{X^{d-N}}} \mathbb{C}[X,Y][m-N,-n]=D_0\right), \end{equation} where $[\cdot, \cdot]$ is the $\mathbb{Z}_d \times\mathbb{Z}_{d'}$ grading. We see that $d_1 d_0 = X^d-0 = W_1(X)-W_2(Y)$. In the lower theory, the boundary conditions corresponding to rank-1 matrix factorizations are a direct sum of the irreducible matrix factorizations of the form \begin{equation}\label{boundaryNOw} Q^{L,M}(Y)=\left( Q_1= \mathbb{C}[Y][L+M] \mathrel{\mathop{\rightleftarrows}^{ Y^M}_{0}} \mathbb{C}[Y][L]=Q_0\right), \end{equation} where $L\in\mathbb{Z}_{d'}$ labels the irreducible representations.
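Before performing the fusion, it can be checked that these rank-1 data feed into the tensor product of Eq. (\ref{mapfusion}) exactly as in the unorbifolded example of the previous section. The sketch below is our own (sympy assumed; the equivariant grading is ignored and the values of $d$, $N$, $M$ are arbitrary choices): it builds the unreduced maps from $d_0=X^{d-N}$, $d_1=X^N$, $q_1=Y^M$, $q_0=0$ and verifies that they factorize $W_1=X^d$.

\begin{verbatim}
# Sketch: the unreduced tensor-product maps of Eq. (mapfusion) built from the
# rank-1 data d_0 = X^(d-N), d_1 = X^N, q_1 = Y^M, q_0 = 0; they factorize X^d.
from sympy import symbols, Matrix, eye, zeros

X, Y = symbols('X Y')
d, N, M = 6, 2, 3                          # illustrative choices

d0, d1 = X**(d - N), X**N                  # defect maps, d0*d1 = X^d = W_1 - W_2
q1, q0 = Y**M, 0                           # boundary maps, q0*q1 = 0 = W_2

Qp0 = Matrix([[d0, q1], [-q0, d1]])        # q'_0 of Eq. (mapfusion)
Qp1 = Matrix([[d1, -q1], [q0, d0]])        # q'_1 of Eq. (mapfusion)

assert (Qp0 * Qp1 - X**d * eye(2)).expand() == zeros(2, 2)
assert (Qp1 * Qp0 - X**d * eye(2)).expand() == zeros(2, 2)
\end{verbatim}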
If the defect $D^{m,n,N}$ sits at $x^1=y$ and we take $y\rightarrow 0$, the fusion of the defect and the boundary condition is given by the tensor product of both matrix factorizations. This is obtained by looking at $\coker f = D_0\otimes Q_0/ \operatorname{im} f$ where $f=(d_1\otimes1_{Q_0}, 1_{D_0} \otimes q_1)$ \cite{enger05}. We denote the $\mathbb{C}[X,Y]$-generators of $D_0$ and $Q_0$ by $e^{D_0}_{m,n}$ and $e^{Q_0}_{L}$, respectively. Then as a $\mathbb{C}[X]$-module, $\coker f$ is generated over $e^i:=Y^i e^{D_0}_{m,n}\otimes e ^{Q_0}_{L}$ modulo \begin{equation} X^N e^i=0\ \ \ , \ \ \ e^{i+M}= 0, \ \ \ \forall i\geq 0. \end{equation} The second condition means that $V$ has rank $M$. Note that $e^i$ has $\mathbb{Z}_d \times\mathbb{Z}_{d'}$-degree $[m-N,L-n+i]$, but under fusion we are left with a $\mathbb{Z}_{d}$ theory, so we have to extract the $\mathbb{Z}_{d'}$-invariant subset $V^{\mathbb{Z}_{d'}}\subset V$. This fixes $i$ to $i=n-L$, which means we are left with one generator with $\mathbb{Z}_{d}$-degree $m-N$ restricted to $X^N=0$. Otherwise, if $n-L \not\in [0,M-1]$, then $D^{m,n,N}*_{\text{orb}}Q^{L,M}=0$. In summary, \begin{equation}\label{example1} D^{m,n,N}*_{\text{orb}}Q^{L,M}= \left\{ \begin{array}{rl} Q^{m,N}, &\mbox{ if $0\leq n-L\leq M-1$,} \\ 0, &\mbox{ otherwise.} \end{array} \right. \end{equation} Another example of useful defects given by matrix factorizations of $W=0$ is provided by those enforcing the action of the symmetry group. Similar to those in \cite{brunner07a}, they are given by the $\mathbb{Z}_d\times\mathbb{Z}_d$-equivariant matrix factorization $T^m = (T^m_1, T^m_0; t_1, 0)$ with \begin{equation} T^m_1 = {}_{\mathbb{C}[X,Y]}\left\{ e^{1}_{m,k}\right\}_{(m,k) \in \mathbb{Z}_d\times\mathbb{Z}_d} \ \ , \ \ \deg e^1_{m,k} =[m+k+1,-k], \end{equation} \begin{equation} T^m_0 = {}_{\mathbb{C}[X,Y]}\left\{ e^{0}_{m,k}\right\}_{(m,k) \in \mathbb{Z}_d\times\mathbb{Z}_d} \ \ , \ \ \deg e^0_{m,k} =[m+k,-k]. \end{equation} The factorizing map is given by \begin{equation} t_1=\sum_{k=0}^{d-1} \left( X e_{m,k}^0\otimes {e^1}_{m,k} ^*- Y e_{m,k+1}^0\otimes {e^1}_{m,k} ^*\right), \end{equation} where $e^*$ is the basis dual to $e$. One obtains the fusion rules \begin{equation} T^m *_{\text{orb}} T^n=T^{m+n}, \end{equation} and \begin{equation} T^m *_{\text{orb}} Q^{M,N}=Q^{M+m ,N}, \end{equation} where $D_1*_{\text{orb}} D_2$ means extracting the part of $D_1 * D_2$ which is invariant under the symmetry group of the theory between the two defects $D_1$ and $D_2$. The sums are performed modulo $d$. Hence the defects $T^m$ form a representation of the symmetry group. More importantly, we note that by also setting $p_0=0$ in the special defects introduced in \cite{brunner07a} we obtain defects which act as the interface between orbifolds sitting at opposite endpoints of the RG flow. The special defects are $\mathbb{Z}_{d'}\times\mathbb{Z}_d$-equivariant matrix factorizations $P^{(m,\underline n)}$, with labels $m\in \mathbb{Z}_d$ and $\underline n=(n_0,\dots, n_{{d'}-1})$ with $n_i\in\mathbb{N}_0$ such that $\sum_i n_i=d$. The $\mathbb{C}[X,Y]$-modules $P_1$ and $P_0$ and their $\mathbb{Z}_{d'}\times\mathbb{Z}_d$-grading are given by \begin{equation} P_1 = \mathbb{C}[X,Y]^{d'} \begin{pmatrix} [1,-m] \\ [2,-m-n_1] \\ [3,-m-n_1-n_2 ]\\ \vdots \\ [d', -m-\sum_{i=1} ^{d'-1}n_i ]\end{pmatrix} \ \ \ , \ \ \ P_0 = \mathbb{C}[X,Y]^{d'} \begin{pmatrix} [0,-m] \\ [1,-m-n_1] \\ [2,-m-n_1-n_2 ]\\ \vdots \\ [d'-1, -m-\sum_{i=1} ^{d'-1}n_i ]\end{pmatrix}.
\end{equation} The factorizing maps are \begin{equation}\label{mapwithzero} p_1^{m,n}=Y 1_{d'}- \Xi_n(X)\ \ \ , \ \ \ p_0^{m,n} = 0 , \end{equation} where $(\Xi_n(X))_{a,b}:=\delta^{(d')}_{a,b+1}X^{n_a}$. As computed in \cite{brunner07a}, the general rule for the fusion of a special defect $P^{(m,\underline n)}$ and a $\mathbb{Z}_d$-irreducible boundary condition $Q^{(M,N)}$ is \begin{equation}\label{genfusion} P^{(m,\underline n)}*Q^{(M,N)}=\bigoplus_{a\in\mathbb{Z}_{d'}:\ i(a)<\text{min}(N,n_a)} Q^{(a,k(a))}, \end{equation} where $i(a)=\left\{m-M+\sum_{j=0}^an_j\right\}_d$. One can check that special defects send the boundary condition $Q^{(M,1)}$ to another such boundary condition with $N=1$, $Q^{(M',1)}$. Let $P^{(m,\underline n)}$ be a special defect and $Q^{(M,N=1)}$ an irreducible B-type boundary condition. Then their fusion is \begin{equation}\label{defectbdy} P^{(m,\underline n)}*Q^{(M,N=1)}=\begin{cases} 0, & M\notin \mathcal{L}_{(m,\underline n)}\\ Q^{(a,1)}, & M=m+\sum_{i=1}^a n_i \end{cases} \end{equation} where $\mathcal{L}_{(m,\underline n)}:=m+\left\{ n_0, n_0+n_1,\dots,n_0+n_1+\cdots +n_{d'-1}\right\}$. \subsection{Comparison with RG flow in the $\mathbb{C}/\mathbb{Z}_d$ theories} We can now compare the fusion of the defects $P^{(m,\underline n)}$ with the boundary conditions $Q^{(M,N)}$ of the LG model with zero superpotential to the RG flow between the $\mathbb{C}/\mathbb{Z}_d$ orbifolds. For this purpose we describe the RG flow in these models by looking at their chiral rings. Upon bosonizing the fermionic fields of the superstring theory, one can construct the chiral operators given in \cite{harvey01} \begin{equation} X_j = \sigma_{j/n}\exp[i(j/n)(H-\xbar H)] \ \ , \ \ j =1,\dots, n-1, \end{equation} where $\sigma_{j/n}$ is the bosonic twist operator. These operators are the bosonic components of the respective chiral fields, which we will also denote by $X_j$. The higher chiral fields are powers of $X:=X_1$. The chiral ring of this theory is generated by $X$ and \begin{equation} Y:=\frac{1}{V_2}\psi\psi= \frac{1}{V_2}\exp[i(H-\xbar H)], \end{equation} modulo \begin{equation}\label{ringrel} X^d=Y. \end{equation} Deformations of equation (\ref{LGnoW}) by the following F-term preserve supersymmetry since the $X_j$ fields are chiral, \begin{equation}\label{deflangrangian} \delta L = \sum_{j=1} ^{n-1} \lambda^j \displaystyle \int d^2\theta \ X_j. \end{equation} The deformed theory has a chiral ring with the same fields as before but with the relation in equation (\ref{ringrel}) altered to \begin{equation}\label{deformed} X^d+\sum_{j=1} ^{d-1}g_j(\lambda)X^j=Y, \end{equation} where the $g_j(\lambda)$ are polynomials in the couplings \cite{harvey01}. A deformation such as in equation (\ref{deflangrangian}) induces an RG flow in the theory. Considering the case where $g_i=0$ for $i\leq d'-1$, the UV and IR limits of the ring relation above are $X^d=Y$ and $g_{d'}X^{d'}=Y$, respectively. These two are the conditions defining $\mathbb{C}/\mathbb{Z}_d$ and $\mathbb{C}/\mathbb{Z}_{d'}$, respectively. We note that for every RG flow $\mathbb{C}/\mathbb{Z}_d \longrightarrow \mathbb{C}/\mathbb{Z}_{d'}$ there exists a matrix factorization $P^{(m,\underline n)}$ of $W=0$ representing a defect $D$ between $\mathbb{C}/\mathbb{Z}_d$ and $\mathbb{C}/\mathbb{Z}_{d'}$. Given two such bulk theories, we can juxtapose them via a defect $P^{(m,\underline n)}$ by choosing $m\in\mathbb{Z}_d$ and non-negative integers $\left\{n_0,n_1,\dots,n_{d'-1}\right\}$ subject to $n_0+\cdots +n_{d'-1}=d$ (a small bookkeeping sketch of the fusion rule is given below).
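For bookkeeping, the $N=1$ fusion rule of Eq. (\ref{defectbdy}) is simple to implement. The following sketch is our own (the function name and conventions are not taken from the literature); it reads Eq. (\ref{defectbdy}) literally, with all labels taken modulo $d$, and returns the label $a$ of the surviving boundary condition $Q^{(a,1)}$, or \texttt{None} when the fusion vanishes.

\begin{verbatim}
# Hedged sketch: fusion of a special defect P^(m, n) with an irreducible
# boundary condition Q^(M, 1), following Eq. (defectbdy) with labels mod d.
def fuse_special_defect(m, n, M, d):
    """n = (n_0, ..., n_{d'-1}) with sum(n) == d; returns a with
    P^(m,n) * Q^(M,1) = Q^(a,1), or None if the fusion is zero."""
    assert sum(n) == d
    for a in range(len(n)):
        if (m + sum(n[1:a + 1])) % d == M % d:   # M = m + n_1 + ... + n_a (mod d)
            return a
    return None

# Example: d = 5, n = (2, 2, 1): only labels M = m, m+2, m+3 (mod 5) survive.
\end{verbatim}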
The solution is a non-unique defect but that reflects the action of the overall $\mathbb{Z}_{d'}\times \mathbb{Z}_{d}$ symmetry. In the next section we will have a better description of how the boundary degrees of freedom are mapped from one theory to the other under fusion with RG flow defects. As an example, consider the $\mathbb{Z}_5$ orbifold. In this case the chiral ring of the deformed theory is defined modulo $X^5+\sum_{j=1}^4 g_j(\lambda)X^j=Y$. If we set $g_1=g_2=0$, then the RG flow goes between $\mathbb{C}/\mathbb{Z}_5$ in the UV limit (since the theory's chiral ring has the relation $X^5=Y$) and $\mathbb{C}/\mathbb{Z}_3$ (since in the IR limit the defining relation is $X^3=Y$). Then the defect $P^{(3,\underline n)}$ with $\underline n = (2,2,1)$ can sit at the interface between the theories $\mathbb{C}/\mathbb{Z}_5$ and $\mathbb{C}/\mathbb{Z}_3$ such that B-type supersymmetry is preserved across the interface. \section{RG flows using mirror models} A second strategy is to study the orbifold RG flow in terms of the mirror of $\mathbb{C}/\mathbb{Z}_d$ \cite{vafa01}. Using mirror symmetry we obtain the diagram below. In the following $m$ stands for mirror symmetry and $|_B$ for the B-type defects; $\text{LG}_m$ denotes the LG model with $W=X^m$; and $\widetilde{\text{LG}}_m$ the twisted LG with $W=\widetilde X^m$. \begin{equation}\label{modeldiagram} \begin{CD} @. \text{LG}_m/\mathbb{Z}_m @ > \big |_B>> \text{LG}_n/\mathbb{Z}_n\\ @. @VV\cong V@ VV\cong V @.\\ \mathbb{C}/\mathbb{Z}_m @ >m>> \widetilde{\text{LG}}_m @ . \ \ \ \widetilde{\text{LG}}_n @>m>> \mathbb{C}/\mathbb{Z}_n @ . \\ @. @VV m V@ VV m V @.\\ @. \text{LG}_m @> RG >> \text{LG}_n\\ \end{CD} \end{equation} In the diagram above, the mirror mapping from $\mathbb{C}/\mathbb{Z}_n$ to a twisted LG with non-vanishing potential comes from a mirror correspondence between a gauged linear sigma model (GLSM) and a more general LG theory. As detailed in \cite{vafa01, Hori:2000kt}, one considers a GLSM whose geometry is described by \begin{equation} -d|X_0|^2 + \sum_{i=1}^n k_i |X_i|^2=t, \end{equation} where the fields $(X_0, X_i)$ come with $U(1)$ charges $(-d,k_i)$, and $t$ is the complexified Fayet-Iliopoulos (FI) parameter. Such GLSM is mirror to a LG theory with superpotential \begin{equation} \widetilde W=\sum_{i=1}^n Z_i^d +e^{t/d}\prod_{j=1}^n Z_j^{k_j}, \end{equation} where the variables $Z_i$ are twisted chiral fields, and the superpotential is taken modulo $(\mathbb{Z}_d)^{n-1}$. The IR fixed point of the GLSM is obtained with the limit $t\rightarrow - \infty$. This limit breaks the $U(1)$ symmetry to $\mathbb{Z}_d$ and the geometry obtained is that of $\mathbb{C}^n/\mathbb{Z}_d$. In this note we consider the $n=1$ case, i.e. $\mathbb{C}/\mathbb{Z}_d$. On the mirror side, the $t\rightarrow - \infty$ limit gives us the LG with $\widetilde W= Z^d$. Thus we see that the RG flow between the non-compact orbifolds can be described in terms of matrix factorizations of true LG orbifolds with non-zero superpotentials. \subsection{RG flow defects using mirror models} The idea is that via mirror symmetry we can represent the $\mathbb{C}/\mathbb{Z}_d$ orbifold as a twisted LG model with superpotential $W=\widetilde X^d$. We denote this theory by $\widetilde{ \text{LG}}_d$ in the above diagram. This theory is equivalent to the model $\text{LG}_d/\mathbb{Z}_d$, the orbifold of a non-twisted LG model with superpotential $W=X^d$ by $\mathbb{Z}_d$. 
So we can use defects between these LG orbifolds to study the RG flow between the original $\mathbb{C}/\mathbb{Z}_d$ orbifolds. As in the previous section we are again in the Landau-Ginzburg model so we can use the RG flows defects $P^{(m,\underline n)}$. The factorizing maps are as in equation (\ref{mapwithzero}) but with $p_0$ non-zero: \begin{equation} p_1^{m,n}=Y 1_{d'}- \Xi_n(X)\ \ \ , \ \ \ p_0^{m,n} = \prod_{i=1}^{d'-1}(Y 1_{d'}- \eta^i\Xi_n(X)), \end{equation} where $\eta$ is an elementary $d'th$ root of unity. And similarly, the irreducible matrix factorizations corresponding to these boundary conditions are of the same form as in equation (\ref{boundaryNOw}), \begin{equation}\label{} Q^{L,M}(Y)=\left( Q_1= \mathbb{C}[Y][L+M] \mathrel{\mathop{\rightleftarrows}^{ X^M}_{X^{d'-M}}} \mathbb{C}[Y][L]=Q_0\right). \end{equation} We review the graphical version introduced in \cite{brunner07a} to depict the fusion of $P^{(m,\underline n)}$ with the boundary conditions $Q^{(L,M)}$. To the set $Q^{(\underline M,1)}:=\left\{Q^{(M,1)} :0\leq M\leq d-1\right\}$ the following graph is assigned: A disk divided into $d$ equal sections by segments from the origin to the boundary. One segment is decorated to start labeling the sections $S_i$ from $i=0$ to $s=d-1$. Below is such a graph for $d=4$: \setlength{\unitlength}{5mm} \begin{center} \begin{picture}(10,10)(-6,-6) \put(0,0){\circle{6}} \put(0,0){\line(0,5){3}} \put(0,0){\line(0,-5){3}} \put(0,0){\vector(5,0){3}} \put(0,0){\line(-5,0){3}} \put(1.1,1.1){$S_0$} \put(-1.1,1.1){$S_1$} \put(-1.1,-1.1){$S_2$} \put(1.1,-1.1){$S_3$} \put(-.8,-4){{ $W=X^4 $}} \end{picture} \end{center} Using the graphical description described above, the special defects $P^{(m,n)}$ are represented by the operators \begin{equation}\label{graphspecial} \mathcal{O}^{(m,n)}:=\mathcal{T}_{-a(m,n)}\mathcal{S}_{\mathcal{L}^c(m,n)}, \end{equation} where $\mathcal{L}_{(m,n)}$ is defined below (\ref{defectbdy}) and $a(m,n):=|\left\{0,\dots,m\right\}\cap \mathcal{L}_{(m,n)}|$. The operator $\mathcal{S}_{\left\{s_1,\dots, s_k\right\}}$ deletes the sectors $S_{s_j}$ by merging the segments which bound them. The operator $\mathcal{T}_k$ acts as the $\mathbb{Z}_d$-symmetry by shifting $M\rightarrow M+k$ in $Q^{(M,1)}$. So just like $P^{(m,\underline n)}$, the operator $\mathcal{O}^{(m,n)}$ annihilates the sectors associated to boundary conditions whose label $M$ does not belong in $\mathcal{L}_{(m,n)}$. Then it relabels the remaining sectors by setting the $S_{m}$ to $S_0$. The above pictorial representation generalizes to boundary conditions $Q^{(M,N)}$ with $N>1$ as well. In this case, $Q^{(M,N)}$ corresponds to the union $S_M\cup S_{M+1}\cup\dots\cup S_{M+N-1}$. We want to show that the operators in the definition (\ref{graphspecial}) still represent the action of special defects on the boundary conditions in this $N>1$ case. Represent $Q^{(M,N)}$ by $S^{(M,N)}:=S_{M}\cup\dots\cup S_{M+N-1}$ and assume that $\mathcal{S}_{\mathcal{L}^c_{(m,n)}}$ shrinks $S^{(M,N)}$ to nothing. Then $\left\{M,M+1,\dots,M+N-1\right\}\subset \mathcal{L}^c_{(m,n)}$. Thus, $M+k\neq m +\sum_{i=1}^a n_i \ \forall a \in \mathbb{Z}_{d'},\ N-1\geq k\geq0$. This means, $k\neq m-M +\sum_{i=1}^a n_i=i(a), \ N-1\geq k\geq0$. Therefore, $i(a)>N-1$ which means $i(a)\geq N$. By equation (\ref{genfusion}), one has $P^{(m,\underline n)}*Q^{(M,N)}=0$. 
Here $P^{(m,n)}$ is the defect with the set $(m,n)$ a solution to $\mathcal{L}^c_{(m,n)}=\left\{ M,\dots, M+N-1\right\}$; and $Q^{(M,N)}$ is such that $M= \min\left\{ M,\dots, M+N-1\right\}$ and $N =|\left\{ M,\dots, M+N-1\right\}|$. Now if $\mathcal{S}_{\mathcal{L}^c_{(m,n)}}$ does not delete the full union $S^{(M,N)}$, then \begin{equation}\label{diskintersection} \begin{split} \left(\left\{ M,\dots, M+N-1\right\}\cap \mathcal{L}_{(m,n)}\right)-M&=\left\{ 0,\dots, N-1\right\}\cap \left(m -M+\left\{n_0,n_0+n_1,\dots, n_0+n_1+\cdots +n_{d'-1}\right\}\right)\\ &=\left\{ 0,\dots, N-1\right\}\cap \mathcal{J}\\ &=\left\{i_1,\dots,i_l\right\}\neq \emptyset, \end{split} \end{equation} where $\mathcal{J}:=\left\{i(a)\ |\ a\in \mathbb{Z}_{d'}\right\}$. Hence, there exists $a\in \mathbb{Z}_{d'}$ such that $i(a)< N$, and by equation (\ref{genfusion}) the corresponding fusion $P^{(m,n)}*Q^{(M,N)}$ is not zero. As previously discussed, this fusion is then $P^{(m,n)}*Q^{(M,N)}=Q^{(a_1,k(a_1))}$ where $a_1$ minimizes $i(a)$ and \begin{equation} k(a)=\min\left\{j>0 \ | \ \sum_{l=1}^j n_{a+l} \leq N\right\}. \end{equation} Since we have restricted to the case $n_i\geq 1 \ \forall i$, $k(a_1)=l$ is the number of sections of $S^{(M,N)}$ not annihilated by $\mathcal{S}_{\mathcal{L}^c_{(m,n)}}$. Thus, $P^{(m,\underline n)}*Q^{(M,N)}=Q^{(a_1,l)}$. One notes that $a_1$ is the number of $Q^{(M',1)}$ with $M'\in\left\{m,\dots, M\right\}$ not annihilated by $P$. Hence, the operators $\mathcal{O}$ represent the $P$ action on all B-type boundary conditions \cite{brunner07a}. \subsection{Comparison with RG flow}\label{comparison} The RG flows between the $\mathbb{C}/\mathbb{Z}_d$ orbifolds can be studied in terms of the mirror picture as well. As we previously mentioned, mirror symmetry relates these orbifolds to the twisted Landau-Ginzburg model with twisted superpotential $\widetilde W = \widetilde X^d$. This twisted model can in turn be related via mirror symmetry to a Landau-Ginzburg model with superpotential $ W = X^d$. Therefore we can frame the RG flow of interest, $\mathbb{C}/\mathbb{Z}_d\longrightarrow \mathbb{C}/\mathbb{Z}_{d'}$, as the RG flow $\text{LG}_d \longrightarrow \text{LG}_{d'}$ in the presence of A-type supersymmetry. The RG flows in the Landau-Ginzburg models are encoded in the behavior of the deformed superpotential $W_\lambda$ of the respective model. That is, we consider perturbations \begin{equation} W_{\lambda_0}=X^d+\lambda_0 X^{d'}\ \ , \ \ d'<d, \end{equation} of $W=X^d$. The RG flow affects the superpotential by scaling it, \begin{equation}\label{rgscale} W_{\lambda_0}\rightarrow \Lambda^{-1} W_{\lambda_0}. \end{equation} Upon a field redefinition, $X\rightarrow \Lambda^{1/d} X$, we obtain \begin{equation}\label{deformedW} \Lambda^{-1}W_{\lambda_0}=X^d+\lambda_0\Lambda^{\frac{d'-d}{d}}X^{d'}=:W_\lambda(X), \end{equation} where $\lambda(\Lambda):=\lambda_0\Lambda^{\frac{d'-d}{d}}$ is the running parameter: \begin{equation} \lim_{\Lambda \rightarrow \infty} \lambda = 0\ \ (\textit{UV}) \ , \ \ \lim_{\Lambda \rightarrow 0} \lambda = \infty \ \ (\textit{IR}). \end{equation} So at either end of the flow we end up with a homogeneous potential. We assume that the imaginary parts of the critical values of $W_\lambda$ stay different for all $\lambda$. Since we are interested in Landau-Ginzburg models on the half-plane with a non-empty boundary, we refer to the language of A-branes discussed in section \ref{abraneslg}.
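Before turning to the A-brane description, we record a quick symbolic check of the scaling above. The sketch below (sympy assumed; $d$, $d'$ and the numerical values are arbitrary choices) rescales $W_{\lambda_0}$, performs the field redefinition $X\rightarrow\Lambda^{1/d}X$, and recovers the running coupling $\lambda(\Lambda)=\lambda_0\Lambda^{(d'-d)/d}$ of Eq. (\ref{deformedW}).

\begin{verbatim}
# Sketch: RG rescaling of W_{lambda_0} = X^d + lambda_0 X^{d'} together with the
# field redefinition X -> Lambda^(1/d) X reproduces the running coupling.
from sympy import symbols, Rational, simplify

X, Lam, lam0 = symbols('X Lambda lambda0', positive=True)
d, dp = 6, 4                      # illustrative choices with dp < d

W = X**d + lam0 * X**dp
W_rescaled = W.subs(X, Lam**Rational(1, d) * X) / Lam    # scale W and redefine X

expected = X**d + lam0 * Lam**Rational(dp - d, d) * X**dp
assert simplify(W_rescaled - expected) == 0
\end{verbatim}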
The RG flow has a description in terms of the A-branes and the respective deformations \cite{hori00, brunner07} under non-zero $\lambda$ in equation (\ref{deformedW}). Each A-brane formed by segments from $X_{*i}$ to the boundary points $B_a$ and $B_b$ is denoted by $\overline{B_a X_{*i} B_b}$. As the deformed superpotential flows into the IR, the critical points $X_{*i}$, $i>0$, flow to infinity, while the critical point $X_{*1}=0$ associated with the homogeneous superpotential remains. The A-branes associated with the points $X_{*i}$ then decouple from the theory since the respective Lagrangian submanifolds $\gamma_{X_{*i}}$ disappear. Therefore the IR A-branes are labeled by the equivalent classes $([B_i],[B_j])$ of the relationship $B_k\sim B_l$ when connected on $\Gamma\setminus\Gamma_1$. A generic A-brane in the UV might be composed of segments which are part of $\Gamma_1$ and $\Gamma_i$ in the deformed potential $(\lambda\neq0)$. In this case the A-brane decays into the sum of an A-brane which decouples in the IR and an A-brane which flows to an IR A-brane. To illustrate, let us consider the example we discussed in section \ref{abraneslg} with $W=X^4$ and the deformation $W_\lambda=X^4+\lambda X^3$. $W=X^4$ corresponds to the $\mathbb{C}/\mathbb{Z}_4$ orbifold. The deformed $W_\lambda$ has critical points $X_{*1}=0$ of order $n=2$, and $X_{*2}=-3\lambda$ of order $n=1$. We see that we flow to the IR $X_{*2}\rightarrow \partial D$ so the A-brane $\overline{B_3 X_{*2}B_2}$ decouples. So the endpoint of the flow is the $\mathbb{C}/\mathbb{Z}_3$ orbifold. As an example of the decay of the UV A-branes when $\lambda\neq 0$, consider $\overline{B_3 X_{*}B_1}$. As we turn on $\lambda$ this A-brane decays to $\overline{B_3 X_{*2}B_2} +\overline{B_2 X_{*1}B_1}$. One can map the A-brane diagrams to the disk diagrams representing the B-type boundary conditions \cite{brunner07}; and hence there is a correspondence between the flow of the A-brane deformations and the action of the special defects on the disk diagrams of B-type boundary conditions. As noted above, in the IR only those preimages of $\infty$ which are not connected on $\Gamma \setminus\Gamma_1$ survive. These are precisely the points in the set \begin{equation}\label{pointset} \mathcal{L}=\left\{a\in\mathbb{Z}_d | B_a \nsim B_{a+1}\right\}. \end{equation} In terms of the graphical disk operations for the B-type defects, this is equivalent to starting with disk partitioned into $S_i$ sectors representing the $Q^{M,N}$ B-type boundary conditions; and acting on this disk with the $\mathcal{S}_{\mathcal{L}^c}$ operator with $\mathcal{L}$ as in Eq. (\ref{pointset}). \chapter{\uppercase {Introduction}}\label{intro} Conformal field theories in two dimensions (2D) initially encompassed the study of theories which are invariant under mappings which transform the metric by an overall scale factor. In local coordinates, this transformation is given by \begin{equation} g'_{\rho \sigma}(x') \frac{\partial x'^\rho}{\partial x^\mu} \frac{\partial x'^\sigma}{\partial x^\nu} = \Omega(x)g_{\mu\nu}(x), \end{equation} where $x'=f(x)$, with $f:(U,x)\rightarrow (V,x')$ and $U, V\subset \mathbb{R}^2$. Differently from higher dimensions, in 2D the set of all such local conformal transformations form an infinite algebra: the Virasoro algebra. The aesthetics and power of 2D CFTs are greatly derived from the fact that these conformal mappings are holomorphic (and antiholomorphic) functions on the complex plane. 
Works by Belavin, Polyakov, and Zamolodchikov \cite{belavin} developed much of the general formalism for 2D CFTs. Later on, consideration was given to 2D CFTs on surfaces with boundaries such as the upper-half plane and the infinite strip. Most of the seminal work on systems with boundaries was developed by John Cardy \cite{cardy}. The study of boundary CFTs (BCFTs) has been an important field of research in 2D theories with wide applications to D-branes in string theory \cite{Recknagel:1998ih, Brunner:1999, Gaberdiel:2001zq, Gaberdiel:2008rk}. The introduction of boundaries to the worldsheet has the consequence of reducing the amount of conformal symmetries allowed; and more interestingly, it mixes the holomorphic and antiholomorphic degrees of freedom. The reduction of conformal symmetries follows from the need to consider only those transformations which preserve the boundary. A deeper consequence of boundaries is the appearance of new elements in the Hilbert space called boundary states which are non-existent for a CFT purely living in the bulk. Such boundary states form the boundary Hilbert space of the CFT. Corresponding fields which reside on the boundary have their own OPE algebra. OPEs can also be taken between bulk fields and the fields living on the boundary. That is, boundaries give new physics. After boundaries, the next step was taken by Affleck \cite{affleck} by studying defects. Differently from boundaries, the defects considered by Affleck were curves with field content to either side. One can think of a boundary as a defect where one of the theories is the trivial (empty) theory. Again, new types of fields and corresponding states appear with the introduction of defects. Very importantly, defects can be mapped to boundaries where the bulk theory is the tensor product of the two theories flanking the original defect. One way to think of these defects is as boundary conditions consistent with respect to both theories. More recently, 2D theories have been shown to contain a rich class of new defects \cite{bachas02,bachas07,fuchs07,gaiotto12,Quella:2002CT, brunner03, brunner07, Konechny:2015qla, Graham:2003nc, Fuchs:2015ska}. In this sense, a \emph{defect} is a one-dimensional object in 2D theories, and more generally a submanifold of positive co-dimension in higher dimensional spaces. These objects are also defects in the sense of those considered by Affleck since they are one-dimensional curves separating two theories. But that is where the similarities end. More than simply domain walls, or consistent boundary conditions, this new type of defects have the following properties: \begin{itemize} \item Defects have a binary operation called fusion where two defects are brought together to form a third defect as shown in Figure \ref{fig:tamu-fig1}. For two defects $D$ and $B$, we denote this operation by $(D,B)=D*B$. \item Via the fusion operation defects form representations of the symmetries present in the theory. \end{itemize} \begin{itemize} \item Defects encode information about dualities and mappings between the theories to either side of the defect. \item A defect $D$ gives rise to a linear map $\widehat D: \mathcal{H}_1 \rightarrow \mathcal{H}_2$ between the Hilbert spaces of two theories separated by $D$ \cite{frohlich09}. By first choosing an orientation for the defect, we can move $D$ across a field insertion and pinch the defect to wrap it around the insertion point, as shown in Figure \ref{fig:tamu-fig2}. 
\end{itemize} \begin{figure}[h] \centering \includegraphics[width=100mm,scale=0.5]{figures/fusing_defects.jpg} \caption{Fusion of defects} \label{fig:tamu-fig1} \end{figure} An important class of defects are those called \emph{topological defects} which commute with the field insertion of the energy-momentum tensor. That is, a defect $D$ is topological if \begin{equation} \widehat D T_1(z) = T_2(z)\widehat D, \end{equation} where $\widehat D: \mathcal{H}_1 \rightarrow \mathcal{H}_2$ is the representation of the defect $D$ as an operator intertwining the Hilbert spaces of the adjacent CFTs. In this case, the defect can be deformed through the worldsheet without affecting the values of the correlation functions, as long as it does not cross an operator insertion point. Hence the name ``topological''. To see that this is the case, we note that commuting with the energy-momentum tensor $T(z)$ means that the defect commutes with the Virasoro generators (which are the elements of the infinite conformal algebra in 2D). Topological defects form a subset of a larger class of interest called \emph{conformal defects} whose elements commute with the difference of the holomorphic and antiholomorphic components of the energy-momentum tensor \cite{fuchs07}, \begin{equation} \widehat D (T_1(z)- \xbar T_1(z)) = (T_2(z)- \xbar T_2(z)) \widehat D. \end{equation} The behavior of boundary degrees of freedom under the renormalization group (RG) flow represents a problem in both string theory and condensed matter physics which is not fully understood (see \cite{dorey00}, \cite{keller07}, \cite{hori04}, \cite{fredenhagen06} and references therein). A new approach consists of utilizing defects to bring the RG flow from the bulk to the boundary. This technique was exploited in \cite{brunner07a} within the framework of Landau-Ginzburg (LG) models to study the boundary RG flow between the two-dimensional orbifolds $\mathcal{M}_{d-2}/\mathbb{Z}_d$, where $\mathcal{M}_{d-2}$ are the supersymmetric minimal models. RG flow defects were also constructed in \cite{gaiotto12} between consecutive Virasoro minimal models in the bulk. \begin{figure}[h] \centering \includegraphics[width=100mm,scale=0.5]{figures/operator_defect.jpg} \caption{Defect action on field insertions} \label{fig:tamu-fig2} \end{figure} Defects in LG models have a general description in terms of matrix factorizations which allows us to construct examples of boundaries and defects. Also, the language of matrix factorization contains a general operation called the tensor product of matrix factorizations which gives a recipe to compute the fusion of any two LG defects \cite{brunner07,brunner07a, enger05}. The theory of defects in LG models is versatile because it provides direct information on other theories which are not necessarily LG models. This fact follows because LG models can be mapped to other interesting theories via different RG flows or mirror symmetry \cite{hori00, vafa01}. In this article we are particularly interested in the non-compact orbifold $\mathbb{C}/\mathbb{Z}_d$. This orbifold is not target-space supersymmetric, but it exhibits $\mathcal{N}=2$ worldsheet supersymmetry. In LG models, defects are topological provided that a topological twist has been performed \cite{brunner07}. There are two types of twists that render $N=2$ theories topological \cite{hori00a} and they are called A-twist and B-twist. 
In the presence of boundaries or defects, only half of the total $(2,2)$ supersymmetry is preserved, and similar to the topological twist there are two ways to break half of the supersymmetry. The remaining symmetry is called A-type or B-type depending on which supersymmetric charges are kept. The generators for each respective supersymmetry are \begin{equation}\label{abgenerators} \begin{split} &(A) \ \ \ \ \xbar Q_A:=\xbar Q_+ + \operatorname{e}^{i\alpha}Q_-, \ \ \ \ Q_A:=Q_+ +\operatorname{e}^{-i\alpha}\xbar Q_-,\\ &(B) \ \ \ \ \xbar Q_B:=\xbar Q_+ +\operatorname{e}^{i\beta}\xbar Q_-, \ \ \ \ Q_B:= Q_+ +\operatorname{e}^{-i\beta} Q_-, \end{split} \end{equation} where $\alpha$ and $\beta$ are real numbers, and $Q_\pm$, $\xbar Q_\pm$ are the generators of the full $\mathcal{N}=(2,2)$ supersymmetry. A topological A(B)-twist must be done alongside A(B)-type supersymmetry if the boundaries and defects are to be supersymmetric and topological. In this dissertation, it is assumed throughout that the LG models are already topological. In each case, there is a BRST-operator $Q_A$ or $Q_B$ which characterizes the physical degrees of freedom at the boundary. In this dissertation, the machinery of matrix factorizations for defects is applied to the non-compact orbifold $\mathbb{C}/\mathbb{Z}_d$ which is the archetype for string theory on \begin{equation}\label{arch} \mathbb{R}^{d-1,1}\times\mathbb{R}^{10-d}/G , \end{equation} where $G$ is some discrete $SO(d-10)$ subgroup \cite{harvey01}. This important model is linked to the LG language in two ways that are exploited in this note. First, by introducing superspace variables the fermionic string theory on $\mathbb{C}/\mathbb{Z}_d$ can be viewed as the orbifold of a LG model with zero superpotential. And second, we also go from the $\mathbb{C}/\mathbb{Z}_d$ theory to a twisted LG model description using mirror symmetry as discussed in \cite{vafa01}. In this dissertation we extend the work of \cite{brunner07a} which describes the boundary RG flow in LG models and supersymmetric minimal models in terms of topological defects. Our work generalizes the results of \cite{brunner07a} to the non-supersymmetric case of the non-compact $\mathbb{C}/\mathbb{Z}_n$ theories. The orbifold $\mathbb{C}/\mathbb{Z}_d$ is physically relevant because it is the simplest model to study tachyon condensation \cite{adams01}; in (\ref{arch}), the tachyons are closed strings localized at the fixed points of the orbifold group action. Techniques to study the RG flow in these models have been considered in \cite{vafa01, harvey01}. To study the problem at hand, we consider the $\mathbb{C}/\mathbb{Z}_d$ orbifold theory on the upper-half plane $\Sigma = \left\{(x^0,x^1)\in \mathbb{R}^2 \ | \ x^0 \geq 0\right\}$ with B-type supersymmetry. Inserting the identity defect at $x^0=y>0$, we can perturb the theory over $x\geq y$. Letting the perturbation drive the theory to the IR we obtain a setup describing the IR theory in the bulk while near the boundary we still have the UV theory, with a defect $D$ sitting at the interface $x^0=y$. The next step is to take the RG flow to the boundary via the limit $y\rightarrow 0$. In terms of defects, this limit gives the fusion of the boundary $B$ and the defect $D$. Aside from the approach described above of utilizing matrix factorizations to represent defects, we also study defects as linear operators between the Hilbert spaces of the given theories. 
In order to talk about defects in the operator representation, we must first deal with boundary states in product theories. The theories of conformal boundaries and that of defects are intertwined. Not only are boundaries a class of trivial defects, but via the folding trick defects are mapped to boundaries \cite{affleck}. The folding trick is a powerful tool in QFT where one considers two theories, say $\text{CFT}_1$ and $\text{CFT}_2$, separated by a defect or domain wall. Then the theory $\text{CFT}_1\oplus\text{CFT}_2$ in the presence of the defect is equivalent to the folded theory $\text{CFT}_1\otimes\xbar{\text{CFT}}_2$, where the bar means interchanging the left and right movers \cite{affleck, bachas07}. The defect is mapped to a boundary in the $\text{CFT}_1\otimes\xbar{\text{CFT}}_2$ theory. The folding trick also applies to non-CFT theories such as general LG models \cite{brunner07}. In this dissertation we study the boundary states of the $c=2$ bosonic theory taking value in $(\mathbb{S}^1/\mathbb{Z}_2)^2$ and apply the unfolding procedure in order to obtain defects between the orbifolds $S^1_R/\mathbb{Z}_2$, where $R$ is the radius of the circle. Unfolding is the inverse of the folding trick. In this procedure elements of the boundary theory $\mathcal{H}^{1\otimes 2}_\partial$ of some bulk theory $\mathcal{H}_1 \otimes \mathcal{H}_2$ are mapped to defects between the theories with Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$. This method was applied in \cite{bachas02, bachas07} in order to study the defects between bosonic $S^1$-valued theories. Defects in the context of the compact boson were also considered in \cite{fuchs07} but without the unfolding map. Here we rely on the unfolding procedure due to consistency. In general it is difficult to have a criteria describing a consistent set of defects. By construction, the criteria met in this dissertation is that the defects map to consistent boundary states in the product theory. Our work on topological defects for the superstring on $\mathbb{C}/\mathbb{Z}_d$ provides a novel approach to the question of tachyon condensation. One of the main reasons supersymmetry enters string theory is to attain stable, tachyon-free spacetimes. As previously mentioned, the model $\mathbb{C}/\mathbb{Z}_d$ is non-supersymmetric and contains closed string tachyons (the $\mathcal{N}=(2,2)$ supersymmetry present is on the worldsheet only). The question of tachyon condensation is a difficult one which has been studied in a few cases \cite{vafa01, harvey01, adams01}. At the worldsheet level, the process of tachyon condensation is due to RG flows generated by perturbations of the starting point CFT \cite{adams01}. So far not much is known about these bulk RG flows and the present methods to describe them are laborious and complicated. The new defects presented in this dissertation provide a new perspective on the bulk and boundary RG flows for the $\mathbb{C}/\mathbb{Z}_d$ models as well as a simpler way to do computations without the need for regularization schemes. More technically, our results for $\mathbb{C}/\mathbb{Z}_d$ show that matrix factorizations of $W(X)=0$ are well defined objects which can successfully describe defects for the non-compact orbifold in question. Furthermore, it is shown that the RG flow between the $\mathbb{C}/\mathbb{Z}_d$ models can be described in terms of these defects. 
An important application of our work, and one we show here, is the description of the boundary RG flow of these theories in terms of the fusion product of boundaries and RG defects. The significance of our work on the compact orbifold $S^1/\mathbb{Z}_2$ comes from the present dearth of non-trivial theories whose spectrum of consistent defects is known: until this work, the compact boson was the only theory whose spectrum of defects was written down. Taken together, our work on the $S^1/\mathbb{Z}_2$ theory and those of \cite{bachas07, fuchs07} on the compact boson will provide a very complete picture of the possible defects between elements of the same branches of $c=1$, 2D CFT phase space \cite{Ginsparg:1988ui}. Very importantly, since we have covered the part of the spectrum which contains the twisted degrees of freedom, using our results it should be doable to build defects between the different branches of the $c=1$ phase space. That is, defects between $S^1/\mathbb{Z}_2$ and $S^1$. It is important to emphasize that the D-branes studied in this work carry their own weight aside from their utility to derive defects. D-branes in compact orbifolds have not been well studied in the literature \cite{Brunner:1999} so our work provides more examples in the boundary CFT formalism. The dissertation is structured as follows. In chapter 2, defects and boundary conditions are developed for the Type I superstring on a worldsheet with boundaries and taking values in the cone $\mathbb{C}/\mathbb{Z}_d$. We start by reviewing $\mathcal{N}=(2,2)$ theories in the presence of boundaries. The introduction of a boundary reduces the supersymmetry and we are left with either A-type or B-type supersymmetry which are subsets of the full $\mathcal{N}=(2,2)$ symmetry. We review the algebraic language of matrix factorizations suitable for B-type boundaries and defects. The geometrical description of wave-front trajectories for A-type D-branes (A-branes) for supersymmetric sigma models is also reviewed with an emphasis on LG models. Both descriptions are related by mirror symmetry mappings. Proceeding the reviews, a superspace description of $\mathbb{C}/\mathbb{Z}_d$ as a LG model with zero superpotential is developed to obtain a description of boundary conditions and defects in terms of matrix factorizations of $W(X)=0$. We show that suitable defects exist such that they divide the UV and IR theories. In the case of $\mathbb{C}/\mathbb{Z}_d$ we can keep track of both RG endpoints by means of the chiral ring. The addition of chiral terms to the Lagrangian to induce an RG flow also produces deformations to the chiral ring of the theory. The resulting chiral ring at each endpoint of the flow characterizes the theory in the UV or IR. Lastly, we show that RG defects can be used to work out the boundary RG flows of these theories. We work with the mirror theories of the non-compact orbifolds which are orbifolded LG theories with non-zero superpotentials. The B-type boundary conditions have a dual description in terms of A-branes. The action of the RG B-type defects on the B-type boundaries is compared with the RG flow as described by the dual A-type branes. This comparison demonstrates that indeed the posited RG defects enforce the RG flow on the boundary without a need for regularization techniques. In chapter 3 we move away from matrix factorizations as descriptions of D-branes and instead the BCFT formalism is used. 
We start with a review of Affleck and Oshikawa's boundary theory construction for the single boson taking values in $S^1/\mathbb{Z}_2$ \cite{affleck}. Following this method, we work out possible boundary states for the free bosonic theory on $S^1/\mathbb{Z}_2 \times S^1/\mathbb{Z}_2$. In the BCFT formalism, D-branes are represented as coherent states which solve conformal boundary operator equations. In the free theory, this problem can be reduced to searching for elements of the Hilbert space which are consistent with boundary conditions of the bulk fields. Lastly, in chapter 4 the D-branes in the product theory are mapped to conformal defects between the $S^1/\mathbb{Z}_2$ bosonic theories. In this chapter we review the unfolding map used by \cite{bachas07} which gives a correspondence between D-branes (i.e., boundary states) and defects. The unfolding map is employed here as a direct way to obtain the possible spectrum of classes of defects. From the spectrum of defects, those defects which are transmissive or totally reflective are identified. We finish this chapter by computing the fusions of those transmissive defects which are topological. These products show that the topological defects form a closed algebra. We conclude in chapter 5 with a summary of the work presented here and some remarks on possible future directions. \pagebreak{}
\setlength{\cftaftertoctitleskip}{1em} \renewcommand{\cftaftertoctitle}{% \hfill{\normalfont {Page}\par}} \tableofcontents \end{singlespace} \pagebreak{} \phantomsection \addcontentsline{toc}{chapter}{LIST OF FIGURES} \renewcommand{\cftloftitlefont}{\center\normalfont\MakeUppercase} \setlength{\cftbeforeloftitleskip}{-12pt} \renewcommand{\cftafterloftitleskip}{12pt} \renewcommand{\cftafterloftitle}{% \\[4em]\mbox{}\hspace{2pt}FIGURE\hfill{\normalfont Page}\vskip\baselineskip} \begingroup \begin{center} \begin{singlespace} \setlength{\cftbeforechapskip}{0.4cm} \setlength{\cftbeforesecskip}{0.30cm} \setlength{\cftbeforesubsecskip}{0.30cm} \setlength{\cftbeforefigskip}{0.4cm} \setlength{\cftbeforetabskip}{0.4cm} \listoffigures \end{singlespace} \end{center} \renewcommand{\cftlottitlefont}{\center\normalfont\MakeUppercase} \setlength{\cftbeforelottitleskip}{-12pt} \renewcommand{\cftafterlottitleskip}{1pt} \renewcommand{\cftafterlottitle}{% \\[4em]\mbox{}\hspace{2pt}TABLE\hfill{\normalfont Page}\vskip\baselineskip} \begin{center} \begin{singlespace} \setlength{\cftbeforechapskip}{0.4cm} \setlength{\cftbeforesecskip}{0.30cm} \setlength{\cftbeforesubsecskip}{0.30cm} \setlength{\cftbeforefigskip}{0.4cm} \setlength{\cftbeforetabskip}{0.4cm} \end{singlespace} \end{center} \endgroup \pagebreak{} \chapter*{DEDICATION} \addcontentsline{toc}{chapter}{DEDICATION} \begin{center} \vspace*{\fill} To my family. \vspace*{\fill} \end{center} \pagebreak{} \chapter*{CONTRIBUTORS AND FUNDING SOURCES} \addcontentsline{toc}{chapter}{CONTRIBUTORS AND FUNDING SOURCES} \subsection*{Contributors} This work was supported by a thesis (or) dissertation committee consisting of Professor XXXX [advisor --– also note if co-advisor] and XXX of the Department of [Home Department] and Professor(s) XXXX of the Department of [Outside Department]. The data analyzed for Chapter X was provided by Professor XXXX. The analyses depicted in Chapter X were conducted in part by Rebecca Jones of the Department of Biostatistics and were published in (year) in an article listed in the Biographical Sketch. All other work conducted for the thesis (or) dissertation was completed by the student independently. \subsection*{Funding Sources} Graduate study was supported by a fellowship from Texas A\&M University and a dissertation research fellowship from XXX Foundation. \pagebreak{} \chapter{\uppercase {A Second Appendix Whose Title Is Much Longer Than The First}} Text for the Appendix follows. \begin{figure}[h] \centering \includegraphics[scale=.50]{figures/Penguins.jpg} \caption{Another TAMU figure.} \label{fig:tamu-fig6} \end{figure} \section{Appendix Section} \section{Second Appendix Section} \pagebreak{} \chapter{\uppercase{First Appendix}} Text for the Appendix follows. \begin{figure}[h] \centering \includegraphics[scale=.50]{figures/Penguins.jpg} \caption{TAMU figure} \label{fig:tamu-fig5} \end{figure} \chapter*{ACKNOWLEDGMENTS} \addcontentsline{toc}{chapter}{ACKNOWLEDGMENTS} \indent I would like to give thanks to my Ph.D. adviser Dr. Melanie Becker for being a great mentor during my time at Texas A\&M University, and all the invaluable advice she has given me. I am also very grateful to Dr. Daniel Robbins for being an integral collaborator in the projects composing this dissertation. I would also like to give thanks to my committee members Dr. Christopher Pope, Dr. Teruki Kamon, and Dr. Stephen A. Fulling for participating in my preliminary and final defense exams. Also a special thanks to Dr. 
Katrin Becker for agreeing to be part of my defense exam. I am grateful to Ilka Brunner for her helpful feedback on the Landau-Ginzburg work. I extend my thanks to Ning Su, with whom I have shared many insightful discussions and previous work. I would like to take this moment to thank Sebastian Guttenberg, Jakob Palmkvist, Andy Royston, William D. Linch III, Yaodong Zhu, Sunny Guha, and Zhao Wang, who have been great colleagues. Special thanks to Ilarion V. Melnikov for introducing me to the beautiful theory of defects. \pagebreak{} \chapter*{ABSTRACT} \addcontentsline{toc}{chapter}{ABSTRACT} \pagestyle{plain} \pagenumbering{roman} \setcounter{page}{2} We study conformal defects in two important examples of string theory orbifolds. First, we show that topological defects in the language of Landau-Ginzburg models carry information about the RG flow between the non-compact orbifolds $\mathbb{C}/\mathbb{Z}_d$. Such defects are shown to correctly implement the bulk-induced RG flow on the boundary. Second, we study the possible conformal defects between the $c=1$ bosonic 2D conformal field theories with target space $S^1/\mathbb{Z}_2$. The defects cataloged here are obtained from boundary states corresponding to D-branes in the $c=2$ free theory with target space $S^1/\mathbb{Z}_2 \times S^1/\mathbb{Z}_2$. Via the unfolding procedure, such boundary states are later mapped to defects between the circle orbifolds. Furthermore, we compute the algebra of the topological class of defects at different radii. \pagebreak{} \begin{document} \renewcommand{\tamumanuscripttitle}{Topological defects in string theory orbifolds with target spaces $\mathbb{C}/\mathbb{Z}_n$ and $S^1/\mathbb{Z}_2$} \renewcommand{\tamupapertype}{Dissertation} \renewcommand{\tamufullname}{Yaniel Cabrera} \renewcommand{\tamudegree}{Doctor of Philosophy} \renewcommand{\tamuchairone}{Melanie Becker} \renewcommand{\tamumemberone}{Christopher Pope} \newcommand{Teruki Kamon}{Teruki Kamon} \newcommand{Stephen Fulling}{Stephen Fulling} \renewcommand{\tamudepthead}{Peter McIntyre} \renewcommand{\tamugradmonth}{August} \renewcommand{\tamugradyear}{2017} \renewcommand{\tamudepartment}{Physics} \include{data/titlepage} \include{data/abstract} \include{data/dedication} \include{data/acknowledgements} \include{data/lists} \include{data/section1} \include{data/section2} \include{data/section3} \include{data/section4} \let\oldbibitem\bibitem \renewcommand{\bibitem}{\setlength{\itemsep}{0pt}\oldbibitem} \phantomsection \addcontentsline{toc}{chapter}{REFERENCES} \renewcommand{\bibname}{{\normalsize\rm REFERENCES}} \bibliographystyle{utphys}
\section{#1}\setcounter{theorem}{0} \setcounter{equation}{0} \par\noindent} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \newtheorem{theorem}{Theorem} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corr}[theorem]{Corollary} \newtheorem{prop}[theorem]{Prop} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{deff}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \newcommand{\, \cdot\, }{\, \cdot\, } \newcommand{\text{supp }}{\text{supp }} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{{\R^3\backslash\mathcal{K}}}{{{\mathbb R}^3\backslash\mathcal{K}}} \newcommand{{\R^n\backslash\mathcal{K}}}{{{\mathbb R}^n\backslash\mathcal{K}}} \newcommand{{\partial\mathcal{K}}}{{\partial\mathcal{K}}} \newcommand{{\text {diag}}}{{\text {diag}}} \newcommand{{\not\negmedspace\nabla}}{{\not\negmedspace\nabla}} \newcommand{{\text{tr}}}{{\text{tr}}} \newcommand{{T_\varepsilon}}{{T_\varepsilon}} \newcommand{{\mathcal{W}}}{{\mathcal{W}}} \newcommand{{\mathcal{K}}}{{\mathcal{K}}} \newcommand{{\vec{n}}}{{\vec{n}}} \renewcommand{\ss}[1]{{\substack{#1}}} \newcommand{\ksweight}{{\frac{\langle c_1t-r\rangle\,\langle c_2t-r\rangle}{\langle c_1t-r\rangle+ \langle c_2t-r\rangle}}} \newcommand{\sksweight}{{\frac{\langle c_1s-r\rangle\,\langle c_2s-r\rangle}{\langle c_1s-r\rangle + \langle c_2s-r\rangle}}} \newcommand{\langle}{\langle} \newcommand{\rangle}{\rangle} \newcommand{{\text{curl}\,}}{{\text{curl}\,}} \renewcommand{\div}{{\text{div}}} \newcommand{{\mathbf 1}}{{\mathbf 1}} \newcommand{\subheading}[1]{{\bf #1}} \begin{document} \title[Elastic waves in exterior domains] {Elastic waves in exterior domains \\ Part II: Global existence with a null structure} \thanks{The authors were supported by the NSF} \author{Jason Metcalfe} \address{Department of Mathematics, University of California, Berkeley, CA 94720-3840} \email{[email protected]} \author{Becca Thomases} \address{Courant Institute of Mathematical Sciences, New York University, New York, NY 10012} \email{[email protected]} \begin{abstract} In this article, we prove that solutions to a problem in nonlinear elasticity corresponding to small initial displacements exist globally in the exterior of a nontrapping obstacle. The medium is assumed to be homogeneous, isotropic, and hyperelastic, and the nonlinearity is assumed to satisfy a null condition. The techniques contained herein would allow for more complicated geometries provided that there is a sufficient decay of local energy for the linearized problem. \end{abstract} \maketitle \newsection{Introduction} In this paper, we shall prove global existence of elastic waves in exterior domains subject to a null condition. The elastic medium is assumed to be homogeneous, isotropic, and hyperelastic, and we only consider small initial displacements. We solve the said equations exterior to a bounded, nontrapping obstacle with smooth boundary, though our techniques would allow for any bounded, smooth obstacle for which there is a sufficiently fast decay of local energy. For the boundaryless problem, global existence was previously shown in \cite{Agemi} and \cite{Si}. See, also, \cite{Si2, SiNotes}. As in studies of the wave equation, the null condition is essential in order to obtain global solutions. Without some assumption on the structure of the nonlinearity, blow up in finite time was shown in \cite{J2}. 
In this case, almost global existence of solutions was shown in \cite{J} and a simplified proof was later given in \cite{KS}. In \cite{Met1}, the corresponding almost global existence result was shown outside of a star-shaped obstacle.\footnote{By finite propagation speed and \cite{J2}, blow up in finite time is still possible.} Several related studies have been conducted concerning multiple speed systems of wave equations in three-dimensional exterior domains. Almost global existence was established in \cite{KSS2, KSS3} and \cite{MS3} exterior to star-shaped domains. Using the techniques of \cite{MS1}, these results can be extended to any domain for which there is a sufficiently fast decay of local energy. For nonlinearities satisfying the null condition, global existence has been established in \cite{MS1, MS4} and \cite{MNS1, MNS2}. Our proof will use many of the techniques established in this context. We shall utilize the method of commuting vector fields, though the equation of interest is not Lorentz invariant and thus we will be required to use a restricted set of vector fields. Moreover, we will rely upon the arguments of \cite{KSS3} and \cite{MS1} to establish the necessary higher-order energy estimates. Here, we use elliptic regularity to establish energy estimates involving the generators of translations. Without the star-shapedness assumption, proving energy estimates involving the scaling vector field is more delicate, and we will use a boundary term estimate which is reminiscent of those in \cite{MS1}. For energy estimates involving the full set of vector fields, a class of weighted mixed-norm estimates, called KSS estimates, will be called upon to handle the relevant boundary terms. These estimates are analogous to those of \cite{KSS2} for the wave equation and have previously been established in \cite{Met1} for the current setting. The nontrapping hypothesis is assumed to guarantee the existence of an exponential decay of local energy. See \cite{Y}. Similar well-known estimates (see \cite{MRS}) were widely used in studies of nonlinear wave equations in exterior domains. Our techniques would, in fact, permit any exterior domain\footnote{By this, we mean the exterior of any bounded obstacle with smooth boundary.} for which there is a sufficiently fast decay of local energy. In particular, the method of proof would allow for the loss of regularity in the local energy decay which would be necessary if there were trapped rays. In the setting of the wave equation, see \cite{I1, I2} for such local energy decays and \cite{MS1}, \cite{MNS1, MNS2} for proofs of long-time existence in such domains. Moreover, by using the technique of proof contained herein, the almost global existence result of \cite{Met1} may be extended to any domain for which there is such a decay of local energy. Let us more precisely describe the initial-boundary value problem. We fix a bounded, nontrapping\footnote{Recall that more general domains are possible, but the authors are not aware of the necessary decay of local energy results for such domains.} obstacle ${\mathcal{K}}\subset{\mathbb R}^3$ with smooth boundary. We may, without loss of generality, assume that $0\in{\mathcal{K}}\subset\{|x|<1\}$, and we will do so throughout. The linearized equation of elasticity is \begin{equation}\label{linear} Lu=\partial_t^2 u -c_2^2 \Delta u - (c_1^2-c_2^2)\nabla (\nabla\cdot u)=0, \end{equation} where $u=(u^1,u^2,u^3)$ is the displacement vector.
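Although it will not be used directly in what follows, it may help to recall the standard observation that $L$ decouples into two scalar wave operators under a Helmholtz decomposition: if a sufficiently smooth, decaying field is written as $\nabla\phi+\nabla\times\psi$, then, since $\nabla\cdot\nabla\phi=\Delta\phi$ and $\nabla\cdot(\nabla\times\psi)=0$, one checks that $$L(\nabla\phi)=\nabla\bigl(\partial_t^2\phi-c_1^2\Delta\phi\bigr),\qquad L(\nabla\times\psi)=\nabla\times\bigl(\partial_t^2\psi-c_2^2\Delta\psi\bigr).$$ Thus $c_1$ and $c_2$ are the propagation speeds of the pressure and shear waves, respectively; this is the Helmholtz-Hodge decomposition invoked for the boundaryless KSS estimates below.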
The constants $c_1, c_2$ may be assumed to satisfy $$c_1^2-\frac{4}{3}c_2^2>0,\quad c_2^2>0.$$ The two quantities in the above equations represent the bulk and shear moduli respectively. The nonlinear problem that results from hyperelastic mediums is \begin{equation} \label{nonlinear} (Lu)^I=\partial_l (B^{IJK}_{lmn}\partial_m u^J \partial_n u^K)+\dots\footnotemark \end{equation} \footnotetext{Here and throughout, we use the summation convention. Latin indices indicate that the implicit summation runs from $1$ to $3$, while Greek indices are used when the summation shall run from $0$ to $3$.}where the real constants $B^{IJK}_{lmn}$ satisfy the symmetry condition \begin{equation} \label{nonlinear.symmetry} B^{IJK}_{lmn}=B^{JIK}_{mln}=B^{IKJ}_{lnm}. \end{equation} The terms not explicitly stated in \eqref{nonlinear} are all of cubic or higher order in $\nabla u, \nabla^2 u$. The null condition that we require assumes that \begin{equation} \label{null.condition} B^{IJK}_{lmn}\xi_I\xi_J\xi_K\xi_l\xi_m\xi_n=0,\quad \text{whenever } \xi\in S^2. \end{equation} Rather than interpreting the equation and null condition further, we refer the reader to \cite{Agemi}, \cite{Si}, and the references therein. The interested reader may also wish to refer to the expository article \cite{SiNotes}. As has become common, for convenience, we will truncate the equations at the quadratic level. Such a truncation does not affect the long-time existence. We then study the following initial-boundary value problem: \begin{equation} \label{main.equation} \begin{cases} (Lu)^I = \partial_l (B^{IJK}_{lmn}\partial_m u^J \partial_n u^K),\quad (t,x)\in {\mathbb R}_+\times {\R^3\backslash\mathcal{K}},\\ u|_{\partial\mathcal{K}}=0,\\ u(0,\, \cdot\, )=f,\quad \partial_t u(0,\, \cdot\, )=g \end{cases} \end{equation} for ``small'' Cauchy data $f,g$. For convenience in the sequel, we shall set $$Q^I(\nabla u,\nabla^2 u) = \partial_l (B^{IJK}_{lmn}\partial_m u^J \partial_n u^K).$$ In order to solve \eqref{main.equation}, we must assume that the initial data satisfy certain well-known compatibility conditions. Letting $J_ku=\{\partial_x^\alpha u\,:\,0\le|\alpha|\le k\}$ and noticing that $\partial_t^k u(0,\, \cdot\, )=\psi_k(J_kf,J_{k-1}g)$, $0\le k\le m$ for formal $H^m$ solutions, we require that the compatibility functions $\psi_k$ -- which depend on $Q$, $J_kf$, and $J_{k-1}g$ -- vanish on ${\partial\mathcal{K}}$ for $0\le k\le m-1$. Smooth data $(f,g)\in C^\infty$ are said to satisfy the compatibility conditions to infinite order if this holds for all $m$. Under these assumptions, we may prove that solutions to \eqref{main.equation} corresponding to small initial displacements exist globally. This is our main result. \begin{theorem} \label{main.theorem} Let ${\mathcal{K}}$ be a bounded, nontrapping obstacle with smooth boundary as above. Assume that $B^{IJK}_{lmn}$ are real constants satisfying \eqref{nonlinear.symmetry} and \eqref{null.condition}. Suppose further that the data $(f,g)\in C^\infty({\R^3\backslash\mathcal{K}})$ satisfy the compatibility conditions to infinite order. 
Then, there are positive constants $\varepsilon_0$ and $N$ so that for all $\varepsilon\le \varepsilon_0$, if \begin{equation} \label{data.smallness} \sum_{|\alpha|\le N} \|\langle x\rangle^{|\alpha|}\partial_x^\alpha f\|_2 +\sum_{|\alpha|\le N-1} \|\langle x\rangle^{|\alpha|+1} \partial_x^\alpha g\|_2\le \varepsilon,\footnotemark \end{equation} then \eqref{main.equation} has a unique global solution $u\in C^\infty([0,\infty)\times {\R^3\backslash\mathcal{K}})$. \footnotetext{Here, and throughout, $\langle x\rangle=\langle r\rangle = \sqrt{1+|x|^2}$ denotes the Japanese bracket.} \end{theorem} This paper is organized as follows. In the remainder of this section, we introduce some notation that will be used throughout. In particular, we introduce the vector fields that shall be utilized. The second section is devoted to estimates related to the energy inequality. These include the exponential decay of local energy, the energy inequality and its higher order variants, and weighted mixed-norm KSS estimates. In the third section, we gather the main pointwise decay estimates that are used to give global existence. As a corollary to these, we obtain the main boundary term estimate that is utilized instead of the star-shapedness assumption which was used, e.g., in \cite{Met1}. The fourth section is devoted to certain Sobolev-type estimates and estimates related to the null condition. Finally, in the last section, we prove Theorem \ref{main.theorem}. \noindent{\em Acknowledgements:} The authors are particularly grateful to T. Sideris and C. Sogge for introducing us to this problem and for numerous related discussions and collaborations. \subsection{Notation} When convenient, we shall set $x_0=t$, $\partial_0=\partial_t$. The space-time gradient will be denoted by $u'=\partial u=(\partial_t u, \nabla_x u)$, but we shall reserve the notation $\nabla$ for $\nabla=\nabla_x =(\partial_1,\partial_2,\partial_3)$. The generators of the spatial rotations are denoted $$\Omega=(\Omega_1,\Omega_2,\Omega_3)=x\times \nabla,$$ where $\times$ is the usual vector cross product. When studying elasticity, it is natural to use the generators of the simultaneous rotations $$\tilde{\Omega}_l = \Omega_l I + U_l,$$ where $$U_1=\begin{bmatrix}0&0&0\\0&0&1\\0&-1&0\end{bmatrix},\quad U_2=\begin{bmatrix}0&0&-1\\0&0&0\\1&0&0\end{bmatrix},\quad U_3=\begin{bmatrix}0&1&0\\-1&0&0\\0&0&0\end{bmatrix}.$$ The scaling operator will be denoted $$S=t\partial_t + r\partial_r -1.$$ This differs slightly from the usual notion of the scaling vector field, but here it is more convenient to use this modification as it properly preserves the null structure. See \cite{Si}. We will set $$Z=\{\partial, \tilde{\Omega}\}, \quad \Gamma=\{\partial,\tilde{\Omega}, S\}.$$ We distinguish these two sets as it will be necessary to produce estimates which require relatively few occurrences of the scaling vector field $S$. This is due to the fact that the coefficients of $S$ can be arbitrarily large in a neighborhood of ${\partial\mathcal{K}}$, while those of $Z$ are bounded. Similar care with $S$ had to be taken in \cite{KSS3}, \cite{MS1}, \cite{MNS1, MNS2}, and \cite{Met1}. A key property of the vector fields is their commutation relations with $L$. In particular, we have that \begin{equation}\label{commutators} [L,\partial]=[L,\tilde{\Omega}]=0,\quad [L,S]=2L.\end{equation} Moreover, we note that the vector fields preserve the null structure of $Q$.
See \cite{Si}.\footnote{See Proposition 3.1.} We will use the projections onto the radial and transverse directions, defined as follows: $$P_1 u = \frac{x}{r}\langle \frac{x}{r}, u\rangle, \quad P_2 u = [I-P_1]u = -\frac{x}{r}\times\Bigl(\frac{x}{r}\times u\Bigr).$$ We shall use $A\lesssim B$ to denote that there is a positive, unspecified constant $C$ so that $A\le CB$, and we will use $S_t=[0,t]\times{\R^3\backslash\mathcal{K}}$ to denote a time-strip of height $t$ in the exterior domain. \bigskip \newsection{Energy and KSS estimates} \subsection{Local energy decay} An important tool in previous studies of nonlinear problems in exterior domains is the decay of local energy. It is here that we require the geometric condition on the obstacle. In the current setting, we have the following result of \cite{Y}. This is an analog of the classical result of \cite{MRS} for the wave equation. \begin{theorem}\label{theorem.local.energy} Let ${\mathcal{K}}\subset\{|x|<1\}\subset {\mathbb R}^3$ be a nontrapping obstacle with smooth boundary. Let $u$ solve \begin{equation} \label{local.energy.equation} \begin{cases} Lu=0,\quad (t,x)\in {\mathbb R}_+\times{\R^3\backslash\mathcal{K}},\\ u|_{\partial\mathcal{K}}=0,\\ \text{supp } u(0,\, \cdot\, ), \partial_t u(0,\, \cdot\, )\subset \{|x|<10\}. \end{cases} \end{equation} Then \begin{equation} \label{local.energy} \Bigl(\int_{\{x\in{\R^3\backslash\mathcal{K}}\,:\, |x|<10\}} |u'(t,x)|^2\:dx\Bigr)^{1/2}\lesssim e^{-ct} \|u'(0,\, \cdot\, )\|_2, \end{equation} for some $c>0$.\end{theorem} In what follows, we shall require a higher order version of \eqref{local.energy}. To establish this, we utilize the following version of elliptic regularity, which appeared in \cite{Met1}. \begin{lemma} \label{lemma.ell.reg} Let ${\mathcal{K}}\subset\{|x|<1\}\subset {\mathbb R}^3$ be an obstacle with smooth boundary. Suppose that $u\in C^\infty ({\mathbb R}_+\times{\R^3\backslash\mathcal{K}})$, $u|_{\partial\mathcal{K}}=0$, and $u$ vanishes for large $|x|$ for each $t$. Then, \begin{equation} \label{ell.reg} \sum_{|\alpha|\le M}\|S^\nu \partial^\alpha u'(t,\, \cdot\, )\|_2\lesssim \sum_\ss{j+\mu\le M+\nu\\\mu\le\nu} \|S^\mu \partial_t^j u'(t,\, \cdot\, )\|_2 +\sum_\ss{|\beta|+\mu\le M+\nu-1\\\mu\le\nu} \|S^\mu \partial^\beta Lu(t,\, \cdot\, )\|_2 \end{equation} and \begin{multline} \label{ell.reg.local} \sum_{|\alpha|\le M} \|S^\nu \partial^\alpha u'(t,\, \cdot\, )\|_{L^2(|x|<4)} \lesssim \sum_\ss{j+\mu\le M+\nu\\\mu\le\nu} \|S^\mu \partial_t^j u'(t,\, \cdot\, )\|_{L^2(|x|<6)} \\+ \sum_\ss{|\beta|+\mu\le M+\nu-1\\\mu\le\nu} \|S^\mu \partial^\beta Lu(t,\, \cdot\, )\|_{L^2(|x|<6)} \end{multline} for any $M$ and $\nu$. \end{lemma} Using \eqref{ell.reg.local} and the fact that $\partial_t$ preserves the Dirichlet boundary conditions, we can immediately establish the following result which is also from \cite{Met1}. \begin{lemma} \label{lemma.local.energy.high} Let ${\mathcal{K}}\subset \{|x|<1\}\subset {\mathbb R}^3$ be a smooth, nontrapping obstacle. Suppose that $u|_{\partial\mathcal{K}}=0$ and $Lu(t,x)=0$ for $|x|>4$ and $t>0$. Suppose also that $u(t,x)=0$ for $t\le 0$. 
Then if $M$ and $\nu$ are fixed and if $c>0$ is as in \eqref{local.energy}, \begin{multline} \label{local.energy.high} \sum_\ss{|\alpha|+\mu\le M+\nu\\\mu\le \nu} \|S^\mu \partial^\alpha u'(t,\, \cdot\, )\|_{L^2(|x|<4)} \lesssim \sum_\ss{|\alpha|+\mu\le M+\nu-1\\\mu\le\nu} \|S^\mu \partial^\alpha Lu(t,\, \cdot\, )\|_2 \\+\int_0^t e^{-(c/2)(t-s)} \sum_\ss{|\alpha|+\mu\le M+\nu\\\mu\le\nu} \|S^\mu \partial^\alpha Lu(s,\, \cdot\, )\|_2\:ds. \end{multline} \end{lemma} \subsection{Energy estimates}\label{energy.estimates.section} In this section, we gather the energy estimates which we shall require. The basic energy estimate is rather standard. In order to establish estimates for $\partial^\alpha u$, we shall use elliptic regularity as above. For energy estimates involving the scaling vector field, we use cutoff techniques from \cite{MS1} and control the resulting commutator with estimates given in Section \ref{boundary}. Finally, we handle the boundary terms that arise when studying $S^\mu Z^\alpha u$ using the KSS estimates given in the following section. We begin with the standard energy inequality for the variable coefficient operator \begin{equation} \label{Lgamma} (L_\gamma u)^I = (Lu)^I + \gamma^{IJ,jk}\partial_j\partial_k u^J. \end{equation} We look at smooth solutions of \begin{equation} \label{perturbed.equation} \begin{cases} L_\gamma u = F,\quad (t,x)\in {\mathbb R}_+\times{\R^3\backslash\mathcal{K}},\\ u|_{\partial\mathcal{K}}=0,\\ u(0,\, \cdot\, )=f,\quad \partial_tu(0,\, \cdot\, )=g \end{cases} \end{equation} assuming that \begin{equation} \label{gamma.symmetry} \gamma^{IJ,jk}=\gamma^{JI,kj} \end{equation} and \begin{equation} \label{gamma.smallness} \sum_{I,J=1}^3 \sum_{j,k=1}^3 \|\gamma^{IJ,jk}(t,\, \cdot\, )\|_\infty\le \frac{\delta}{1+t} \end{equation} for $\delta>0$ sufficiently small, depending on $c_1$ and $c_2$. We define the energy-momentum vector associated to $L_\gamma$, \begin{equation} \label{e0} e_0[u]=\frac{1}{2}|\partial_t u|^2+\frac{c_2^2}{2}|\nabla u|^2 + \frac{c_1^2-c_2^2}{2}(\nabla\cdot u)^2 -\frac{1}{2}\gamma^{IJ,ij}\partial_i u^I \partial_j u^J, \end{equation} \begin{equation}\label{ek} e_k[u]=-c_2^2\partial_t u^I \partial_k u^I - (c_1^2-c_2^2)\partial_t u^k (\nabla\cdot u) +\gamma^{IJ,kj}\partial_j u^J \partial_t u^I,\quad k=1,2,3. \end{equation} With \eqref{gamma.symmetry}, it is easy to check that \begin{equation} \label{e.div} \partial_0 e_0[u]+\partial_k e_k[u]=\partial_t u^I (L_\gamma u)^I -\frac{1}{2}(\partial_t \gamma^{IJ,ij})\partial_i u^I \partial_j u^J + (\partial_k \gamma^{IJ,kj})\partial_j u^J \partial_t u^I. \end{equation} Setting $$E_M(t)=\int \sum_{j=0}^M e_0(\partial_t^j u)(t,x)\:dx,$$ the following results immediately from \eqref{e.div} and that $\partial_t$ preserves the boundary conditions. \begin{lemma} \label{lemma.energy.dt} Assume that $\gamma$ satisfies \eqref{gamma.symmetry} and \eqref{gamma.smallness}, and let $u$ be a smooth solution to \eqref{perturbed.equation} which vanishes for large $|x|$ for each $t$. Then \begin{equation} \label{energy.dt} \partial_t E^{1/2}_M(t)\lesssim \sum_{j=0}^M \|L_\gamma \partial_t^j u(t,\, \cdot\, )\|_2 + \|\gamma'(t,\, \cdot\, )\|_\infty E_M^{1/2}(t) \end{equation} for any fixed $M=0,1,2,\dots$.\footnote{Here, we have set $\|\gamma'(t,\, \cdot\, )\|_\infty = \sum_{I,J=1}^3\sum_{j,k=0}^3 \sum_{\beta=0}^3 \|\partial_\beta \gamma^{IJ,jk}(t,\, \cdot\, )\|_\infty.$} \end{lemma} We next examine energy estimates involving the scaling vector field. 
To do so, we set $$\tilde{S}=t\partial_t + \eta(x) r\partial_r -1$$ where $\eta\in C^\infty({\mathbb R}^3)$ with $\eta(x)\equiv 0$ for $x\in {\mathcal{K}}$ and $\eta(x)\equiv 1$ for $|x|>1$. For $$X_{\nu,j}=\int e_0(\tilde{S}^\nu \partial_t^j u)(t,x)\:dx,$$ we have the following. \begin{lemma} \label{lemma.energy.tilde.S} If $u$ is a smooth solution to \eqref{perturbed.equation} and vanishes for large $|x|$ for each $t$, then \begin{multline} \label{energy.tilde.S} \partial_t X_{\nu,j}\lesssim X^{1/2}_{\nu,j} \|\tilde{S}^\nu \partial_t^j L_\gamma u(t,\, \cdot\, )\|_2 + \|\gamma'(t,\, \cdot\, )\|_\infty X_{\nu,j} \\+X_{\nu,j}^{1/2} \|[\tilde{S}^\nu \partial_t^j,\gamma^{kj}\partial_k\partial_l]u(t,\, \cdot\, )\|_2 + X_{\nu,j}^{1/2}\sum_{\mu\le\nu-1} \|S^\mu \partial_t^j Lu(t,\, \cdot\, )\|_2 \\+ X^{1/2}_{\nu,j}\sum_\ss{|\alpha|+\mu\le j+\nu\\\mu\le\nu-1} \|S^\mu \partial^\alpha u'(t,\, \cdot\, ) \|_{L^2(\{|x|<1\})} \end{multline} for any fixed $\nu,j$.\footnote{For a differential operator $P=P(t,x,D_t,D_x)$, we set $[P,\gamma^{kl}\partial_k\partial_l]u=\sum_{I,J=1}^3 \sum_{k,l=1}^3 |[P,\gamma^{IJ,kl}\partial_k\partial_l] u^J|.$} \end{lemma} \noindent{\em Proof of Lemma \ref{lemma.energy.tilde.S}:} As $\tilde{S}$ and $\partial_t$ preserve the boundary condition, it follows from \eqref{energy.dt} that \begin{equation} \label{energy.tilde.S.1} \partial_t X_{\nu,j}\lesssim X_{\nu,j}^{1/2} \|L_\gamma \tilde{S}^\nu \partial_t^j u(t,\, \cdot\, )\|_2 + \|\gamma'(t,\, \cdot\, )\|_\infty X_{\nu,j}. \end{equation} We next examine the commutator \begin{align*} |L_\gamma \tilde{S}^\nu \partial_t^j u| &\le |\tilde{S}^\nu \partial_t^j L_\gamma u| + |[\tilde{S}^\nu \partial_t^j, \gamma^{kl}\partial_k\partial_l]u| + |[\tilde{S}^\nu, L]\partial_t^j u|\\ &\le |\tilde{S}^\nu \partial_t^j L_\gamma u|+ |[\tilde{S}^\nu \partial_t^j, \gamma^{kl}\partial_k \partial_l] u| + |[S^\nu,L]\partial_t^j u| + |[\tilde{S}^\nu - S^\nu, L]\partial_t^j u|\\ &\lesssim |\tilde{S}^\nu \partial_t^j L_\gamma u| + |[\tilde{S}^\nu \partial_t^j, \gamma^{kl} \partial_k\partial_l]u|+\sum_{\mu\le\nu-1} |S^\mu \partial_t^j Lu|\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad + {\mathbf 1}_{\{|x|<1\}}(x) \sum_\ss{|\alpha|+\mu\le j+\nu\\\mu\le\nu-1} |S^\mu \partial^\alpha u'|. \end{align*} Using this in \eqref{energy.tilde.S.1} completes the proof. \qed Using the previous lemma and elliptic regularity, we shall prove \begin{proposition} \label{prop.energy.L.d} Assume \eqref{gamma.symmetry} and \eqref{gamma.smallness} for $\delta$ sufficiently small, and let $u$ be a smooth solution to \eqref{perturbed.equation}. Suppose further that \begin{equation} \label{gamma.smallness.2} \|\gamma'(t,\, \cdot\, )\|_\infty\le \delta/(1+t) \end{equation} and \begin{multline} \label{dt.bound} \sum_\ss{j+\mu\le N+\nu\\\mu\le\nu} \Bigl(\|\tilde{S}^\mu \partial_t^j L_\gamma u(t,\, \cdot\, )\|_2 + \|[\tilde{S}^\mu \partial_t^j, \gamma^{kl}\partial_k\partial_l]u(t,\, \cdot\, )\|_2\Bigr) \\\le \frac{\delta}{1+t} \sum_\ss{j+\mu\le N+\nu\\\mu\le\nu} \|\tilde{S}^\mu \partial_t^j u'(t,\, \cdot\, )\|_2 + H_{\nu,N}(t) \end{multline} for some fixed $N$ and $\nu$ and some function $H_{\nu,N}(t)$.\footnote{In the sequel, $H_{\nu,N}(s)$ will involve $\|\langle x\rangle^{-1/2} S^\mu Z^\alpha u'(s,\, \cdot\, )\|^2_2$ for $|\alpha|+\mu\ll N+\nu$. 
As such, this term will be handled using the KSS estimates in the next section.} Then, \begin{multline} \label{energy.L.d} \sum_\ss{|\alpha|+\mu\le N+\nu\\\mu\le\nu} \|S^\mu \partial^\alpha u'(t,\, \cdot\, )\|_2 \lesssim \sum_\ss{|\alpha|+\mu\le N+\nu-1\\\mu\le\nu} \|S^\mu \partial^\alpha Lu(t,\, \cdot\, )\|_2 \\+ (1+t)^{A\delta} \sum_\ss{j+\mu\le N+\nu\\\mu\le\nu} X^{1/2}_{\mu,j}(0) \\+(1+t)^{A\delta}\Bigl(\int_0^t \sum_\ss{|\alpha|+\mu\le N+\nu-1\\\mu\le \nu-1} \|S^\mu \partial^\alpha Lu(s,\, \cdot\, )\|_2\:ds + \int_0^t H_{\nu,N}(s)\:ds\Bigr) \\+(1+t)^{A\delta} \int_0^t \sum_\ss{|\alpha|+\mu\le N+\nu\\\mu\le\nu-1} \|S^\mu \partial^\alpha u' (s,\, \cdot\, )\|_{L^2(|x|<1)}\:ds, \end{multline} for some constant $A>0$. \end{proposition} \noindent{\em Proof of Proposition \ref{prop.energy.L.d}:} For $\delta$ in \eqref{gamma.smallness} sufficiently small, \begin{equation}\label{energy.sim} \sum_\ss{j+\mu\le N+\nu\\\mu\le \nu} \|\tilde{S}^\mu \partial_t^j u'(t,\, \cdot\, )\|_2 \lesssim \sum_\ss{j+\mu\le N+\nu\\\mu\le\nu} X^{1/2}_{\mu,j}(t).\end{equation} By \eqref{energy.tilde.S}, \eqref{gamma.smallness.2}, and \eqref{dt.bound}, it thus follows that \begin{multline} \label{energy.L.d.1} \partial_t \sum_\ss{j+\mu\le N+\nu\\\mu\le\nu} X^{1/2}_{\mu,j}(t)\le \frac{A\delta}{1+t} \sum_\ss{j+\mu\le N+\nu\\\mu\le\nu} X^{1/2}_{\mu,j}(t)+AH_{\nu,N}(t) \\+A\sum_\ss{j+\mu\le N+\nu-1\\\mu\le\nu-1} \|S^\mu \partial_t^j Lu(t,\, \cdot\, )\|_2 +A\sum_\ss{|\alpha|+\mu\le N+\nu\\\mu\le\nu-1} \|S^\mu \partial^\alpha u'(t,\, \cdot\, )\|_{L^2(|x|<1)}. \end{multline} By Gronwall's inequality, $\sum_\ss{j+\mu\le N+\nu\\\mu\le\nu} X^{1/2}_{\mu,j}$ satisfies the desired bound. Thus, by combining this estimate for $X^{1/2}_{\mu,j}$, \eqref{energy.sim}, \eqref{ell.reg}, and \eqref{energy.L.d.1}, the estimate \eqref{energy.L.d} follows.\qed Finally, we establish the necessary energy estimates for $S^\mu Z^\alpha u$. To this end, we set $$Y_{N,\nu}(t)=\sum_\ss{|\alpha|+\mu\le N+\nu\\\mu\le\nu} \int e_0(S^\mu Z^\alpha u)(t,x)\:dx.$$ We then argue as in the proof of Lemma \ref{lemma.energy.dt} with $E_M(t)$ replaced by $Y_{N,\nu}(t)$. The boundary terms that arise mesh well with the KSS estimates of the next section. \begin{proposition} \label{prop.energy.S.Z} Assume \eqref{gamma.symmetry} and \eqref{gamma.smallness}. Suppose that $u$ solves \eqref{perturbed.equation} and vanishes for large $|x|$ for each $t$. Then, \begin{multline} \label{energy.S.Z} \partial_t Y_{N,\nu}\lesssim Y_{N,\nu}^{1/2} \sum_\ss{|\alpha|+\mu\le N+\nu\\\mu\le\nu} \|L_\gamma S^\mu Z^\alpha u(t,\, \cdot\, )\|_2 + \|\gamma'(t,\, \cdot\, )\|_\infty Y_{N,\nu}\\+ \sum_\ss{|\alpha|+\mu\le N+\nu+1\\\mu\le\nu} \|S^\mu \partial^\alpha u'(t,\, \cdot\, )\|^2_{L^2(|x|<1)}. \end{multline} \end{proposition} \noindent{\em Proof of Proposition \ref{prop.energy.S.Z}:} By arguing as in the proof of Lemma \ref{lemma.energy.dt}, it follows that \begin{multline} \label{Y.div} \partial_t Y_{N,\nu} \lesssim Y_{N,\nu}^{1/2} \sum_\ss{|\alpha|+\mu\le N+\nu\\\mu\le\nu} \|L_\gamma S^\mu Z^\alpha u(t,\, \cdot\, )\|_2 + \|\gamma'(t,\, \cdot\, )\|_\infty Y_{N,\nu} \\+ \sum_\ss{|\alpha|+\mu\le N_0+\nu_0\\\mu\le\nu_0} \int_{\partial\mathcal{K}} |e_k(S^\mu Z^\alpha u)(t,y) n_k|\:d\sigma(y) \end{multline} where $n=(n_1,n_2,n_3)$ is the outward normal to ${\mathcal{K}}$ at a point $x\in {\partial\mathcal{K}}$ and $e_k[\, \cdot\, ]$ are as in \eqref{ek}. 
Recalling that ${\mathcal{K}}\subset \{|x|<1\}$, $$\sum_\ss{|\alpha|+\mu\le N+\nu\\\mu\le\nu} |S^\mu Z^\alpha u(t,x)|\lesssim \sum_\ss{|\alpha|+\mu\le N+\nu\\\mu\le\nu} |S^\mu \partial^\alpha u(t,x)|,\quad x\in{\partial\mathcal{K}},$$ and thus, by a trace theorem, we have that the last term in \eqref{Y.div} is $$\lesssim \int_{\{x\in{\R^3\backslash\mathcal{K}}\,:\,|x|<1\}} \sum_\ss{|\alpha|+\mu\le N+\nu+1\\\mu\le \nu} |S^\mu \partial^\alpha u'(t,x)|^2\:dx,$$ which completes the proof.\qed \subsection{KSS estimates} In this section, we present a class of weighted, mixed-norm estimates, called KSS estimates, which are particularly useful for estimating the boundary term in \eqref{energy.S.Z} and for dealing with certain technicalities regarding the distribution of the occurrences of the scaling vector field in the proof of the main theorem. Such estimates were first used to study nonlinear problems in \cite{KSS2} and have played a fundamental role in previous studies of problems in exterior domains. The estimates that we give are from \cite{Met1}, and we refer the interested reader to that article for detailed proofs. They are based on the boundaryless estimates for the wave equation from \cite{Sterb} and \cite{MS3}. The corresponding estimates for elasticity in the boundaryless case follow from a Helmholtz-Hodge decomposition. In order to prove estimates in the exterior domain, one relies on the decay of local energy when $x$ is near the boundary of the obstacle and uses the boundaryless estimates when $|x|$ is large. \begin{proposition} \label{prop.kss} Suppose that ${\mathcal{K}}\subset\{|x|<1\}\subset{\mathbb R}^3$ is a nontrapping obstacle with smooth boundary. Suppose further that $u\in C^\infty$ satisfies $u|_{\partial\mathcal{K}}=0$, $u(t,x)=0$ for $t\le 0$, and vanishes for large $|x|$ for every $t$. Then, \begin{multline} \label{KSS.S.d} (\log(2+T))^{-1/2} \sum_\ss{|\alpha|+\mu\le M+\nu\\\mu\le\nu} \|\langle x\rangle^{-1/2} S^\mu \partial^\alpha u'\|_{L^2_tL^2_x(S_T)} \\\lesssim \int_0^T \sum_\ss{|\alpha|+\mu\le M+\nu\\\mu\le\nu} \|S^\mu \partial^\alpha Lu(s,\, \cdot\, )\|_2\:ds +\sum_\ss{|\alpha|+\mu\le M+\nu-1\\\mu\le\nu} \|S^\mu \partial^\alpha Lu\|_{L^2_tL^2_x(S_T)} \end{multline} and \begin{multline} \label{KSS.S.Z} (\log(2+T))^{-1/2}\sum_\ss{|\alpha|+\mu\le M+\nu\\\mu\le \nu} \|\langle x\rangle^{-1/2} S^\mu Z^\alpha u'\|_{L^2_tL^2_x(S_T)}\\\lesssim \int_0^T \sum_\ss{|\alpha|+\mu\le M+\nu\\\mu\le\nu} \|S^\mu Z^\alpha Lu(s,\, \cdot\, )\|_2\:ds +\sum_\ss{|\alpha|+\mu\le M+\nu-1\\\mu\le\nu} \|S^\mu Z^\alpha Lu\|_{L^2_tL^2_x(S_T)} \end{multline} for any fixed $M,\nu$ and $T>0$. \end{proposition} \bigskip \newsection{Pointwise estimates and boundary term estimates} In this section, we shall present our main pointwise decay estimates. These are analogues of those used in \cite{KSS3} for the wave equation. See also \cite{MS1}, \cite{MNS1, MNS2}. In the process of proving such estimates, we shall also prove the boundary term estimate which is required to handle the last term of \eqref{energy.L.d}. This is reminiscent of ideas from \cite{MS1, MS2}. We begin by looking at the solution to the boundaryless problem \begin{equation} \label{boundaryless.equation} \begin{cases} Lv(t,x)=0,\\ v(0,\, \cdot\, )=f,\quad \partial_t v(0,\, \cdot\, )=g.
\end{cases} \end{equation} By arguing using spherical means\footnote{See \cite{J}.}, we have that \begin{multline} \label{fund.soln} 4\pi v_I(t,x)=\int_{S^2} \Bigl[(2\delta_{IJ} - 4 y_I y_J)f_J(x+c_2ty) \\\qquad\qquad\qquad\qquad\qquad\qquad\qquad +t (\delta_{IJ}-y_Iy_J)(g_J(x+c_2ty)+c_2 y_K \nabla_Kf_J(x+c_2ty))\Bigr]\:d\sigma(y) \\\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+\int_{S^2} \Bigl[(-\delta_{IJ}+4y_Iy_J)f_J(x+c_1ty) \\\qquad\qquad\qquad\qquad\qquad\qquad\qquad+t y_Iy_J(g_J(x+c_1ty)+c_1 y_K \nabla_Kf_J(x+c_1ty))\Bigr]\:d\sigma(y) \\ -\int_{c_2t}^{c_1t} \int_{S^2} r^{-1} (\delta_{IJ}-3y_Iy_J)(tg_J(x+ry)+f_J(x+ry))\:d\sigma(y)\:dr. \end{multline} After a simple change of variables, this is \begin{multline} \label{fund.soln.2} 4\pi v_I(t,x)=\int_{S^2} \Bigl[(2\delta_{IJ} - 4 y_I y_J)f_J(x+c_2ty) \\\qquad\qquad\qquad\qquad\qquad\qquad\qquad +t (\delta_{IJ}-y_Iy_J)(g_J(x+c_2ty)+c_2 y_K \nabla_Kf_J(x+c_2ty))\Bigr]\:d\sigma(y) \\\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!+\int_{S^2} \Bigl[(-\delta_{IJ}+4y_Iy_J)f_J(x+c_1ty) \\\qquad\qquad\qquad\qquad\qquad\qquad\qquad+t y_Iy_J(g_J(x+c_1ty)+c_1 y_K \nabla_Kf_J(x+c_1ty))\Bigr]\:d\sigma(y) \\ -\int_{c_2}^{c_1} \int_{S^2} c^{-1} (\delta_{IJ}-3y_Iy_J)(tg_J(x+cty)+f_J(x+cty))\:d\sigma(y)\:dc. \end{multline} Using this representation, we shall be able to prove estimates on $v$ using techniques similar to those for wave equations. \subsection{Pointwise estimates in ${\mathbb R}^3$} In \cite{KSS3}, the authors adapted a $L^1-L^\infty$ H\"ormander-type estimate to eliminate the dependence on Lorentz invariance. As such, these estimates could be applied to study multiple speed wave equations and certain boundary value problems. In this section, we prove analogues of these pointwise estimates. The first of these estimates is for the homogeneous equation \eqref{boundaryless.equation}. \begin{lemma} \label{lemma.ptwise.bdyless.data} Let $v$ be a solution to \eqref{boundaryless.equation}. Then, \begin{equation} \label{ptwise.bdyless.data} (1+t+|x|)|v(t,x)|\lesssim \sum_{|\alpha|\le 4} \|\langle x\rangle^{|\alpha|} \partial^\alpha f\|_2 +\sum_{|\alpha|\le 3} \|\langle x\rangle^{1+|\alpha|}\partial^\alpha g\|_2. \end{equation} \end{lemma} \noindent{\em Proof of Lemma \ref{lemma.ptwise.bdyless.data}:} Using the following estimates for the wave equation from \cite{MNS2}\footnote{See the proof of Lemma 2.2.} \begin{equation} \label{wave.data.1} (1+t+|x|) t\int_{S^2} |h(x+ty)|\:d\sigma(y) \lesssim \sum_\ss{|\alpha|+\mu\le 3\\\mu\le 1} \int_{{\mathbb R}^3} |(|z|\partial_{|z|})^\mu Z^\alpha h(z)|\:\frac{dz}{\langle z\rangle} \end{equation} and \begin{equation} \label{wave.data.2} (1+t+|x|)\int_{S^2} |h(x+ty)|\:d\sigma(y)\lesssim \sum_\ss{|\alpha|+\mu\le 3\\\mu\le 1} \int_{{\mathbb R}^3} |(|z|\partial_{|z|})^\mu Z^\alpha h(z)|\:\frac{dz}{\langle z\rangle^2}, \end{equation} \eqref{ptwise.bdyless.data} follows from \eqref{fund.soln.2} and the Schwarz inequality.\qed We now prove the corresponding result for the inhomogeneous equation. \begin{lemma} \label{lemma.ptwise.bdyless.inhom} Let $w$ solve $L w=G$ for $(t,x)\in {\mathbb R}_+\times{\mathbb R}^3$, and suppose that $w(t,x)=0$ for $t\le 0$. Then, \begin{equation} \label{ptwise.bdyless.inhom} (1+t+|x|)|w(t,x)|\lesssim \sum_\ss{|\alpha|+\mu\le 3\\\mu\le 1} \int_0^t \int_{{\mathbb R}^3} |S^\mu Z^\alpha G(s,y)|\:\frac{dy\:ds}{|y|}. \end{equation} Suppose further that $G(s,y)=0$ when $|y|>20c_1 s$. 
Then, \begin{equation} \label{ptwise.bdyless.inhom.localized} (1+t)|w(t,x)|\lesssim \sum_\ss{|\alpha|+\mu\le 3\\\mu\le 1} \int_{\theta t}^t \int_{{\mathbb R}^3} |S^\mu Z^\alpha G(s,y)|\:\frac{dy\:ds}{|y|},\quad \text{for } |x|<c_2 t/10, \end{equation} where $\theta$ is some constant depending on $c_2$. \end{lemma} \noindent{\em Proof of Lemma \ref{lemma.ptwise.bdyless.inhom}:} This follows as in the proof of the previous lemma, using the following bound from \cite{KSS3}\footnote{See the proof of Proposition 2.1.} $$\int_0^t \int_{S^2} (t-s) |F(s,x-(t-s)y)|\:d\sigma(y)\:ds \lesssim \sum_\ss{|\alpha|+\mu\le 3\\\mu\le 1} \int_0^t \int_{{\mathbb R}^3} |S^\mu Z^\alpha F(s,y)|\: \frac{dy\:ds}{|y|}.$$ The lemma then follows from \eqref{fund.soln.2} and Duhamel's principle. The second estimate follows from the appropriate Huygens' principle for $L$. See Fig. \ref{trsgsvtime}.\qed \begin{figure}[h] \begin{center} \epsfxsize=15cm \epsffile{jmbt_2.eps} \end{center} \caption{An illustration of the definition of $\theta t$.} \label{trsgsvtime} \end{figure} By Duhamel's principle and the positivity of \eqref{fund.soln.2}, we also have \begin{lemma} \label{lemma.ptwise.bdyless.2}\footnote{We note that a portion of this lemma has previously appeared in \cite{XQ}.} Let $v$ solve \eqref{boundaryless.equation}. Then, \begin{multline} \label{ptwise.bdyless.data.2} |x||v(t,x)|\lesssim \sum_{i=1,2} \int_{||x|-c_i(t-s)|}^{|x|+c_i(t-s)} \sup_{|\theta|=1} |f(\rho\theta)|\:d\rho +\int_{c_2}^{c_1} \int_{||x|-c(t-s)|}^{|x|+c(t-s)} \sup_{|\theta|=1} |f(\rho\theta)|\:d\rho\:dc \\+\sum_{i=1,2} \int_{||x|-c_i(t-s)|}^{|x|+c_i(t-s)} \sup_{|\theta|=1} |\nabla f(\rho\theta)| \rho\:d\rho \\+\sum_{i=1,2} \int_{||x|-c_i(t-s)|}^{|x|+c_i(t-s)} \sup_{|\theta|=1} |g(\rho\theta)|\rho\:d\rho +\int_{c_2}^{c_1} \int_{||x|-c(t-s)|}^{|x|+c(t-s)} \sup_{|\theta|=1} |g(\rho\theta)|\rho\:d\rho\:dc. \end{multline} Moreover, let $w$ solve $L w = G$ for $(t,x)\in {\mathbb R}_+\times{\mathbb R}^3$, and suppose that $w(t,x)=0$ for $t\le 0$. Then \begin{multline} \label{ptwise.bdyless.inhom.2} |x||w(t,x)|\lesssim \sum_{i=1,2}\int_0^t \int_{||x|-c_i(t-s)|}^{|x|+c_i(t-s)} \sup_{|\theta|=1} |G(s,\rho\theta)|\rho \:d\rho\:ds \\+\int_{c_1}^{c_2} \int_0^t \int_{||x|-c(t-s)|}^{|x|+c(t-s)} \sup_{|\theta|=1} |G(s,\rho\theta)|\rho\: d\rho\:ds\:dc. \end{multline} \end{lemma} \subsection{Pointwise estimates in ${\R^3\backslash\mathcal{K}}$} Here, we wish to establish the main decay estimates which shall be used in the sequel. At the first order of difficulty, these are exterior domain analogs of \eqref{ptwise.bdyless.data} and \eqref{ptwise.bdyless.inhom}. For these estimates, we shall look at solutions to the following \begin{equation} \label{ptwise.equation} \begin{cases} Lu(t,x)=F(t,x),\quad (t,x)\in {\mathbb R}_+\times{\R^3\backslash\mathcal{K}},\\ u(t,x)|_{{\partial\mathcal{K}}}=0,\\ u(0,x)=f(x),\quad \partial_t u(0,x)=g(x),\\ \text{supp } f,g \subset \{|x|\ge 6\}. \end{cases} \end{equation} For such solutions, we shall have the following estimates. \begin{theorem} \label{theorem.ptwise} Let ${\mathcal{K}}\subset{\mathbb R}^3$ be a bounded, nontrapping obstacle with smooth boundary. 
Then smooth solutions of \eqref{ptwise.equation} satisfy \begin{multline} \label{ptwise} (1+t+|x|)|S^\nu Z^\alpha u(t,x)| \lesssim \sum_\ss{j+|\beta|+k\le M+\nu+7\\j\le 1} \|\langle x\rangle^{j+|\beta|} \nabla^\beta \partial_t^{k+j} u(0,\, \cdot\, )\|_2 \\+ \int_0^t \int_{\R^3\backslash\mathcal{K}} \sum_\ss{|\beta|+\mu\le M+\nu+6\\\mu\le\nu+1} |S^\mu Z^\beta F(s,y)|\:\frac{dy\:ds}{|y|} \\+\int_0^t \sum_\ss{|\beta|+\mu\le M+\nu+3\\\mu\le\nu+1} \|S^\mu \partial^\beta F(s,\, \cdot\, )\|_{L^2(\{|x|<4\})}\:ds \end{multline} for any $|\alpha|=M$ and any $\nu$. \end{theorem} \noindent{\em Proof of Theorem \ref{theorem.ptwise}:} This proof resembles those in \cite{KSS3} and \cite{MS1} for the wave equation quite closely. A portion of this argument was also given previously in \cite{XQ}. As a first reduction, we prove that \begin{multline} \label{ptwise.1} (1+t+|x|)|S^\nu Z^\alpha u(t,x)|\lesssim \sum_\ss{j+|\beta|+k\le M+\nu+4\\j\le 1} \|\langle x\rangle^{j+|\beta|}\nabla^\beta \partial_t^{k+j} u(0,\, \cdot\, )\|_2 \\+\int_0^t \int_{\R^3\backslash\mathcal{K}} \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+3\\\mu\le\nu+1} |S^\mu Z^\beta F(s,y)| \:\frac{dy\:ds}{|y|} \\+\sup_{|y|\le 2, 0\le s\le t} (1+s) \sum_\ss{|\beta|+\mu\le |\alpha|+\nu\\\mu\le\nu} \Bigl(|S^\mu Z^\beta u'(s,y)| + |S^\mu Z^\beta u(s,y)|\Bigr). \end{multline} \noindent{\em Proof of \eqref{ptwise.1}:} To do this, we follow the proof of Lemma 4.2 of \cite{KSS3}. The estimate is obvious for $|x|<2$. Thus, for the remainder of the proof, we shall assume that $|x|\ge 2$. As such, for a smooth $\rho$ satisfying $\rho(r)\equiv 1$ for $r\ge 2$ and $\rho(r)\equiv 0$ for $r\le 1$, it suffices to establish the estimate for $w(t,x)=\rho(|x|) S^\nu Z^\alpha u(t,x)$, which solves the boundaryless\footnote{Recall that we are assuming ${\mathcal{K}}\subset\{|x|<1\}$.} equation \begin{multline*} Lw=\rho L S^\nu Z^\alpha u - 2c_2^2\nabla \rho \cdot \nabla (S^\nu Z^\alpha u) - c_2^2(\Delta \rho) S^\nu Z^\alpha u \\-(c_1^2-c_2^2)\nabla(\nabla\rho\cdot S^\nu Z^\alpha u) - (c_1^2-c_2^2)\nabla\rho \nabla \cdot (S^\nu Z^\alpha u). \end{multline*} We write $w=w_1+w_2$ where $Lw_1=\rho L S^\nu Z^\alpha u$ with $(w_1,\partial_t w_1)(0,\, \cdot\, )= (f,g)$. The estimate for $w_1$ follows from \eqref{ptwise.bdyless.data} and \eqref{ptwise.bdyless.inhom}. By \eqref{ptwise.bdyless.inhom.2}, we see that \begin{multline}\label{ptwise.w0} |w_2(t,x)|\lesssim \frac{1}{|x|}\sum_{i=1,2}\int_0^t \int_{|r-c_i(t-s)|}^{r+c_i(t-s)} \sup_{|\theta|=1} |G(s,\rho\theta)|\rho\:d\rho\:ds \\+\frac{1}{|x|}\int_{c_2}^{c_1} \int_0^t \int_{|r-c(t-s)|}^{r+c(t-s)} \sup_{|\theta|=1} |G(s,\rho\theta)|\rho\:d\rho\:ds\:dc \end{multline} where $$ G(t,x)=Lw-\rho LS^\nu Z^\alpha u.$$ We shall now work with the last term in \eqref{ptwise.w0}. The bound for the other term follows similarly. By noting that $G$ vanishes for $|x|\ge 2$, it follows that we must have $$-2\le |x|-c(t-s)\le 2$$ for the $d\rho$ integral to be nonzero. 
That is, we must have $$\frac{ct-|x|-2}{c}\le s\le \frac{ct-|x|+2}{c}.$$ Thus, we conclude that the last term in \eqref{ptwise.w0} is \begin{multline*} \lesssim \frac{1}{|x|}\int_{c_2}^{c_1} \frac{1}{1+|ct-|x||}\\\times\sum_\ss{|\beta|+\mu\le M+\nu\\\mu\le\nu} \sup_\ss{\frac{ct-|x|-2}{c}\le s\le \frac{ct-|x|+2}{c}\\ |y|\le 2} (1+s)\Bigl(|S^\mu Z^\beta u'(s,y)| + |S^\mu Z^\beta u(s,y)|\Bigr)\:dc \end{multline*} from which the bound for $|w_2|$ follows.\qed In order to complete the proof of Theorem \ref{theorem.ptwise}, it will suffice to show that $$(1+t)\sum_\ss{|\beta|+\mu\le M+\nu+1\\\mu\le\nu}\sup_{|x|<2} |S^\mu \partial^\beta u(t,x)|$$ is bounded by the right side of \eqref{ptwise}. Using smooth cutoffs, we may examine the following cases separately: \begin{enumerate} \item[{\em Case 1:}] $u$ has vanishing data and $F(s,y)$ vanishes for $|y|>4$ \item[{\em Case 2:}] $F(s,y)=0$ for $|y|<3$. \end{enumerate} We use the following consequence of the Fundamental Theorem of Calculus and Sobolev embedding \begin{equation} \label{FTC} \begin{split} (1+t)\sum_\ss{|\beta|+\mu\le M+\nu+1\\\mu\le\nu} \sup_{|x|<2} &|S^\mu \partial^\beta u(t,x)|\\ &\lesssim \int_0^t \sum_\ss{|\beta|+\mu\le M+\nu+2\\\mu\le \nu, j\le 1} \|(s\partial_s)^j S^\mu \partial^\beta u'(s,\, \cdot\, )\|_{L^2(\{|x|<4\})}\:ds\\ &\lesssim \int_0^t \sum_\ss{|\beta|+\mu\le M+\nu+3\\\mu\le\nu+1}\|S^\mu \partial^\beta u'(s,\, \cdot\, )\|_{L^2(\{|x|<4\})}\:ds. \end{split} \end{equation} Here, for the first inequality, we have also used the fact that the Dirichlet boundary conditions permit us to bound $u$ locally by $u'$. The bound in {\em Case 1} now follows from an application of \eqref{local.energy.high}. To complete the proof, we need only examine {\em Case 2}. Here, we write $u=w+u_r$ where $w$ solves the boundaryless equation $Lw=F$ with $(w,\partial_t w)(0,\, \cdot\, )=(u,\partial_t u)(0,\, \cdot\, )$. We fix $\eta\in C^\infty({\mathbb R}^3)$ with $\eta(x)\equiv 1$ for $|x|<2$ and $\eta(x)\equiv 0$ for $|x|\ge 3$. Letting $\tilde{u}=\eta w + u_r$, we notice that $u=\tilde{u}$ for $|x|<2$, that $\tilde{u}$ solves \begin{equation}\label{tildeu.equation} L\tilde{u} = -2c_2^2\nabla \eta\cdot\nabla w - c_2^2(\Delta\eta) w - (c_1^2-c_2^2) \nabla(\nabla\eta\cdot w) - (c_1^2-c_2^2)\nabla\eta \nabla\cdot w, \end{equation} and that $\tilde{u}$ has vanishing Cauchy data. As the right side vanishes unless $2\le |x|\le 3$, we may apply the result of the previous case to see that $$ (1+t)\sum_\ss{|\beta|+\mu\le M+\nu+1\\\mu\le\nu} \sup_{|x|<2} |S^\mu \partial^\beta \tilde{u}(t,x)| \lesssim \int_0^t \sum_\ss{|\beta|+\mu\le M+\nu+4\\\mu\le\nu} \|S^\mu \partial^\beta w(s,\, \cdot\, )\|_{L^\infty(2\le |x|\le 3)}\:ds. $$ It remains only to bound the term on the right of the preceding equation. To do this, we apply \eqref{ptwise.bdyless.data.2}, \eqref{ptwise.bdyless.inhom.2}, and Sobolev's lemma on $S^2$. For the forcing term, for example, we have $$\sum_\ss{|\beta|+\mu\le M+\nu+6\\\mu\le \nu}\int_0^t \int_{c_2}^{c_1} \int_0^s \int_{|c(s-\tau)-|y||\le 4} |S^\mu Z^\beta F(\tau,y)|\:\frac{dy\:d\tau}{|y|}\:dc\:ds$$ corresponding to the last term in \eqref{ptwise.bdyless.inhom.2}. For fixed $c$, we note that the sets $\Lambda_s = \{(\tau,y)\,:\, 0\le \tau\le s,\,|c(s-\tau)-|y||\le 4\}$ have finite overlap. I.e., for $|s-s'|\ge 10$, $\Lambda_s\cap\Lambda_{s'}=\emptyset$. Thus, we see that the desired bound holds. 
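To quantify the finite overlap claim used above: if $(\tau,y)\in\Lambda_s\cap\Lambda_{s'}$, then the triangle inequality gives $$c|s-s'|\le \bigl|c(s-\tau)-|y|\bigr|+\bigl|c(s'-\tau)-|y|\bigr|\le 8,$$ so that $|s-s'|\le 8/c\le 8/c_2$; in other words, for each fixed $(\tau,y)$ the set of $s$ with $(\tau,y)\in\Lambda_s$ is an interval of length at most $8/c_2$.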
Similar arguments can be used to estimate the remaining terms in \eqref{ptwise.bdyless.data.2} and \eqref{ptwise.bdyless.inhom.2}, which completes the proof. \qed We will also require the following pointwise estimate which follows from arguments similar to those in \cite{MNS1}. \begin{theorem} \label{theorem.ptwise.2} Suppose ${\mathcal{K}}\subset \{|x|<1\}\subset {\mathbb R}^3$ is a nontrapping obstacle with smooth boundary. Let $u$ solve \begin{equation} \label{no.data.equation} \begin{cases} Lu(t,x)=F(t,x),\quad (t,x)\in {\mathbb R}_+\times{\R^3\backslash\mathcal{K}},\\ u|_{\partial\mathcal{K}}=0,\\ u(t,x)=0,\quad t\le 0. \end{cases} \end{equation} Suppose further that $F(t,x)=0$ when $|x|>20c_1t$. Then, if $|x|<c_2t/10$ and $t>1$, \begin{multline} \label{ptwise.2} (1+t+|x|)|S^\nu Z^\alpha u(t,x)| \lesssim \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+6\\\mu\le\nu+1} \int_{\theta t}^t \int_{\R^3\backslash\mathcal{K}} |S^\mu Z^\beta F(s,y)|\:\frac{dy\:ds}{|y|} \\+\sup_{0\le s\le t} (1+s) \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+2\\\mu\le\nu} \|S^\mu \partial^\beta F(s,\, \cdot\, )\|_2. \end{multline} \end{theorem} \noindent{\em Proof of Theorem \ref{theorem.ptwise.2}:} By arguing as in \eqref{ptwise.1} using \eqref{ptwise.bdyless.inhom.localized}, we see that for $|x|<c_2 t/10$, \begin{multline} \label{ptwise.2.1} (1+t)|S^\nu Z^\alpha u(t,x)|\lesssim \int_{\theta t}^t \int_{\R^3\backslash\mathcal{K}} \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+3\\\mu\le\nu+1} |S^\mu Z^\beta F(s,y)|\:\frac{dy\:ds}{|y|} \\+\sup_{|y|\le 2, 0\le s\le t} (1+s) \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+1\\\mu\le\nu} |S^\mu \partial^\beta u(s,y)|. \end{multline} Thus, we need only show that the last term is controlled by the right side of \eqref{ptwise.2}. First suppose that $F(s,y)=0$ if $|y|>4$. Then, by a Sobolev estimate and \eqref{local.energy.high}, it follows that \begin{equation} \label{ptwise.2.2} (1+t)\sup_{|y|<2} \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+1\\\mu\le\nu}|S^\mu \partial^\beta u(t,x)| \lesssim \sup_{0\le s\le t} \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+2\\\mu\le\nu} (1+s)\|S^\mu \partial^\beta F(s,\, \cdot\, )\|_2. \end{equation} Here we are also using that the Dirichlet boundary conditions allow us to control $u$ locally by $u'$. Thus, for the remainder of the proof, we may assume that $F(s,y)=0$ for $|y|\le 3$. Letting $\tilde{u}$ be as in the proof of Theorem \ref{theorem.ptwise}, we see from \eqref{tildeu.equation} and \eqref{ptwise.2.2} that \begin{multline} \label{ptwise.2.3} (1+t)\sup_{|y|<2} \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+1\\\mu\le\nu} |S^\mu \partial^\beta u(t,x)| \\\lesssim \sup_{0\le s\le t} \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+3\\\mu\le\nu} (1+s)\|S^\mu \partial^\beta w(s,\, \cdot\, )\|_{L^\infty(2\le |x|\le 3)}, \end{multline} where $w$ is a solution to the boundaryless equation $Lw=F$ with vanishing initial data. Provided that $s>30/c_2$, the theorem follows from \eqref{ptwise.bdyless.inhom.localized}, \eqref{ptwise.2.1}, \eqref{ptwise.2.2}, and \eqref{ptwise.2.3}. Else, when $s\le 30/c_2$, we may use \eqref{ptwise.bdyless.inhom} and finite propagation speed to bound this term by the second term in the right of \eqref{ptwise.2}, which completes the proof.\qed \subsection{Boundary term estimates}\label{boundary} In order to handle the boundary terms that arise in \eqref{energy.L.d}, we shall give an estimate which is in the spirit of the original ones of \cite{MS1, MS2} for the wave equation. 
This is the key estimate, along with the decay of local energy \eqref{local.energy}, that allows us to drop the star-shapedness assumption on ${\mathcal{K}}$. Here, we are merely isolating the proof of the bound for the right side of \eqref{FTC}. \begin{lemma} \label{lemma.bdy.term} Let ${\mathcal{K}}$ be a bounded, nontrapping obstacle with smooth boundary. Suppose that $u$ is a smooth solution of \eqref{no.data.equation}. Then, \begin{multline} \label{bdy.term} \int_0^t \|S^\nu \partial^\alpha u'(s,\, \cdot\, )\|_{L^2(|x|<1)}\:ds \lesssim \sum_\ss{|\beta|+\mu\le M+\nu+3\\\mu\le \nu} \int_0^t \int_{\R^3\backslash\mathcal{K}} |S^\mu Z^\beta F(s,y)|\:\frac{dy\:ds}{|y|} \\+ \int_0^t \sum_\ss{|\beta|+\mu\le M+\nu\\\mu\le \nu} \|S^\mu \partial^\beta F(s,\, \cdot\, )\|_{L^2(\{|x|<4\})}\:ds \end{multline} for any $|\alpha|=M$ and $\nu$. \end{lemma} \bigskip \newsection{Sobolev-type estimates and null form estimates} \subsection{Weighted Sobolev estimates} In the sequel, we shall require the following rather standard weighted Sobolev estimate from \cite{K}. This follows by applying Sobolev embedding on ${\mathbb R}\times S^2$. The decay results from the difference in the volume elements between ${\mathbb R}\times S^2$ and, say, ${\mathbb R}^3$. \begin{lemma} \label{lemma.weighted.Sobolev} Suppose that $h\in C^\infty({\R^3\backslash\mathcal{K}})$. Then, for $R\ge 1$, \begin{equation} \label{weighted.Sobolev} \|h\|_{L^\infty(R/2<|x|<R)} \lesssim R^{-1} \sum_{|\alpha|+|\beta|\le 2} \|\tilde{\Omega}^\alpha \partial_x^\beta h\|_{L^2(R/4<|x|<2R)}. \end{equation} \end{lemma} \subsection{Klainerman-Sideris decay estimates} In this section, we study another class of weighted Sobolev-type estimates. These follow from the boundaryless estimates of \cite{Si}, which are closely related to estimates that originally appeared in \cite{KS}. The interested reader should also consult \cite{ST} for a unified approach to proving such estimates. The main boundaryless estimate of \cite{Si}\footnote{See (3.24).} that we shall use states that \begin{multline} \label{KS.bdyless} \|\langle c_1t-r\rangle P_1 \partial\nabla h(t,\, \cdot\, )\|_2 + \|\langle c_2t-r\rangle P_2 \partial\nabla h(t,\, \cdot\, )\|_2 \\\lesssim \sum_{|\alpha|+\mu\le 1} \|S^\mu Z^\alpha h'(t,\, \cdot\, )\|_2 + t\|Lh(t,\, \cdot\, )\|_2. \end{multline} In order to establish similar bounds when there is a boundary, we use ideas from \cite{MNS1}. This yields \begin{lemma} \label{lemma.KS.1} Let ${\mathcal{K}}\subset\{|x|<1\}\subset{\mathbb R}^3$ be an obstacle with smooth boundary. Suppose $u(t,x)\in C^\infty({\mathbb R}_+\times{\R^3\backslash\mathcal{K}})$ and $u|_{\partial\mathcal{K}}=0$. Then, \begin{multline} \label{KS.1} \|\langle c_1t-r\rangle P_1 \partial \nabla S^\nu Z^\alpha u(t,\, \cdot\, )\|_2 + \|\langle c_2t-r\rangle P_2 \partial\nabla S^\nu Z^\alpha u(t,\, \cdot\, )\|_2 \\\lesssim \sum_\ss{|\beta|+\mu\le M+\nu+1\\\mu\le\nu+1} \|S^\mu Z^\beta u'(t,\, \cdot\, )\|_2 + t\sum_\ss{|\beta|+\mu\le M+\nu\\\mu\le\nu}\|S^\mu Z^\beta Lu(t,\, \cdot\, )\|_2 \\+ (1+t)\sum_{\mu\le\nu} \|S^\mu u'(t,\, \cdot\, )\|_{L^2(|x|<2)} \end{multline} for any fixed $|\alpha|=M$ and $\nu$.
\end{lemma} \noindent{\em Proof of Lemma \ref{lemma.KS.1}:} We first show that \begin{multline} \label{KS.1.1} \sum_{\kappa=1,2} \|\langle c_\kappa t-r\rangle P_\kappa \partial \nabla S^\nu Z^\alpha u(t,\, \cdot\, )\|_2 \lesssim \sum_\ss{|\beta|+\mu\le M+\nu+1\\\mu\le\nu+1} \|S^\mu Z^\beta u'(t,\, \cdot\, )\|_2 \\+ \sum_\ss{|\beta|+\mu\le M+\nu\\\mu\le\nu} t \|S^\mu Z^\beta Lu(t,\, \cdot\, )\|_2 + (1+t)\sum_\ss{|\beta|+\mu\le M+\nu+2\\\mu\le\nu} \|S^\mu \partial^\beta u(t,\, \cdot\, )\|_{L^2(|x|<3/2)}. \end{multline} This estimate is trivial if the norm in the left is over $\{|x|<3/2\}$ as the coefficients of $Z$ are $O(1)$ on this set. To handle the case when the norms are over $\{|x|\ge 3/2\}$, we fix $\eta\in C^\infty({\mathbb R}^3)$ with $\eta(x)\equiv 0$ for $|x|<1$ and $\eta(x)\equiv 1$ for $|x|>3/2$ and apply \eqref{KS.bdyless} to $h(t,x)=\eta(x)u(t,x)$ which solves the boundaryless equation $$Lh=\eta(x)Lu - 2c_2^2 \nabla \eta\cdot\nabla u - c_2^2(\Delta \eta)u - (c_1^2-c_2^2)\nabla(\nabla\eta\cdot u)-(c_1^2-c_2^2)\nabla\eta\nabla\cdot u.$$ This proves \eqref{KS.1.1} as the last four terms are supported in $\{|x|\le 3/2\}$. It remains to show that the last term in \eqref{KS.1.1} is bounded by the right side of \eqref{KS.1}. Here we simply apply (a trivial modification of) \eqref{ell.reg.local} and the differential inequality $$\sum_{\kappa=1,2}|c_\kappa t-r||P_\kappa\partial_t \nabla v(t,x)| \lesssim \sum_{|\alpha|+\mu\le 1} |S^\mu Z^\alpha \nabla v(t,x)|+t|Lv(t,x)|,$$ which is also from \cite{Si}.\footnote{See (3.10b).} \qed We also have the following variants of the above. \begin{corr} \label{corr.ks.1} Let ${\mathcal{K}}\subset\{|x|<1\}\subset{\mathbb R}^3$ be an obstacle with smooth boundary. Suppose $u(t,x)\in C^\infty({\mathbb R}_+\times{\R^3\backslash\mathcal{K}})$ and $u|_{\partial\mathcal{K}}=0$. Then, \begin{multline} \label{KS.2} |x|\langle c_\kappa t-r\rangle |P_\kappa \partial\nabla S^\nu Z^\alpha u(t,x)| \lesssim \sum_\ss{|\beta|+\mu\le M+\nu+3\\\mu\le \nu+1} \|S^\mu Z^\beta u'(t,\, \cdot\, )\|_2 \\+ \sum_\ss{|\beta|+\mu\le M+\nu+2\\\mu\le \nu} t\|S^\mu Z^\beta Lu(t,\, \cdot\, )\|_2 + (1+t)\sum_{\mu\le\nu} \|S^\mu u'(t,\, \cdot\, )\|_{L^2(|x|<2)}, \end{multline} \begin{multline} \label{KS.3} \|\langle c_\kappa t-r\rangle P_\kappa \partial\nabla S^\nu Z^\alpha u(t,\, \cdot\, )\|_{L^2(|x|>t/4)} \lesssim \sum_\ss{|\beta|+\mu\le M+\nu+1\\\mu\le\nu+1} \|S^\mu Z^\beta u'(t,\, \cdot\, )\|_{L^2(|x|>t/8)} \\+ (1+t)\sum_\ss{|\beta|+\mu\le M+\nu\\\mu\le\nu} \|S^\mu Z^\beta Lu(t,\, \cdot\, )\|_{L^2(|x|>t/8)}, \end{multline} and \begin{multline} \label{KS.4} \sup_{|x|>t/2} |x|\langle c_\kappa t-r\rangle |P_\kappa \partial\nabla S^\nu Z^\alpha u(t,\, \cdot\, )| \lesssim \sum_\ss{|\beta|+\mu\le |\alpha|+\nu+3\\\mu\le\nu+1} \|S^\mu Z^\beta u'(t,\, \cdot\, )\|_{L^2(|x|>t/8)} \\+(1+t)\sum_\ss{|\beta|+\mu\le |\alpha|+\nu+2\\\mu\le\nu} \|S^\mu Z^\beta Lu(t,\, \cdot\, )\|_{L^2(|x|>t/8)} \end{multline} for any $\kappa=1,2$ and fixed $|\alpha|=M$ and $\nu$. \end{corr} \noindent{\em Proof of Corollary \ref{corr.ks.1}:} Estimates \eqref{KS.2} and \eqref{KS.4} follow from \eqref{KS.1} and \eqref{KS.3} respectively using \eqref{weighted.Sobolev}. Estimate \eqref{KS.3} follows from applying \eqref{KS.bdyless} to $h(t,x)=\eta(x/\langle t\rangle) u(t,x)$ where $\eta(z)\equiv 1$ for $|z|>1/4$ and $\eta(z)\equiv 0$ for $|z|<1/8$.\qed We also have \begin{lemma} \label{lemma.KS.5} Let ${\mathcal{K}}\subset\{|x|<1\}\subset{\mathbb R}^3$ be an obstacle with smooth boundary. 
Suppose $u(t,x)\in C^\infty({\mathbb R}_+\times {\R^3\backslash\mathcal{K}})$ and $u|_{\partial\mathcal{K}}=0$. Then, \begin{multline} \label{KS.5} \langle r\rangle \langle c_\kappa t-r\rangle^{1/2} |P_\kappa \partial S^\nu Z^\alpha u(t,x)| \lesssim \sum_\ss{|\beta|+\mu\le M+\nu+2\\\mu\le\nu+1} \|S^\mu Z^\beta u'(t,\, \cdot\, )\|_2 \\+(1+t)\sum_\ss{|\beta|+\mu\le M+\nu+1\\\mu\le\nu} \|S^\mu Z^\beta Lu(t,\, \cdot\, )\|_2 +(1+t)\sum_{\mu\le\nu} \|S^\mu u'(t,\, \cdot\, )\|_{L^2(|x|<2)} \end{multline} and \begin{multline} \label{KS.6} \sup_{|x|>t/4} \langle r\rangle\langle c_\kappa t-r\rangle^{1/2} |P_\kappa \partial S^\nu Z^\alpha u(t,x)| \lesssim \sum_\ss{|\beta|+\mu\le M+\nu+2\\\mu\le\nu+1} \|S^\mu Z^\beta u'(t,\, \cdot\, )\|_{L^2(|x|>t/8)} \\+(1+t)\sum_\ss{|\beta|+\mu\le M+\nu+1\\\mu\le\nu} \|S^\mu Z^\beta Lu(t,\, \cdot\, )\|_{L^2(|x|>t/8)} \end{multline} for any $\kappa=1,2$ and fixed $|\alpha|=M$ and $\nu$. \end{lemma} \noindent{\em Proof of Lemma \ref{lemma.KS.5}:} Using the arguments of the proof of Lemma \ref{lemma.KS.1}, this follows from the boundaryless estimate $$ \langle r\rangle\langle c_\kappa t-r\rangle^{1/2} |P_\kappa \partial h(t,x)| \lesssim \sum_\ss{|\beta|+\mu\le 2\\\mu\le 1} \|S^\mu Z^\beta h'(t,\, \cdot\, )\|_2 + \langle t\rangle \sum_\ss{|\beta|\le 1} \|Z^\beta Lh(t,\, \cdot\, )\|_2 $$ of \cite{Si}.\footnote{See (3.20c) and (3.24).}\qed \subsection{Null form estimates} The additional decay afforded to us by the null condition is encapsulated in the following estimate of \cite{Si}.\footnote{See Proposition 3.2.} This is closely related to those for multiple speed systems of wave equations used, e.g., in \cite{Si3}. \begin{lemma} \label{lemma.null.condition} Assume that $Q$ satisfies the null condition \eqref{null.condition}. Let $\mathcal{N}=\{(\alpha,\beta,\gamma)\neq (1,1,1), (2,2,2)\}$ be the set of nonresonant indices. Then, \begin{multline} \label{null.condition.decay} |\langle u, Q(v,w)\rangle|\\\lesssim \frac{1}{r} |u|\sum_{|a|\le 1} \Bigl[|\nabla \tilde{\Omega}^a v| |\nabla w| + |\nabla\tilde{\Omega}^a w||\nabla v| + |\nabla^2 v||\tilde{\Omega}^a w| + |\nabla^2 w||\tilde{\Omega}^a v|\Bigr] \\+\sum_{\mathcal{N}} |P_\alpha u|\Bigl[ |P_\beta \nabla^2 v| |P_\gamma \nabla w| + |P_\beta \nabla^2 w||P_\gamma\nabla v|\Bigr], \end{multline} for sufficiently regular $u,v,w$. \end{lemma} \bigskip \newsection{Proof of global existence} In this section, we prove Theorem \ref{main.theorem}. We take $N=112$ in \eqref{data.smallness}, but this is far from optimal. By scaling in the $t$ variable, we may take $c_1=1$ without loss of generality. We begin by making a reduction that allows us to avoid technicalities related to the compatibility conditions. While the reduction from \cite{KSS3}\footnote{See also \cite{MS1} and \cite{MNS1}.} works when we have truncated at the quadratic level, the necessary scaling breaks down when the higher order terms are present. We instead use a reduction that is more reminiscent of that from \cite{MNS2}. We begin by noting that if $\varepsilon$ in \eqref{data.smallness} is sufficiently small, then there is a constant $C_0$ for which \begin{equation} \label{local.solution.estimate} \sup_{0\le t\le 2} \sum_{|\alpha|\le 112} \|\partial^\alpha u(t,\, \cdot\, )\|_{L^2(|x|\le 25)}\le C_0\varepsilon. \end{equation} This follows from well-known local existence theory. See, e.g., \cite{KSS1}, which is only stated for diagonal wave equations but as the proofs only rely on energy estimates the results carry over to the current setting. 
On the other hand, over $\{|x|>5 (t+1)\}$, $u$ corresponds to a boundaryless solution.\footnote{Recall that ${\mathcal{K}}\subset\{|x|<1\}$ by assumption.} Thus by the estimates which correspond to those that follow for the boundaryless problem\footnote{See, e.g., \cite{Si} and \cite{MNS2}.}, we have \begin{multline} \label{bdyless.solution.estimate} \sup_{0\le t<\infty} \sum_{|\alpha|+\mu\le 111} \|S^\mu Z^\alpha u'(t,\, \cdot\, )\|_{L^2(|x|\ge 5(t+1))} \\+ \sup_\ss{|x|\ge 5(t+1)\\0\le t<\infty} (1+t+|x|)\sum_{|\alpha|+\mu\le 108} |S^\mu Z^\alpha u'(t,x)|\le C_1\varepsilon. \end{multline} We fix a smooth cutoff function $\eta$ with $\eta(t,x)\equiv 1$ if $t\le 3/2$ and $|x|\le 20$, $\eta(t,\, \cdot\, )\equiv 0$ for $t>2$, and $\eta(\, \cdot\, ,x)\equiv 0$ for $|x|>25$. Setting $u_0=\eta u$, it follows that $u$ solves \eqref{main.equation} for $0<t<T$ if and only if $w=u-u_0$ solves \begin{equation} \label{w.equation} \begin{cases} Lw = (1-\eta)Q(\nabla u,\nabla^2 u) - [L,\eta]u, \quad (t,x)\in {\mathbb R}_+\times{\R^3\backslash\mathcal{K}},\\ w|_{\partial\mathcal{K}}=0,\\ w(0,\, \cdot\, )=(1-\eta)(0,\, \cdot\, )f,\\ \partial_t w(0,\, \cdot\, )=(1-\eta)(0,\, \cdot\, )g - \eta_t(0,\, \cdot\, )f \end{cases} \end{equation} over the same interval. We now fix another smooth cutoff $\beta$ with $\beta(z)\equiv 1$ for $z\ge 10$ and $\beta(z)\equiv 0$ for $z\le 6$. Then, let $v$ solve the linear equation \begin{equation} \label{v.equation} \begin{cases} Lv = \beta\bigl(\frac{|x|}{t+1}\bigr)(1-\eta) Q(\nabla u, \nabla^2 u) - [L,\eta]u,\\ v|_{\partial\mathcal{K}} = 0,\\ v(0,\, \cdot\, )=(1-\eta)(0,\, \cdot\, )f,\\ \partial_t v(0,\, \cdot\, )=(1-\eta)(0,\, \cdot\, )g - \eta_t(0,\, \cdot\, )f. \end{cases} \end{equation} We shall show that \begin{multline} \label{v.estimate} (1+t+|x|) \sum_{|\alpha|+\mu\le 102} |S^\mu Z^\alpha v(t,x)| + \sum_{|\alpha|+\mu\le 100} \|S^\mu Z^\alpha v'(t,\, \cdot\, )\|_2 \\+(\log(2+t))^{-1} \sum_{|\alpha|+\mu\le 98} \|\langle x\rangle^{-1/2} S^\mu Z^\alpha v'\|_{L^2_tL^2_x(S_t)}\le C_2\varepsilon \end{multline} for some absolute constant $C_2>0$ and for any $t>0$. \noindent{\em Proof of \eqref{v.estimate}:} We first notice that by \eqref{ptwise} and \eqref{data.smallness} the first term in the left side of \eqref{v.estimate} is \begin{multline} \label{v.equation.1} \lesssim \varepsilon + \int_0^t \int_{\R^3\backslash\mathcal{K}} \sum_{|\alpha|+\mu\le 108} |S^\mu Z^\alpha \beta\bigl(\frac{|x|}{t+1}\bigr)(1-\eta) Q(\nabla u,\nabla^2u)|\:\frac{dy\:ds}{|y|} \\+ \int_0^t \int_{\R^3\backslash\mathcal{K}} \sum_{|\alpha|+\mu\le 108} |S^\mu Z^\alpha [L,\eta]u|\:\frac{dy\:ds}{|y|}. \end{multline} Here, we have also used that Sobolev's lemma and the assumption that $0\in {\mathcal{K}}$ allow us to control the last term in the right of \eqref{ptwise} by the one which precedes it. Since $[L,\eta]u$ vanishes unless $t\le 2$ and $|x|\le 25$, the last term is clearly $O(\varepsilon)$ by \eqref{local.solution.estimate}. 
The second term in \eqref{v.equation.1} is \begin{equation}\label{v.equation.2} \lesssim \int_0^t \int_{|y|\ge 6(s+1)} \sum_{|\beta|+\mu\le 108} |S^\mu Z^\beta \nabla u(s,y)| \sum_{|\beta|+\mu\le 108} |S^\mu Z^\beta \nabla^2 u(s,y)| \:\frac{dy\:ds}{|y|}.\end{equation} Over $|x|>6(t+1)$, we note that by (a trivial modification of) \eqref{KS.4} and \eqref{KS.6} \begin{multline}\label{v.equation.3} \sup_{|x|>6(t+1)} \Bigl(\langle c_\kappa t-r\rangle \langle r\rangle |P_\kappa \nabla^2 S^\mu Z^\beta u(t,x)| + \langle r\rangle \langle c_\kappa t-r\rangle^{1/2} |P_\kappa \nabla S^\mu Z^\beta u(t,x)|\Bigr) \\\lesssim \sum_{|\alpha|+\nu\le 111} \|S^\nu Z^\alpha u'(t,\, \cdot\, )\|_{L^2(|x|>5(t+1))} + \sum_{|\alpha|+\nu\le 110} \langle t\rangle\|S^\nu Z^\alpha Q(\nabla u,\nabla^2 u)\|_{L^2(|x|>5(t+1))} \end{multline} for $\kappa=1,2$ and for $|\beta|+\mu\le 108$. The right side of \eqref{v.equation.3} is easily seen to be $O(\varepsilon)$ from \eqref{bdyless.solution.estimate}. Applying this bound to the terms of \eqref{v.equation.2} and recalling that we are assuming that $c_1=1$, it follows that \eqref{v.equation.2} is $$\lesssim \varepsilon^2 \int_0^t \frac{1}{s^{(3/2)-}} \int_{\{y\in {\R^3\backslash\mathcal{K}}\,:\, |y|\ge 6(s+1)\}} \frac{1}{|y|^{3+}}\:dy\:ds \lesssim \varepsilon^2$$ as desired. For the second term in the left side of \eqref{v.estimate}, we use the standard energy integral method and see that \begin{multline*} \partial_t \sum_{|\alpha|+\nu\le 100} \|(S^\nu Z^\alpha v)'(t,\, \cdot\, )\|^2_2 +\partial_t \sum_{|\alpha|+\nu\le 100} \|\nabla\cdot(S^\nu Z^\alpha v)(t,\, \cdot\, )\|_2^2 \\\lesssim \Bigl(\sum_{|\alpha|+\nu\le 100} \|S^\nu Z^\alpha v'(t,\, \cdot\, )\|_2\Bigr) \Bigl(\sum_{|\alpha|+\nu\le 100} \|S^\nu Z^\alpha Lv(t,\, \cdot\, )\|_2\Bigr) \\+\sum_{|\alpha|+\nu\le 100} \Bigl|\int_{\partial\mathcal{K}} \partial_t S^\nu Z^\alpha v(t,\, \cdot\, ) \nabla S^\nu Z^\alpha v(t,\, \cdot\, )\cdot n\:d\sigma\Bigr| \end{multline*} where $n$ is the outward unit normal to ${\mathcal{K}}$ at $x\in {\partial\mathcal{K}}$. Since ${\mathcal{K}}\subset\{|x|<1\}$, we may use \eqref{data.smallness}, \eqref{v.equation}, and a trace theorem to see that \begin{multline} \label{v.estimate.2.1} \sum_{|\alpha|+\nu\le 100} \|S^\nu Z^\alpha v'(t,\, \cdot\, )\|_2^2 \lesssim \varepsilon^2 \\+\Bigl(\int_0^t \sum_{|\alpha|+\nu\le 100} \|S^\nu Z^\alpha \beta\bigl(|x|/(s+1)\bigr) (1-\eta)(s,\, \cdot\, ) Q(\nabla u,\nabla^2u)(s,\, \cdot\, )\|_2\:ds\Bigr)^2 \\+\Bigl(\int_0^t \sum_{|\alpha|+\nu\le 100} \|S^\nu Z^\alpha [L,\eta]u(s,\, \cdot\, )\|_2\:ds\Bigr)^2 \\+ \int_0^t \sum_{|\alpha|+\nu\le 101} \|S^\nu \partial^\alpha v'(s,\, \cdot\, )\|^2_{L^2(|x|<1)}\:ds. \end{multline} Using the bound for the first term in \eqref{v.estimate}, it is easy to see that the last term is $O(\varepsilon^2)$. By the support properties of $\eta$ and \eqref{local.solution.estimate}, the preceding term satisfies the same bound. The desired bound for the second term in the right of \eqref{v.estimate.2.1} follows from \eqref{v.equation.3} and \eqref{bdyless.solution.estimate} as above. It remains only to show that the last term in the left of \eqref{v.estimate} is $O(\varepsilon)$. Here, it suffices to show that \begin{equation}\label{v.estimate.3.1} \sum_{|\alpha|+\mu\le 98} \int_0^t \|\langle x\rangle^{-1/2} S^\mu Z^\alpha v'(s,\, \cdot\, )\|^2_{L^2(|x|\le c_2s/2)}\:ds \end{equation} is $O(\varepsilon^2 (\log(2+t))^2)$ as the estimate follows trivially from the bound for the second term in \eqref{v.estimate} when $|x|>c_2s/2$. 
For this, we note that by \eqref{KS.5} \begin{multline} \label{v.estimate.3.2} \langle r\rangle \langle c_\kappa t-r\rangle^{1/2} \sum_{|\alpha|+\mu\le 98}|P_\kappa \partial S^\mu Z^\alpha v(t,x)|\lesssim \sum_{|\alpha|+\mu\le 100} \|S^\mu Z^\alpha v'(t,\, \cdot\, )\|_2 \\+ \sum_{|\alpha|+\mu\le 99} (1+t)\|S^\mu Z^\alpha Lv(t,\, \cdot\, )\|_2 + (1+t)\sum_{\mu\le 98} \|S^\mu v'(t,\, \cdot\, )\|_{L^\infty(|x|<2)}. \end{multline} The first and last terms in the right are $O(\varepsilon)$ by the bounds for the preceding terms in \eqref{v.estimate}. The second term in the right is \begin{multline*} \lesssim \sum_{|\alpha|+\mu\le 99} (1+t)\|S^\mu Z^\alpha \beta(|\, \cdot\, |/(t+1)) (1-\eta)(t,\, \cdot\, ) Q(\nabla u,\nabla^2 u)\|_2 \\+ \sum_{|\alpha|+\mu\le 99} (1+t)\|S^\mu Z^\alpha [L,\eta]u(t,\, \cdot\, )\|_2. \end{multline*} Each of these terms is $O(\varepsilon)$ using \eqref{bdyless.solution.estimate} and \eqref{local.solution.estimate}, respectively. Using the $O(\varepsilon)$ bound for the right side of \eqref{v.estimate.3.2}, it follows that \eqref{v.estimate.3.1} is $$\lesssim \varepsilon^2 \int_0^t \frac{1}{1+s} \|\langle x\rangle^{-3/2}\|^2_{L^2(|x|\le c_2s/2)}\:ds \lesssim \varepsilon^2 (\log(2+t))^2$$ which completes the proof.\qed The estimate \eqref{v.estimate} allows us in most instances to restrict our attention to $w-v$, which solves \begin{equation} \label{wv.equation} \begin{cases} L(w-v)=(1-\beta)\Bigl(\frac{|x|}{t+1}\Bigr)(1-\eta)(t,x)Q(\nabla u,\nabla^2 u),\quad (t,x)\in {\mathbb R}_+\times{\R^3\backslash\mathcal{K}},\\ (w-v)|_{\partial\mathcal{K}} =0,\\ (w-v)(t,x)=0,\quad t\le 0. \end{cases} \end{equation} This equation meets the requirements of our estimates in the previous sections. In particular, it has vanishing Cauchy data and a forcing term which is supported in $|x|\le 20c_1 t$. Depending on the estimates being utilized, we shall use certain $L^2$ and $L^\infty$ bounds for $u$, while at other times we shall use them for $w-v$ or $w$. Since $u=(w-v)+v+u_0$ and since $u_0$ and $v$ satisfy \eqref{local.solution.estimate} and \eqref{v.estimate} respectively, it will always be the case that a bound for $w-v$, $w$, or $u$ will imply the same bound for the others. We now set up a continuity argument. From this point forward, the argument resembles that of \cite{MNS1} quite closely. For $\varepsilon>0$ as above, we assume that we have a solution to \eqref{main.equation} on $0\le t\le T$ satisfying \begin{align} \sum_\ss{|\alpha|+\nu\le 52\\\nu\le 1} \|S^\nu Z^\alpha w'(t,\, \cdot\, )\|_2 &\le A_0 \varepsilon, \label{I}\\ (1+t+r)\sum_{|\alpha|\le 40} |Z^\alpha w'(t,x)|&\le A_1 \varepsilon, \label{II}\\ (1+t+r)\sum_\ss{|\alpha|+\nu\le 55\\\nu\le 2} |S^\nu Z^\alpha (w-v)(t,x)|&\le B_1\varepsilon^2 (1+t)^{1/10} \log(2+t),\label{III}\\ \sum_{|\alpha|\le 100} \|\partial^\alpha u'(t,\, \cdot\, )\|_2&\le B_2\varepsilon (1+t)^{1/40},\label{IV}\\ \sum_\ss{|\alpha|+\nu\le 72\\\nu\le 3} \|S^\nu Z^\alpha u'(t,\, \cdot\, )\|_2&\le B_3 \varepsilon (1+t)^{1/20},\label{V}\\ \sum_\ss{|\alpha|+\nu\le 70\\\nu\le 3} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|_{L^2_tL^2_x(S_t)} &\le B_4 \varepsilon (1+t)^{1/20} (\log (2+t))^{1/2}.\label{VI} \end{align} We may take $A_0=A_1=4C_2$ in the first two estimates, where $C_2$ is as in \eqref{v.estimate}. For sufficiently small $\varepsilon$, all of the above estimates hold for, say, $T=2$ by the local existence theory. 
Thus, for $\varepsilon$ sufficiently small, we prove \begin{enumerate} \item[{\em (i.)}] \eqref{I} is valid with $A_0$ replaced by $A_0/2$, \item[{\em (ii.)}] Assuming {\em (i.)}, \eqref{II} is valid with $A_1$ replaced by $A_1/2$, \item[{\em (iii.)}] \eqref{III}-\eqref{VI} follow from \eqref{I} and \eqref{II} for fixed constants $B_i$. \end{enumerate} For such an $\varepsilon$, this then proves that a solution exists for all $t>0$. \subsection{Preliminary results} We begin with the following preliminary results assuming \eqref{I}-\eqref{VI}: \begin{equation} \label{prelim.1} r \langle c_\kappa t-r\rangle |P_\kappa \partial \nabla S^\nu Z^\alpha u(t,x)|\lesssim \varepsilon (1+t)^{3/20} \log(2+t) \end{equation} \begin{equation} \label{prelim.2} \langle r\rangle \langle c_\kappa t-r\rangle^{1/2} |P_\kappa \partial S^\nu Z^\alpha u(t,x)|\lesssim \varepsilon (1+t)^{3/20} \log(2+t), \end{equation} and \begin{equation} \label{prelim.3} (1+t)\|S^\nu Z^\alpha Lu(t,\, \cdot\, )\|_2\lesssim \varepsilon (1+t)^{3/20} \log(2+t) \end{equation} for $\kappa=1,2$, $\nu\le 1$ and $|\alpha|+\nu\le 63$. Using \eqref{KS.2} and \eqref{KS.5}, the first two estimates above follow from \eqref{prelim.3} via \eqref{local.solution.estimate}, \eqref{v.estimate}, \eqref{III}, and \eqref{V}. To show the latter estimate, we expand the left side to see that it is $$\lesssim (1+t) \Bigl\|\sum_\ss{|\alpha|+\mu\le 32\\\mu\le 1} |S^\mu Z^\alpha u'(t,\, \cdot\, )| \sum_\ss{|\alpha|+\mu\le 64\\\mu\le 1} |S^\mu Z^\alpha u'(t,\, \cdot\, )|\Bigr\|_2.$$ By \eqref{III} and \eqref{V}, as well as \eqref{local.solution.estimate} and \eqref{v.estimate}, the preceding equation is clearly $O(\varepsilon^2)$. Notice that the same proof shows that \eqref{prelim.1} and \eqref{prelim.2} hold with $u$ replaced by $w-v$. \subsection{Proof of {\em (i.)}} As the better estimate \eqref{v.estimate} holds for $v$, it suffices to show that \begin{equation} \label{i.goal} \sum_\ss{|\alpha|+\nu\le 52\\\nu\le 1} \|S^\nu Z^\alpha (w-v)'(t,\, \cdot\, )\|_2^2 \lesssim \varepsilon^3. \end{equation} Using \eqref{e.div} with $\gamma^{IJ,jk}\equiv 0$ and with $e_\mu[u]$ replaced by $e_\mu[S^\nu Z^\alpha (w-v)]$, we see that the left side of \eqref{i.goal} is \begin{multline} \lesssim \sum_\ss{|\alpha|+\nu\le 52\\\nu\le 1} \int_0^t \int_{\R^3\backslash\mathcal{K}} |\langle \partial_t S^\nu Z^\alpha (w-v), L S^\nu Z^\alpha (w-v)\rangle| \:dy\:ds \\+\sum_\ss{|\alpha|+\nu\le 52\\\nu\le 1} \Bigl|\int_0^t \int_{\partial\mathcal{K}} \partial_t S^\nu Z^\alpha (w-v) \partial_a S^\nu Z^\alpha (w-v) n_a \:d\sigma\:ds\Bigr|. \end{multline} Recalling that ${\mathcal{K}}\subset \{|x|<1\}$, and thus that the coefficients of $Z$ are $O(1)$ on ${\partial\mathcal{K}}$, it follows that the last term is bounded by $$\sum_\ss{|\alpha|+\nu\le 53\\\nu\le 1} \int_0^t \int_{\{|x|<1\}} |S^\nu \partial^\alpha (w-v)'(s,y)|^2\:dy\:ds.$$ This boundary term is seen to be $O(\varepsilon^4)$ using \eqref{III}. 
By \eqref{commutators}, \eqref{wv.equation}, and the fact that the vector fields preserve the null structure\footnote{See \cite{Si}, Proposition 3.1}, we may apply \eqref{null.condition.decay}, and thus it suffices to dominate \begin{multline} \label{energy.terms.remaining} \int_0^t \int_{\R^3\backslash\mathcal{K}} \frac{1}{|y|} \sum_\ss{|\alpha|+\nu\le 53\\\nu\le 1} |S^\nu Z^\alpha u| \sum_\ss{|\alpha|+\nu\le 53\\\nu\le 1} |S^\nu Z^\alpha \nabla u| \sum_\ss{|\alpha|+\nu\le 52\\\nu\le 1} |S^\nu Z^\alpha \partial_t (w-v)|\:dy\:ds \\ +\int_0^t \int_{\R^3\backslash\mathcal{K}} \sum_\ss{1\le I,J,K\le 2\\(I,J)\neq (J,K)} \sum_\ss{|\alpha|+\nu\le 52\\\nu\le 1} |P_K \partial S^\nu Z^\alpha (w-v)| \sum_\ss{|\alpha|+\nu\le 52\\\nu\le 1} |P_I \nabla S^\nu Z^\alpha u|\\\times \sum_\ss{|\alpha|+\nu\le 53\\\nu\le 1} |P_J \nabla S^\nu Z^\alpha u|\:dy\:ds. \end{multline} For the first term in \eqref{energy.terms.remaining}, we apply \eqref{local.solution.estimate}, \eqref{v.estimate}, and \eqref{III} to see that it is \begin{multline*} \lesssim \varepsilon \int_0^t \frac{\log(2+s)}{(1+s)^{9/10}} \sum_\ss{|\alpha|+\nu\le 53\\\nu\le 1} \|\langle y\rangle^{-1/2} S^\nu Z^\alpha u'(s,\, \cdot\, )\|_{L^2_y} \\\times \sum_\ss{|\alpha|+\nu\le 52\\\nu\le 1} \|\langle y\rangle^{-1/2} S^\nu Z^\alpha (w-v)'(s,\, \cdot\, )\|_{L^2_y}\:ds.\end{multline*} By the Schwarz inequality, \eqref{local.solution.estimate}, \eqref{v.estimate}, and \eqref{VI}, this term's contribution is $O(\varepsilon^3)$. To be concrete, we will focus on the case that $I\neq K$, $I=J$ for the second term in \eqref{energy.terms.remaining}. A symmetric argument will then complete the proof. For $\delta < (c_1-c_2)/2$, $$\{y\in{\R^3\backslash\mathcal{K}}\,:\, |y|\in [(1-\delta)c_Is, (1+\delta)c_Is]\}\cap \{y\in{\R^3\backslash\mathcal{K}}\,:\,|y|\in [(1-\delta)c_Ks, (1+\delta)c_Ks]\}=\emptyset.$$ Thus, it suffices to show the estimate over the complements of each of these sets separately. By \eqref{prelim.2}, it follows that over $\{|y|\not\in [(1-\delta)c_Ks, (1+\delta)c_Ks]\}$ the second term in \eqref{energy.terms.remaining} is dominated by $$\varepsilon \int_0^t \frac{\log(2+s)}{(1+s)^{7/20}} \int_{\{|y|\not\in [(1-\delta)c_Ks,(1+\delta)c_Ks]\}} \frac{1}{\langle y\rangle} \sum_\ss{|\alpha|+\mu\le 53\\\mu\le 2} |\partial S^\mu Z^\alpha u|^2\:dy\:ds.$$ The required $O(\varepsilon^3)$ bound follows from \eqref{VI}. A symmetric argument then completes the proof of {\em (i.)}. \subsection{Proof of {\em (ii.)}} By \eqref{v.estimate}, it again suffices to show \eqref{II} when $w$ is replaced by $w-v$. With this substitution, it follows from \eqref{ptwise.2} that the left side of \eqref{II} is dominated by \begin{multline} \label{ptwise.terms} \sum_\ss{|\alpha|+\nu\le 46\\\nu\le 1} \int_{\theta t}^t \int_{\R^3\backslash\mathcal{K}} |S^\nu Z^\alpha Q(\nabla u,\nabla^2 u)|\:\frac{dy\:ds}{|y|} \\+\sup_{0\le s\le t} (1+s)\sum_{|\alpha|\le 42} \|\partial^\alpha Q(\nabla u,\nabla^2 u)(s,\, \cdot\, )\|_2. \end{multline} For $|y|\ge c_2s/2$, the first term is trivially $O(\varepsilon^2)$ by \eqref{I}. 
Thus, it remains to control \begin{multline*} \int_{\theta t}^t \int_{\{|y|\le c_2 s/2\}} \sum_\ss{|\alpha|+\nu\le 46\\\nu\le 1} |S^\nu Z^\alpha Q(\nabla u,\nabla^2 u)|\:\frac{dy\:ds}{|y|} \\\lesssim \sum_{\kappa_1,\kappa_2=1,2}\int_{\theta t}^t \int_{\{|y|\le c_2s/2\}} \sum_\ss{|\beta|+\mu\le 46\\\mu\le 1} |r \langle c_{\kappa_1} s-r\rangle^{1/2} P_{\kappa_1} \nabla S^\mu Z^\beta u|\\\times \sum_\ss{|\alpha|+\nu\le 46\\\nu\le 1} |r\langle c_{\kappa_2} s-r\rangle P_{\kappa_2}\nabla^2 S^\nu Z^\alpha u| \:\frac{dy\:ds}{|y|^{3} \langle s\rangle^{3/2}}, \end{multline*} which is $O(\varepsilon^2)$ by \eqref{prelim.1} and \eqref{prelim.2}. Since the second term in \eqref{ptwise.terms} is easily seen to be $O(\varepsilon^2)$ using \eqref{I} and \eqref{II}, as well as \eqref{local.solution.estimate}, the proof of {\em (ii.)} is complete. \subsection{Proof of {\em (iii.)}} The remainder of the proof of Theorem \ref{main.theorem} follows from the proof given in \cite{MS1}, which we sketch for completeness. In this section, we will apply the results of Section \ref{energy.estimates.section} with $$\gamma^{IJ,jk}=-2B^{IJK}_{lmn}\partial_n u^K.$$ The necessary hypotheses \eqref{gamma.symmetry} and \eqref{gamma.smallness} follow from \eqref{nonlinear.symmetry} and \eqref{II} respectively. The additional hypothesis \eqref{gamma.smallness.2} also follows from \eqref{II}. We begin by proving \eqref{IV}. We first examine the case $\partial^\alpha u' = \partial_t^{|\alpha|} u'$. In the sequel, we shall use elliptic regularity to prove the general result. For $M\le 100$, we apply \eqref{energy.dt} and \eqref{II} to see that \begin{equation}\label{dt.1} \partial_t E^{1/2}_M(u)(t)\lesssim \sum_{j=0}^M \|L_\gamma \partial^j_t u(t,\, \cdot\, )\|_2 + \frac{\varepsilon}{1+t} E^{1/2}_M(u)(t).\end{equation} For fixed $M\le 100$, we have \begin{align*} \sum_{j\le M} |L_\gamma \partial^j_t u|&\lesssim \sum_{|\alpha|\le 40} |\partial^\alpha u'| \sum_{j\le M-1} |\partial_t^j \partial^2 u| + \sum_{|\alpha|\le M-41} |\partial^\alpha u'| \sum_{40\le |\alpha|\le M/2} |\partial^\alpha u'|\\ &\lesssim \frac{\varepsilon}{1+t} \sum_{j\le M-1} |\partial^j_t \partial^2 u| + \sum_{|\alpha|\le M-41} |\partial^\alpha u'| \sum_{|\alpha|\le M/2} |\partial^\alpha u'|, \end{align*} where, in the last line, we have applied \eqref{local.solution.estimate} and \eqref{II}. By \eqref{ell.reg} and a similar argument, we have \begin{multline*} \sum_{j\le M-1} \|\partial_t^j \partial^2 u(t,\, \cdot\, )\|_2 \lesssim \sum_{j\le M} \|\partial_t^j u'(t,\, \cdot\, )\|_2 + \frac{\varepsilon}{1+t} \sum_{j\le M-1} \|\partial_t^j \partial^2 u(t,\, \cdot\, )\|_2 \\+ \sum_\ss{|\alpha|\le M-41\\|\beta|\le M/2} \|\partial^\alpha u'(t,\, \cdot\, ) \partial^\beta u'(t,\, \cdot\, )\|_2. \end{multline*} For $\varepsilon$ sufficiently small, the second term can be bootstrapped. Thus, by combining the previous two estimates with \eqref{dt.1} and using that $\sum_{j\le M} \|\partial_t^j u'(t,\, \cdot\, )\|_2\approx E_M^{1/2}(u)(t)$ for $\varepsilon$ small, we have \begin{equation} \label{dt.2} \partial_t E^{1/2}_M(u)(t)\lesssim \frac{\varepsilon}{1+t} E^{1/2}_M(u)(t) + \sum_\ss{|\alpha|\le M-41\\|\beta|\le M/2} \|\partial^\alpha u'(t,\, \cdot\, )\partial^\beta u'(t,\, \cdot\, )\|_2. \end{equation} For $M\le 40$, the last term drops out. 
Thus, by \eqref{data.smallness} and Gronwall's inequality, we have $$\sum_{j\le 40} \|\partial_t^j u'(t,\, \cdot\, )\|_2 \lesssim \varepsilon (1+t)^{C\varepsilon},$$ which provides the same bound for $$\sum_{|\alpha|\le 40} \|\partial^\alpha u'(t,\, \cdot\, )\|_2$$ by elliptic regularity and \eqref{II}. In order to study the case $M>40$, we shall use the following lemma. \begin{lemma} \label{MS.lemma} Suppose that \eqref{local.solution.estimate}, \eqref{v.estimate}, \eqref{I}, and \eqref{II} hold. Moreover, for $\mu\le 3$ fixed, assume that \begin{multline} \label{MS.hyp.1} \sum_\ss{|\alpha|+\nu\le 100-8(\mu-1)\\\nu\le\mu-1} \|S^\nu \partial^\alpha u'(t,\, \cdot\, )\|_2 + \sum_\ss{|\alpha|+\nu\le 97-8(\mu-1)\\\nu\le\mu-1} \|\langle x\rangle^{-1/2} S^\nu \partial^\alpha u'\|_{L^2_tL^2_x(S_t)} \\+\sum_\ss{|\alpha|+\nu\le 96-8(\mu-1)\\\nu\le\mu-1} \|S^\nu Z^\alpha u'(t,\, \cdot\, )\|_2 +\sum_\ss{|\alpha|+\nu\le 94-8(\mu-1)\\\nu\le\mu-1} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|_{L^2_tL^2_x(S_t)} \\\lesssim \varepsilon (1+t)^{C\varepsilon+\sigma} \end{multline} and, for $M\le 100-8\mu$, \begin{multline} \label{MS.hyp.2} \sum_\ss{|\alpha|+\nu\le M\\\nu\le \mu} \|S^\nu \partial^\alpha u'(t,\, \cdot\, )\|_2 + \sum_\ss{|\alpha|+\nu\le M-3\\\nu\le \mu} \|\langle x\rangle^{-1/2} S^\nu \partial^\alpha u'\|_{L^2_tL^2_x(S_t)} \\+\sum_\ss{|\alpha|+\nu\le M-4\\\nu\le\mu} \|S^\nu Z^\alpha u'(t,\, \cdot\, )\|_2 +\sum_\ss{|\alpha|+\nu\le M-6\\\nu\le \mu} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|_{L^2_tL^2_x(S_t)} \lesssim \varepsilon (1+t)^{C\varepsilon+\sigma} \end{multline} for $C,\sigma>0$. Then there is a constant $C'>0$ so that \begin{multline} \label{MS.conclusion} \sum_\ss{|\alpha|+\nu\le M-2\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu \partial^\alpha u'\|_{L^2_tL^2_x(S_t)} + \sum_\ss{|\alpha|+\nu\le M-3\\\nu\le\mu} \|S^\nu Z^\alpha u'(t,\, \cdot\, )\|_2 \\+ \sum_\ss{|\alpha|+\nu\le M-5\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|_{L^2_tL^2_x(S_t)} \lesssim \varepsilon (1+t)^{C'\varepsilon+C'\sigma}. \end{multline} \end{lemma} \noindent{\em Proof of Lemma \ref{MS.lemma}:} We begin by bounding the first term in the left of \eqref{MS.conclusion}. By \eqref{local.solution.estimate} and \eqref{v.estimate}, it suffices to bound this term when $u$ is replaced by $w-v$. By \eqref{KSS.S.d}, we thus need to establish control of $$\sum_\ss{|\alpha|+\nu\le M-2\\\nu\le \mu} \int_0^t \|S^\nu \partial^\alpha Lu(s,\, \cdot\, )\|_2\:ds +\sum_\ss{|\alpha|+\nu\le M-3\\\nu\le\mu} \|S^\mu \partial^\alpha Lu\|_{L^2_tL^2_x(S_t)}.$$ We will show the bound for the first term. The estimate for the second term follows from straightforward modifications of the argument. For $\mu=0$, we have, as above, that $$ \sum_{|\alpha|\le M-2} |\partial^\alpha Lu| \lesssim \sum_{|\alpha|\le 40} |\partial^\alpha u'|\sum_{|\alpha|\le M-1} |\partial^\alpha u'| + \sum_{|\alpha|\le M-41} |\partial^\alpha u'| \sum_{|\alpha|\le M/2} |\partial^\alpha u'|. $$ When $M<41$, the last term vanishes and, in this case, we have that $$\sum_{|\alpha|\le M-2}\int_0^t \|\partial^\alpha Lu(s,\, \cdot\, )\|_2\:ds \lesssim \int_0^t \frac{\varepsilon}{1+s} \sum_{|\alpha|\le M-1} \|\partial^\alpha u'(s,\, \cdot\, )\|_2\:ds.$$ The desired bound then follows from \eqref{MS.hyp.2}. 
When $M\ge 41$, we apply \eqref{weighted.Sobolev} to see that \begin{multline*} \sum_{|\alpha|\le M-2} \int_0^t \|\partial^\alpha Lu(s,\, \cdot\, )\|_2\:ds \lesssim \int_0^t \frac{\varepsilon}{1+s}\sum_{|\alpha|\le M-1} \|\partial^\alpha u'(s,\, \cdot\, )\|_2\:ds \\+ \sum_{|\alpha|\le \max(M-38,M/2)} \|\langle x\rangle^{-1/2} Z^\alpha u'\|_{L^2_tL^2_x(S_t)}, \end{multline*} and \eqref{MS.conclusion} again follows from \eqref{MS.hyp.2}. For $\mu>0$, we have the similar bound \begin{multline}\label{withS} \sum_\ss{|\alpha|+\nu\le M-2\\\nu\le \mu} |S^\nu \partial^\alpha Lu| \lesssim \sum_{|\alpha|\le 40} |\partial^\alpha u'| \sum_\ss{|\alpha|+\nu\le M-1\\\nu\le\mu} |S^\nu \partial^\alpha u'| \\+ \sum_\ss{|\alpha|+\nu\le M-41\\\nu\le \mu} |S^\nu \partial^\alpha u'| \sum_{|\alpha|\le M-1} |\partial^\alpha u'| + \sum_\ss{|\alpha|+\nu\le M/2\\\nu\le\mu-1} |S^\nu \partial^\alpha u'| \sum_\ss{|\alpha|+\nu\le M-1\\\nu\le\mu-1} |S^\nu \partial^\alpha u'|. \end{multline} We may apply \eqref{II} and \eqref{weighted.Sobolev} to see that \begin{multline*} \sum_\ss{|\alpha|+\nu\le M-2\\\nu\le\mu} \int_0^t \|S^\nu \partial^\alpha Lu(s,\, \cdot\, )\|_2\:ds \lesssim \int_0^t \frac{\varepsilon}{1+s} \sum_\ss{|\alpha|+\nu\le M-1\\\nu\le\mu} \|S^\nu \partial^\alpha u'(s,\, \cdot\, )\|_2\:ds \\+\sum_\ss{|\alpha|+\nu\le M-39\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|_{L^2_tL^2_x(S_t)} \sum_{|\alpha|\le M-1} \|\langle x\rangle^{-1/2} \partial^\alpha u'\|_{L^2_tL^2_x(S_t)} \\+ \sum_\ss{|\alpha|+\nu\le \max(M-1, M/2+2)\\\nu\le \mu-1} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|^2_{L^2_tL^2_x(S_t)}, \end{multline*} and the desired bounds follow from \eqref{MS.hyp.1} and \eqref{MS.hyp.2}. Once the estimate \eqref{MS.conclusion} is established for the second term in the left, the bound for the third term in the left follows from the arguments above using \eqref{KSS.S.Z}. It remains to bound the second term in the left of \eqref{MS.conclusion}. To this end, we shall apply \eqref{energy.S.Z}. We first examine the $\mu=0$ case. Notice that \begin{align*} \sum_{|\alpha|\le M-3} \|L_\gamma Z^\alpha u(t,\, \cdot\, )\|_2 &\lesssim \sum_{|\alpha|\le 40} \|Z^\alpha u'(t,\, \cdot\, )\|_\infty \sum_{|\alpha|\le M-3} \|Z^\alpha u'(t,\, \cdot\, )\|_2 \\ &\qquad\qquad\qquad\qquad+ \sum_{|\alpha|\le M-43,|\beta|\le M/2} \|Z^\alpha u'(t,\, \cdot\, ) Z^\beta u'(t,\, \cdot\, )\|_2\\ &\lesssim \frac{\varepsilon}{1+t} Y^{1/2}_{M-3,0}(t) + \sum_{|\alpha|\le \max(M-41, M/2+2)} \|\langle x\rangle^{-1/2} Z^\alpha u'(t,\, \cdot\, )\|^2_2, \end{align*} and moreover notice that the last term is unnecessary when $M<43$. By \eqref{II}, when $M<43$, it follows from \eqref{energy.S.Z} and Gronwall's inequality that $$\sum_{|\alpha|\le M-3} \|Z^\alpha u'(t,\, \cdot\, )\|^2_2 \lesssim (1+t)^{C\varepsilon} \Bigl(\varepsilon^2 + \sum_{|\alpha|\le M-2} \|\partial^\alpha u'\|^2_{L^2_tL^2_x([0,t]\times\{|x|<1\})}\Bigr).$$ The desired bound then follows from that for the first term in the left of \eqref{MS.conclusion}. When $M\ge 43$, we similarly apply \eqref{energy.S.Z}, \eqref{II}, and Gronwall's inequality to see that \begin{multline*} \sum_{|\alpha|\le M-3} \|Z^\alpha u'(t,\, \cdot\, )\|^2_2 \\\lesssim (1+t)^{C\varepsilon} \Bigl(\varepsilon^2 + \sum_{|\alpha|\le \max(M-41,M/2+2)} \|\langle x\rangle^{-1/2} Z^\alpha u'\|^2_{L^2_tL^2_x(S_t)} \sup_{0<s<t} Y_{M-3,0}^{1/2}(s) \\+ \sum_{|\alpha|\le M-2} \|\partial^\alpha u'\|^2_{L^2_tL^2_x([0,t]\times\{|x|<1\})}\Bigr). 
\end{multline*} Thus, by additionally applying \eqref{MS.hyp.2} and bootstrapping the $Y^{1/2}_{M-3,0}$ term, the estimate follows. For $\mu>0$, we argue in a fashion similar to \eqref{withS}. Indeed, we have \begin{multline*} \sum_\ss{|\alpha|+\nu\le M-3\\\nu\le\mu} \|L_\gamma S^\nu Z^\alpha u(t,\, \cdot\, )\|_2 \lesssim \frac{\varepsilon}{1+t} Y_{M-3-\mu,\mu}^{1/2}(t) \\+\sum_\ss{|\alpha|+\nu\le M-41\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'(t,\, \cdot\, )\|_2 \sum_{|\alpha|\le M-3} \|\langle x\rangle^{-1/2} Z^\alpha u'(t,\, \cdot\, )\|_2 \\+\sum_\ss{|\alpha|+\nu\le \max(M-3,M/2+1)\\\nu\le\mu-1} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'(t,\, \cdot\, )\|^2_2. \end{multline*} Thus, by \eqref{data.smallness}, \eqref{energy.S.Z}, \eqref{II}, and Gronwall's inequality, it follows that \begin{multline*} \sum_\ss{|\alpha|+\nu\le M-3\\\nu\le\mu} \|S^\nu Z^\alpha u'(t,\, \cdot\, )\|_2^2 \\\lesssim (1+t)^{C\varepsilon}\Bigl[\varepsilon^2 + \sup_{0<s<t} Y^{1/2}_{M-3-\mu,\mu}(s) \Bigl(\sum_\ss{|\alpha|+\nu\le \max(M-3,M/2+1)\\\nu\le\mu-1} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|^2_{L^2_tL^2_x(S_t)} \\+ \sum_\ss{|\alpha|+\nu\le M-41\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|_{L^2_tL^2_x(S_t)} \sum_{|\alpha|\le M-3} \|\langle x\rangle^{-1/2} Z^\alpha u'\|_{L^2_tL^2_x(S_t)}\Bigr) \\+\sum_\ss{|\alpha|+\nu\le M-2\\\nu\le\mu} \|S^\nu \partial^\alpha u'\|^2_{L^2_tL^2_x([0,t]\times\{|x|<1\})}\Bigr]. \end{multline*} Applying \eqref{MS.hyp.1} and \eqref{MS.hyp.2} completes the proof.\qed Reverting back to \eqref{dt.2} and applying \eqref{weighted.Sobolev} and Gronwall's inequality, it follows that $$E_M^{1/2}(u)(t)\lesssim (1+t)^{C\varepsilon}\Bigl(\varepsilon + \sum_{|\alpha|\le \max(M-39, M/2)} \|\langle x\rangle^{-1/2} Z^\alpha u'\|^2_{L^2_tL^2_x(S_t)}\Bigr)$$ for $40\le M\le 100$. Using this in an induction argument based on the previous lemma gives $$\sum_{j\le 100} \|\partial_t^j u'(t,\, \cdot\, )\|_2\lesssim \varepsilon (1+t)^{C\varepsilon+\sigma},$$ and coupled with elliptic regularity, this yields \eqref{IV}. Additionally, from Lemma \ref{MS.lemma}, we obtain \begin{multline} \label{no.S} \sum_{|\alpha|\le 98} \|\langle x\rangle^{-1/2} \partial^\alpha u'\|_{L^2_tL^2_x(S_t)} + \sum_{|\alpha|\le 97} \|Z^\alpha u'(t,\, \cdot\, )\|_2 \\+\sum_{|\alpha|\le 95} \|\langle x\rangle^{-1/2} Z^\alpha u'\|_{L^2_tL^2_x(S_t)}\lesssim \varepsilon (1+t)^{C'\varepsilon +C'\sigma}, \end{multline} which gives the $\nu=0$ versions of \eqref{V} and \eqref{VI} and will provide the hypothesis \eqref{MS.hyp.1} for further applications of the lemma corresponding to a single occurrence of $S$. We now approach the proofs of the estimates involving the scaling vector field $S$ following a similar strategy. For a fixed $\mu=1,2,3$, we shall assume \eqref{MS.hyp.1}, and the main step is to show that \begin{equation} \label{S.energy.goal} \sum_\ss{|\alpha|+\nu\le 100-8\mu\\\nu\le\mu} \|S^\nu \partial^\alpha u'(t,\, \cdot\, )\|_2\lesssim \varepsilon (1+t)^{\tilde{C}(\varepsilon+\sigma)} \end{equation} for some $\tilde{C}>0$. We need first to establish \eqref{dt.bound}. 
Noticing that, for $M\le 100-8\mu$, \begin{multline*} \sum_\ss{j+\nu\le M\\\nu\le\mu} \Bigl(|\tilde{S}^\nu \partial_t^j L_\gamma u| + |[\tilde{S}^\nu \partial_t^j, \gamma^{kl}\partial_k\partial_l]u|\Bigr) \\\lesssim \sum_{|\alpha|\le 40} |\partial^\alpha u'| \sum_\ss{j+\nu\le M-1\\\nu\le\mu} |\tilde{S}^\nu\partial_t^j \partial^2 u| +\sum_\ss{|\alpha|+\nu\le M-40\\\nu\le\mu} |S^\nu \partial^\alpha u'| \sum_\ss{|\alpha|+\nu\le M \\\nu\le\mu-1} |S^\nu \partial^\alpha u'|, \end{multline*} it follows from elliptic regularity, \eqref{weighted.Sobolev}, and \eqref{II} that \begin{multline*} \sum_\ss{j+\nu\le M\\\nu\le\mu} \Bigl(\|\tilde{S}^\nu \partial_t^j L_\gamma u(t,\, \cdot\, )\|_2 + \|[\tilde{S}^\nu \partial_t^j, \gamma^{kl}\partial_k\partial_l]u(t,\, \cdot\, )\|_2\Bigr) \lesssim \frac{\varepsilon}{1+t} \sum_\ss{j+\nu\le M\\\nu\le\mu} \|\tilde{S}^\nu \partial_t^j u'(t,\, \cdot\, )\|_2 \\+\sum_\ss{|\alpha|+\nu\le M-38\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'(t,\, \cdot\, )\|_2^2 +\sum_\ss{|\alpha|+\nu\le M\\\nu\le\mu-1} \|\langle x\rangle^{-1/2} S^\nu \partial^\alpha u'(t,\, \cdot\, )\|_2^2. \end{multline*} Setting the sum of the last two terms above to be $H_{\nu,M-\nu}(t)$, it follows from \eqref{energy.L.d}, \eqref{II}, and \eqref{MS.hyp.1} that \begin{multline} \label{S.energy.1} \sum_\ss{|\alpha|+\nu\le M\\\nu\le\mu} \|S^\mu \partial^\alpha u'(t,\, \cdot\, )\|_2 \lesssim (1+t)^{A(\varepsilon+\sigma)}\varepsilon \\+ (1+t)^{A\varepsilon} \sum_\ss{|\alpha|+\nu\le M-38\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|_{L^2_tL^2_x(S_t)}^2 \\+(1+t)^{A\varepsilon} \sum_\ss{|\alpha|+\nu\le M\\\nu\le\mu-1} \int_0^t \|S^\nu \partial^\alpha u'(s,\, \cdot\, )\|_{L^2(\{|x|<1\})}\:ds. \end{multline} To control the last term in \eqref{S.energy.1}, we apply \eqref{local.solution.estimate}, \eqref{v.estimate}, and \eqref{bdy.term} to see that it is $$\lesssim \varepsilon \log(2+t)(1+t)^{A\varepsilon} + (1+t)^{A\varepsilon}\sum_\ss{|\alpha|+\nu\le M+3\\\nu\le \mu-1} \int_0^t \int |S^\nu Z^\alpha Lu(s,y)|\:\frac{dy\:ds}{|y|},$$ where we have also applied Sobolev's lemma to bound the second term in the right of \eqref{bdy.term} by the preceding one. Since the second term above is $$\le \sum_\ss{|\alpha|+\nu\le M+4\\\nu\le\mu-1} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|^2_{L^2_tL^2_x(S_t)},$$ it follows from \eqref{MS.hyp.1} that \begin{multline*} \sum_\ss{|\alpha|+\nu\le M\\\nu\le\mu} \|S^\mu \partial^\alpha u'(t,\, \cdot\, )\|_2 \lesssim (1+t)^{A'(\varepsilon+\sigma)}\varepsilon \\+ (1+t)^{A\varepsilon} \sum_\ss{|\alpha|+\nu\le M-38\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|_{L^2_tL^2_x(S_t)}^2. \end{multline*} For $M\le 40$, this yields \eqref{S.energy.goal}. For $M\ge 40$, we again argue inductively using Lemma \ref{MS.lemma}. Arguing as such proves \begin{multline*} \sum_\ss{|\alpha|+\nu\le 100-8\mu\\\nu\le\mu} \|S^\nu \partial^\alpha u'(t,\, \cdot\, )\|_2 + \sum_\ss{|\alpha|+\nu\le 97-8\mu\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu \partial^\alpha u'\|_{L^2_tL^2_x(S_t)} \\+\sum_\ss{|\alpha|+\nu\le 96-8\mu\\\nu\le\mu} \|S^\nu Z^\alpha u'(t,\, \cdot\, )\|_2 +\sum_\ss{|\alpha|+\nu\le 94-8\mu\\\nu\le\mu} \|\langle x\rangle^{-1/2} S^\nu Z^\alpha u'\|_{L^2_tL^2_x(S_t)} \\\lesssim \varepsilon (1+t)^{C\varepsilon+\sigma}, \quad \mu=0,1,2,3, \end{multline*} which implies \eqref{V} and \eqref{VI}. It remains only to prove \eqref{III}. 
Using \eqref{ptwise}, it is easy to see that the left side of \eqref{III} is dominated by the square of the left side of \eqref{VI}, from which \eqref{III} follows. This completes the proof of Theorem \ref{main.theorem}. \bigskip
\section{Introduction} Experiments with ultracold atoms have now entered the regime of strong interactions, thanks to the possibility of manipulating the $s$-wave scattering length $a$ between cold atoms with a magnetically induced Feshbach resonance \cite{Feshbach,revueStringariFermions}. This has led to a revolution in the study of the few-body problem, as one can now obtain, in a controllable way, a scattering length much larger (in absolute value) than the range $b$ (and the effective range) of the interaction potential. In particular, this has made it possible to confirm experimentally \cite{Efimov_manips,Jochim} the existence of the long-sought Efimov effect \cite{BH,Efimov1,Efimov2}: As shown by Efimov in the early 1970's, three particles interacting {\sl via} a short range potential with an infinite scattering length may exhibit an infinite number of trimer states with a geometric spectrum. The existence of an infinite number of bound states is common, even at the two-body level, for long range interactions, but it is quite intriguing for short range interaction potentials. This Efimov effect takes place for three (same spin state) bosons \cite{Efimov1}, but it is more general: it also occurs, for example, for two (same spin state) fermions and a third, distinguishable particle at least $13.607$ times lighter \cite{Efimov2,PetrovEfim}. On the experimental side, an increasing number of observable quantities are now at hand. For Efimov physics, the usual evidence of the emergence of an Efimov trimer state is a peak in the three-body loss rate as a function of the scattering length \cite{Efimov_manips}. Now radio-frequency spectroscopic techniques can give direct access to the trimer spectrum \cite{Jochim}. For strongly interacting Fermi gases (without Efimov effect) a very precise measurement of the atomic momentum distribution $n(\mathbf{k})$ was performed recently, so precise that it made it possible to see the large momentum tail $n(\mathbf{k})\sim C/k^4$, $k$ large but still smaller than $1/b$, and to quantitatively extract the coefficient $C$, whose values were satisfactorily compared to theory \cite{Jin_nk}. The same conclusion holds for the few-body numerical experiment of \cite{Blume}. Similarly, the first order coherence function $g^{(1)}$ of the atomic field over a distance $r$, a quantity measured for bosonic cold atoms \cite{Esslinger} but not yet for fermionic cold atoms, is related to the Fourier transform of $n(\mathbf{k})$ and is sensitive to the $1/k^4$ tail by a contribution that is non-differentiable with respect to the {\sl vector} $\mathbf{r}$ at $\mathbf{r}=\mathbf{0}$ \cite{lien_g1}, and that appeared in the many-body numerical experiment of \cite{Astra}. The occurrence of the $1/k^4$ tail in $n(\mathbf{k})$ is a direct consequence of two-body physics, that is of the binary zero-range interaction between two particles, and it holds in all spatial dimensions: According to Schr\"odinger's equation for the zero-energy scattering state $\phi(\mathbf{r})$ of two particles of relative coordinates $\mathbf{r}$, $\Delta_{\mathbf{r}} \phi(\mathbf{r})\propto \delta(\mathbf{r})$ for a contact (regularized Dirac delta) interaction \cite{contact_interaction}, so that in Fourier space $\tilde{\phi}(\mathbf{k}) \propto 1/k^2$ and $n(\mathbf{k})\propto |\tilde{\phi}(\mathbf{k})|^2$ scales as $1/k^4$. 
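As an elementary illustration of this last point (the following few lines are ours and are not part of the original argument; they simply regularize the unitary-limit pair wavefunction $\phi(r)\propto 1/r$ with a factor $e^{-\eta r}$), the $1/k^2$ scaling of $\tilde{\phi}$ can be checked symbolically:
\begin{verbatim}
import sympy as sp

r, k, eta = sp.symbols('r k eta', positive=True)
# After integrating exp(-i k.r) over the angles, which gives
# 4*pi*sin(k*r)/(k*r), the 3D Fourier transform of exp(-eta*r)/r
# reduces to the radial integral below.
radial = 4*sp.pi*sp.sin(k*r)*sp.exp(-eta*r)/k
phi_tilde = sp.integrate(radial, (r, 0, sp.oo))
print(sp.simplify(phi_tilde))        # 4*pi/(eta**2 + k**2)
print(sp.limit(phi_tilde, eta, 0))   # 4*pi/k**2, so that n(k) ~ 1/k^4
\end{verbatim}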
By contrast, the coefficient $C$, called the contact, depends on the many-body properties, and can be related to the derivative of the gas mean energy (or mean free energy at non-zero temperature) with respect to the scattering length, as was shown first for bosons in one dimension \cite{Olshanii}, then for spin 1/2 fermions in three dimensions \cite{Tan1,Braaten_contact,Tarruell}, and for bosonic or fermionic, three-dimensional or two-dimensional systems, in \cite{tangen}. In this paper, we anticipate that experimentally, it may be possible to measure with high precision the atomic momentum distribution in systems subjected to the Efimov effect, for example in a Bose gas with a large scattering length \cite{HammerBosons,Cornell_Bose,Hulet,Chevy}. To be specific, we consider in the center of mass frame the Efimov trimer states for three bosons interacting with infinite scattering length. After recalling the expression of the three-body wavefunction in section \ref{sec:wf}, we obtain the expression of the momentum distribution in terms of integrals over a single momentum vector in section \ref{sec:nk}, see Eqs.(\ref{eq:decomp},\ref{eq:nI},\ref{eq:nII},\ref{eq:nIII},\ref{eq:nIV}). As illustrated in section \ref{sec:appli}, this makes it possible to perform a very precise numerical evaluation of the momentum distribution for all values of the single-particle wavevector $\mathbf{k}$, and to obtain analytically the large momentum behavior of $n(\mathbf{k})$: In addition to the expected $C/k^4$ term at large $k$, we find an unexpected $1/k^5$ subleading term, which is a direct and generic signature of Efimov physics \cite{cluster_Tan}, see (\ref{eq:nk_efi}). Another, more formal, consequence of this $1/k^5$ subleading term is that the general expression giving the energy as a functional of the momentum distribution $n(\mathbf{k})$, derived for the non-Efimovian case in \cite{Tan2} and extended (with the same form) to the Efimovian case in \cite{Leyronas}, turns out to be invalid in the Efimovian case \cite{note2D}. We conclude in section \ref{sec:conclusion}. \section{Normalized wavefunction of an Efimov trimer} \label{sec:wf} \subsection{Three-body state in position space} In this subsection we recall the wavefunction of an Efimov trimer and give the expression of its normalization constant. We consider an Efimov trimer state for three same-spin-state bosons of mass $m$ interacting {\sl via} a zero range potential with infinite scattering length. In order to avoid formal normalisability problems, it is convenient to imagine that the Efimov trimer is trapped in an arbitrarily weak harmonic potential, that is with a ground state harmonic oscillator length $a_{\rm ho}$ arbitrarily larger than the trimer size \cite{general}. Since the center of mass of the system is separable in a harmonic potential, this fixes the normalisability problem without affecting the internal wavefunction of the trimer in the limit $a_{\rm ho}\to +\infty$. In this limit, the energy of the trimer is equal to the free space energy \begin{equation} E_{\rm trim} = - \frac{\hbar^2 \kappa_0^2}{m}, \ \ \ \kappa_0>0. 
\end{equation} According to Efimov's asymptotic, zero-range theory \cite{Efimov1}, \begin{equation} \label{eq:kappa0} \kappa_0 = \frac{\sqrt{2}}{R_t} e^{-\pi q/|s_0|} e^{\mbox{\scriptsize Arg}\,\Gamma(1+s_0)/|s_0|} \end{equation} where $R_t>0$ is a length known as the three-body parameter \cite{def_Rt}, the quantum number $q$ is any integer in $\mathbb{Z}$, and the purely imaginary number $s_0 = i |s_0|$ is such that \begin{equation} \label{eq:def_ms0} |s_0| \cosh(\frac{|s_0|\pi}{2}) = \frac{8}{\sqrt{3}} \sinh(\frac{|s_0|\pi}{6}), \end{equation} so that $|s_0|=1.00623782510\ldots$ The corresponding three-body wavefunction $\Psi$ may be written for $\kappa_0 a_{\rm ho}\to +\infty$ as \begin{multline} \label{eq:etat} \Psi(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3) =\psi_{\rm CM}(\mathbf{C}) \left[ \psi(r_{12},\frac{|2\mathbf{r}_3-(\mathbf{r}_1+\mathbf{r}_2)|}{\sqrt{3}})\right. \\ \left. +\psi(r_{23},\frac{|2\mathbf{r}_1-(\mathbf{r}_2+\mathbf{r}_3)|}{\sqrt{3}}) + \psi(r_{31},\frac{|2\mathbf{r}_2-(\mathbf{r}_3+\mathbf{r}_1)|}{\sqrt{3}}) \right], \end{multline} where $\mathbf{C}=(\mathbf{r}_1+\mathbf{r}_2+\mathbf{r}_3)/3$ is the center of mass position of the three particles and the parameterization of $\psi$ is related to the Jacobi coordinates $\mathbf{r}=\mathbf{r}_2-\mathbf{r}_1$ and $\mbox{\boldmath$\rho$}=[2\mathbf{r}_3-(\mathbf{r}_1+\mathbf{r}_2)]/\sqrt{3}$. In our expression of $\Psi$, $\psi_{\rm CM}$ is the Gaussian wavefunction of the center of mass ground state in the harmonic trap, normalized to unity, and $\psi$ is a Faddeev component of the free space trimer wavefunction. The explicit expression of $\psi$ is known \cite{Efimov1}: \begin{equation} \label{eq:psi_F} \psi(r,\rho) = \frac{\mathcal{N}_\psi}{\sqrt{4\pi}} \frac{K_{s_0}(\kappa_0\sqrt{r^2+\rho^2})}{(r^2+\rho^2)/2} \frac{\sin[s_0(\frac{\pi}{2}-\alpha)]}{\sin(2\alpha)} \end{equation} where $K_{s_0}$ is a Bessel function and $\alpha=\mbox{atan}(r/\rho)$. The normalization factor $\mathcal{N}_\psi$ ensuring that $||\Psi||^2=1$ was not calculated in \cite{Efimov1}. To obtain its explicit expression, one first performs the change of variables $(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)\to (\mathbf{C}, \mbox{\boldmath$\rho$}, \mathbf{r})$, whose Jacobian is $D(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)/D(\mathbf{C},\mbox{\boldmath$\rho$},\mathbf{r})=(-\sqrt{3}/2)^3$. To integrate over $\mathbf{r}$ and $\mbox{\boldmath$\rho$}$ one then introduces hyperspherical coordinates in which the wavefunction separates; one then faces known integrals on the hyperradius \cite{Gradstein} and on the hyperangles \cite{Efimov93}. This leads to \cite{WernerThese}: \begin{multline} \label{eq:norma} |\mathcal{N}_\psi|^{-2} = \left(\frac{\sqrt{3}}{2}\right)^3 \frac{3\pi^2}{2\kappa_0^2 \cosh(\frac{|s_0|\pi}{2})} \\ \times \left[\cosh(\frac{|s_0|\pi}{2}) +\frac{|s_0|\pi}{2} \sinh(\frac{|s_0|\pi}{2}) -\frac{4\pi}{3\sqrt{3}}\cosh(\frac{|s_0|\pi}{6})\right]. \end{multline} \subsection{Three-body state in momentum space} To obtain the momentum distribution for the Efimov trimer, we need to evaluate the Fourier transform of the trimer wavefunction $\Psi$ given by (\ref{eq:etat}). Rather than directly using (\ref{eq:psi_F}), we take advantage of the fact that, for contact interactions, the Faddeev component $\psi$ obeys the non-interacting Schr\"odinger's equation with a source term. 
With the change to Jacobi coordinates, the Laplace operator in the coordinate space of dimension nine reads $\sum_{i=1}^{3} \Delta_{\mathbf{r}_i} = \frac{1}{3}\Delta_{\mathbf{C}} +2\left[\Delta_{\mathbf{r}}+\Delta_{\mbox{\boldmath$\rho$}}\right]$ so that \begin{equation} \label{eq:Schr} -\left[\kappa_0^2-\Delta_{\mathbf{r}}-\Delta_{\mbox{\scriptsize\boldmath$\rho$}}\right] \psi(r,\rho) = \delta(\mathbf{r}) B(\rho). \end{equation} The source term in the right hand side originates from the fact that \begin{equation} \psi(r,\rho) \underset{r\to 0}{\sim} -\frac{B(\rho)}{4\pi r} \end{equation} for a fixed $\rho$, this $1/r$ divergence coming from the replacement of the interaction potential by the Bethe-Peierls contact condition. Taking the Fourier transform of (\ref{eq:Schr}) over $\mathbf{r}$ and $\mbox{\boldmath$\rho$}$ leads to \begin{equation} \label{eq:psit} \tilde{\psi}(k,K) = -\frac{\tilde{B}(K)}{k^2+K^2+\kappa_0^2}, \end{equation} where the Fourier transform is defined as $\tilde{B}(K) \equiv \int d^3\rho\, e^{-i\mathbf{K}\cdot\mbox{\scriptsize\boldmath$\rho$}} B(\rho)$. $B(\rho)$ is readily obtained from (\ref{eq:psi_F}) by taking the limit $r\to 0$: \begin{equation} B(\rho) = -\mathcal{N}_\psi (4\pi)^{1/2} i \sinh(\frac{|s_0|\pi}{2}) \frac{K_{s_0}(\kappa_0\rho)}{\rho}. \end{equation} The Fourier transform of this expression is known, see relation 6.671(5) in \cite{Gradstein}, so that \begin{multline} \label{eq:Bt} \!\!\!\!\!\! \tilde{B}(K) = \frac{-2\pi^{5/2}\mathcal{N}_\psi}{K (K^2+\kappa_0^2)^{1/2}} \left\{ \left[\frac{(K^2+\kappa_0^2)^{1/2}+K}{\kappa_0}\right]^{s_0}\!\!\!\!-\mbox{c.c.}\right\} \end{multline} where $\mbox{c.c.}$ stands for the complex conjugate. Note that the expression between the curly brackets simply reduces to $2i\sin(|s_0|\alpha)$ if one sets $K=\kappa_0 \sinh \alpha$. What we shall need is the large $K$ behavior of $\tilde{B}(K)$. Expanding (\ref{eq:Bt}) in powers of $\kappa_0/K$ gives \begin{equation} \label{eq:agk} \tilde{B}(K) = \mathcal{N}_\psi \frac{2\pi^{5/2}}{K^2} \left[(2K/\kappa_0)^{-s_0}-\mbox{c.c.}\right] +O(1/K^4). \end{equation} The last step to obtain the trimer state vector in momentum space is to take the Fourier transform of (\ref{eq:etat}), using the appropriate Jacobi coordinates for each Faddeev component, or simply by Fourier transforming the first Faddeev component using the coordinates $(\mathbf{C},\mathbf{r},\mbox{\boldmath$\rho$})$ given above and by performing circular permutations on the particle labels. This gives \begin{multline} \label{eq:etatf} \tilde{\Psi}(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3)=\left(\frac{\sqrt{3}}{2}\right)^3 \tilde{\psi}_{\rm CM}(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)\\ \times\Big[ \tilde{\psi}\left(\frac{|\mathbf{k}_2-\mathbf{k}_1|}{2}, \frac{|2\mathbf{k}_3-(\mathbf{k}_1+\mathbf{k}_2)|}{2\sqrt{3}}\right) +(231)+ (312)\Big], \end{multline} where the notation $(ijk)$ means that the indices $1,2,3$ have been replaced by $i,j,k$ respectively. \section{Integral expression of the momentum distribution} \label{sec:nk} To obtain the momentum distribution for an Efimov trimer state, it remains to integrate over $\mathbf{k}_3$ and $\mathbf{k}_2$ the modulus square of the Fourier transform (\ref{eq:etatf}) of the trimer wavefunction. 
In the limit $\kappa_0 a_{\rm ho}\to +\infty$ where one suppresses the harmonic trapping, one can set \begin{equation} |\tilde{\psi}_{\rm CM}(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)|^2 = (2\pi)^3 \delta(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3) \end{equation} so that the trimer is at rest in all that follows. Integration over $\mathbf{k}_3$ is then straightforward: \begin{multline} \label{eq:nk_long} n(\mathbf{k}_1) = 3\left(\frac{\sqrt{3}}{2}\right)^6 \int\frac{d^3 k_2}{(2\pi)^3} \Big| \tilde{\psi}(\frac{|\mathbf{k}_2-\mathbf{k}_1|}{2},\frac{\sqrt{3}|\mathbf{k}_1+\mathbf{k}_2|}{2}) \\ + \tilde{\psi}(|\mathbf{k}_2+\frac{1}{2}\mathbf{k}_1|,\frac{\sqrt{3}}{2}k_1) + \tilde{\psi}(|\mathbf{k}_1+\frac{1}{2}\mathbf{k}_2|,\frac{\sqrt{3}}{2}k_2) \Big|^2. \end{multline} The factor $3$ in the right hand side results from the fact that, as e.g. in \cite{Tan1}, we normalize the momentum distribution $n(\mathbf{k})$ to the total number of particles (rather than to unity): \begin{equation} \int \frac{d^3k}{(2\pi)^3} \, n(\mathbf{k}) = 3. \end{equation} Also note that the sum of the squares of the arguments of $\tilde{\psi}$ is constant and equal to $k_1^2+k_2^2+\mathbf{k}_1\cdot\mathbf{k}_2$ for each term in the right hand side of (\ref{eq:nk_long}). When using (\ref{eq:psit}), one can thus put the denominator in (\ref{eq:psit}) as a common denominator, to obtain \begin{multline} \label{eq:nkb} \!\!\!\!\!\! n(\mathbf{k}_1) = \!\! \int\!\! \frac{d^3 k_2}{(2\pi)^3} \frac{\left[\tilde{B}(\frac{\sqrt{3}}{2}|\mathbf{k}_1+\mathbf{k}_2|) +\tilde{B}(\frac{\sqrt{3}}{2}k_1) + \tilde{B}(\frac{\sqrt{3}}{2}k_2) \right]^2}{(4^3/3^4)(k_1^2+k_2^2+\mathbf{k}_1\cdot\mathbf{k}_2+\kappa_0^2)^2}. \end{multline} For simplicity, we have assumed that the normalization factor $\mathcal{N}_\psi$ is purely imaginary, so that $\tilde{B}(K)$ is a real quantity. In the above expression for $n(\mathbf{k}_1)$, the only ``nasty'' contribution is $\tilde{B}(\sqrt{3}|\mathbf{k}_1+\mathbf{k}_2|/2)$; the other contributions are ``nice'' since they only depend on the moduli $k_1$ and $k_2$. Expanding the square in the numerator of (\ref{eq:nkb}), one gets six terms, three squared terms and three crossed terms. The change of variable $\mathbf{k}_2=-(\mathbf{k}'_2+\mathbf{k}_1)$ allows one, in one of the squared terms and in one of the crossed terms, to transform a nasty term into a nice term. What remains is a nasty crossed term that cannot be turned into a nice one; in that term, as a compromise, one performs the change of variable $\mathbf{k}_2=-(\mathbf{k}_2'+\mathbf{k}_1/2)$. We finally obtain the momentum distribution as the sum of four contributions, \begin{equation} \label{eq:decomp} n(\mathbf{k}_1) = n_{\rm I}(\mathbf{k}_1) + n_{\rm II}(\mathbf{k}_1) + n_{\rm III}(\mathbf{k}_1) + n_{\rm IV}(\mathbf{k}_1), \end{equation} with \begin{eqnarray} \label{eq:nI} n_{\rm I}(\mathbf{k}_1) &\!\!\! =\!\! & \frac{3^4}{4^3}\!\! \int\!\! \frac{d^3 k_2}{(2\pi)^3} \frac{\tilde{B}^2(\frac{\sqrt{3}}{2}k_1)}{(k_1^2+k_2^2+\mathbf{k}_1\cdot\mathbf{k}_2+\kappa_0^2)^2} \\ \label{eq:nII} n_{\rm II}(\mathbf{k}_1) &\!\!\! =\!\! & \frac{3^4}{4^3}\!\! \int\!\! \frac{d^3 k_2}{(2\pi)^3} \frac{2\tilde{B}^2(\frac{\sqrt{3}}{2}k_2)}{(k_1^2+k_2^2+\mathbf{k}_1\cdot\mathbf{k}_2+\kappa_0^2)^2} \\ \label{eq:nIII} n_{\rm III}(\mathbf{k}_1) &\!\!\! =\!\! & \frac{3^4}{4^3}\!\! \int\!\! 
\frac{d^3 k_2}{(2\pi)^3} \frac{4 \tilde{B}(\frac{\sqrt{3}}{2}k_1) \tilde{B}(\frac{\sqrt{3}}{2}k_2)} {(k_1^2+k_2^2+\mathbf{k}_1\cdot\mathbf{k}_2+\kappa_0^2)^2} \\ \label{eq:nIV} n_{\rm IV}(\mathbf{k}_1) &\!\!\! =\!\! & \frac{3^4}{4^3}\!\! \int\!\! \frac{d^3 k_2}{(2\pi)^3} \frac{2\tilde{B}(\frac{\sqrt{3}}{4}|2\mathbf{k}_2+\mathbf{k}_1|)\tilde{B}(\frac{\sqrt{3}}{4}|2\mathbf{k}_2-\mathbf{k}_1|)} {(\kappa_0^2+k_2^2+\frac{3}{4}k_1^2)^2}. \nonumber \\ && \end{eqnarray} An interesting question is whether one can go beyond the integral expressions Eqs.(\ref{eq:nI},\ref{eq:nII},\ref{eq:nIII},\ref{eq:nIV}), that is, whether one can obtain an explicit expression for the momentum distribution, at most in terms of special functions. The contribution $n_{\rm I}(\mathbf{k}_1)$ is straightforward to calculate: \begin{equation} \label{eq:nI_expl} n_{\rm I}(\mathbf{k}_1) = \frac{\sqrt{3}}{4\pi} \left(\frac{3}{4}\right)^{3} \frac{\tilde{B}^2(\sqrt{3}k_1/2)}{(k_1^2+4\kappa_0^2/3)^{1/2}}. \end{equation} The contribution $n_{\rm II}(\mathbf{k}_1)$ is also explicitly calculable by performing the change of variable $k_2=(2/\sqrt{3})\kappa_0\sinh\alpha$ and using the identity that can be derived from contour integration: \begin{equation} \int_{-\infty}^{+\infty} d\alpha\, \frac{e^{is\alpha}}{\cosh\alpha -\cosh\alpha_0} = \frac{2\pi\sin[s(i\pi-\alpha_0)]}{\sinh\alpha_0 \sinh(s\pi)}, \end{equation} where $s$ is any real number and $\alpha_0$ is any complex number with $0<\mbox{Im}\, \alpha_0 < \pi$. This also allows one to obtain an explicit expression for $n_{\rm III}(\mathbf{k}_1)$ if one further applies integration by parts, integrating the factor $\sin(|s_0|\alpha)$. We do not, however, give the resulting expressions since, contrary to the first three contributions to $n(\mathbf{k}_1)$, the contribution $n_{\rm IV}(\mathbf{k}_1)$ in (\ref{eq:nIV}) blocked our attempt to calculate $n(\mathbf{k}_1)$ explicitly. For $\mathbf{k}_1=\mathbf{0}$, however, it becomes equal to the contribution $n_{\rm II}$. $n(\mathbf{k}_1=0)$ can thus be evaluated explicitly in terms of $\kappa_0$ and $s_0$, see Appendix F in \cite{tangen}. In numerical form this gives \begin{equation} \label{eq:n_ori} n(\mathbf{k}=\mathbf{0}) = \frac{55.43379775608\ldots}{\kappa_0^3}. \end{equation} \section{Applications} \label{sec:appli} \subsection{Numerical evaluation of $n(\mathbf{k})$ at all $k$} The integral expression of $n(\mathbf{k})$ derived in section \ref{sec:nk} allows a straightforward and very precise numerical calculation of the single-particle momentum distribution for an infinite scattering length Efimov trimer, once all the angular integrations that can be performed analytically have been carried out in spherical coordinates with polar axis along $\mathbf{k}_1$. The result is shown for low values of $k$ in Fig.\ref{fig:nk}a, and for high values of $k$ in Fig.\ref{fig:nk}b. In particular, Fig.\ref{fig:nk}b was constructed to show how $n(\mathbf{k})$ approaches the asymptotic behavior (\ref{eq:nk_efi}) derived in the next subsection, that is, to reveal the existence of a $1/k^5$ sub-leading oscillating term. Note that the accuracy of the numerics may be tested from (\ref{eq:n_ori}) and from the explicit analytical expressions of $n_{\rm I}$ (given in (\ref{eq:nI_expl})), of $n_{\rm II}$ and of $n_{\rm III}$ (not given). 
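For instance, the value (\ref{eq:n_ori}) can be recovered with a few lines of Python. The following minimal script is ours and purely illustrative: it solves (\ref{eq:def_ms0}) for $|s_0|$, uses the normalization (\ref{eq:norma}) and the Fourier transform (\ref{eq:Bt}) written for a purely imaginary $\mathcal{N}_\psi$, and evaluates the $\mathbf{k}_1=\mathbf{0}$ limits of Eqs.(\ref{eq:nI},\ref{eq:nII},\ref{eq:nIII}), with $n_{\rm IV}(\mathbf{0})=n_{\rm II}(\mathbf{0})$ as noted above, in units where $\kappa_0=1$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

kap = 1.0   # units such that kappa_0 = 1
# |s0| from its defining transcendental equation
s0 = brentq(lambda s: s*np.cosh(s*np.pi/2)
            - 8/np.sqrt(3)*np.sinh(s*np.pi/6), 0.5, 2.0)

# |N_psi| from the normalization condition
bracket = (np.cosh(s0*np.pi/2) + s0*np.pi/2*np.sinh(s0*np.pi/2)
           - 4*np.pi/(3*np.sqrt(3))*np.cosh(s0*np.pi/6))
Npsi = 1/np.sqrt((np.sqrt(3)/2)**3*3*np.pi**2
                 / (2*kap**2*np.cosh(s0*np.pi/2))*bracket)

def B(K):   # Fourier transform of the source term, real for imaginary N_psi
    return 4*np.pi**2.5*Npsi*np.sin(s0*np.arcsinh(K/kap))/(K*np.sqrt(K**2+kap**2))

B0 = 4*np.pi**2.5*Npsi*s0/kap**2   # K -> 0 limit of B(K)
nI = (81/64)*B0**2/(8*np.pi*kap)   # k1 = 0 value of the first contribution
nII = (81/64)/np.pi**2*quad(
    lambda k: k**2*B(np.sqrt(3)*k/2)**2/(k**2+kap**2)**2, 0, np.inf)[0]
nIII = (81/64)*2*B0/np.pi**2*quad(
    lambda k: k**2*B(np.sqrt(3)*k/2)/(k**2+kap**2)**2, 0, np.inf)[0]
print(nI + 2*nII + nIII)   # should be close to 55.434, i.e. n(0)*kappa_0^3
\end{verbatim}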
\begin{figure*}[htb] \includegraphics[width=0.445\linewidth,clip=]{fig1a.eps} \hfill \includegraphics[width=0.46\linewidth,clip=]{fig1b.eps} \caption{For a free space Efimov trimer at rest composed of three bosonic particles of mass $m$ interacting {\sl via} a zero-range, infinite scattering length potential, single-particle momentum distribution $n(\mathbf{k})$ as a function of $k$. (a) Numerical calculation from the expression for $n(\mathbf{k})$ appearing in Eq.(\ref{eq:decomp}). (b) Numerical calculation (solid line) and asymptotic behavior (\ref{eq:nk_efi}) (dashed line), with the horizontal axis in log scale. The unit of momentum is $\kappa_0$, such that the trimer energy is $-\hbar^2\kappa_0^2/m$. \label{fig:nk}} \end{figure*} \subsection{Large momentum behavior of $n(\mathbf{k})$} Starting from the integral representation Eqs.(\ref{eq:nI},\ref{eq:nII},\ref{eq:nIII},\ref{eq:nIV}), we show in the Appendix \ref{app:nk_high} that the single-particle momentum distribution has the asymptotic expansion at large wavevectors: \begin{equation} n(\mathbf{k})\underset{k\to\infty}{=} \frac{C}{k^4}+ \frac{D}{k^5} \cos\left[2|s_0|\ln(\sqrt{3}k/\kappa_0)+\varphi\right]+ \ldots \label{eq:nk_efi} \end{equation} where we recall that the trimer energy is $E_{\rm trim}=-\hbar^2 \kappa_0^2/m$, and the quantities $C$, $D$ and $\varphi$ derived in the Appendix \ref{app:nk_high} are given by \begin{eqnarray} C/\kappa_0&=& 8\pi^2 \sinh(|s_0|\pi/2) \tanh(|s_0|\pi)/\Big[\cosh(\frac{|s_0|\pi}{2}) \nonumber \\ \label{eq:Cexact} && +\frac{|s_0|\pi}{2} \sinh(\frac{|s_0|\pi}{2}) - \frac{4\pi}{3\sqrt{3}} \cosh(\frac{|s_0|\pi}{6})\Big] \\ &=& 53.09722846003081\ldots \\ D/\kappa_0^2 &\simeq& -89.26260 \\ \varphi &\simeq& -0.8727976. \end{eqnarray} The crucial point is that $D\neq 0$: Due to the Efimov effect, the momentum distribution has a slowly decaying $O(1/k^5)$ oscillatory subleading tail. \subsection{Breakdown of the usual energy-momentum distribution relation} In \cite{Leyronas} it was proposed that the expression of the energy as a functional of the momentum distribution, derived in \cite{Tan2} for equal mass spin 1/2 fermions, also holds for bosons (apart from the appropriate change of numerical factors). In the present case of a free space Efimov trimer at rest with an infinite scattering length, the energy formula of \cite{Leyronas} reduces to \begin{equation} \label{eq:leyronas} E_{\rm trim} \stackrel{?}{=} \int \frac{d^3k}{(2\pi)^3} \frac{\hbar^2 k^2}{2m} \Big[n(\mathbf{k})-\frac{C}{k^4}\Big]. \end{equation} We have however put a question mark, because the asymptotic expansion (\ref{eq:nk_efi}) implies that the integral in (\ref{eq:leyronas}) is not well-defined: After the change of variables $x=\ln(\sqrt{3}k/\kappa_0)$, the integrand behaves for $x\to+\infty$ as a linear superposition of $e^{2i |s_0| x}$ and $e^{-2i |s_0| x}$, that is as a periodic function of $x$ oscillating around zero. This was overlooked in~\cite{Leyronas}. At first sight, however, this does not look too serious: One often argues, when one faces the integral of such an oscillating function of zero mean, that the oscillations at infinity simply average to zero. More precisely, let us define the cut-off dependent energy functional \begin{equation} E(\Lambda) = \int_{k<\Lambda} \frac{d^3\!k}{(2\pi)^3} \frac{\hbar^2 k^2}{2m} \left[n(\mathbf{k})-\frac{C}{k^4}\right], \label{eq:elam} \end{equation} where the integration is limited to wavevectors $\mathbf{k}$ of modulus less than the cut-off. 
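As a quick illustration (ours, not taken from \cite{Leyronas} or \cite{Tan2}), the $D/k^5$ term of (\ref{eq:nk_efi}) contributes to the integrand of (\ref{eq:elam}) a piece $\propto \cos[2|s_0|\ln(\sqrt{3}k/\kappa_0)+\varphi]/k$, whose integral up to the cut-off is a bounded oscillation in $\ln\Lambda$ rather than a convergent quantity, as a one-line symbolic computation makes explicit:
\begin{verbatim}
import sympy as sp

k, Lam, s0 = sp.symbols('k Lambda s0', positive=True)
phi = sp.symbols('phi', real=True)
tail = sp.integrate(sp.cos(2*s0*sp.log(k) + phi)/k, (k, 1, Lam))
print(sp.simplify(tail))
# equals (sin(2*s0*log(Lambda) + phi) - sin(phi))/(2*s0): bounded and
# oscillating in log(Lambda), with no limit as Lambda -> oo
\end{verbatim}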
For $\Lambda\to+\infty$, $E(\Lambda)$ is asymptotically an oscillating function of the logarithm of $\Lambda$, oscillating around a mean value $\bar{E}$. The naive expectation would be that the trimer energy $E_{\rm trim}$ equals $\bar{E}$. This naive expectation is equivalent to the usual trick used to regularize oscillating integrals, consisting here in introducing a convergence factor $e^{-\eta \ln(\sqrt{3}k/\kappa_0)}$ in the integral without momentum cut-off and then taking the limit $\eta\to 0^+$: \begin{equation} \lim_{\eta\to 0^+} \int_{\mathbb{R}^3} \frac{d^3\!k}{(2\pi)^3} \frac{\hbar^2 k^2}{2m} \left[n(\mathbf{k})-\frac{C}{k^4}\right] e^{-\eta \ln(\sqrt{3}k/\kappa_0)} = \bar{E}. \label{eq:espoir} \end{equation} To test the naive regularization procedure, we performed a numerical calculation of $E(\Lambda)$, using the result (\ref{eq:decomp}) to perform a very accurate numerical calculation of $n(\mathbf{k})$. The result is shown as a solid line in Fig.\ref{fig:elam}. We also developed a more direct technique allowing a numerical calculation of $E(\Lambda)$ without the knowledge of $n(\mathbf{k})$, see Appendix \ref{app:brutale}: The corresponding results are represented as $+$ symbols in Fig.\ref{fig:elam} and are in perfect agreement with the solid line. As expected, $E(\Lambda)$ is asymptotically an oscillating function of the logarithm of $\Lambda$, oscillating around a mean value $\bar{E}$. To formalize, we introduce an arbitrary, non-zero value $k_{\rm min}$ of the momentum. We define $\delta n(k)= n(\mathbf{k})-C/k^4$ for $k<k_{\rm min}$, and for $k>k_{\rm min}$: \begin{equation} \delta n(k)=n(\mathbf{k}) - \left\{\frac{C}{k^4}+ \frac{D}{k^5} \cos\left[2|s_0|\ln(\sqrt{3}k/\kappa_0)+ \varphi\right]\right\}. \label{eq:defdn} \end{equation} The introduction of $k_{\rm min}$ ensures that the integral of $k^2 \delta n(k)$ over all $\mathbf{k}$ converges around $\mathbf{k}=\mathbf{0}$. The subtraction of the asymptotic behavior of $n(\mathbf{k})$ up to order $O(1/k^5)$ for $k>k_{\rm min}$ ensures that the integral of $k^2 \delta n(k)$ over all $\mathbf{k}$ converges at infinity. As a consequence we get in the large cut-off limit \begin{equation} E(\Lambda) = \bar{E} + \frac{\hbar^2 D}{8\pi^2 m |s_0|} \sin[2|s_0| \ln(\sqrt{3}\Lambda/\kappa_0)+\varphi] + O(1/\Lambda), \label{eq:elamasympt} \end{equation} with \begin{multline} \bar{E} = -\frac{\hbar^2 D}{8\pi^2 m |s_0|} \sin[2|s_0| \ln(\sqrt{3}k_{\rm min}/\kappa_0)+\varphi] \\ + \int_0^{+\infty} dk\, \frac{\hbar^2 k^4}{4\pi^2 m}\delta n(k). \label{eq:qveb} \end{multline} From this last equation (\ref{eq:qveb}) and the numerical calculations of $n(\mathbf{k})$ first up to $k=1000\kappa_0$ and then up to $k\simeq 5500\kappa_0$, we get two slightly different values of $\bar{E}$, which gives an estimate with an error bar \cite{astuce}: \begin{equation} \bar{E} \simeq 0.89397(3) E_{\rm trim}. \label{eq:nest} \end{equation} The conclusion is that $\bar{E}$ (significantly) differs from $E_{\rm trim}$: The naive regularization of the energy formula proposed in \cite{Leyronas} does not give the correct value of the trimer energy. An analytical representation of $\bar{E}$ in terms of single integrals can be obtained, see Appendix \ref{app:analy}. This analytical calculation gives a physical explanation of the failure of the naive regularization: It is inconsistent to add by hand the regularization factor $e^{-\eta \ln(\sqrt{3}k/\kappa_0)}$ at the last stage, that is in the integrand of (\ref{eq:leyronas}). 
To be consistent, the momentum cut-off function has to be introduced at the level of the three-body problem. Then the subleading $1/k^5$ term in the momentum distribution acquires a small non-oscillating component, of order $\eta/k^5$, that gives a non-zero contribution to the integral (\ref{eq:leyronas}) for $\eta\to 0^+$, since $\int_{k_{\rm min}}^{+\infty} dk\, k^4 \frac{\eta}{k^5} e^{-\eta \ln(\sqrt{3}k/\kappa_0)}$ does not tend to zero in this limit. The resulting integral representation of $\bar{E}$ confirms the numerical result and allows $\bar{E}$ to be evaluated with better precision: \begin{widetext} \begin{eqnarray} \bar{E}/ E_{\rm trim} &=& 1 - \frac{4\cosh(|s_0|\pi/2)}{\pi\sqrt{3}} \Big[\cosh(|s_0|\pi/2)+\frac{|s_0|\pi}{2}\sinh(|s_0|\pi/2)-\frac{4\pi}{3\sqrt{3}}\cosh(|s_0|\pi/6)\Big]^{-1} \nonumber \\ &\times& \int_0^{+\infty} dq\, \left\{ \frac{2\ln(q)\cos(|s_0|\ln q)}{1+q^2+q^4} +\frac{16 q \ln[(1+q^2)/4]}{|s_0|(1+q^2)(q^2+3)^2}\sin\left(|s_0|\ln\frac{1+q}{|1-q|}\right) \right. \nonumber \\ \label{eq:ebar_analy} && \left. + 4 \ln\left( \frac{1+q^2}{|1-q^2|}\right) \cos\left(|s_0|\ln\frac{1+q}{|1-q|}\right) \left[\frac{1}{2(q^2+3)}+\frac{\ln[2(1+q^2)/(q^2+3)]}{1-q^2}\right]\right\} \\ &=& 0.8939667780883\ldots \end{eqnarray} \end{widetext} \begin{figure}[htb] \includegraphics[width=0.8\linewidth,clip=]{fig2.eps} \caption{Cut-off dependent energy $E(\Lambda)$ as defined in (\ref{eq:elam}) for a free space infinite scattering length Efimov trimer with a zero-range interaction, as a function of the logarithm of the momentum cut-off $\Lambda$. Solid line: numerical result obtained {\sl via} a calculation of the momentum distribution $n(\mathbf{k})$. Symbols $+$: direct numerical calculation of $E(\Lambda)$ as exposed in the Appendix \ref{app:brutale}. Dashed sinusoidal line: asymptotic oscillatory behavior of $E(\Lambda)$ for large $\Lambda$, obtained by omitting $O(1/\Lambda)$ in (\ref{eq:elamasympt}). Dashed horizontal line: mean value $\bar{E}$ around which $E(\Lambda)$ oscillates at large $\Lambda$. The values of $\bar{E}$ obtained analytically (\ref{eq:ebar_analy}) and numerically (\ref{eq:nest}) are indistinguishable at the scale of the figure, and clearly deviate from the dotted line giving the true energy $E_{\rm trim}$ of the trimer, exemplifying the failure of an at first sight convincing application of an energy formula for bosons in three dimensions. The unit of momentum $\kappa_0$ is such that the true trimer energy is $E_{\rm trim}=-\hbar^2 \kappa_0^2/m$. \label{fig:elam}} \end{figure} \section{Conclusion} \label{sec:conclusion} We have calculated the single-particle momentum distribution $n(\mathbf{k})$ for the free space Efimov trimer states of same spin state bosons interacting {\sl via} a zero-range potential with an infinite scattering length. The asymptotic behavior of $n(\mathbf{k})$ at large wavevectors, which we determined with good precision, is of particular interest: In addition to the $C/k^4$ tail expected from two-body physics, it has a subleading oscillating $1/k^5$ contribution, which is a signature of Efimov physics that one may try to observe experimentally. We obtained the analytical expression for the coefficient $C$, see (\ref{eq:Cexact}). This coefficient can also be obtained by a direct calculation in position space, using the fact that $C$ is proportional to $\int d^3\rho\, |B(\rho)|^2$ \cite{WernerThese,tangen}.
This result allows the trimer energy to be calculated for a finite scattering length $a$, to first order in $1/a$, thanks to the relation \cite{tangen} \begin{equation} \left(\frac{\partial E_{\rm trim}}{\partial (-1/a)}\right)_{R_t} = \frac{\hbar^2C}{8\pi m} \end{equation} where the derivative is taken for a fixed value of the three-body parameter $R_t$. In other words, we obtained analytically the derivative at $-\pi/2$ of Efimov's universal function $\Delta(\xi)$ \cite{BH}. In numerical form, it gives $\Delta'(-\pi/2)=2.125850069373\ldots$, which refines the previously known numerical estimate $\simeq 2.12$ \cite{BH}. Furthermore, the existence of the $1/k^5$ subleading term leads to a failure of the relation proposed in \cite{Leyronas} expressing the energy as a functional of the single-particle momentum distribution, which was not obvious {\sl a priori}. We have considered here the particular case of Efimov trimers. The coefficient of the $1/k^5$ subleading term was however obtained in Appendix \ref{app:nk_high} by taking the zero-energy limit $\kappa_0\to 0$. We thus expect that the phenomenology of the $1/k^5$ subleading term persists, not only for any other three-body system subjected to Efimov physics (such as two fermions of mass $M$ and a lighter particle of mass $m$ with a mass ratio $M/m>13.607$ \cite{Efimov2,PetrovEfim}), but also for a macroscopic Bose gas, at least for strong enough interactions \cite{Cornell_Bose,Chevy} so as to make the subleading $1/k^5$ term sizeable and maybe accessible to measurements. More precisely, we conjectured in \cite{tangen} that there is a subleading oscillating $1/k^5$ term in the tail of the momentum distribution [as in Eq.(\ref{eq:nk_efi})] for any $N$-body quantum state subjected to the Efimov effect, that is, obeying three-body contact conditions involving the three-body parameter $R_t$, with a coefficient $D$ related to $\partial E/\partial(\ln R_t)$ through a simple proportionality factor. Moreover, this derivative was related in \cite{tangen} to a three-body analog of the contact, defined from the prefactor appearing in the three-body contact condition. \noindent {\bf Note:} After submission of this work, (i) the expression of the coefficient of the $1/k^5$ subleading tail of $n(\mathbf{k})$ in terms of the derivative of the energy with respect to the logarithm of the three-body parameter $R_t$, and (ii) the appropriate energy formula taking into account the Efimovian subleading $1/k^5$ term, appeared in \cite{platter_efim} for a Bose gas with an arbitrary number of particles. \acknowledgments {We thank S. Tan for useful discussions. F.W. is supported by NSF under Grant No. PHY-0653183. Y.C. is a member of IFRAF and acknowledges support from ERC Project FERLODIM N.228177.}
\section{Introduction} The problem of controlling a stochastic dynamical system to either aid or hinder the estimation of its time-varying state arises across numerous applications in automatic control, signal processing, and robotics. Applications in which the problem has been investigated in its \emph{active estimation} form to aid state estimation include active state estimation and dual control in automatic control \cite{Blackmore2008, Hu2004,Baglietto2007,Scardovi2007}, controlled sensing in signal processing and robotics \cite{Krishnamurthy2016,Zois2014,Zois2017,Krishnamurthy2020,Hoffmann2010}, and active simultaneous localisation and mapping (SLAM) in robotics \cite{Mu2016, Thrun2005, Roy1999a, Stachniss2005, Valencia2012,Roy2005}. Conversely, applications in which the problem has been investigated in its \emph{active obfuscation} form to hinder state estimation include privacy in cyber-physical systems \cite{Li2018,Li2019,Farokhi2020,Tanaka2017,Savas2020,Shateri2020}, and covert navigation in robotics \cite{Marzouqi2011,Hibbard2019}. Despite these many applications, few works have explicitly addressed active estimation or obfuscation of entire \emph{state trajectories}, with most instead focusing on aiding or hindering state estimation as it relates to the performance of Bayesian filters. Bayesian filters provide marginal state estimates given a history of observations and controls. However, in many applications such as target tracking and SLAM, (joint) state \emph{trajectory} estimates are of greater interest than marginal state estimates. For instance, in surveillance applications, it can be important to estimate or conceal not just where a target currently is, but from where it came and what points it visited. Similarly in SLAM, better estimates of the past robot trajectory help reconstruct a more accurate map of the environment. Motivated by such applications, in this paper we investigate novel approaches to active state estimation and obfuscation that explicitly relate to estimating or concealing entire state trajectories. \subsection{Related Work} Developing meaningful measures of state uncertainty (or estimation performance) that are tractable to optimise within standard stochastic optimal control frameworks such as partially observed Markov decision processes (POMDPs) is a key challenge in active estimation and obfuscation. The solution of standard POMDPs involves reformulating them as fully observed Markov decision processes (MDPs) in terms of a belief (or information) state corresponding to the state estimate provided by a Bayesian filter. Numerous algorithms exist for solving the resulting belief-state MDPs, with the vast majority relying on the fact that standard POMDPs have cost and cost-to-go functions that are concave or piecewise-linear concave (PWLC) in terms of the belief state (see \cite{Haugh2020,Walraven2019,Krishnamurthy2016,Kurniawati2008,Garg2019,Krishnamurthy2007} and references therein). The intrinsic relationship between Bayesian filters and belief-state approaches for solving POMDPs has resulted in state-uncertainty measures related to filter estimates dominating the literature of both active state estimation and obfuscation (see \cite{Krishnamurthy2007,Krishnamurthy2016,Krishnamurthy2020,Araya2010,Li2019} and references therein) --- with particular interest paid to state-uncertainty measures that can be expressed as concave or PWLC functions of the belief state (cf.\ \cite{Araya2010} and \cite[Chapter 8]{Krishnamurthy2016}). 
State-uncertainty measures frequently considered for active estimation include the error probabilities \cite{Krishnamurthy2007,Blackmore2008}, mean-squared error \cite{Zois2014,Zois2017,Krishnamurthy2007}, Fisher information \cite{Flayac2017} and entropy \cite{Krishnamurthy2007,Scardovi2007,Roy1999a,Thrun2005} of Bayesian filter estimates (see also \cite[Chapter 8]{Krishnamurthy2016} and references therein). Similarly, active obfuscation approaches such as \cite{Li2019} consider minimising the probability mass of filter estimates at the true states. Unfortunately, these popular state-uncertainty measures based on filter estimates are of limited use in describing and optimising the uncertainty associated with entire state trajectories, since they neglect temporal correlations between states that arise due to the state dynamics. Without consideration of temporal correlations, active estimation approaches may select actions that lead to highly random (or uncertain) state transitions, and active obfuscation approaches such as \cite{Li2019} leave open the possibility of adversaries accurately inferring states at isolated times and using correlations to estimate the entire trajectory via Bayesian smoother-like algorithms (e.g., fixed-interval Bayesian smoothers and the Viterbi algorithm). Bayesian smoother-like algorithms are concerned with inferring the states of partially observed stochastic systems given entire measurement and control trajectories. Unlike Bayesian filters, they are thus capable of exploiting correlations between past, present, and future measurements and controls to compute estimates (cf.\ \cite[Section 3.5]{Krishnamurthy2016}). Bayesian smoother-like algorithms have been studied over many decades and constitute key components in many target tracking (cf.~\cite{Bar-Shalom2001}) and robot SLAM (cf.~\cite{Thrun2005}) systems. The problem of controlling a system so as to either aid or hinder the estimation of its state trajectory with smoother-like algorithms has received limited attention, with most efforts confined to the robotics literature on active SLAM (cf.~\cite{Mu2016,Stachniss2005,Valencia2012}). Treatments in robotics have, however, avoided the use of state-uncertainty measures related to trajectories due to tractability concerns, and have instead resorted to sums of marginal uncertainty measures without consideration of temporal state correlations (cf.\ \cite{Stachniss2005,Valencia2012}). Indeed, few state-uncertainty measures explicitly related to entire trajectories or trajectory estimates from smoother-like algorithms have been investigated. Most recently, the problem of obfuscating entire state trajectories from \emph{any} conceivable estimator has been investigated by drawing on ideas from privacy in static settings (e.g., datasets) including differential privacy \cite{Sandberg2015,Farokhi2020,Hale2015} and information theory \cite{Nekouei2019,Farokhi2020,Tanaka2017,Murguia2021}. These works, however, sidestep complete POMDP treatments either by only increasing the state's unpredictability \cite{Savas2020, Hibbard2019} or by only degrading the measurements \cite{Nekouei2019,Tanaka2017,Murguia2021} (rather than a combination of the two). Furthermore, as noted in \cite{Fehr2018}, POMDPs for information-averse or obfuscation problems frequently involve cost and cost-to-go functions that are not concave in the belief state, and so have mostly been avoided until recently because no satisfying (approximate) solution techniques existed. 
\subsection{Contributions} In this paper, we investigate the conditional entropy of the state trajectory given measurements and controls as a \emph{tractable} state-uncertainty measure for both active estimation and obfuscation in POMDPs. We dub this conditional entropy the \emph{smoother entropy} since it plays a pivotal role in tight upper and lower bounds on the minimum achievable probability of error for any conceivable state-trajectory estimator (cf.\ \cite{Feder1994}), including Bayesian smoother-like algorithms. Prior literature has dismissed the smoother entropy as an intractable objective in POMDPs (cf.\ \cite{Stachniss2005,Valencia2012}), since it has not been shown to be a function of the POMDP belief state with structural properties (e.g.\ additivity and concavity in the belief state) amenable to the use of standard POMDP solution techniques (e.g., dynamic programming). However, by using the Marko-Massey theory of \emph{directed information} \cite{Marko1973,Massey1990,Kramer1998,Massey2005}, we show that there are multiple belief-state forms of the smoother entropy, with one form leading to a belief-state MDP reformulation of active estimation with concave cost and cost-to-go functions, and another form leading to a belief-state MDP reformulation of active obfuscation that also has concave cost and cost-to-go functions. These concavity results are surprising since active estimation involves minimising the smoother entropy whilst active obfuscation involves maximising it, and POMDP formulations of obfuscation have frequently been avoided due to non-concave cost and cost-to-go functions (cf.\ \cite{Fehr2018}). They are also practically important since they enable the use of standard POMDP (approximate) solution techniques. The key contributions of this paper are thus: \begin{enumerate} \item The investigation of the \emph{smoother entropy} as a tractable state-uncertainty measure to be minimised for active state estimation and maximised for active state obfuscation; \item The novel expression of the smoother entropy using the Marko-Massey theory of directed information, leading to two novel expressions for it in terms of the standard concept of the POMDP belief state; \item The formulation of our active estimation and obfuscation problems as belief-state MDPs, both surprisingly with concave cost and cost-to-go functions; and, \item The development of PWLC approximate solutions and their associated error bounds for our active estimation and obfuscation problems using standard POMDP techniques. \end{enumerate} Compared to our early work in \cite{Molloy2021,Molloy2021a}, significant extensions in this paper include: 1) Use of the Marko-Massey theory of directed information to unify the derivations of belief-state smoother entropy forms and enable comparison with the directed-information work of \cite{Tanaka2017,Nekouei2019}; 2) Characterisation of the structural properties of all belief-state MDP formulations of our active estimation and obfuscation problems; 3) Development of PWLC (approximate) solutions and their associated error bounds; and 4) Numerical and theoretical analysis examining the operational relationship between smoother-entropy optimisation and estimation error probabilities. With the exception of Lemma \ref{lemma:estConcave} and Theorem \ref{theorem:estConcave} (published in \cite{Molloy2021a} without detailed proofs), the technical results of this paper are new in their full generality. \subsection{Paper Organisation} This paper is structured as follows. 
In Section \ref{sec:problem}, we pose our active estimation and obfuscation problems. In Section \ref{sec:directedInformation}, we establish novel forms of the smoother entropy for POMDPs. In Sections \ref{sec:activeEst} and \ref{sec:activeObf}, we exploit the novel smoother entropy forms to reformulate our active estimation and obfuscation problems as belief-state MDPs, and in Section \ref{sec:approx} we present a bounded-error approach for solving them. We illustrate active estimation and obfuscation in examples inspired by privacy in cloud-based control (e.g., \cite{Tanaka2017}) and uncertainty-aware robot navigation (e.g., \cite{Roy1999, Thrun2005,Nardi2019}) in Section \ref{sec:results}. We provide conclusions in Section \ref{sec:conclusion}. \subsection{Notation} Random variables will be denoted by capital letters, and their realisations by lower case letters (e.g., $X$ and $x$). Sequences of random variables and their realisations will be denoted by capital and lower case letters, respectively, with superscripts denoting their length (e.g., $X^T \triangleq \{X_1, X_2, \ldots, X_T\}$ and $x^T \triangleq \{x_1, x_2, \ldots, x_T\}$). With a mild abuse of notation, the probability mass function (pmf) of a random variable $X$ (or its probability density function if it is continuous) will be written as $p(x)$, the joint pmf of $X$ and $Y$ as $p(x, y)$, and the conditional pmf of $X$ given $Y = y$ as $p(x|y)$ or $p(x | Y = y)$. For a function $f$ of $X$, the expectation of $f$ evaluated with $p(x)$ will be denoted $E_X [f(x)]$ (i.e., random variables in expectations will be denoted by lower case letters). The conditional expectation of $f$ evaluated with $p(x|y)$ will be similarly denoted $E[f(x) | y]$. With a common abuse of notation, $E_\mu [ \cdot ]$ is also used to indicate the dependence of an expectation on a policy $\mu$. The {\em pointwise} (discrete) entropy of $X$ given $Y = y$ will be written $H(X | y) \triangleq - \sum_{x} p(x|y) \log p(x|y)$ with the (average) conditional entropy of $X$ given $Y$ being $H(X | Y) \triangleq E_{Y} \left[ H(X|y) \right]$. The mutual information between $X$ and $Y$ is $I(X; Y) \triangleq H(X) - H(X | Y) = H(Y) - H(Y | X)$.\footnote{If $Y$ is continuous-valued, then $H(Y)$ ($H(Y|X)$) is replaced with the \emph{differential entropy} $h(Y)$ (resp.\ \emph{conditional differential entropy} $h(Y|X)$) \cite{Cover2006}.} The pointwise conditional mutual information of $X$ and $Y$ given $Z = z$ is $I(X; Y | z) \triangleq H(X | z) - H(X | Y, z)$ with the (average) conditional mutual information given by $I(X; Y | Z) \triangleq E_{Z} \left[ I(X; Y | z) \right]$. Where there is no risk of confusion, we will omit the adjectives ``pointwise'' and ``conditional''. \section{Problem Formulation and Approach} \label{sec:problem} In this section, we formulate novel active state estimation and state obfuscation problems with smoother-entropy costs. We then sketch our approach for solving them as POMDPs. \subsection{Active Estimation and Obfuscation Problems} Let $X_k$ for $k \geq 1$ be a discrete-time, first-order controlled Markov chain with a finite state space $\mathcal{X} \triangleq \{1, 2, \ldots, N\}$. Let the initial probability distribution of $X_1$ be the vector $\pi_0 \in \Delta^N$ with components $\pi_0(i) \triangleq P(X_1 = i)$ for $i \in \mathcal{X}$. The initial probability distribution belongs to the $N$-dimensional probability simplex $\Delta^N \triangleq \{\pi \in [0,1]^N : \sum_{i = 1}^N \pi(i) = 1\}$.
We shall let the (controlled) transition dynamics of $X_k$ be described by: \begin{align} \label{eq:stateProcess} A^{ij}(u) \triangleq p( X_{k+1} = i | X_k = j, U_k = u) \end{align} for $k \geq 1$ with the controls $U_k$ belonging to a finite set $\mathcal{U}$. The state process $X_k$ is (partially) observed through a stochastic measurement process $Y_k$ for $k \geq 1$ taking values in a (potentially continuous) metric space $\mathcal{Y}$. Given the state $X_k$ and control $U_{k-1}$, the measurements $Y_k$ are conditionally independent of previous states, controls and measurements. Thus we may define the measurement kernel \begin{align} \label{eq:obsProcess} B^{i} (Y_k, u) \triangleq p( Y_k | X_k = i, U_{k-1} = u) \end{align} for $k > 1$ with $B^{i}(Y_1) \triangleq p( Y_1 | X_1 = i)$. The measurement kernels are conditional probability density functions (pdfs) when the space $\mathcal{Y}$ is continuous, and conditional pmfs when $\mathcal{Y}$ is finite. The tuple $(X_k, Y_k, U_k)$ constitutes a controlled hidden Markov model (HMM) \cite{Krishnamurthy2016}. Controlled HMMs arise naturally in problems that involve selecting controls for the dual purpose of optimising both a \emph{system-performance measure} dependent on the state and control values (e.g.\ energy consumption) and a \emph{state-uncertainty measure} dependent on the uncertainty associated with the states (e.g.\ variance of state estimates). As a state-uncertainty measure, we consider the conditional entropy of the state trajectory $X^T$ given measurements $Y^T$ and controls $U^{T-1}$ for $T > 0$, i.e., \begin{align} \label{eq:condEntCriteria} H(X^T | Y^T, U^{T-1}) = E_{Y^T,U^{T-1}}[H(X^T | y^T, u^{T-1})]. \end{align} We shall refer to \eqref{eq:condEntCriteria} as the \emph{smoother entropy}. Our consideration of the smoother entropy as a state-uncertainty measure is motivated by $H(X^T | y^T, u^{T-1})$ in \eqref{eq:condEntCriteria} being the pointwise conditional entropy of the pmf $p(x^T | y^T, u^{T-1})$, which is the (joint) posterior distribution of concern in Bayesian state estimation --- with Bayesian smoothers computing its marginal pmfs $p(x_k | y^T, u^{T-1})$ for $1 \leq k \leq T$ and the Viterbi algorithm computing its mode (cf.\ \cite{Briers2010,Krishnamurthy2016}). Intuitively, the smaller (greater) the smoother entropy, the less (more) uncertain we expect state trajectory estimates from smoother-like algorithms to be. In the extreme case where the smoother entropy is zero, the state trajectory can be uniquely recovered from the record of measurements and controls. We therefore investigate the selection of controls to either minimise the smoother entropy for active estimation or maximise it for active obfuscation, whilst in both cases simultaneously minimising an arbitrary system-performance measure consisting of the sum of costs $c_k : \mathcal{X} \times \mathcal{U} \mapsto [0, \infty)$ for $1 \leq k < T$ and $c_T : \mathcal{X} \mapsto [0,\infty)$ for $k = T$.
Our \emph{Active Estimation} problem is thus to find a (potentially stochastic) policy $\mu = \left\{ \mu_{k} : 1 \leq k < T \right\}$, defined by the sequence of conditional probability distributions \begin{align*} U_k | (Y^k = y^k, U^{k-1} = u^{k-1}) \sim \mu_{k}(y^k, u^{k-1}) \end{align*} for $k \geq 1$, that minimises the smoother entropy $H(X^T | Y^T, U^{T-1})$ and the costs $c_k$ and $c_T$ by solving: \begin{align} \label{eq:activeEstimation} \begin{split} \inf_\mu \Bigg\{ &H(X^T | Y^T, U^{T-1}) \\ &\quad+ E_\mu \left[ c_T(x_T) + \sum_{k = 1}^{T-1} c_k \left(x_k, u_k\right) \right] \Bigg\} \end{split} \end{align} subject to the state and measurement kernels \eqref{eq:stateProcess} and \eqref{eq:obsProcess}. Here, the expectation $E_\mu [\cdot]$ is over the joint distribution of the states $X^{T}$, controls $U^{T-1}$, and measurements $Y^{T}$ under the policy $\mu$. Our active estimation problem is motivated by applications such as controlled sensing, target tracking, and robot exploration and SLAM, in which we wish to enhance the estimation of system state trajectories.\footnote{The trade-off between the estimation and control objectives can be tuned by including a positive coefficient in the costs $c_1, \ldots, c_T$.} Conversely, our \emph{Active Obfuscation} problem is to find a policy $\mu$ that maximises the smoother entropy $H(X^T | Y^T, U^{T-1})$ whilst minimising $c_k$ and $c_T$ by solving: \begin{align} \label{eq:activeObfuscation} \begin{split} \inf_\mu \Bigg\{ &-H(X^T | Y^T, U^{T-1}) \\ &\quad+ E_\mu \left[ c_T(x_T) + \sum_{k = 1}^{T-1} c_k \left(x_k, u_k\right) \right] \Bigg\} \end{split} \end{align} subject to the state and measurement kernels \eqref{eq:stateProcess} and \eqref{eq:obsProcess}. Our active obfuscation problem is motivated by applications where the aim is to prevent adversaries from estimating system state trajectories, for example, in privacy for cyber-physical systems and covert navigation in robotics. \subsection{Motivation and Operational Meaning} Beyond its interpretation as a measure of uncertainty, the smoother entropy is operationally meaningful in two ways. Firstly, it provides lower and upper bounds on the minimum probability of error for \emph{any} (potentially non-Bayesian) estimator of the state trajectory \cite{Feder1994}. Specifically, let the minimum error probability for any estimator (i.e.\ any function $f : \mathcal{Y}^T \times \mathcal{U}^{T-1} \mapsto \mathcal{X}^T$) be \begin{align*} \epsilon \triangleq \min_{\hat{X}^T \in \{ f : \mathcal{Y}^T \times \mathcal{U}^{T-1} \mapsto \mathcal{X}^T\}} P(X^T \neq \hat{X}^T). \end{align*} We note that the minimum error probability is achieved by maximum \emph{a posteriori} estimators such as the Viterbi algorithm \cite{Feder1994}. Then, Theorem 1 of \cite{Feder1994} gives that \begin{align*} \Phi^{-1}(H(X^T | Y^T, U^{T-1})) \leq \epsilon \leq \phi^{-1}(H(X^T | Y^T, U^{T-1})) \end{align*} where $\Phi^{-1}$ and $\phi^{-1}$ are the inverse functions of strictly monotonically increasing, convex, continuous functions $\Phi$ and $\phi$ defined in \cite{Feder1994}, and thus must themselves be strictly monotonically increasing. Minimising the smoother entropy for active estimation in \eqref{eq:activeEstimation} thus corresponds to minimising an upper bound on $\epsilon$, whilst maximising the smoother entropy for active obfuscation corresponds to maximising a lower bound on $\epsilon$.
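To make the cost in \eqref{eq:activeEstimation} and \eqref{eq:activeObfuscation} concrete, the smoother entropy \eqref{eq:condEntCriteria} can be computed by brute-force enumeration for small models. The following sketch does so for a hypothetical two-state controlled HMM under a fixed open-loop control sequence; all numerical values are illustrative assumptions, and the measurement kernel is taken control-independent for brevity.

\begin{verbatim}
import itertools
import numpy as np

# Hypothetical two-state controlled HMM (all numbers are illustrative).
N, M, T = 2, 2, 3                  # states, measurements, horizon
A = {0: np.array([[0.9, 0.2],      # A[u][i, j] = p(x_{k+1}=i | x_k=j, u_k=u)
                  [0.1, 0.8]]),
     1: np.array([[0.5, 0.5],
                  [0.5, 0.5]])}
B = np.array([[0.8, 0.3],          # B[y, i] = p(y_k=y | x_k=i); control-free here
              [0.2, 0.7]])
pi0 = np.array([0.5, 0.5])         # initial distribution of X_1
u_seq = [0, 1]                     # fixed open-loop controls u_1, ..., u_{T-1}

def joint(x, y):                   # p(x^T, y^T | u^{T-1})
    p = pi0[x[0]] * B[y[0], x[0]]
    for k in range(T - 1):
        p *= A[u_seq[k]][x[k + 1], x[k]] * B[y[k + 1], x[k + 1]]
    return p

# H(X^T | Y^T, u^{T-1}) = -sum_{x,y} p(x, y) log p(x | y), by enumeration.
H = 0.0
for y in itertools.product(range(M), repeat=T):
    pxy = {x: joint(x, y) for x in itertools.product(range(N), repeat=T)}
    py = sum(pxy.values())
    H -= sum(p * np.log(p / py) for p in pxy.values() if p > 0)
print(H)                           # smoother entropy (in nats) for this toy model
\end{verbatim}

Such enumeration is exponential in $T$, which motivates the belief-state reformulations developed in the sequel.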
Our consideration of the smoother entropy for active estimation contrasts with approaches that instead minimise only the marginal terminal entropy $H(X_T|Y^T, U^{T-1})$ (or equivalently maximise the telescoping sum of information gains $\sum_{k = 1}^{T-1} [H(X_{k} | Y^{k}, U^{k-1}) - H(X_{k+1} | Y^{k+1}, U^k)]$) \cite{Thrun2005, Roy2005}. It also contrasts with approaches that instead minimise the sum of marginal entropies $H(X_k|Y^k, U^{k-1})$ \cite{Krishnamurthy2007, Krishnamurthy2016, Araya2010, Nardi2019} or $H(X_k|Y^T, U^{T-1})$ \cite{Stachniss2005, Valencia2012} for $1 \leq k \leq T$. Specifically, approaches based on marginal entropies neglect correlations between consecutive states and so overestimate the smoother entropy since \begin{align} \label{eq:entropyBounds} \begin{split} \sum_{k = 1}^T H(X_k | Y^{k}, U^{k-1}) &\geq \sum_{k = 1}^T H(X_k | Y^{T}, U^{T-1})\\ &\geq H(X^T | Y^T, U^{T-1}), \end{split} \end{align} with equality holding only when the states are (temporally) independent. Our active estimation and obfuscation problems explicitly encourage exploitation of the temporal dependencies between states via optimisation of the smoother entropy. Finally, minimising (maximising) the smoother entropy $H(X^T | Y^T, U^{T-1})$ using the controls $U^{T-1}$ is, in general, not equivalent to maximising (minimising) the conditional mutual information $I(X^T; Y^T | U^{T-1}) = H(X^T | U^{T-1}) - H(X^T | Y^T, U^{T-1})$, which is often the goal in controlled sensing and optimal Bayesian experimental design (e.g.\ \cite{Hoffmann2010}). For example, whilst maximising $I(X^T; Y^T | U^{T-1})$ increases the dependence between the states and measurements, the states themselves could become more uncertain due to the term $H(X^T | U^{T-1})$. Indeed, the mutual information $I(X^T; Y^T | U^{T-1})$ is the \emph{reduction} in state uncertainty due to the measurements and controls (cf.\ \cite[p.\ 19]{Cover2006}) --- it is not an absolute measure of state uncertainty. \subsection{POMDP Solution Approach} To solve our active estimation \eqref{eq:activeEstimation} and obfuscation \eqref{eq:activeObfuscation} problems, let us define the belief state $\pi_{k} \in \Delta^N$ as the conditional distribution of the state $X_k$ given the measurement history $y^k$ and past controls $u^{k-1}$, that is, $\pi_{k}(i) \triangleq p(X_{k} = i | y^k, u^{k-1})$ for $1 \leq i \leq N$. Given the transition dynamics \eqref{eq:stateProcess} and measurement kernel \eqref{eq:obsProcess}, the belief state evolves via the Bayesian filter: \begin{align} \label{eq:bayesTemp} \pi_{k+1}(i) &= \dfrac{ B^i(y_{k+1}, u_{k}) \sum_{j = 1}^N \bar{\pi}_{k+1 | k} (i,j)}{\sum_{m = 1}^N\sum_{j = 1}^N B^m(y_{k+1},u_{k}) \bar{\pi}_{k+1 | k} (m,j)} \end{align} for $k \geq 1$ and all $1 \leq i \leq N$ where $\bar{\pi}_{k+1 | k}(i,j) \triangleq p(X_{k+1} = i, X_{k} = j | y^{k}, u^{k})$ is the joint predicted belief state given by \begin{align}\label{eq:bayesianPred} \bar{\pi}_{k+1 | k}(i,j) &= A^{ij}(u_{k}) \pi_{k}(j) \end{align} for $1 \leq i,j \leq N$. The Bayesian filter \eqref{eq:bayesTemp} is a mapping of $\pi_k$, $u_k$ and $y_{k+1}$ to $\pi_{k+1}$ so we shall write it compactly as \begin{align} \label{eq:bayesianFilter} \pi_{k+1} &= \Pi(\pi_{k}, u_{k}, y_{k+1}) \end{align} for $k \geq 1$, taking the initial belief state $\pi_1$ as $\pi_1(i) = B^i(y_1)\pi_0(i) / (\sum_{i = 1}^N B^i(y_1)\pi_0(i))$ for $1 \leq i \leq N$.
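When the measurement space $\mathcal{Y}$ is finite, one step of the filter \eqref{eq:bayesTemp}--\eqref{eq:bayesianFilter} amounts to a prediction by the transition matrix followed by a pointwise likelihood correction and normalisation. The following minimal sketch (with illustrative, assumed model matrices) implements this update:

\begin{verbatim}
import numpy as np

def belief_update(pi_k, u_k, y_next, A, B):
    """pi_{k+1} = Pi(pi_k, u_k, y_{k+1}) for a finite measurement space.

    A[u][i, j] = p(x_{k+1}=i | x_k=j, u_k=u)
    B[u][y, i] = p(y_{k+1}=y | x_{k+1}=i, u_k=u)
    """
    pred = A[u_k] @ pi_k                 # prediction: sum_j A^{ij}(u_k) pi_k(j)
    unnorm = B[u_k][y_next, :] * pred    # correction by the measurement likelihood
    return unnorm / unnorm.sum()         # normalisation

# Illustrative two-state example with a single control and two measurements.
A = {0: np.array([[0.9, 0.2], [0.1, 0.8]])}
B = {0: np.array([[0.8, 0.3], [0.2, 0.7]])}
print(belief_update(np.array([0.5, 0.5]), 0, 1, A, B))
\end{verbatim}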
Without the smoother entropy terms, \eqref{eq:activeEstimation} and \eqref{eq:activeObfuscation} can be reformulated as the standard POMDP or belief-state MDP: \begin{align} \label{eq:standardPOMDP} \begin{aligned} &\inf_{\bar{\mu}} & & E_{\bar{\mu}} \left[ \left. C_T(\pi_T) + \sum_{k = 1}^{T-1} C_k \left( \pi_k, u_{k} \right) \right| \pi_1 \right]\\ &\mathrm{s.t.} & & \pi_{k+1} = \Pi\left( \pi_{k}, u_{k}, y_{k+1} \right)\\ & & & Y_{k+1} \sim p(y_{k+1} | \pi_{k}, u_{k})\\ & & & \mathcal{U} \ni U_k \sim \bar{\mu}_k(\pi_k) \end{aligned} \end{align} where the optimisation is over policies $\bar{\mu} \triangleq \{\bar{\mu}_k : 1 \leq k < T\}$ defined by conditional probability distributions $\bar{\mu}_k(\pi_k)$ on $\mathcal{U}$ given the belief state $\pi_k$ (cf.~\cite[Chapter 7]{Krishnamurthy2016}). The belief-state cost functions are defined as $ C_T (\pi_T) \triangleq E_{X_T} [c_T(x_T) | \pi_T] $ and $ C_k (\pi_k, u_k) \triangleq E_{X_k} [c_k(x_k, u_k) | \pi_k, u_k], $ with $ p(y_{k+1} | \pi_{k}, u_{k}) = \sum_{i,j = 1}^N B^i(y_{k+1}, u_{k}) A^{ij}(u_{k}) \pi_{k}(j). $ Numerous techniques based on dynamic programming exist for finding (approximate) solutions to POMDPs of the form in \eqref{eq:standardPOMDP}, with many increasingly able to handle large state, measurement, and control spaces (see \cite{Haugh2020,Walraven2019,Krishnamurthy2016,Araya2010,Garg2019} and references therein). These techniques exploit structural properties of the cost functions $C_k (\pi_k, u_k)$ and $C_T(\pi_T)$ (and the resulting dynamic programming cost-to-go or value functions) in terms of the belief state. In particular, the vast majority of POMDP techniques exploit the fact that the cost and cost-to-go functions of standard POMDPs of the form in \eqref{eq:standardPOMDP} are concave (or PWLC) in the belief state $\pi_k$ for all $u_k \in \mathcal{U}$ (cf.\ \cite{Araya2010} and \cite[Chapter 8.4.4]{Krishnamurthy2016}).\footnote{Due to the control space $\mathcal{U}$ being finite, standard POMDP techniques are not usually concerned with structural properties with respect to the controls.} However, the presence of the smoother entropy $H(X^T | Y^T, U^{T-1})$ in \eqref{eq:activeEstimation} and \eqref{eq:activeObfuscation} complicates solving them in the same manner as standard POMDPs of the form in \eqref{eq:standardPOMDP}, that is, via cost and cost-to-go functions that are additive and concave in the belief state. Indeed, the smoother entropy has previously been dismissed as difficult or problematic, due to the correlations between successive states that it captures \cite{Stachniss2005,Valencia2012,Valencia2018}. Naive additive belief-state expressions of it also lead only to the upper bound in \eqref{eq:entropyBounds}, and the closest (exact) results in \cite{Hernando2005} establish only an additive (non-belief-state) expression for the \emph{pointwise} conditional entropy $H(X^T|y^T)$ for (uncontrolled) HMMs. In this paper, we establish novel and exact belief-state forms of the smoother entropy that possess an additive structure. This allows us to reformulate \eqref{eq:activeEstimation} and \eqref{eq:activeObfuscation} as belief-state MDPs analogous to \eqref{eq:standardPOMDP} with concave cost and cost-to-go functions, which can be efficiently solved using standard POMDP (approximate) solution techniques.
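For concreteness, the following sketch shows the basic one-step dynamic-programming backup underlying such belief-state techniques for a finite measurement space; the model matrices, stage costs, and next-stage cost-to-go used here are illustrative assumptions, and the sketch is not any of the cited solvers:

\begin{verbatim}
import numpy as np

def backup(pi_k, A, B, c, J_next, controls):
    """One backup of the belief-state MDP: min over u of C_k(pi,u) + E[J_next(pi')]."""
    best = np.inf
    for u in controls:
        value = c[u] @ pi_k                        # C_k(pi_k, u) = E[c_k(x_k, u) | pi_k]
        pred = A[u] @ pi_k                         # predicted state distribution
        for y in range(B[u].shape[0]):
            py = B[u][y, :] @ pred                 # p(y_{k+1} | pi_k, u)
            if py > 0:
                value += py * J_next(B[u][y, :] * pred / py)   # filter update inside
        best = min(best, value)
    return best

# Illustrative two-state, two-control, two-measurement example.
A = {0: np.array([[0.9, 0.2], [0.1, 0.8]]), 1: np.array([[0.5, 0.5], [0.5, 0.5]])}
B = {0: np.array([[0.8, 0.3], [0.2, 0.7]]), 1: np.array([[0.6, 0.4], [0.4, 0.6]])}
c = {0: np.array([1.0, 0.0]), 1: np.array([0.2, 0.2])}        # stage costs c_k(x, u)
c_T = np.array([0.0, 2.0])                                    # terminal cost c_T(x)
print(backup(np.array([0.5, 0.5]), A, B, c, lambda pi: c_T @ pi, controls=[0, 1]))
\end{verbatim}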
\section{Additive and Belief-State Forms of the Smoother Entropy} \label{sec:directedInformation} In this section, we establish novel additive and belief-state forms of the smoother entropy $H(X^T | Y^T, U^{T-1})$ for POMDPs using concepts from the Marko-Massey theory of \emph{directed information} \cite{Marko1973,Massey1990,Kramer1998,Massey2005}. These novel forms will enable us to later reformulate our active estimation and obfuscation problems as (fully-observed) belief-state MDPs. \subsection{Marko-Massey Directed-Information Forms} To establish our first main result, let us define the \emph{causally conditioned directed information} from the states $X^T$ to the measurements $Y^T$ given the controls $U^{T-1}$ as \cite{Kramer1998,Massey2005} \begin{align} \label{eq:causalDirectedInformation} I(X^T \to Y^T \| U^{T-1}) &\triangleq \sum_{k = 1}^T I(X^k; Y_k | Y^{k-1}, U^{k-1}) \end{align} where $I(X^1; Y_1 | Y^{0}, U^{0}) \triangleq I(X_1; Y_1)$. Let us also define the \emph{causally conditioned entropy} of the states $X^T$ given the measurements $Y^{T-1}$ and controls $U^{T-1}$ as \cite{Kramer1998,Massey2005} \begin{align} \label{eq:causalEntropy} H(X^T \| Y^{T-1}, U^{T-1}) &\triangleq \sum_{k = 1}^T H(X_k | X^{k-1}, Y^{k-1}, U^{k-1}) \end{align} where $H(X_1 | X^{0}, Y^{0}, U^{0}) \triangleq H(X_1)$. Intuitively, $I(X^T \to Y^T \| U^{T-1})$ describes the total ``new'' information causally gained over each time-step about the states from the measurements given the controls, whilst $H(X^T \| Y^{T-1}, U^{T-1})$ describes the total uncertainty about the state trajectory over each time-step given causal knowledge of past states, measurements, and controls. The following theorem establishes that the (non-causal) smoother entropy $H(X^T | Y^T, U^{T-1})$ for POMDPs is the difference between $H(X^T \| Y^{T-1}, U^{T-1})$ and $I(X^T \to Y^T \| U^{T-1})$. \begin{theorem} \label{theorem:directedInformation} Consider the POMDP $(X_k, Y_k, U_k)$ with controls given by a (potentially stochastic) output feedback policy $\mu = \left\{ \mu_{k}(y^k, u^{k-1}) = p(u_{k} | y^k, u^{k-1}) : 1 \leq k < T \right\}$. Then, \begin{align} \label{eq:stageAdditiveCausal} \begin{split} &H(X^T | Y^T, U^{T-1})\\ &\quad= H(X^T \| Y^{T-1}, U^{T-1}) - I(X^T \to Y^T \| U^{T-1}). \end{split} \end{align} \end{theorem} \begin{IEEEproof} We prove \eqref{eq:stageAdditiveCausal} via induction on $T$. For $T = 1$, \begin{align*} &H(X^1 \| Y^{0}, U^{0}) - I(X^1 \to Y^1 \| U^{0})\\ &\quad= H(X_1) - I(X_1; Y_1) = H(X_1 | Y_1) \end{align*} and so \eqref{eq:stageAdditiveCausal} holds for $T = 1$. Suppose then that \eqref{eq:stageAdditiveCausal} holds for trajectory lengths smaller than $T$ where $T > 1$. From the definitions of the causally conditioned directed information \eqref{eq:causalDirectedInformation} and causal conditional entropy \eqref{eq:causalEntropy}, we have that \begin{align*} &I(X^T \to Y^{T} \| U^{T-1})\\ &\;= I(X^{T-1} \to Y^{T-1} \| U^{T-2}) + I(X^T ; Y_T | Y^{T-1}, U^{T-1}) \end{align*} and \begin{align*} &H(X^T \| Y^{T-1}, U^{T-1})\\ &\;= H(X^{T-1} \| Y^{T-2}, U^{T-2}) + H(X_T | X^{T-1}, Y^{T-1}, U^{T-1}). 
\end{align*} Combining these two equations gives \begin{align}\notag &H(X^T \| Y^{T-1}, U^{T-1}) - I(X^T \to Y^{T} \| U^{T-1})\\\notag &\;= H(X^{T-1} \| Y^{T-2}, U^{T-2}) + H(X_T | X^{T-1}, Y^{T-1}, U^{T-1}) \\\notag &\quad- I(X^{T-1} \to Y^{T-1} \| U^{T-2}) - I(X^T ; Y_T | Y^{T-1}, U^{T-1})\\\notag &\;= H(X^{T-1} | Y^{T-1}, U^{T-2}) + H(X_T | X^{T-1}, Y^{T-1}, U^{T-1}) \\\label{eq:induct1} &\quad- I(X^T ; Y_T | Y^{T-1}, U^{T-1}) \end{align} where the last equality follows from the induction hypothesis that \eqref{eq:stageAdditiveCausal} holds for trajectories shorter than $T > 1$. To simplify \eqref{eq:induct1}, note that the definition of mutual information implies that \begin{align}\notag &I(X^T ; Y_T | Y^{T-1}, U^{T-1})\\\notag &\;= H(X^T | Y^{T-1}, U^{T-1}) - H (X^T | Y^T, U^{T-1})\\\notag &\;= H(X^{T-1} | Y^{T-1}, U^{T-2}) + H(X_T | X^{T-1}, Y^{T-1}, U^{T-1})\\ \label{eq:condMut1} &\quad- H (X^T | Y^T, U^{T-1}) \end{align} where the last equality follows from the chain rule for conditional entropy, and by noting that $U_{T-1}$ is conditionally independent of $X^{T-1}$ given $U^{T-2}$ and $Y^{T-1}$ by virtue of the measurement kernel \eqref{eq:obsProcess} and the feedback control policy $\mu$. Substituting \eqref{eq:condMut1} into \eqref{eq:induct1} then gives that \begin{align*} &H(X^T \| Y^{T-1}, U^{T-1}) - I(X^T \to Y^{T} \| U^{T-1})\\ &\quad= H(X^T | Y^T, U^{T-1}) \end{align*} and so \eqref{eq:stageAdditiveCausal} holds for $T > 1$. The proof is complete. \end{IEEEproof} The causal conditioning on $Y^{T-1}$ in $H(X^T \| Y^{T-1}, U^{T-1})$ can be omitted in \eqref{eq:stageAdditiveCausal} since the Markov property of the state process $X_k$ and \eqref{eq:causalEntropy} implies that $ H(X^T \| Y^{T-1}, U^{T-1}) = H(X^T \| U^{T-1}). $ Hence, \eqref{eq:stageAdditiveCausal} resembles the trivial expression of the smoother entropy as the difference \begin{align} \label{eq:standardTrajectroyForm} \begin{split} &H(X^T|Y^T,U^{T-1})\\ &\quad= H(X^T | U^{T-1}) - I(X^T; Y^T | U^{T-1}). \end{split} \end{align} Expressions \eqref{eq:stageAdditiveCausal} and \eqref{eq:standardTrajectroyForm} are subtly different since the causally conditioned directed information and entropy terms in \eqref{eq:stageAdditiveCausal} involve conditional probabilities of the states $X_k$ given only the history of measurements $Y^k$ and controls $U^{k-1}$, whilst the standard conditional entropy and mutual information terms in \eqref{eq:standardTrajectroyForm} involve conditional probabilities of the states $X_k$ given the entire trajectories of measurements $Y^{T}$ and controls $U^{T-1}$. This difference means that \eqref{eq:stageAdditiveCausal} will lead directly to belief-state forms of the smoother entropy. To express the smoother entropy in terms of the belief state, we require the following corollary to Theorem \ref{theorem:directedInformation}. 
\begin{corollary} \label{corollary:additive} Under the conditions of Theorem \ref{theorem:directedInformation}, the smoother entropy has the additive forms: \begin{align} \notag H&(X^T | Y^T, U^{T-1})\\\label{eq:firstAdditiveForm} =& \sum_{k = 1}^T [H(X_k | X_{k-1}, U_{k-1}) - I(X_k ; Y_k | Y^{k-1}, U^{k-1})]\\ \label{eq:secondAdditiveForm} =& \sum_{k = 1}^T [H(X_k | Y^k, U^{k-1}) - I(X_k; X_{k-1} | Y^{k-1}, U^{k-1})]\\\label{eq:thirdAdditiveForm} =& H(X_T | Y^{T}, U^{T-1}) + \sum_{k = 1}^{T-1} H(X_{k} | X_{k+1}, Y^k, U^k) \end{align} with $H(X_1 | X_0, Y^0, U^0) \triangleq H(X_1)$, $H(X_1 | Y^{1}, U^{0}) \triangleq H(X_1 | Y_1)$, $I(X_1; X_0 | Y^0, U^0) \triangleq 0$, and $I(X_1; Y_1 | Y^0, U^0) \triangleq I(X_1;Y_1)$. \end{corollary} \begin{IEEEproof} The definition of mutual information implies \begin{align*} &I(X^k; Y_k | Y^{k-1}, U^{k-1})\\ &\quad= H(Y_k | Y^{k-1}, U^{k-1}) - H(Y_k | X^k, Y^{k-1}, U^{k-1})\\ &\quad= H(Y_k | Y^{k-1}, U^{k-1}) - H(Y_k | X_k, Y^{k-1}, U^{k-1})\\ &\quad= I(X_k; Y_k | Y^{k-1}, U^{k-1}) \end{align*} where the second equality holds due to the Markov property of the state process $X_k$. Thus, \eqref{eq:causalDirectedInformation} is equivalent to \begin{align*} I(X^T \to Y^T \| U^{T-1}) = \sum_{k = 1}^T I(X_k; Y_k | Y^{k-1}, U^{k-1}). \end{align*} Substituting this expression and the definition of the causally conditioned entropy \eqref{eq:causalEntropy} into \eqref{eq:stageAdditiveCausal}, noting also that \begin{align*} H(X_k | X^{k-1}, Y^{k-1}, U^{k-1}) = H(X_k | X_{k-1}, U_{k-1}) \end{align*} due to the Markov property of the state $X_k$, gives \eqref{eq:firstAdditiveForm}. Now, the summands in \eqref{eq:firstAdditiveForm} can be rewritten as \begin{align*} &H(X_k | X_{k-1}, U_{k-1}) - I(X_k; Y_k | Y^{k-1}, U^{k-1})\\ &\quad= H(X_k | X_{k-1}, Y^{k-1}, U^{k-1}) - I(X_k; Y_k | Y^{k-1}, U^{k-1})\\ &\quad= H(X_k | X_{k-1}, Y^{k-1}, U^{k-1}) - H(X_k | Y^{k-1}, U^{k-1})\\ &\qquad+ H(X_k | Y^{k}, U^{k-1})\\ &\quad= H(X_k | Y^{k}, U^{k-1}) - I(X_k; X_{k-1} | Y^{k-1}, U^{k-1}) \end{align*} where the first equality holds due to the Markov property of the state $X_k$, and the remainder follow from the definitions of the conditional mutual informations between $X_k$ and $Y_k$, and $X_k$ and $X_{k-1}$. The second additive form \eqref{eq:secondAdditiveForm} follows. Finally, symmetry of the mutual information in \eqref{eq:secondAdditiveForm} implies \begin{align*} &H(X^T | Y^T, U^{T-1})\\ &\quad= \sum_{k = 1}^T [ H(X_k | Y^{k}, U^{k-1}) - I(X_k; X_{k-1} | Y^{k-1}, U^{k-1})]\\ &\quad= \sum_{k = 1}^T [ H(X_k | Y^{k}, U^{k-1}) - H(X_{k-1} | Y^{k-1}, U^{k-1})\\ &\qquad+ H(X_{k-1} | X_k, Y^{k-1}, U^{k-1})]\\ &\quad= H(X_T | Y^{T}, U^{T-1}) + \sum_{k = 2}^{T} H(X_{k-1} | X_{k}, Y^{k-1}, U^{k-1}) \end{align*} where the last equality follows by noting that consecutive entropy terms $H(X_k | Y^k, U^{k-1})$ cancel since $H(X_{k-1} | Y^{k-1}, U^{k-1}) = H(X_{k-1} | Y^{k-1}, U^{k-2})$ by virtue of the state $X_{k-1}$ being conditionally independent of the control $U_{k-1}$ given $Y^{k-1}$ and $U^{k-2}$ due to \eqref{eq:stateProcess} and the feedback policy (cf.\ the conditions of Theorem \ref{theorem:directedInformation}). The third additive form \eqref{eq:thirdAdditiveForm} follows and the proof is complete. \end{IEEEproof} The additive forms established in Corollary \ref{corollary:additive} each provide different interpretations of the smoother entropy.
The first form \eqref{eq:firstAdditiveForm} provides the interpretation of the smoother entropy as the sum of the uncertainty from the state transitions, i.e.\ $H(X_k | X_{k-1}, U_{k-1})$, minus the information about the states gained from the measurements, i.e.\ $I(X_k;Y_k | Y^{k-1}, U^{k-1})$. The second form \eqref{eq:secondAdditiveForm} suggests that the smoother entropy can be viewed as the sum of the marginal (or instantaneous) state uncertainties $H(X_k | Y^{k}, U^{k-1})$ minus the mutual information $I(X_k;X_{k+1}|Y^k, U^k)$ (or dependency) between consecutive states. This second form highlights that approaches based on the sum of marginal state uncertainties (as described before \eqref{eq:entropyBounds}) fail to exploit the dependency between consecutive states. Finally, the third form \eqref{eq:thirdAdditiveForm} offers an interpretation of the smoother entropy backwards in time, with it being the uncertainty associated with the final state $X_T$, i.e., $H(X_T | Y^T, U^{T-1})$, plus the uncertainty accumulated via (backwards) state transitions, i.e., $H(X_k | X_{k+1}, Y^{k}, U^{k})$. \subsection{Belief-State Forms of the Smoother Entropy} The significance of the forms of the smoother entropy established in Corollary \ref{corollary:additive} is that they lead to expressions in terms of the belief state $\pi_k$, as we shall now show. \subsubsection{First Belief-State Form} Recalling the definition of mutual information, the first \eqref{eq:firstAdditiveForm} and second \eqref{eq:secondAdditiveForm} additive forms of the smoother entropy established in Corollary \ref{corollary:additive} can both be expressed as the expectation of the sum of pointwise entropies in the sense that \begin{align}\notag &H(X^T | Y^T, U^{T-1})\\\notag &\quad= E_\mu \Bigg[ H(X_1 | y_1) + \sum_{k = 1}^{T-1} \big[H(X_{k+1} | y^{k+1}, u^{k}) \\ \label{eq:firstBeliefAdditiveFormPointWise} &\quad\qquad - H(X_{k+1} | y^{k}, u^{k}) + H(X_{k+1} | X_{k}, y^{k}, u^{k}) \big]\Bigg]. \end{align} The first term, $H(X_1 | y_1)$, is the entropy of the initial belief state $\pi_1$, i.e., $ H(X_1 | y_1) = - \sum_{i = 1}^N \pi_1(i) \log \pi_1(i). $ Similarly, the first term in the summation, $H(X_{k+1} | y^{k+1}, u^{k})$, is the entropy of the belief state $\pi_{k+1}$ given by \begin{align}\notag H(X_{k+1} | y^{k+1}, u^{k}) &= - \sum_{i = 1}^N \pi_{k+1}(i) \log \pi_{k+1}(i)\\\label{eq:first_current_cost_entropy} &\triangleq \tilde{\ell}_1 (\pi_{k}, u_{k}, y_{k+1}) \end{align} where the last line follows since $\pi_{k+1}$, and hence $H(X_{k+1} | y^{k+1}, u^{k})$, is a function $\tilde{\ell}_1$ of $\pi_{k}$, $y_{k+1}$ and $u_{k}$ via the Bayesian filter \eqref{eq:bayesianFilter}. The second term in the summation in \eqref{eq:firstBeliefAdditiveFormPointWise} is also a function of $\pi_{k}$ and $u_{k}$, namely, \begin{align}\notag &H(X_{k+1} | y^{k}, u^{k})\\\notag &\quad= - \sum_{i,j = 1}^N A^{ij}(u_{k}) \pi_{k} (j) \log \sum_{m = 1}^N A^{im}(u_{k}) \pi_{k} (m) \\\label{eq:first_pred_cost_entropy} &\quad\triangleq \tilde{\ell}_2 (\pi_{k}, u_{k}). \end{align} Finally, the conditional entropy $H(X_{k+1} | X_{k}, y^{k}, u^{k})$ in \eqref{eq:firstBeliefAdditiveFormPointWise} is also a function of $\pi_{k}$ and $u_{k}$ in the sense that \begin{align}\notag &H(X_{k+1} | X_{k}, y^{k}, u^{k})\\ \label{eq:first_cond_cost_entropy} &= - \sum_{i,j = 1}^N A^{ij}(u_{k}) \pi_{k} (j) \log A^{ij}(u_{k}) \triangleq \tilde{\ell}_3(\pi_{k}, u_{k}) \end{align} since $p(X_{k+1} | X_k, y^k, u^k) = p(X_{k+1} | X_k, u_k)$. 
Thus, \eqref{eq:firstBeliefAdditiveFormPointWise} yields the belief-state form: \begin{align}\notag &H(X^T | Y^T, U^{T-1})\\\label{eq:firstBeliefAdditiveForm} &\quad= H(X_1 | Y_1) + E_\mu \left[ \sum_{k = 1}^{T-1} \tilde{\ell} (\pi_{k}, u_{k}, y_{k+1}) \right] \end{align} where we write $H(X_1 | Y_1)$ separately since this conditional entropy is independent of the controls $U_1$, and we define \begin{align} \label{eq:firstBeliefAdditiveFormFunction} \begin{split} \tilde{\ell} (\pi_{k}, u_{k}, y_{k+1}) &\triangleq \tilde{\ell}_1 (\pi_{k}, u_{k}, y_{k+1}) - \tilde{\ell}_2 (\pi_{k}, u_{k})\\ &\quad+ \tilde{\ell}_3 (\pi_{k}, u_{k}). \end{split} \end{align} \subsubsection{Second Belief-State Form} The third additive form \eqref{eq:thirdAdditiveForm} of the smoother entropy established in Corollary \ref{corollary:additive} admits an alternative belief-state form of the smoother entropy. Firstly, \eqref{eq:thirdAdditiveForm} can be expressed as the expectation of pointwise entropies in the sense that \begin{align*} &H(X^T | Y^T, U^{T-1})\\ &= E_\mu \left[ H(X_T | y^{T}, u^{T-1}) + \sum_{k = 1}^{T-1} H(X_{k} | X_{k+1}, y^{k}, u^{k}) \right]. \end{align*} Since $H(X_T | y^{T}, u^{T-1})$ is the entropy of the terminal belief state $\pi_T$, it is solely a function of $\pi_T$ in the sense that \begin{align}\label{eq:second_terminal_cost_entropy} H(X_T | y^T, u^{T-1}) = - \sum_{i = 1}^N \pi_T(i) \log \pi_T(i) \triangleq \tilde{g}_T (\pi_T). \end{align} Similarly, the conditional entropy $H(X_{k} | X_{k+1}, y^{k}, u^{k})$ is a function of $\pi_k$ and $u_k$ in the sense that, \begin{align}\notag &H(X_{k} | X_{k+1}, y^{k}, u^{k})\\\notag &= - \sum_{i,j = 1}^N A^{ij}(u_{k}) \pi_{k}(j) \log \dfrac{A^{ij}(u_{k}) \pi_{k}(j)}{ \sum_{m = 1}^N A^{im}(u_{k}) \pi_{k}(m)}\\\label{eq:second_running_cost_entropy} &\triangleq \tilde{g}(\pi_k, u_k). \end{align} Thus, the third additive form \eqref{eq:thirdAdditiveForm} established in Corollary \ref{corollary:additive} yields the belief-state form: \begin{align}\notag &H(X^T | Y^T, U^{T-1})\\\label{eq:secondBeliefAdditiveForm} &\quad= E_\mu \left[ \tilde{g}_T(\pi_T) + \sum_{k = 1}^{T-1} \tilde{g} (\pi_{k}, u_{k}) \right]. \end{align} We shall exploit the belief-state forms of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} and \eqref{eq:secondBeliefAdditiveForm} to solve our active estimation \eqref{eq:activeEstimation} and obfuscation \eqref{eq:activeObfuscation} problems in the same manner as standard POMDPs. That is, we shall reformulate our problems as belief-state MDPs with cost and cost-to-go functions that are concave in the belief state. Surprisingly however, we will show that our active estimation problem \eqref{eq:activeEstimation} has concave costs when optimising the belief-state form of the smoother entropy in \eqref{eq:secondBeliefAdditiveForm} but not when optimising that in \eqref{eq:firstBeliefAdditiveForm}, and \emph{vice versa} for our active obfuscation problem \eqref{eq:activeObfuscation}. \section{Active Estimation Belief-State Reformulations and Structural Results} \label{sec:activeEst} In this section, we establish two distinct MDP reformulations of our active estimation problem \eqref{eq:activeEstimation} based on the novel belief-state expression of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} and \eqref{eq:secondBeliefAdditiveForm}. We provide dynamic programming descriptions of their cost-to-go functions and optimal solutions, before deriving their structural properties. 
These results will enable us to identify tractable (approximate) solutions. \subsection{Belief-State MDP Reformulations} The belief-state MDP reformulations of our active estimation problem are derived in the following theorem using the forms of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} and \eqref{eq:secondBeliefAdditiveForm}. \begin{theorem} \label{theorem:activeEstimationMDP} Define the functions \begin{align*} &\ell_k^e(\pi_{k}, u_{k})\\ &\quad\triangleq E_{Y_{k+1}, X_{k}} \left[ \left. \tilde{\ell} \left( \pi_{k}, u_{k}, y_{k+1}\right) + c_k (x_{k}, u_{k}) \right| \pi_{k}, u_{k} \right] \end{align*} and \begin{align*} g_k^e(\pi_{k}, u_{k}) &\triangleq E_{X_{k}} \left[ \tilde{g}(\pi_k, u_k) + c_k (x_{k}, u_{k}) | \pi_{k}, u_{k} \right] \end{align*} for $1 \leq k \leq T-1$, with $ \ell_T^e(\pi_{T}) \triangleq E_{X_{T}} \left[ \left. c_T (x_{T}) \right| \pi_{T} \right] $ and $ g_T^e(\pi_{T}) \triangleq E_{X_{T}} \left[ \tilde{g}_T(\pi_T) + c_T (x_{T}) | \pi_{T} \right]. $ Then, the active estimation problem \eqref{eq:activeEstimation} is equivalent to: \begin{align} \label{eq:firstActiveEstimationMDP} \begin{aligned} &\inf_{\bar{\mu}} & & E_{\bar{\mu}} \left[ \left. \ell_T^e(\pi_{T}) + \sum_{k = 1}^{T-1} \ell_k^e \left( \pi_{k}, u_{k} \right) \right| \pi_1 \right], \end{aligned} \end{align} and to: \begin{align} \label{eq:secondActiveEstimationMDP} \begin{aligned} &\inf_{\bar{\mu}} & & E_{\bar{\mu}} \left[ \left. g_T^e(\pi_T) + \sum_{k = 1}^{T-1} g_k^e \left( \pi_{k}, u_{k} \right) \right| \pi_1 \right]\\ \end{aligned} \end{align} where both infima are over potentially stochastic policies $\bar{\mu} = \{\bar{\mu}_k : 1 \leq k < T\}$ that are functions of the belief-state $\pi_k$, subject to the constraints: \begin{align*} & & & \pi_{k+1} = \Pi\left( \pi_{k}, u_{k}, y_{k+1} \right)\\ & & & Y_{k+1} \sim p(y_{k+1} | \pi_{k}, u_{k})\\ & & & \mathcal{U} \ni U_k \sim \bar{\mu}_k(\pi_k) \end{align*} for $1 \leq k \leq T-1$. \end{theorem} \begin{IEEEproof} By recalling the expression of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm}, the cost function in \eqref{eq:activeEstimation} becomes \begin{align*} &H(X_1 | Y_1)\\ &\quad+ E_\mu \Bigg[ c_T(x_T) + \sum_{k = 1}^{T-1} \big\{ \tilde{\ell}(\pi_k, u_k, y_{k+1}) + c_k \left(x_k, u_k\right) \big\} \Bigg] \end{align*} where the expectation is over the joint distribution of the states $X^T$, measurements $Y^T$, and controls $U^{T-1}$ under the policy $\mu$. The linearity and tower properties of expectation imply that $ E_\mu \left[ c_T(x_T) \right] = E_{Y^T, U^{T-1}} \left[ \ell_T^e(\pi_T) \right], $ and similarly, $ E_\mu [ \tilde{\ell}(\pi_k, u_k) + c_k \left(x_k, u_k\right) ] = E_{Y^k, U^k} [ \ell_k^e(\pi_k, u_k) ] $ noting that $\pi_k$ is a deterministic function of the measurements $y^k$ and controls $u^{k-1}$ via \eqref{eq:bayesianFilter}. Thus, the cost function in \eqref{eq:activeEstimation} becomes \begin{align*} H(X_1 | Y_1) + E_{Y^T, U^{T-1}} \left[ \ell_T^e(\pi_T) + \sum_{k = 1}^{T-1} \ell_k^e(\pi_k, u_k) \right], \end{align*} and since $H(X_1|Y_1)$ is constant with respect to the controls $U^{T-1}$, it suffices to only consider optimisation of the expectation. In this stage-additive form, a standard POMDP (or MDP) result implies that there is no loss of optimality in restricting to policies $\bar{\mu}$ that are functions of the current belief state $\pi_k$, which is a sufficient statistic for $(y^k,u^{k-1})$ (see \cite[Section 5.4.1]{Bertsekas2005}), and so \eqref{eq:firstActiveEstimationMDP} follows. 
Now, recalling the alternative belief-state smoother entropy expression in \eqref{eq:secondBeliefAdditiveForm}, the cost function of our active estimation problem \eqref{eq:activeEstimation} may be expressed as \begin{align*} E_\mu \left[ c_T(x_T) + \tilde{g}_T(\pi_T) + \sum_{k = 1}^{T-1} \left\{ \tilde{g}(\pi_k, u_k) + c_k \left(x_k, u_k\right) \right\} \right]. \end{align*} The linearity and tower properties of expectation imply that $ E_\mu \left[ c_T(x_T) + \tilde{g}_T(\pi_T) \right] = E_{Y^T, U^{T-1}} \left[ g_T^e(\pi_T) \right], $ and $ E_\mu \left[ \tilde{g}(\pi_k, u_k) + c_k \left(x_k, u_k\right) \right] = E_{Y^k, U^k} \left[ g_k^e(\pi_k, u_k) \right] $. The cost function in \eqref{eq:activeEstimation} is thus alternatively given by \begin{align*} E_{Y^T,U^{T-1}} \left[ g_T^e(\pi_T) + \sum_{k = 1}^{T-1} g_k^e(\pi_k, u_k) \right]. \end{align*} It again suffices to consider belief-state policies $\bar{\mu}$ (cf.~\cite[Section 5.4.1]{Bertsekas2005}) and so \eqref{eq:secondActiveEstimationMDP} follows, completing the proof. \end{IEEEproof} \subsection{Dynamic Programming Equations} \label{subsec:dynProgramEst} Given the belief-state MDP reformulations of our active estimation problem in Theorem \ref{theorem:activeEstimationMDP}, without loss of optimality we may further restrict to deterministic policies $\bar{\mu}$ of the belief state $\pi_k$ in the sense that $u_{k} = \bar{\mu}_k(\pi_k)$. Such deterministic optimal policies are guaranteed to exist for finite-horizon MDPs (cf.~\cite{Bertsekas2005, Krishnamurthy2016}). We are now in a position to establish the cost-to-go (or value) functions of the two MDP reformulations of our active estimation problem in \eqref{eq:firstActiveEstimationMDP} and \eqref{eq:secondActiveEstimationMDP}. Let us define the cost-to-go function for the first MDP reformulation of our active estimation problem in \eqref{eq:firstActiveEstimationMDP} as \begin{align*} J_k^{e,\ell}(\pi_k) \triangleq \inf_{\bar{\mu}_k^{T-1}} E_{\bar{\mu}_k^{T-1}} \left[ \left. \ell_T^e(\pi_T) + \sum_{m = k}^{T-1} \ell_m^e \left( \pi_{m}, u_{m} \right) \right| \pi_{k} \right] \end{align*} for $1 \leq k < T$ and $J_T^{e,\ell}(\pi_T) \triangleq \ell_T^e(\pi_T)$ where $\bar{\mu}_k^{T-1} \triangleq \{\bar{\mu}_k, \bar{\mu}_{k+1}, \ldots, \bar{\mu}_{T-1}\}$. Let us also define the cost-to-go function for the second MDP reformulation of our active estimation problem in \eqref{eq:secondActiveEstimationMDP} as \begin{align*} J_k^{e,g}(\pi_k) \triangleq \inf_{\bar{\mu}_k^{T-1}} E_{\bar{\mu}_k^{T-1}} \left[ \left. g_T^e(\pi_T) + \sum_{m = k}^{T-1} g_m^e \left( \pi_{m}, u_{m} \right) \right| \pi_{k} \right] \end{align*} for $1 \leq k < T$ and $J_T^{e,g}(\pi_T) \triangleq g_T^e(\pi_T)$. By following standard dynamic programming arguments (cf.~\cite[Section 8.4.3]{Krishnamurthy2016}), the cost-to-go function $J_k^{e,\ell}$ satisfies \begin{align} \label{eq:firstEstValueFunction} \begin{split} J_k^{e,\ell}(\pi_k) &= \inf_{u_k \in \mathcal{U}} \{ \ell_k^e (\pi_{k}, u_{k})\\ &\qquad+ E_{Y_{k+1}} [ J_{k+1}^{e,\ell}(\Pi(\pi_{k}, u_k, y_{k+1})) | \pi_k, u_k ] \} \end{split} \end{align} for $1 \leq k < T$ with $J_T^{e,\ell}(\pi_T) = \ell_T^e(\pi_T)$ and the optimisation subject to the same constraints as \eqref{eq:firstActiveEstimationMDP}.
Similarly, the cost-to-go function $J_k^{e,g}$ satisfies \begin{align} \label{eq:secondEstValueFunction} \begin{split} J_k^{e,g}(\pi_k) &= \inf_{u_{k} \in \mathcal{U}} \{ g_k^e ( \pi_k, u_{k} ) \\ &\qquad + E_{Y_{k+1}} [ J_{k+1}^{e,g}(\Pi(\pi_{k}, u_{k}, y_{k+1})) | \pi_k, u_{k} ] \} \end{split} \end{align} for $1 \leq k < T$ with $J_T^{e,g}(\pi_T) = g_T^e(\pi_T)$ and the optimisation subject to the same constraints as \eqref{eq:secondActiveEstimationMDP}. The cost-to-go functions $J_k^{e,\ell}$ and $J_k^{e,g}$ are not equivalent since the belief-state forms of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} and \eqref{eq:secondBeliefAdditiveForm} used to construct the MDP reformulations of \eqref{eq:firstActiveEstimationMDP} and \eqref{eq:secondActiveEstimationMDP} break down the smoother entropy into different increments. In particular, the belief-state form of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} enables the separation and omission of the initial state entropy term $H(X_1|Y_1)$ in the MDP reformulation of \eqref{eq:firstActiveEstimationMDP} (as shown in the proof of Theorem \ref{theorem:activeEstimationMDP}). In contrast, the MDP reformulation of \eqref{eq:secondActiveEstimationMDP} based on the smoother entropy form in \eqref{eq:secondBeliefAdditiveForm} does not omit $H(X_1|Y_1)$ and so the initial cost-to-go functions satisfy $J_1^{e,g}(\pi_1) = J_1^{e,\ell}(\pi_1) + H(X_1|Y_1)$. Despite different cost-to-go functions, the reformulations \eqref{eq:firstActiveEstimationMDP} and \eqref{eq:secondActiveEstimationMDP} must yield a common (potentially nonunique) optimal policy $\bar{\mu}^{e*} =\{\bar{\mu}_k^{e*} : 1 \leq k < T\}$ satisfying \begin{align*} \begin{split} \bar{\mu}_k^{e*}(\pi_k) &= u_k^{e*} \in \arginf_{u_k \in \mathcal{U}} \{ \ell_k^e ( \pi_{k}, u_{k} ) \\ &\; \qquad\qquad + E_{Y_{k+1}} [ J_{k+1}^{e,\ell}(\Pi(\pi_{k}, u_k, y_{k+1})) | \pi_k, u_k ] \} \end{split} \end{align*} and \begin{align*} \begin{split} \bar{\mu}_k^{e*}(\pi_k) &= u_k^{e*} \in \arginf_{u_k \in \mathcal{U}} \{ g_k^e ( \pi_{k}, u_{k} ) \\ &\; \qquad \qquad + E_{Y_{k+1}} [ J_{k+1}^{e,g}(\Pi(\pi_{k}, u_k, y_{k+1})) | \pi_k, u_k ] \}. \end{split} \end{align*} In general, solving dynamic programming recursions for an optimal policy is greatly simplified when the cost and cost-to-go functions have the same structural properties as standard POMDPs of the form in \eqref{eq:standardPOMDP}. Recalling that the key structural results of concern in POMDPs are in terms of the belief state (rather than the controls, cf.\ \cite{Araya2010} and \cite[Chapter 8]{Krishnamurthy2016}), if either \eqref{eq:firstActiveEstimationMDP} or \eqref{eq:secondActiveEstimationMDP} has cost and cost-to-go functions that are concave in the belief state, then we can employ standard POMDP techniques to solve our active estimation problem via dynamic programming. We therefore next investigate the structural properties of \eqref{eq:firstActiveEstimationMDP} and \eqref{eq:secondActiveEstimationMDP}, and will use these later to find (approximate) solutions to our active estimation problem. \subsection{Structural Results} Our first structural result establishes the concavity of the instantaneous and terminal cost functions $g_k^e$ and $g_T^e$ in the belief state.
\begin{lemma} \label{lemma:estConcave} For any control $u_k \in \mathcal{U}$, the instantaneous and terminal costs $g_k^e(\pi_k, u_k)$ and $g_T^e(\pi_T)$ in \eqref{eq:secondActiveEstimationMDP} are concave and continuous in the belief state $\pi_k$ for $1 \leq k \leq T$. \end{lemma} \begin{IEEEproof} The definition of $g_T$ gives that \begin{align*} g_T^e(\pi_{T}) &= E_{X_{T}} \left[\tilde{g}_T(\pi_T) + c_T (x_{T}) | \pi_{T} \right]\\ &= H(X_T | y^T, u^{T-1}) + \sum_{i = 1}^N \pi_T(i) c_T(i). \end{align*} Considering the right-hand side of this equation, we see that the second term is linear (and hence concave and continuous) in $\pi_T$ whilst the first term $H(X_T | y^T, u^{T-1})$ is the entropy of the belief state $\pi_T$, which is concave and continuous in $\pi_T$ via standard results (cf.~\cite[Theorem 2.7.3]{Cover2006}). Since the sum of concave and continuous functions is concave and continuous, we have that $g_T^e$ is concave and continuous in $\pi_T$. Similarly, for any $u_k \in \mathcal{U}$, the definition of $g_k^e$ in Theorem \ref{theorem:activeEstimationMDP} gives that \begin{align*} g_k^e(\pi_{k}, u_{k}) &= E_{X_{k}} \left[\tilde{g}(\pi_k, u_k) + c_k (x_{k}, u_{k}) | \pi_{k}, u_{k} \right]\\ &= H(X_k | X_{k+1}, y^k, u^k) + \sum_{i = 1}^N \pi_k(i) c_k(i, u_k). \end{align*} Again, the second term on the right-hand side of this equation is linear (and hence concave and continuous) in $\pi_k$. The first term on the right-hand side of this equation, the conditional entropy $H(X_k | X_{k+1}, y^k, u^k)$, is continuous and concave in the joint distribution $p(X_k, X_{k+1} | y^k, u^k)$ (cf.\ \cite[Appendix A]{Globerson2007}). The joint distribution $p(X_k, X_{k+1} | y^k, u^k)$ is the joint predicted belief $\bar{\pi}_{k+1 | k}$, which is a linear function of the belief state $\pi_k$ given any $u_k \in \mathcal{U}$ as shown in \eqref{eq:bayesianPred}. Thus, $H(X_k | X_{k+1}, y^k, u^k)$ is the concave function of a linear function of $\pi_k$, and so it is concave and continuous in $\pi_k$. Summation of the terms in $g_k^e$ preserves concavity and continuity, and the proof is complete. \end{IEEEproof} Lemma \ref{lemma:estConcave} leads directly to the concavity of the cost-to-go function $J_k^{e,g}$ for the belief-state MDP reformulation in \eqref{eq:secondActiveEstimationMDP}. \begin{theorem} \label{theorem:estConcave} The cost-to-go function $J_k^{e,g}(\pi_k)$ of our active estimation problem \eqref{eq:activeEstimation} reformulated as the belief-state MDP in \eqref{eq:secondActiveEstimationMDP} is concave in $\pi_k$ for $1 \leq k \leq T$. \end{theorem} \begin{IEEEproof} Follows from \cite[Theorem 8.4.1]{Krishnamurthy2016} due to the concavity and continuity of the instantaneous and terminal cost functions $g_k^e$ and $g_T^e$ established in Lemma \ref{lemma:estConcave}. \end{IEEEproof} The significance of Lemma \ref{lemma:estConcave} and Theorem \ref{theorem:estConcave} is that our active estimation problem \eqref{eq:activeEstimation}, when reformulated as the belief-state MDP in \eqref{eq:secondActiveEstimationMDP}, has the same concavity properties as standard POMDPs of the form in \eqref{eq:standardPOMDP}. We shall exploit these results later in Section \ref{sec:approx} to find tractable (approximate) solutions. Here, we note that the alternative belief-state MDP reformulation in \eqref{eq:firstActiveEstimationMDP} surprisingly does not have concave instantaneous cost functions (which prohibits it from always having a concave cost-to-go function). 
Indeed, the following proposition establishes that the instantaneous and terminal cost functions $\ell_k^e$ in \eqref{eq:firstActiveEstimationMDP} are convex in $\pi_k$. \begin{proposition} \label{proposition:estConvex} For any control $u_k \in \mathcal{U}$, the instantaneous and terminal costs $\ell_k^e(\pi_k, u_k)$ and $\ell_T^e(\pi_T)$ in \eqref{eq:firstActiveEstimationMDP} are convex and continuous in the belief state $\pi_k$ for $1 \leq k \leq T$. \end{proposition} \begin{IEEEproof} The definition of $\ell_T^e$ implies that \begin{align*} \ell_T^e(\pi_{T}) = E_{X_{T}} \left[c_T (x_{T}) | \pi_{T} \right] = \sum_{i = 1}^N \pi_T(i) c_T(i), \end{align*} and so $\ell_T^e$ is linear (hence convex and continuous) in $\pi_T$. Considering now the costs $\ell_k^e$ for any $u_k \in \mathcal{U}$, we have that \begin{align}\notag &\ell_k^e(\pi_{k}, u_{k})\\\notag &= E_{Y_{k+1}} \left[\left. \tilde{\ell}_1(\pi_k, u_k, y_{k+1}) \right| \pi_k, u_k \right] \\\notag &\quad- \tilde{\ell}_2(\pi_k, u_k) + \tilde{\ell}_3(\pi_k, u_k) + E_{X_k} \left[ \left. c_k (x_{k}, u_{k}) \right| \pi_{k}, u_{k} \right] \\\notag &= H(X_{k+1} | Y_{k+1}, y^k, u^k) - H(X_{k+1}|y^k, u^k) \\\notag &\quad+ H(X_{k+1}|X_k, y^k, u^k) + E_{X_k} \left[ \left. c_k (x_{k}, u_{k}) \right| \pi_{k}, u_{k} \right]\\\label{eq:concave_proof_step1} \begin{split} &= H(X_{k+1}|X_k, y^k, u^k) - I(X_{k+1}; Y_{k+1} | y^{k}, u^k) \\ &\quad+ \sum_{i = 1}^N \pi_k(i) c_k(i, u_k) \end{split} \end{align} where the last equality holds since $I(X_{k+1}; Y_{k+1} | y^{k}, u^{k}) = H(X_{k+1}|y^k, u^k) - H(X_{k+1}|Y_{k+1}, y^k, u^k)$. The first and third terms in \eqref{eq:concave_proof_step1} are linear and hence convex and continuous in $\pi_k$ (as shown in \eqref{eq:first_cond_cost_entropy} for the first term). The second term in \eqref{eq:concave_proof_step1}, $-I(X_{k+1}; Y_{k+1} | y^{k}, u^{k})$, is convex and continuous in $\pi_k$ since: \begin{enumerate} \item $-I(X_{k+1}; Y_{k+1} | y^{k}, u^{k})$ is convex and continuous in the distribution $p(X_{k+1} | y^{k}, u^{k})$ via \cite[Theorem 2.7.4]{Cover2006} with the conditional distribution $p(Y_{k+1} | X_{k+1}, y^{k}, u^{k}) = p(Y_{k+1} | X_{k+1}, u_k)$ fixed and determined by the measurement kernel \eqref{eq:obsProcess}; and, \item $p(X_{k+1} | y^{k}, u^{k})$ is a linear function of $\pi_k$ since it is the marginal of the joint predicted belief $\bar{\pi}_{k+1|k}$ from \eqref{eq:bayesianPred}. Hence, $-I(X_{k+1}; Y_{k+1} | y^{k}, u^{k})$ is the convex function of a linear function of $\pi_k$, and thus is convex and continuous. \end{enumerate} The proof is complete since summation of the terms in \eqref{eq:concave_proof_step1} preserves convexity and continuity. \end{IEEEproof} Proposition \ref{proposition:estConvex} is surprising because it shows that, notwithstanding Lemma \ref{lemma:estConcave} and Theorem \ref{theorem:estConcave}, the equivalent formulation \eqref{eq:firstActiveEstimationMDP} of our active estimation problem \eqref{eq:activeEstimation} has cost functions that are convex in the belief state. Since standard POMDP techniques are based on these functions being concave in the belief state (cf.\ \cite{Araya2010,Krishnamurthy2016}), it does not further assist us in solving \eqref{eq:activeEstimation}. 
It does, however, suggest that a belief-state MDP reformulation of our active obfuscation problem \eqref{eq:activeObfuscation} using the belief-state form of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} may have useful concavity properties since it involves maximising, rather than minimising, the smoother entropy. We explore results for our active obfuscation problem in the next section. \section{Active Obfuscation Belief-State Reformulations and Structural Results} \label{sec:activeObf} In this section, we establish results analogous to Section \ref{sec:activeEst} but for our active obfuscation problem \eqref{eq:activeObfuscation}. In contrast to Section \ref{sec:activeEst} however, we shall show that the belief-state form of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} leads to a belief-state MDP reformulation of \eqref{eq:activeObfuscation} with concave cost and cost-to-go functions, whilst the belief-state form in \eqref{eq:secondBeliefAdditiveForm} does not. \subsection{Belief-State MDP Reformulations} Our first active obfuscation result is along the same lines as Theorem \ref{theorem:activeEstimationMDP} and establishes two belief-state MDP reformulations of our active obfuscation problem using the belief-state forms of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} and \eqref{eq:secondBeliefAdditiveForm}. \begin{theorem} \label{theorem:activeObfMDP} Define the functions \begin{align*} &\ell_k^o(\pi_{k}, u_{k})\\ &\quad\triangleq E_{Y_{k+1}, X_{k}} \left[ \left. c_k (x_{k}, u_{k}) - \tilde{\ell} \left( \pi_{k}, u_{k}, y_{k+1}\right) \right| \pi_{k}, u_{k} \right] \end{align*} and \begin{align*} g_k^o(\pi_{k}, u_{k}) &\triangleq E_{X_{k}} \left[ c_k (x_{k}, u_{k}) -\tilde{g}(\pi_k, u_k) | \pi_{k}, u_{k} \right] \end{align*} for $1 \leq k < T$, together with, $ \ell_T^o(\pi_{T}) \triangleq E_{X_{T}} \left[ \left. c_T (x_{T}) \right| \pi_{T} \right] $ and $ g_T^o(\pi_{T}) \triangleq E_{X_{T}} \left[ c_T (x_{T}) -\tilde{g}_T(\pi_T) | \pi_{T} \right] $ where $c_k$, $\tilde{\ell}$, and $\tilde{g}$ are defined in \eqref{eq:activeObfuscation}, \eqref{eq:firstBeliefAdditiveFormFunction}, and \eqref{eq:secondBeliefAdditiveForm}. Then, the active obfuscation problem \eqref{eq:activeObfuscation} is equivalent to: \begin{align} \label{eq:firstActiveObfMDP} \begin{aligned} &\inf_{\bar{\mu}} & & E_{\bar{\mu}} \left[ \left. \ell_T^o(\pi_{T}) + \sum_{k = 1}^{T-1} \ell_k^o \left( \pi_{k}, u_{k} \right) \right| \pi_1 \right], \end{aligned} \end{align} and to: \begin{align} \label{eq:secondActiveObfMDP} \begin{aligned} &\inf_{\bar{\mu}} & & E_{\bar{\mu}} \left[ \left. g_T^o(\pi_T) + \sum_{k = 1}^{T-1} g_k^o \left( \pi_{k}, u_{k} \right) \right| \pi_1 \right] \end{aligned} \end{align} where both infima are over potentially stochastic policies $\bar{\mu} = \{\bar{\mu}_k : 1 \leq k < T\}$ that are functions of the belief state $\pi_k$, subject to the constraints: \begin{align*} & & & \pi_{k+1} = \Pi\left( \pi_{k}, u_k, y_{k+1} \right)\\ & & & Y_{k+1} \sim p(y_{k+1} | \pi_{k}, u_{k})\\ & & & \mathcal{U} \ni U_k \sim \bar{\mu}_k(\pi_k) \end{align*} for $1 \leq k \leq T-1$. \end{theorem} \begin{IEEEproof} The proof is similar to that of Theorem \ref{theorem:activeEstimationMDP} with appropriate substitution of $\ell_k^o$ and $g_k^o$ for $\ell_k^e$ and $g_k^e$. 
\end{IEEEproof} \subsection{Dynamic Programming Equations} Given the belief-state MDP formulations of our active obfuscation problem in Theorem \ref{theorem:activeObfMDP}, then as discussed at the start of Section \ref{subsec:dynProgramEst}, we may consider only deterministic policies $\bar{\mu}$ of the belief state in the sense that $u_k = \bar{\mu}_k(\pi_k)$. The cost-to-go functions of our active obfuscation problem MDPs \eqref{eq:firstActiveObfMDP} and \eqref{eq:secondActiveObfMDP} are then \begin{align*} J_k^{o,\ell}(\pi_k) \triangleq \inf_{\bar{\mu}_k^{T-1}} E_{\bar{\mu}_k^{T-1}} \left[ \left. \ell_T^o(\pi_T) + \sum_{m = k}^{T-1} \ell_m^o \left( \pi_{m}, u_{m} \right) \right| \pi_{k} \right] \end{align*} and \begin{align*} J_k^{o,g}(\pi_k) \triangleq \inf_{\bar{\mu}_k^{T-1}} E_{\bar{\mu}_k^{T-1}} \left[ \left. g_T^o(\pi_T) + \sum_{m = k}^{T-1} g_m^o \left( \pi_{m}, u_{m} \right) \right| \pi_{k} \right], \end{align*} respectively, with $J_T^{o,\ell}(\pi_T) \triangleq \ell_T^o(\pi_T)$ and $J_T^{o,g}(\pi_T) \triangleq g_T^o(\pi_T)$. The cost-to-go functions $J_k^{o,\ell}$ and $J_k^{o,g}$ satisfy the dynamic programming recursions (cf.~\cite[Section 8.4.3]{Krishnamurthy2016}): \begin{align} \label{eq:firstObfValueFunction} \begin{split} J_k^{o,\ell}(\pi_k) &= \inf_{u_k \in \mathcal{U}} \left\{ \ell_k^o \left( \pi_{k}, u_{k} \right) \right. \\ &\quad\; \left. + E_{Y_{k+1}} \left[ J_{k+1}^{o,\ell}(\Pi(\pi_{k}, u_k, y_{k+1})) | \pi_k, u_k \right] \right\} \end{split} \end{align} for $1 \leq k < T$ with $J_T^{o,\ell}(\pi_T) = \ell_T^o(\pi_T)$, and \begin{align} \label{eq:secondObfValueFunction} \begin{split} J_k^{o,g}(\pi_k) &= \inf_{u_{k} \in \mathcal{U}} \left\{ g_k^o \left( \pi_k, u_{k} \right) \right. \\ &\quad\; \left. + E_{Y_{k+1}} \left[ J_{k+1}^{o,g}(\Pi(\pi_{k}, u_{k}, y_{k+1})) | \pi_k, u_{k} \right] \right\} \end{split} \end{align} for $1 \leq k < T$ with $J_T^{o,g}(\pi_T) = g_T^o(\pi_T)$ and with the optimisations subject to the same constraints as \eqref{eq:firstActiveObfMDP} and \eqref{eq:secondActiveObfMDP}. The cost-to-go functions $J_k^{o,\ell}$ and $J_k^{o,g}$ of our two belief-state MDP reformulations of \eqref{eq:activeObfuscation} are not equivalent, but are initially related via $ J_1^{o,g}(\pi_1) = J_1^{o,\ell}(\pi_1) - H(X_1 | Y_1) $ since the term $-H(X_1|Y_1)$ is omitted in the construction of \eqref{eq:firstActiveObfMDP} (cf.~Theorem \ref{theorem:activeObfMDP} and the proof of Theorem \ref{theorem:activeEstimationMDP}). Despite their different cost-to-go functions, the two MDP formulations of active obfuscation must yield the same controls resulting in a common optimal policy $\bar{\mu}^{o*} =\{\bar{\mu}_k^{o*} : 1 \leq k < T\}$ satisfying \begin{align*} \begin{split} \bar{\mu}_k^{o*}(\pi_k) &= u_k^{o*} \in \arginf_{u_k \in \mathcal{U}} \{ \ell_k^o ( \pi_{k}, u_{k} ) \\ &\; \qquad\qquad + E_{Y_{k+1}} [ J_{k+1}^{o,\ell}(\Pi(\pi_{k}, u_k, y_{k+1})) | \pi_k, u_k ] \} \end{split} \end{align*} and \begin{align*} \begin{split} \bar{\mu}_k^{o*}(\pi_k) &= u_k^{o*} \in \arginf_{u_k \in \mathcal{U}} \{ g_k^o ( \pi_{k}, u_{k} ) \\ &\; \qquad\qquad + E_{Y_{k+1}} [ J_{k+1}^{o,g}(\Pi(\pi_{k}, u_k, y_{k+1})) | \pi_k, u_k ] \}. \end{split} \end{align*} In order to use standard POMDP algorithms to solve our active obfuscation problem, we next seek to show that its cost-to-go functions have structural properties analogous to those of standard POMDPs of the form in \eqref{eq:standardPOMDP}. 
\subsection{Structural Results} Motivated by the convexity properties established in Proposition \ref{proposition:estConvex} for minimising the smoother entropy given by \eqref{eq:firstBeliefAdditiveForm}, we now instead consider its maximisation via \eqref{eq:firstActiveObfMDP}. \begin{lemma} \label{lemma:obfConcave} For any control $u_k \in \mathcal{U}$, the instantaneous and terminal costs $\ell_k^o(\pi_k, u_k)$ and $\ell_T^o(\pi_T)$ in \eqref{eq:firstActiveObfMDP} are concave and continuous in the belief state $\pi_k$ for $1 \leq k \leq T$. \end{lemma} \begin{IEEEproof} Note that $\ell_T^o(\pi_T) = \ell_T^e(\pi_T)$, and so $\ell_T^o(\pi_T)$ is concave and continuous via Proposition \ref{proposition:estConvex}. For any control $u_k \in \mathcal{U}$, note also that \begin{align*} \ell_k^o(\pi_k, u_k) &= 2 \sum_{i = 1}^N \pi_k(i) c_k(i,u_k) - \ell_k^e(\pi_k, u_k). \end{align*} The first term on the right-hand side is linear (and hence concave and continuous) in $\pi_k$, and the second term, $-\ell_k^e$, is concave and continuous in $\pi_k$ since $\ell_k^e$ is convex and continuous via Proposition \ref{proposition:estConvex}. The proof is complete since concavity and continuity are preserved by addition. \end{IEEEproof} Our main structural result for active obfuscation follows. \begin{theorem} \label{theorem:obfConcave} The cost-to-go function $J_k^{o,\ell}(\pi_k)$ of our active obfuscation problem \eqref{eq:activeObfuscation} reformulated as the belief-state MDP \eqref{eq:firstActiveObfMDP} is concave in $\pi_k$ for $1 \leq k \leq T$. \end{theorem} \begin{IEEEproof} From \cite[Theorem 8.4.1]{Krishnamurthy2016} via Lemma \ref{lemma:obfConcave}. \end{IEEEproof} Lemma \ref{lemma:obfConcave} and Theorem \ref{theorem:obfConcave} establish that our active obfuscation problem reformulated as the belief-state MDP in \eqref{eq:firstActiveObfMDP} has the same concavity properties as standard POMDPs of the form in \eqref{eq:standardPOMDP}. The following proposition shows that the alternative MDP reformulation of our active obfuscation problem in \eqref{eq:secondActiveObfMDP} does not share these properties. \begin{proposition} \label{proposition:obfConvex} For any control $u_k \in \mathcal{U}$, the instantaneous and terminal costs $g_k^o(\pi_k, u_k)$ and $g_T^o(\pi_T)$ in \eqref{eq:secondActiveObfMDP} are convex and continuous in the belief state $\pi_k$ for $1 \leq k \leq T$. \end{proposition} \begin{IEEEproof} By definition, we have that \begin{align*} g_T^o(\pi_T) &= 2\sum_{i = 1}^N \pi_T(i) c_T(i) - g_T^e(\pi_T). \end{align*} The first term on the right-hand side is linear (and hence convex and continuous) in $\pi_T$, and the second term $-g_T^e$ is convex in $\pi_T$ due to $g_T^e$ being concave in $\pi_T$ (as shown in Lemma \ref{lemma:estConcave}). Thus, $g_T^o$ is convex and continuous in $\pi_T$. Similarly, for any control $u_k \in \mathcal{U}$, we have that \begin{align*} g_k^o(\pi_k, u_k) &= 2\sum_{i = 1}^N \pi_k(i) c_k(i,u_k) - g_k^e(\pi_k, u_k), \end{align*} which is convex and continuous in $\pi_k$ via the same argument as for $g_T^o$. The proof is complete. \end{IEEEproof} The structural results of Lemma \ref{lemma:obfConcave}, Theorem \ref{theorem:obfConcave}, and Proposition \ref{proposition:obfConvex} are surprising since they mirror those of Lemma \ref{lemma:estConcave}, Theorem \ref{theorem:estConcave}, and Proposition \ref{proposition:estConvex} but concern maximisation of the smoother entropy instead of its minimisation. 
Lemma \ref{lemma:obfConcave} and Theorem \ref{theorem:obfConcave} are particularly surprising since maximisation of the smoother entropy leads to concave cost and cost-to-go functions whilst maximisation of the sum of marginal entropies (cf.\ \eqref{eq:entropyBounds}) does not (only minimisation of the sum of marginal entropies leads to concave cost and cost-to-go functions, cf.\ \cite[Section 8.4.3]{Krishnamurthy2016}). The different belief-state forms of the smoother entropy we established in Section \ref{sec:directedInformation} are key to our surprising structural results. Indeed, Lemma \ref{lemma:obfConcave} and Theorem \ref{theorem:obfConcave} consider the MDP reformulation of our active obfuscation problem \eqref{eq:firstActiveObfMDP} based on the belief-state form of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} whilst Lemma \ref{lemma:estConcave} and Theorem \ref{theorem:estConcave} consider the MDP reformulation of our active estimation problem \eqref{eq:secondActiveEstimationMDP} based on the alternative belief-state form of the smoother entropy in \eqref{eq:secondBeliefAdditiveForm}. Similarly, Proposition \ref{proposition:estConvex} considers the MDP reformulation of active estimation based on the belief-state form of the smoother entropy in \eqref{eq:firstBeliefAdditiveForm} whilst Proposition \ref{proposition:obfConvex} considers the MDP reformulation of active obfuscation based on the belief-state form of the smoother entropy in \eqref{eq:secondBeliefAdditiveForm}. As we show next, the practical significance of our structural results is that they enable the solution of our active estimation and obfuscation problems using standard techniques. \section{Bounded-Error PWLC Approximate Solutions} \label{sec:approx} Directly solving our active estimation and obfuscation problems using the dynamic programming recursions of \eqref{eq:firstEstValueFunction}, \eqref{eq:secondEstValueFunction}, \eqref{eq:firstObfValueFunction}, and \eqref{eq:secondObfValueFunction} is often intractable since the cost-to-go functions are, in general, infinite dimensional. Furthermore, use of existing POMDP algorithms requires cost and cost-to-go functions that are piecewise-linear concave (PWLC) in the belief state (cf.\ \cite{Araya2010} and \cite[Chapter 8.4.4]{Krishnamurthy2016}). In this section, we present an approach that overcomes these difficulties and yields tractable bounded-error approximate solutions to our active estimation and obfuscation problems by exploiting the concave belief-state MDP formulations of \eqref{eq:secondActiveEstimationMDP} and \eqref{eq:firstActiveObfMDP}. Our approach generalises that of \cite{Araya2010} to finite-horizon undiscounted POMDPs and involves: \begin{enumerate} \item Constructing bounded-error PWLC approximations (i.e., finite-dimensional representations) of the concave costs $g_k^e$ for active estimation or $\ell_k^o$ for active obfuscation; and, \item Using the PWLC approximations of $g_k^e$ or $\ell_k^o$ with standard POMDP algorithms to solve (finite-dimensional) dynamic programming recursions for PWLC approximations of the cost-to-go functions $J_k^{e,g}$ or $J_k^{o, \ell}$. \end{enumerate} Use of standard POMDP algorithms is important since they are increasingly able to handle large state and measurement spaces (cf.\ \cite{Walraven2019,Garg2019,Haugh2020}). In this section, we shall assume that the measurement space $\mathcal{Y}$ is finite (e.g., as given or obtained by discretising a continuous space). 
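To fix ideas before presenting the construction, the short Python sketch below evaluates the concave stage cost $g_k^e$ of Lemma \ref{lemma:estConcave} at a given belief, which is the quantity (together with $\ell_k^o$) that the PWLC approximations of this section are built from. The sketch assumes the column-stochastic convention $A^{ij}(u) = p(X_{k+1} = i \,|\, X_k = j, u_k = u)$, costs stored as an array with entries $c_k(i,u)$, and natural logarithms; it is an illustration only, not the implementation used for the simulations in Section \ref{sec:results}.
\begin{verbatim}
import numpy as np

def entropy(p):
    """Shannon entropy in nats with the convention 0 log 0 = 0."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def g_e(pi, u, A, c):
    """g_k^e(pi, u) = H(X_k | X_{k+1}, y^k, u^k) + sum_i pi(i) c_k(i, u),
    with the joint predicted belief built from pi and A[u]."""
    joint = A[u] * pi[None, :]     # joint[i, j] = p(x_{k+1}=i, x_k=j | y^k, u^k)
    marg_next = joint.sum(axis=1)  # p(x_{k+1} | y^k, u^k)
    h_cond = entropy(joint.ravel()) - entropy(marg_next)
    return h_cond + float(pi @ c[:, u])
\end{verbatim}
Evaluating \texttt{g\_e} at two beliefs and at their midpoint provides a quick numerical check of the concavity asserted in Lemma \ref{lemma:estConcave}.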
\subsection{Bounded-Error PWLC Cost Approximations} We first note that the concavity of the cost functions $g_k^e$ and $\ell_k^o$ established in Lemmas \ref{lemma:estConcave} and \ref{lemma:obfConcave} allows us to approximate them using PWLC functions. Specifically, let us consider a finite set $\Xi \subset \Delta^N$ of \emph{base points} $\xi \in \Xi$ at which the gradients $\nabla_\xi g_k^e(\xi, u)$ and $\nabla_\xi \ell_k^o(\xi, u)$ of $g_k^e(\cdot, u)$ and $\ell_k^o(\cdot, u)$, respectively, are well defined for all $u \in \mathcal{U}$. For each control $u \in \mathcal{U}$, the tangent hyperplane to $g_k^e(\cdot, u)$ at $\xi \in \Xi$ is \begin{align*} \omega_{k,\xi}^{e,u} (\pi) \triangleq g_k^e(\xi, u) + \left< (\pi - \xi), \nabla_\xi g_k^e(\xi, u) \right> = \left< \pi, \alpha_{k,\xi}^{e,u} \right> \end{align*} and the tangent hyperplane to $\ell_k^o(\cdot, u)$ at $\xi \in \Xi$ is \begin{align*} \omega_{k,\xi}^{o,u} (\pi) \triangleq \ell_k^o(\xi, u) + \left< (\pi - \xi), \nabla_\xi \ell_k^o(\xi, u) \right> = \left< \pi, \alpha_{k,\xi}^{o,u} \right> \end{align*} for $\pi \in \Delta^N$ where $\left< \cdot, \cdot \right>$ denotes the inner product, and $\alpha_{k,\xi}^{e,u} \triangleq g_k^e(\xi, u) + \nabla_\xi g_k^e(\xi, u) - \left< \xi, \nabla_\xi g_k^e(\xi, u) \right> \in \mathbb{R}^N$ and $\alpha_{k,\xi}^{o,u} \triangleq \ell_k^o(\xi, u) + \nabla_\xi \ell_k^o(\xi, u) - \left< \xi, \nabla_\xi \ell_k^o(\xi, u) \right> \in \mathbb{R}^N$. The hyperplanes $\omega_{k,\xi}^{e,u}$ and $\omega_{k,\xi}^{o,u}$ form (upper bound) PWLC approximations $\hat{g}_k^e$ and $\hat{\ell}_k^o$ to $g_k^e$ and $\ell_k^o$, i.e., \begin{align*} \hat{g}_k^e(\pi, u) \triangleq \min_{\xi \in \Xi} \left< \pi, \alpha_{k,\xi}^{e,u} \right> \geq g_k^e(\pi, u) \end{align*} and \begin{align*} \hat{\ell}_k^o(\pi, u) \triangleq \min_{\xi \in \Xi} \left< \pi, \alpha_{k,\xi}^{o,u} \right> \geq \ell_k^o(\pi, u). \end{align*} PWLC approximations of the concave terminal costs $g_T^e$ and $\ell_T^o$ are constructed in an identical manner (without the need to consider the controls). As shown in the following lemma, the approximation errors associated with $\hat{g}_k^e$ and $\hat{\ell}_k^o$ are bounded. \begin{lemma} \label{lemma:bounds} Consider the set of base points $\Xi$ and associated PWLC approximations $\hat{g}_k^e$ and $\hat{\ell}_k^o$ for $1 \leq k \leq T$. Then there exist scalar constants $\kappa^e, \kappa^o > 0$, and $\beta^e, \beta^o \in (0,1)$ such that the errors in the approximations $\hat{g}_k^e$ and $\hat{\ell}_k^o$ are bounded, namely, $ |g_k^e(\pi,u) - \hat{g}_k^e(\pi,u)| \leq \kappa^e(\delta_\Xi)^{\beta^e} $ and $ |\ell_k^o(\pi,u) - \hat{\ell}_k^o(\pi,u)| \leq \kappa^o(\delta_\Xi)^{\beta^o} $ for all $1 \leq k \leq T$, all $\pi \in \Delta^N$, and all $u \in \mathcal{U}$ where $\delta_\Xi \triangleq \max_{\pi \in \Delta^N} \min_{\xi \in \Xi} \| \pi - \xi \|_1$ is the sparsity of the base-point set $\Xi$ and $\|\cdot\|_1$ denotes the $l^1$-norm. \end{lemma} \begin{IEEEproof} Recall that a function $f : \mathcal{D} \to \mathbb{R}$ with $\mathcal{D} \subset \mathbb{R}^N$ is $\beta$-H{\"o}lder continuous on $\mathcal{D}$ if there exist constants $\beta \in (0,1]$ and $K_\beta >0$ such that $ |f(x) - f(y)| \leq K_\beta \|x - y\|_1^\beta $ for all $x,y \in \mathcal{D}$ \cite{Araya2010}. 
We note that the (negative) entropy function $f(x) = \sum_{i = 1}^N x(i) \log x(i)$ is $\beta$-H{\"o}lder continuous on $\Delta^N$ with $\beta < 1$ and the convention $0 \log 0 = 0$ (cf.\ \cite[Example 1.1.4]{Fiorenza2017} and \cite[p. 7]{Araya2010}). Furthermore, continuous linear functions are $\beta$-H{\"o}lder continuous, as are the sums, differences, and compositions of $\beta$-H{\"o}lder continuous functions (cf.~\cite[Propositions 1.2.1 and 1.2.2]{Fiorenza2017}). Thus, for each control $u \in \mathcal{U}$, the functions $g_k^e$ and $\ell_k^o$ are $\beta$-H{\"o}lder continuous in $\pi_k$ since each term in $g_k^e$ and $\ell_k^o$ is either linear in $\pi_k$ or can be expressed as the composition of a linear function and the entropy function (e.g.\ via \eqref{eq:bayesianPred}). The $\beta$-H{\"o}lder continuity of $g_k^e$ and $\ell_k^o$ combined with their continuity and concavity properties established in Lemmas \ref{lemma:estConcave} and \ref{lemma:obfConcave} implies that $g_k^e$ and $\ell_k^o$ satisfy the conditions of \cite[Theorem 4.3]{Araya2010} for each control $u \in \mathcal{U}$. The lemma assertion then follows from \cite[Theorem 4.3]{Araya2010} (noting that we equivalently consider upper bounds on concave functions rather than lower bounds on convex functions). \end{IEEEproof} \subsection{PWLC Dynamic Programming and Error Bounds} Standard POMDP algorithms provide a means of solving belief-state dynamic programming recursions when the cost and cost-to-go functions involved are PWLC in the belief state. Hence, by replacing the costs $g_k^e$ and $\ell_k^o$ in the dynamic programming recursions of \eqref{eq:secondEstValueFunction} and \eqref{eq:firstObfValueFunction} with the PWLC approximations $\hat{g}_k^e$ and $\hat{\ell}_k^o$, the recursions can be solved for approximate cost-to-go functions $\hat{J}_k^{e,g}$ and $\hat{J}_k^{o,\ell}$ using standard POMDP algorithms. Under the assumption that $\mathcal{Y}$ is finite, the resulting approximate cost-to-go functions are PWLC, which standard POMDP algorithms can exploit by operating directly on the sets of vectors $\{\alpha_{k,\xi}^{e,u} : \xi \in \Xi,\, u \in \mathcal{U}\}$ and $\{\alpha_{k,\xi}^{o,u} : \xi \in \Xi,\, u \in \mathcal{U}\}$ that define $\hat{g}_k^e$ and $\hat{\ell}_k^o$ (see \cite[Chapter 7.5]{Krishnamurthy2016} and \cite[Section 3.3]{Araya2010} for details of these algorithms and their inherent requirement for concavity of the cost and cost-to-go functions in the belief state). The following theorem shows that the resulting errors between the (exact) cost-to-go functions and the PWLC approximate cost-to-go functions are bounded by virtue of Lemma \ref{lemma:bounds}. \begin{theorem} \label{theorem:bounds} Consider the set of base points $\Xi$, the PWLC approximations $\hat{g}_k^e$ and $\hat{\ell}_k^o$, and the associated approximate cost-to-go functions $\hat{J}_k^{e,g}$ and $\hat{J}_k^{o,\ell}$. Then there exist scalar constants $\kappa^e, \kappa^o > 0$, and $\beta^e, \beta^o \in (0,1)$ such that \begin{align} \label{eq:actEstValueBound} \| J_k^{e,g} - \hat{J}_k^{e,g}\|_\infty \leq (T - k + 1)\kappa^e (\delta_\Xi)^{\beta^e} \end{align} and \begin{align} \label{eq:actObfValueBound} \| J_k^{o,\ell} - \hat{J}_k^{o,\ell}\|_\infty \leq (T - k + 1)\kappa^o (\delta_\Xi)^{\beta^o} \end{align} for $1 \leq k \leq T$ where $\|\cdot\|_\infty$ denotes the $L^\infty$-norm. \end{theorem} \begin{IEEEproof} We prove \eqref{eq:actEstValueBound} via (backwards) induction on $k$. 
For $k = T$, \eqref{eq:actEstValueBound} holds via Lemma \ref{lemma:bounds} since $J_T^{e,g} = g_T^e$ and $\hat{J}_T^{e,g} = \hat{g}_T^e$. Let $\mathcal{T}$ denote the dynamic programming mapping using $g_k^e$ in the sense that \begin{align*} (\mathcal{T}J_{k+1}^{e,g})(\pi_k) &\triangleq \inf_{u_{k} \in \mathcal{U}} \{ g_k^e ( \pi_k, u_{k} ) \\ &\qquad + E_{Y_{k+1}} [ J_{k+1}^{e,g}(\Pi(\pi_{k}, u_{k}, y_{k+1})) | \pi_k, u_{k} ] \}, \end{align*} for $\pi_k \in \Delta^N$, and similarly let $\hat{\mathcal{T}}$ denote the dynamic programming mapping using $\hat{g}_k^e$ in the sense that \begin{align*} (\hat{\mathcal{T}}J_{k+1}^{e,g})(\pi_k) &\triangleq \inf_{u_{k} \in \mathcal{U}} \{ \hat{g}_k^e ( \pi_k, u_{k} ) \\ &\qquad + E_{Y_{k+1}} [ J_{k+1}^{e,g}(\Pi(\pi_{k}, u_{k}, y_{k+1})) | \pi_k, u_{k} ] \} \end{align*} for $\pi_k \in \Delta^N$. Then, assuming that \eqref{eq:actEstValueBound} holds for times $T-1, \ldots, k+1$, at time $k$ we have that \begin{align*} &\| J_k^{e,g} - \hat{J}_k^{e,g}\|_\infty\\ &\quad= \| \mathcal{T} J_{k+1}^{e,g} - \hat{\mathcal{T}}\hat{J}_{k+1}^{e,g}\|_\infty\\ &\quad\leq \| \mathcal{T} \hat{J}_{k+1}^{e,g} - \hat{\mathcal{T}} \hat{J}_{k+1}^{e,g}\|_\infty + \| \mathcal{T} J_{k+1}^{e,g} - \mathcal{T} \hat{J}_{k+1}^{e,g}\|_\infty\\ &\quad\leq \kappa^e (\delta_\Xi)^{\beta^e} + \| \mathcal{T} J_{k+1}^{e,g} - \mathcal{T} \hat{J}_{k+1}^{e,g}\|_\infty\\ &\quad\leq \kappa^e (\delta_\Xi)^{\beta^e} + \| J_{k+1}^{e,g} - \hat{J}_{k+1}^{e,g}\|_\infty\\ &\quad\leq (T - k + 1) \kappa^e (\delta_\Xi)^{\beta^e} \end{align*} where the first equality holds by definition of $\mathcal{T}$ and $\hat{\mathcal{T}}$; the first inequality is the triangle inequality; the second inequality holds via Lemma \ref{lemma:bounds} since $\mathcal{T}$ and $\hat{\mathcal{T}}$ differ in their use of $g_k^{e}$ and $\hat{g}_k^{e}$; the third inequality holds due to the monotonicity and constant-shift properties of the dynamic programming operator (cf.~\cite[Lemmas 1.1.1 and 1.1.2]{Bertsekas2012} and the argument in the convergence/contraction proof of \cite[Proposition 1.2.6]{Bertsekas2012}); and, the last inequality follows from the induction hypothesis. The proof of \eqref{eq:actEstValueBound} via induction is complete. With \eqref{eq:actObfValueBound} proved using an identical argument, the proof is complete. \end{IEEEproof} Lemma \ref{lemma:bounds} and Theorem \ref{theorem:bounds} imply that the error in the PWLC cost and cost-to-go function approximations can be made arbitrarily small by decreasing the sparsity $\delta_\Xi$ of the base points $\Xi$. Whilst the problem of how to select base points in PWLC approximations is largely open \cite{Araya2010}, recent point-based POMDP solvers (e.g.\ \cite{Walraven2019} for finite-horizon POMDPs and \cite{Garg2019,Kurniawati2008} for infinite-horizon POMDPs) provide insights based on reachability results. We illustrate PWLC approximations with simple base-point selection schemes leading to suitable performance in the next section. \section{Examples and Simulation Results} \label{sec:results} In this section, we illustrate and compare our active obfuscation and estimation problems in examples inspired by privacy for cloud-based control and uncertainty-aware navigation. \subsection{Privacy in Cloud-based Control: Active Obfuscation} For our first example, we consider cloud-based control as illustrated in Fig.~\ref{fig:cloud_based} and based on the scheme described in \cite{Tanaka2017,Nekouei2019}. 
In cloud-based control, a client seeks to have a dynamical system controlled by a cloud service without explicitly disclosing the system's state trajectory $X^T$. The client provides the cloud service with outputs $Y_k$ of a privacy filter and the cloud service computes and returns control inputs $U_k$ using a policy provided by the client. In the worst case, the cloud service knows the system dynamics and the privacy filter (i.e., the measurement model). The client's problem of designing a policy that keeps the state trajectory $X^T$ private whilst ensuring a suitable level of system performance is consistent with our active obfuscation problem \eqref{eq:activeObfuscation}. \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{cloud.pdf} \caption{Cloud-based control scheme described in \cite{Tanaka2017,Nekouei2019}.} \label{fig:cloud_based} \end{figure} \subsubsection{Simulation Example} To illustrate active obfuscation in cloud-based control, let us consider a POMDP with three states $\mathcal{X} = \{1, 2, 3\}$, three controls $\mathcal{U} = \{1, 2, 3\}$, and three possible measurements (i.e., outputs of the privacy filter) $\mathcal{Y} = \{1, 2, 3\}$. Let us also consider a regulator-type system-performance index that penalises deviations of the system from the third state $e_3$ at the final time $T = 10$, namely, $c_T(x_T) = \mathbbm{1}_{\{x_T \neq e_3\}}$ and $c_k(x_k, u_k) = 0$ for all $x_k \in \mathcal{X}$ and $u_k \in \mathcal{U}$. We consider two state transition matrices \begin{align*} A(1) = \begin{bmatrix} 0.8 & 0.8 & 0.1\\ 0.1 & 0.1 & 0.8\\ 0.1 & 0.1 & 0.1 \end{bmatrix} \text{ and } A(2) = \begin{bmatrix} 0.1 & 0.1 & 0.1\\ 0.8 & 0.1 & 0.1\\ 0.1 & 0.8 & 0.8 \end{bmatrix} \end{align*} where the element in the $i$th row and $j$th column corresponds to $A^{ij}(u)$. We also consider a third transition matrix $A(3) \in \mathbb{R}^{3 \times 3}$ with $A^{ij}(3) = 0.95$ if $i = j$ and $0.025$ otherwise. The output privacy filter is described by the emission matrix \begin{align*} \begin{bmatrix} 0.61 & 0.3 & 0.09\\ 0.3 & 0.4 & 0.3\\ 0.09 & 0.3 & 0.61 \end{bmatrix} \end{align*} with the element in the $i$th row and $j$th column corresponding to the probability $B^i(Y_k = j, u_k)$ for all $u_k \in \mathcal{U}$ (the observations are control-invariant). We solved our active obfuscation problem \eqref{eq:activeObfuscation} using the PWLC approximate solution approach described in Section \ref{sec:approx}. Specifically, we constructed a PWLC approximation $\hat{\ell}_k^o$ of the costs $\ell_k^o$ from the belief-state MDP reformulation of \eqref{eq:firstActiveObfMDP}. As base points $\Xi$, we selected the centre of the simplex $\Delta^N$ together with points near the vertices, each with a value of $1 - 0.01(N-1)$ in its largest element and $0.01$ in each of the other $N-1$ elements. We then solved for an approximately optimal policy using a standard POMDP solver\footnote{https://www.pomdp.org/code/} suitably modified for PWLC costs (as detailed in \cite[Section 8.4.5]{Krishnamurthy2016}) and implementing the incremental pruning algorithm. 
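The base-point and hyperplane construction used here can be sketched in a few lines of Python. The snippet below is illustrative only: it uses the belief entropy as a stand-in concave function (its gradient is available in closed form), whereas the implementation above applies the identical construction to $\ell_k^o$ with its own gradients; the base points are those just described for $N = 3$.
\begin{verbatim}
import numpy as np

N = 3
# Base points: the centre of the simplex plus a point near each vertex
# (1 - 0.01*(N-1) in the largest element and 0.01 elsewhere).
centre = np.full(N, 1.0 / N)
near_vertices = []
for i in range(N):
    xi = np.full(N, 0.01)
    xi[i] = 1.0 - 0.01 * (N - 1)
    near_vertices.append(xi)
base_points = [centre] + near_vertices

def H(pi):
    """Belief entropy, used as a stand-in concave cost."""
    p = pi[pi > 0]
    return float(-(p * np.log(p)).sum())

def grad_H(xi):
    return -(np.log(xi) + 1.0)

# alpha_xi = f(xi) + grad f(xi) - <xi, grad f(xi)>, so the tangent
# hyperplane at xi is <pi, alpha_xi>; for the entropy this is -log(xi).
alphas = [H(xi) + grad_H(xi) - float(xi @ grad_H(xi)) for xi in base_points]

def pwlc(pi):
    """PWLC upper bound: min over base points of <pi, alpha_xi>."""
    return min(float(pi @ a) for a in alphas)

# The approximation upper-bounds the concave function by construction.
rng = np.random.default_rng(0)
for _ in range(5):
    pi = rng.dirichlet(np.ones(N))
    assert pwlc(pi) >= H(pi) - 1e-12
\end{verbatim}
The resulting $\alpha$-vectors are the objects the modified solver operates on, and enlarging the base-point set (reducing $\delta_\Xi$) tightens the approximation in line with Lemma \ref{lemma:bounds}.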
For the purpose of comparison, we used the same POMDP solver to implement: \begin{itemize} \item A standard POMDP policy (\emph{Stand.\ POMDP}) solving \eqref{eq:standardPOMDP} instead of \eqref{eq:activeObfuscation} (but with the same costs $c_T$ and $c_k$); and, \item The minimum directed information policy (\emph{Min.\ Dir.\ Info.}) proposed in \cite{Tanaka2017} to minimise the information gain provided directly by the measurements, which is equivalent to our active obfuscation approach with the (negative) smoother entropy $-H(X^T | Y^T, U^{T-1})$ replaced by $I(X^T \to Y^T \| U^{T-1})$ in \eqref{eq:activeObfuscation}. \end{itemize} Our implementation of the \emph{Min.\ Dir.\ Info.} policy of \cite{Tanaka2017} exploits the new results developed in this paper. Specifically, in light of Theorem \ref{theorem:directedInformation} and Corollary \ref{corollary:additive}, the \emph{Min.\ Dir.\ Info.} policy can be computed in the same manner as our active obfuscation policy with omission of the terms due to $H(X^T \| Y^{T-1}, U^{T-1})$ in $\ell_k^o$ (i.e.\ by constructing a PWLC approximation of the difference $H(X_{k+1} | y^{k+1}, u^k) - H(X_{k+1} | y^k, u^k)$ instead of $\tilde{\ell}$ in $\ell_k^o$). We omit comparisons with policies that minimise the negative sum of marginal entropies (cf.\ \eqref{eq:entropyBounds}) since the cost-to-go functions of such policies are not concave in the belief state (as discussed after Proposition \ref{proposition:obfConvex}), and so cannot be computed with standard POMDP solvers. \begin{table}[t!] \begin{center} \caption{Cloud-Based Control Privacy: Estimated terminal cost, smoother entropy, total cost, and maximum a posteriori (MAP) error probabilities (best values in bold).} \label{tbl:cloud} \begin{tabular}{@{}lcccc@{}} \toprule \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Policy}}} & \textbf{Term. Cost} & \textbf{Smoother} & \textbf{Total Cost} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}MAP Err.\\ Prob. \end{tabular}}} \\ \multicolumn{1}{c}{} & $E[c_T(x_T)]$ & \textbf{Entropy}& \eqref{eq:activeObfuscation} & \\ \cmidrule(rl){1-5} \textbf{Active Obf.} & 0.2752 & \textbf{6.3797} & \textbf{-6.1045} & \textbf{0.9400} \\ \textbf{Min. Dir. Info.} & 0.1935 & 4.1378 & -3.9443 & 0.7899 \\ \textbf{Stand.\ POMDP} & \textbf{0.1892} & 4.1010 & -3.9115 & 0.7757 \\ \bottomrule \end{tabular} \end{center} \end{table} \subsubsection{Simulation Results} We performed $1000$ Monte Carlo simulations of each policy. The initial state of the system in each simulation was selected from a uniform distribution over $\mathcal{X}$. Table \ref{tbl:cloud} summarises the estimated smoother entropies under each policy as well as the estimated probability of error of maximum \emph{a posteriori} (MAP) estimates of the state trajectory computed via the Viterbi algorithm (cf.\ \cite[Section 3.5.3]{Krishnamurthy2016}). The smoother entropies were estimated by averaging the entropies $H(X^T | y^{T}, u^{T-1})$ of the posterior state distributions $p(x^{T} | y^{T}, u^{T-1})$ over the Monte Carlo runs, whilst the MAP error probabilities were estimated by counting the number of times the Viterbi trajectory estimate differed from the true state trajectory (averaging over the Monte Carlo runs). From Table \ref{tbl:cloud} we see that our active obfuscation policy increases the smoother entropy more than the \emph{Min.\ Dir.\ Info.} policy of \cite{Tanaka2017}. 
As a consequence, the error probability of MAP estimates under our active obfuscation policy is $6.79\%$ greater than under the \emph{Min.\ Dir.\ Info.} policy. The reason for this increase is that our active obfuscation policy both increases the unpredictability of the state process, i.e.\ $H(X^T \| Y^{T-1}, U^{T-1})$, and decreases the information gained from the observations, i.e.\ $I(X^T \to Y^T \| U^{T-1})$ (cf.\ Theorem \ref{theorem:directedInformation}), whilst the \emph{Min.\ Dir.\ Info.} policy considers only decreasing the information in the observations without also increasing the unpredictability of the state process. \subsection{Uncertainty-Aware Navigation: Active Estimation} For our second example, we consider an uncertainty-aware navigation problem inspired by those in robotics (e.g., \cite{Roy1999, Thrun2005,Nardi2019}). In our uncertainty-aware navigation problem, an agent (starting from an unknown initial location) seeks to reach a given goal location whilst actively localising itself so as to avoid becoming lost and so as to enable its path to the goal to be estimated (for the purpose of later being retraced, communicated, or used for mapping). The problem of determining the agent's uncertainty-aware navigation policy is thus consistent with our active estimation problem \eqref{eq:activeEstimation} with $X_k$ being the agent's navigation state (e.g., position), $U_k$ being the agent's controls (e.g., movement direction), and $Y_k$ being the measurements of the agent's state from its navigation sensors (e.g., measurements of its position). \subsubsection{Simulation Example} For the purpose of simulations, we consider a modified version of the well-known \texttt{4x3.95} test POMDP\footnote{https://www.pomdp.org/examples/} \cite{Parr1995,Littman1995} in which an agent navigates in a grid surrounded by walls as shown in Fig.\ \ref{fig:navigation}(a). Each cell in the grid constitutes a state in the agent's state space $\mathcal{X} = \{1, \ldots, 12\}$ (enumerated top-to-bottom, left-to-right). The agent has five possible control actions corresponding to: transitioning to one of the four neighbouring cells left, right, up, or down with probability $0.8$ (failing to move with probability $0.2$); or, staying put with probability $1$. If a transition would take the agent out of the grid then it remains stationary. The agent receives measurements $\mathcal{Y} = \{0,1,2,3,4\}$ corresponding to the number of walls detected adjacent to its current cell. In each cell, the agent detects a wall when it is present with probability $0.9$, but detects a wall when it is not present with probability $0.1$. We highlight that the dimensions of the state, measurement, and control spaces in this example exceed those of other recent uncertainty-aware POMDP examples (e.g.\ the grid-info $\rho$-POMDP of \cite[Section 6]{Fehr2018}). The agent is initially placed (uniformly) randomly in one of the cells and is not provided with knowledge of this cell. Over a horizon of $T = 10$, the agent seeks to move so that it finishes in the bottom-right-most cell with knowledge of the path it took (and where it started). We model this situation by considering our active estimation problem \eqref{eq:activeEstimation} with $c_T(x_T) = \mathbbm{1}_{\{x_T \neq 12\}}$ and $c_k(x_k, u_k) = 0$ for all $x_k \in \mathcal{X}$ and $u_k \in \mathcal{U}$. \begin{figure}[t!] 
\centering \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=\textwidth]{Picture1.pdf} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=\textwidth]{Picture2.pdf} \caption{} \end{subfigure}\\ \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=\textwidth]{Picture4.pdf} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=\textwidth]{Picture3.pdf} \caption{} \end{subfigure} \caption{Active Estimation Example: (a) Agent \& Goal; and, Trajectories from (b) Our Active Policy, (c) Min.\ Marg.\ Ent.\ Policy, and, (d) Stand.\ POMDP Policy (bottom in black) and Min.\ Term.\ Ent.\ Policy (top in red and filled).} \label{fig:navigation} \end{figure} We solved our active estimation problem \eqref{eq:activeEstimation} using the PWLC approximate solution approach detailed in Section \ref{sec:approx}. That is, we constructed a PWLC approximation $\hat{g}_k^e$ of the costs $g_k^e$ in the belief-state MDP reformulation of \eqref{eq:secondActiveEstimationMDP} using the same approach for selecting base points $\Xi$ as in our cloud-based control example, and we solved for an approximately optimal policy using the same POMDP solver as in our cloud-based control example. For comparison, we also implemented: \begin{itemize} \item A standard POMDP policy (\emph{Stand.\ POMDP}) solving \eqref{eq:standardPOMDP} instead of \eqref{eq:activeEstimation} (but with the same costs $c_T$ and $c_k$); \item A minimum marginal entropy policy (\emph{Min.\ Marg.\ Ent.}) as in \cite{Krishnamurthy2007,Krishnamurthy2016,Araya2010} that minimises the sum $\sum_{k = 1}^T H(X_k | Y^k, U^{k-1})$ instead of $H(X^T | Y^T, U^{T-1})$ in \eqref{eq:activeEstimation} and is solvable using a PWLC approximation of $g_k^e$ with $H(X_k | y^k, u^{k-1})$ instead of $\tilde{g}$; and, \item A minimum terminal entropy policy (\emph{Min.\ Term.\ Ent.}) as in \cite{Roy2005} that minimises $H(X_T | Y^T, U^{T-1})$ instead of the smoother entropy in \eqref{eq:activeEstimation} and is solvable using a PWLC approximation with $\tilde{g}$ omitted from $g_k^e$. \end{itemize} We omit a policy involving the sum $\sum_{k = 1}^T H(X_k | Y^T, U^{T-1})$ as in \eqref{eq:entropyBounds} since it does not have an existing belief-state form (note that $H(X_k | Y^T, U^{T-1}) = H(X^T | Y^T, U^{T-1}) - H(X_1^{k-1}, X_{k+1}^T | X_k, Y^T, U^{T-1})$). \subsubsection{Simulation Results} The results of $1000$ Monte Carlo simulations of each policy are summarised in Table \ref{tbl:navigation}. Representative realisations of each of the policies are shown in Fig.~\ref{fig:navigation}(b)-(d) for simulations where the agent starts in the first state. Transitions not shown in Fig.~\ref{fig:navigation}(b)-(d) correspond to the agent staying in the goal state until $T = 10$ after reaching it. Table \ref{tbl:navigation} suggests that the standard POMDP policy results in the lowest terminal cost since it moves the agent directly towards the goal (see Fig.~\ref{fig:navigation}(d)). Our active estimation policy minimises the smoother entropy and the total cost, but has a greater terminal cost than the other policies. Indeed, as illustrated in Fig.~\ref{fig:navigation}(b), our active estimation policy often reduces the uncertainty about the initial state $X_1$, and hence the entire trajectory, by initially electing to keep the agent still so as to receive measurements without changing the state. 
Our active estimation policy elects only to move the agent after the initial state uncertainty is reduced, which leads to better trajectory estimates (as evidenced by the lesser MAP error probability in Table \ref{tbl:navigation}) but sometimes results in time being exhausted before the agent reaches the goal. In contrast, the \emph{Min.\ Marg.\ Ent.} and \emph{Min.\ Term.\ Ent.} policies typically elect to move immediately and reduce instantaneous state uncertainties by passing through the distinct states in the middle with no surrounding walls and keeping the agent still at either isolated time instances $k > 1$ (see Fig.~\ref{fig:navigation}(c)) or at the end of the trajectory (see Fig.~\ref{fig:navigation}(d)). The \emph{Min.\ Marg.\ Ent.} and \emph{Min.\ Term.\ Ent.} policies thus achieve lesser terminal costs but greater smoother entropies compared to our active estimation policy. \begin{table}[t!] \begin{center} \caption{Uncertainty-Aware Navigation: Estimated terminal cost, smoother entropy, total cost, and maximum a posteriori (MAP) error probabilities (best values in bold).} \label{tbl:navigation} \begin{tabular}{@{}lcccc@{}} \toprule \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Policy}}} & \textbf{Term. Cost} & \textbf{Smoother} & \textbf{Total Cost} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}MAP Err.\\ Prob. \end{tabular}}} \\ \multicolumn{1}{c}{} & $E[c_T(x_T)]$ & \textbf{Entropy}& \eqref{eq:activeEstimation} & \\ \cmidrule(rl){1-5} \textbf{Active Est.} & 0.1348 & \textbf{1.1706} & \textbf{1.3054} & \textbf{0.3860} \\ \textbf{Min. Marg. Ent.} & 0.0229 & 1.6390 & 1.6620 & 0.4690 \\ \textbf{Min. Term. Ent.} & 0.0063 & 1.7687 & 1.7750 & 0.5170\\ \textbf{Stand. POMDP} & \textbf{0.0043} & 1.7795 & 1.7838 & 0.5230 \\ \bottomrule \end{tabular} \end{center} \end{table} \section{Conclusion} \label{sec:conclusion} We investigated the smoother entropy (i.e.\ the conditional entropy of the state trajectory given measurements and controls) as a tractable criterion for active state estimation and obfuscation. We established novel forms of the smoother entropy using the Marko-Massey theory of directed information that lead to reformulations of our active estimation and obfuscation problems as belief-state MDPs with concave cost and cost-to-go functions. We used the concavity properties to find bounded-error solutions to both our active estimation and obfuscation problems using standard POMDP techniques. The applicability of our active obfuscation and estimation problems to privacy in cloud-based control and uncertainty-aware navigation was illustrated through simulations. Future work could include investigating reinforcement learning for active estimation and obfuscation problems with smoother-entropy costs, and investigating game-theoretic formulations of adversarial active estimation and obfuscation using the smoother entropy, including inverse problems such as inverse filtering (cf.\ \cite{Lourenco2020,Krishnamurthy2019,Mattila2020}). \bibliographystyle{IEEEtran}
\section{Introduction} The dominant hard scattering process at the LHC is the production of hadronic jets. Their ubiquity in the collider environment allows for high precision studies of jets to be performed by the ATLAS~\cite{atlasjet,atlasjet2} and CMS~\cite{cmsjet,cmsjet2,cmsjet3,cmsjet4} experiments. Jet cross sections are theoretically interesting as they are sensitive to the value of the strong coupling constant~\cite{Giele:1995kb,CDF:2001hn,asjet,atlasrunning}, as well as to the parton distribution functions~\cite{Giele:1994xd} and to new physics beyond the Standard Model~\cite{ATLAS:2012pu,CMS:2013qha}. In order to fully utilise these precisely measured observables, we need a comparably precise understanding of the theoretical prediction for the cross section. The jet cross section can be calculated using a combination of perturbative techniques for the hard scattering subprocesses and non-perturbative parton distribution functions, \begin{eqnarray} {\rm d}\sigma&=&\sum_{a,b}\int\frac{{\rm d}\xi_{1}}{\xi_{1}}\frac{{\rm d}\xi_{2}}{\xi_{2}}~f_{a}(\xi_{1},\mu^2)f_{b}(\xi_{2},\mu^2)~{\rm d} \hat\sigma_{ab}(\alpha_{s}(\mu^2),\mu^2), \end{eqnarray} where the sum runs over parton species in the colliding hadrons. The parton distribution function (PDF), $f_{a}(\xi_{1},\mu^2)\,{\rm d}\xi_{1}$, describes the probability of finding a parton of species $a$ with momentum fraction $\xi_{1}$ in the hadron at the factorization scale $\mu$, which in this paper is set equal to the renormalization scale. The partonic cross section, ${\rm d} \hat\sigma_{ab}$, describes the probability for the initial-state partons to interact and produce a final state, $X$, normalised to the hadron-hadron flux. The partonic cross section is calculable within perturbative QCD and has a series expansion in the strong coupling constant, \begin{eqnarray} {\rm d} \hat\sigma_{ab}&=&{\rm d} \hat\sigma_{ab,LO}+\bigg(\frac{\alpha_{s}(\mu^2)}{2\pi}\bigg){\rm d} \hat\sigma_{ab,NLO}+\bigg(\frac{\alpha_{s}(\mu^2)}{2\pi}\bigg)^2{\rm d} \hat\sigma_{ab,NNLO}+{\cal{O}}(\alpha_{s}(\mu^2)^3), \end{eqnarray} where the series has been truncated at next-to-next-to-leading order (NNLO). For dijet production the leading order cross section carries an overall factor of $\alpha_{s}^2$ such that the NLO and NNLO corrections carry overall factors of $\alpha_{s}^{3}$ and $\alpha_{s}^{4}$ respectively. The most accurate theoretical predictions for dijet observables are currently those calculated at NLO accuracy~\cite{Ellis:1988hv,Ellis:1990ek,eks,jetrad,nlojet1,nlojet2,powheg2j,meks}. Further improvements include the inclusion of LO~\cite{Baur:1989qt} and NLO~\cite{Moretti:2006ea,Dittmaier:2012kx} electroweak corrections and the study of QCD threshold corrections~\cite{Kidonakis:2000gi,Kumar:2013hia}. As the LHC experiments continue to record and analyse jet data, the experimental precision on the single inclusive and exclusive dijet cross sections demands better precision from our theory predictions. This has led to a drive to provide the NNLO corrections to the jet cross section in order to bring theory uncertainties in line with the experimental precision attainable at the LHC. Many techniques have been developed in recent years to calculate NNLO corrections with hadronic initial states. 
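To illustrate how the factorised expression above is assembled in practice, the following Python fragment evaluates the double convolution over momentum fractions with toy ingredients; the PDF shape, the perturbative coefficients and the value of $\alpha_{s}$ are placeholders chosen purely for illustration and bear no relation to a realistic jet calculation.
\begin{verbatim}
import numpy as np

ALPHA_S = 0.118   # placeholder value of the strong coupling at the scale mu

def f_toy(xi):
    """Toy parton distribution; a real calculation would use fitted PDFs."""
    return xi**(-1.5) * (1.0 - xi)**3

def sigma_hat(xi1, xi2):
    """Toy partonic cross section truncated at NNLO:
    LO + (a_s/2pi) NLO + (a_s/2pi)^2 NNLO with placeholder coefficients."""
    lo, nlo, nnlo = 1.0, 10.0, 50.0
    a = ALPHA_S / (2.0 * np.pi)
    return lo + a * nlo + a**2 * nnlo

def dsigma(n=200, xi_min=1e-3):
    """Double integral over the momentum fractions xi1, xi2 (log-spaced grid)."""
    xi = np.geomspace(xi_min, 1.0, n, endpoint=False)
    w = np.gradient(xi)               # crude quadrature weights
    total = 0.0
    for x1, w1 in zip(xi, w):
        for x2, w2 in zip(xi, w):
            total += w1 * w2 / (x1 * x2) * f_toy(x1) * f_toy(x2) * sigma_hat(x1, x2)
    return total

print(dsigma())
\end{verbatim}
In a genuine NNLO computation the coefficients ${\rm d}\hat\sigma_{ab,NLO}$ and ${\rm d}\hat\sigma_{ab,NNLO}$ are themselves differential in the final-state kinematics and are only rendered finite by the subtraction procedure discussed below.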
The antenna subtraction method~\cite{GehrmannDeRidder:2005cm} was developed for $e^{+}e^{-}$ annihilation, where it was successfully applied to the calculation of the three-jet cross section at NNLO~\cite{our3j1,our3j2,ourevent1,ourevent2,ourevent3,weinzierl3j1,weinzierl3j2,weinzierlevent1,weinzierlevent2}. The method has subsequently been generalised to hadronic initial states~\cite{Daleo:2006xa,Daleo:2009yj,Boughezal:2010mc,GehrmannDeRidder:2012ja,Gehrmann:2011wi}, and applied to the leading colour contributions to gluonic dijet production in gluon fusion~\cite{Glover:2010im,GehrmannDeRidder:2011aa,GehrmannDeRidder:2012dg,GehrmannDeRidder:2013mf} and quark-antiquark annihilation~\cite{Currie:2013vh}. Recent years have also seen the development of the sector improved subtraction technique, STRIPPER~\cite{Czakon:2010td}, which has subsequently been applied to several phenomenological studies for top pair production~\cite{Czakon:2011ve,czakontop1,czakontop2,czakontop3,czakontop4} and Higgs plus jet production~\cite{Boughezal:2013uia}. The NNLO corrections for a wide range of processes involving the production of colourless particles are also known, both for single particle production, Higgs~\cite{babishiggs,babishiggs1,babishiggs2,babishiggs3, grazzinihiggs} and Drell-Yan~\cite{babisdy1,babisdy2,kirilldy1,kirilldy2,grazzinidy1,grazzinidy2}, and for di-boson production~\cite{grazziniwh,babisgg,grazzinigg,grazzinizg,deFlorian:2013jea}. The NNLO mass factorised partonic cross section is composed of three contributions: the double real, real-virtual and double virtual corrections, \begin{eqnarray} {\rm d} \hat\sigma_{ab,NNLO}&=&\int_{n+2}{\rm d} \hat\sigma_{ab,NNLO}^{RR}+\int_{n+1}\Big[{\rm d} \hat\sigma_{ab,NNLO}^{RV}+{\rm d} \hat\sigma_{ab,NNLO}^{MF,1}\Big]\nonumber\\ &+&\int_{n}\Big[{\rm d} \hat\sigma_{ab,NNLO}^{VV}+{\rm d} \hat\sigma_{ab,NNLO}^{MF,2}\Big], \end{eqnarray} where each contribution is defined to contain the relevant phase space integration measure and so $\int_{n}$ simply keeps track of the number of final-state particles involved in the phase space integral. It is well known that each of these terms is separately divergent, either containing singularities in regions of single or double unresolved phase space or explicit IR poles in $\epsilon$, yet the sum of all three contributions can be arranged such that all singularities cancel to yield a finite result. In order to perform this reorganisation we construct three subtraction terms such that the partonic cross section can be re-expressed in the form, \begin{eqnarray} {\rm d} \hat\sigma_{ab,NNLO}&=&\int_{n+2}\Big[{\rm d} \hat\sigma_{ab,NNLO}^{RR}-{\rm d} \hat\sigma_{ab,NNLO}^{S}\Big]\nonumber\\ &+&\int_{n+1}\Big[{\rm d} \hat\sigma_{ab,NNLO}^{RV}-{\rm d} \hat\sigma_{ab,NNLO}^{T}\Big]\nonumber\\ &+&\int_{n\phantom{+1}}\Big[{\rm d} \hat\sigma_{ab,NNLO}^{VV}-{\rm d} \hat\sigma_{ab,NNLO}^{U}\Big]. \end{eqnarray} The double real subtraction term is constructed to remove all single and double unresolved divergences and renders the double real channel IR finite. The real-virtual subtraction term is a combination of double real subtraction terms integrated over a single unresolved phase space, mass factorization contributions and new subtraction terms introduced to remove the remaining singularities of the real-virtual contribution. 
The double virtual subtraction term is constructed from mass factorization terms and the remaining subtraction terms from the double real and real-virtual, integrated over the double and single unresolved phase spaces respectively, \begin{eqnarray} {\rm d} \hat\sigma_{ab,NNLO}^{T}&=&{\rm d} \hat\sigma_{ab,NNLO}^{V,S}-{\rm d} \hat\sigma_{ab,NNLO}^{MF,1}-\int_{1}{\rm d} \hat\sigma_{ab,NNLO}^{S},\\ {\rm d} \hat\sigma_{ab,NNLO}^{U}&=&-{\rm d} \hat\sigma_{ab,NNLO}^{MF,2}-\int_{1}{\rm d} \hat\sigma_{ab,NNLO}^{V,S}-\int_{2}{\rm d} \hat\sigma_{ab,NNLO}^{S}. \end{eqnarray} In this paper we are concerned with the NNLO correction to the dijet cross section in the all-gluon approximation. To help organise the calculation it is useful to define the operators ${\cal{LC}}$ and ${\cal{SLC}}$, which project out the leading colour and sub-leading colour corrections such that, \begin{eqnarray} {\rm d} \hat\sigma_{gg,NNLO}&=&{\cal{LC}}\Big({\rm d} \hat\sigma_{gg,NNLO}\Big)+{\cal{SLC}}\Big({\rm d} \hat\sigma_{gg,NNLO}\Big), \end{eqnarray} where ${\cal{LC}}\Big({\rm d} \hat\sigma_{gg,NNLO}\Big)$ was discussed in Refs.~\cite{Glover:2010im,GehrmannDeRidder:2011aa,GehrmannDeRidder:2012dg,GehrmannDeRidder:2013mf} while ${\cal{SLC}}\Big({\rm d} \hat\sigma_{gg,NNLO}\Big)$ constitutes the remaining contribution to the cross section discussed in this paper.\footnote{Note that the definition of the leading colour contribution contains an overall factor of $(N^2 -1)$, as does the subleading colour contribution. The two are separated by a relative factor of $N^2$ and so strictly expanding as a series in $N$ leads to a mixing of the two contributions. In this paper we \emph{define} the ${\cal{LC}}$ and ${\cal{SLC}}$ operators to both contain this overall factor of $(N^{2}-1)$ so as to avoid such mixing of terms.} It can be seen by simple power counting in $N$ that the NNLO mass factorization terms for this process only contribute to the leading colour cross section, i.e., \begin{eqnarray} {\cal{SLC}}\Big({\rm d} \hat\sigma_{gg,NNLO}^{MF,1}\Big)&=&0,\label{eq:slcmf1}\\ {\cal{SLC}}\Big({\rm d} \hat\sigma_{gg,NNLO}^{MF,2}\Big)&=&0\label{eq:slcmf2}. \end{eqnarray} The significance of Eqs.~\eqref{eq:slcmf1} and~\eqref{eq:slcmf2} for this calculation is that there is no mass factorization contribution at sub-leading colour. The sub-leading colour contribution poses an interesting theoretical challenge for the antenna subtraction scheme previously employed to compute the leading colour contribution~\cite{Glover:2010im,GehrmannDeRidder:2011aa,GehrmannDeRidder:2012dg,GehrmannDeRidder:2013mf}. This method is well suited to leading colour calculations and those where the cross section can be written as a sum of colour ordered squared partial amplitudes with simple factorization behaviour in unresolved limits. However, the sub-leading colour contribution is constructed from the incoherent interference of partial amplitudes and it is an interesting question to see whether the method is sufficiently general to systematically remove all of the IR singularities. As we will show, this can be achieved in a straightforward manner without the need to derive new antennae or to perform new analytic integrals. The phenomenology of this process is also interesting as it gives a concrete example of the size of sub-leading colour corrections to the leading colour process at NNLO. 
Na\"ively we expect sub-leading colour contributions to be numerically small because in the all-gluon channel they are suppressed by a factor of $1/N^2$ relative to the leading colour contribution. In addition to this power counting, QCD displays colour coherence and so sub-leading colour contributions can contain incoherent interferences of partial amplitudes. These incoherent interferences will generically contain contributions which are suppressed by quantum mechanical destructive interference effects, and so the sub-leading colour contributions may be suppressed even further than the na\"ive $1/N^2$ counting suggests. These heuristic arguments are appealing, but it is also desirable to make firm quantitative statements about the relevance of sub-leading colour contributions. In this paper we do so by explicitly calculating the sub-leading colour contribution to dijet production at NNLO in the all-gluon approximation and comparing it with the leading colour contribution. The paper is organised in the following way. In Section 2, we define the notation used throughout the paper and also introduce the notions of colour space that help to organise the sub-leading colour contributions. In Sections 3, 4 and 5 we systematically step through the double real, real-virtual and double virtual contributions, first defining the relevant matrix element and then deriving the appropriate subtraction terms. We show that the antenna subtraction technique requires no significant alterations or new ingredients in order to deal with the incoherent interferences of partial amplitudes. In particular, in Section 3 we show that the single and double unresolved limits of the double real matrix element at sub-leading colour can be fully described using just three-parton tree-level antennae, without the need for four-parton antenna functions. In Section 4, we give a more compact form for the real-virtual matrix element than that present in the literature~\cite{Bern:1993mq}. As in the double unresolved case, we show that the single unresolved limits of the real-virtual matrix element do not require the one-loop three-parton antenna and can be described with only tree-level three-parton antennae to remove all explicit and implicit singularities. In Section 5, we derive the double virtual subtraction term by integrating the remaining double real and real-virtual subtraction terms, and show that it analytically cancels the explicit poles in the formula for the two-loop matrix elements~\cite{Glover:2001af,Glover:2001rd}. We have implemented these terms into a parton-level event generator, which can compute the all-gluon contribution to any infrared-safe observable related to dijet final states at hadron colliders. Section 6 is devoted to a first numerical study of the size of the full colour NNLO cross section for some experimentally relevant observables: the single jet inclusive distribution for a range of rapidity intervals and the dijet invariant mass distribution. Finally, our findings are briefly summarised in Section 7.
\section{Notation and colour space}
Throughout this paper, complex amplitudes are denoted by calligraphic letters, whereas real squared amplitudes, summed over helicities, are denoted by Roman letters. Generic amplitudes, independent of the scattering process, are written using the letter ${\cal M}$, whereas for the specific process of gluon scattering we use the letter ${\cal A}$.
Amplitudes and squared amplitudes containing colour information are written in boldface whereas colour stripped amplitudes are not. Thus, the full $n$-point $\ell$-loop amplitude is denoted by $\bs{\cal M}_{n}^{\ell}$, whereas the same quantity for gluon scattering is denoted by $\bs{\cal A}_{n}^{\ell}$. The corresponding colour stripped partial amplitudes and their squares are denoted by ${\cal M}_{n}^{\ell}$, ${\cal{A}}_{n}^{\ell}$ and $M_{n}^{\ell}$, $A_{n}^{\ell}$, respectively. The squared full amplitudes, containing all colour information, for generic and gluonic scattering processes are denoted by $\bs{M}_{n}^{\ell}$ and $\bs{A}_{n}^{\ell}$.
Specific combinations of integrated antennae and mass factorisation kernels can be used to express the explicit IR poles of one- and two-loop contributions to the cross section. This approach is of particular use in the antenna subtraction method, where writing the poles of the virtual and double virtual cross sections in terms of integrated dipoles allows the pole cancellation to be carried out in a transparent fashion. The poles of the integrated dipoles correspond to those of the one- and two-loop insertion operators, and so they can be dressed with colour charge operators and inserted into the matrix element sandwiches to obtain the pole structure of the cross section by working in colour space.
For $n$-parton scattering, the amplitudes carry colour indices $\{c\}=\{c_{1},\cdots,c_{n}\}$ where $c_{i}=1,\cdots,N^2-1$ for gluons and $c_{i}=1,\cdots,N$ for quarks and antiquarks. A set of basis vectors for the colour space can be constructed, $\{|\bs{c}\rangle\}=\{|c_{1}\cdots c_{n}\rangle\}$, such that the projection of an arbitrary vector onto these basis vectors defines a scalar in colour space. In this space we define a vector which represents a scattering process, such that its projection onto the colour basis vectors produces the coloured scattering amplitude,
\begin{eqnarray}
\bs{{\cal M}}_{n}^{\{c\}}(\{p\})&=&\langle \bs{c}|{\cal M}_{n}(\{p\})\rangle.
\end{eqnarray}
The full squared amplitude, summed over colours, is then given by,
\begin{eqnarray}
\bs{M}_{n}(\{p\})&=&\sum_{\{c\}}\langle{\cal M}_{n}(\{p\})|\bs{c}\rangle\langle \bs{c}|{\cal M}_{n}(\{p\})\rangle\nonumber\\
&=&\langle{\cal M}_{n}(\{p\})|{\cal M}_{n}(\{p\})\rangle.
\end{eqnarray}
The emission of a gluon from parton $i$ is associated with the colour charge operator, $\bs{T}_{i}=T_{i}^{c}|c\rangle$, which carries the adjoint colour index of the emitted gluon, $c$, and is a matrix in the colour indices of the emitting parton $i$, i.e.,
\begin{eqnarray}
\langle \bs{a}|T_{i}^{c}|\bs{b}\rangle&=&\delta_{a_{1}b_{1}}\cdots T_{a_{i}b_{i}}^{c}\cdots\delta_{a_{n}b_{n}}.
\end{eqnarray}
The colour charges form an algebra, the elements of which satisfy the following properties,
\begin{eqnarray}
\bs{T}_{i}\cdot\bs{T}_{j}&=&\bs{T}_{j}\cdot\bs{T}_{i},\nonumber\\
\bs{T}_{i}^2&=&C_{i}\bs{1},
\end{eqnarray}
where $\bs{T}_{i}\cdot\bs{T}_{j}=\sum_{c}T_{i}^{c}T_{j}^{c}$ and $\bs{1}$ is the identity matrix in colour space. $C_{i}$ is the Casimir coefficient associated with a parton of type $i$, i.e., for partons in the fundamental representation, $C_{q}=C_{\b{q}}=C_{F}=\frac{N^2-1}{2N}$, while for partons in the adjoint representation, $C_{g}=C_{A}=N$.
The product of two colour charges, $\bs{T}_{i}\cdot\bs{T}_{j}$, is a matrix acting on the colour indices of the partons $i$ and $j$ in the scattering process and so, when sandwiched between two state vectors, produces a scalar in colour space called a colour correlated matrix element,
\begin{eqnarray}
\langle{\cal{M}}_{n}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal{M}}_{n}\rangle&=&T_{a_{i}b_{i}}^{c}T_{a_{j}b_{j}}^{c}\bs{\cal{M}}_{n}^{\dagger,\{a\}}(\{p\})\bs{\cal{M}}_{n}^{\{b\}}(\{p\}).\label{eq:mtijm}
\end{eqnarray}
At NNLO we also encounter the colour correlated double operator insertion sandwich, defined by,
\begin{eqnarray}
\langle{\cal{M}}_{n}|(\bs{T}_{i}\cdot\bs{T}_{j})(\bs{T}_{k}\cdot\bs{T}_{l})|{\cal{M}}_{n}\rangle&=&T_{a_{i}b_{i}}^{c_1}T_{a_{j}b_{j}}^{c_1}T_{a_{k}b_{k}}^{c_2}T_{a_{l}b_{l}}^{c_2}\bs{\cal{M}}_{n}^{\dagger,\{a\}}(\{p\})\bs{\cal{M}}_{n}^{\{b\}}(\{p\}).\label{eq:mtijtklm}
\end{eqnarray}
To write down the pole structure of the one- and two-loop cross sections encountered in this paper we must evaluate the following colour charge sandwiches,
\begin{eqnarray}
&&\langle{\cal{A}}_{n}^{0}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal{A}}_{n}^{0}\rangle,\ n=4,5,\label{eq:titj}\\
&&\langle{\cal{A}}_{4}^{0}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal{A}}_{4}^{1}\rangle,\label{eq:titjm1}\\
&&\langle{\cal{A}}_{4}^{0}|(\bs{T}_{i}\cdot\bs{T}_{j})(\bs{T}_{k}\cdot\bs{T}_{l})|{\cal{A}}_{4}^{0}\rangle.\label{eq:titjtktl}
\end{eqnarray}
For gluons the explicit form of the colour charge operators is given by,
\begin{eqnarray}
T_{ab}^{c}&=&if_{acb}.\label{eq:tfabc}
\end{eqnarray}
We choose to write the amplitudes in a colour ordered basis in terms of colour ordered partial amplitudes. In such a basis, the tree amplitudes take the form,
\begin{eqnarray}
\bs{\cal{A}}_{n}^{0,\{a\}}(\{p\})&=&\sum_{\sigma\in S_{n}/Z_{n}}{\rm{Tr}}(a_{\sigma(1)},\cdots,a_{\sigma(n)})\ {\cal{A}}_{n}^{0}(\sigma(1),\cdots,\sigma(n)),\label{eq:treegluon}
\end{eqnarray}
where the symmetry group $S_{n}/Z_{n}$ contains all non-cyclic permutations of $n$ elements and the arguments of the colour stripped partial amplitudes represent external momenta. Each $a_{i}$ in the trace of Eq.~\eqref{eq:treegluon} represents a generator of the SU($N$) algebra in the fundamental representation carrying the adjoint colour index $a_{i}$ associated with gluon $i$. The four-gluon one-loop amplitude, in a colour ordered basis, is given by~\cite{Bern:1990ux},
\begin{eqnarray}
\hspace{-0.5cm}\bs{\cal{A}}_{4}^{1,\{a\}}(\{p\})&=&\sum_{\sigma\in S_{4}/Z_4}N\ {\rm{Tr}}(a_{\sigma(1)},a_{\sigma(2)},a_{\sigma(3)},a_{\sigma(4)})\ {\cal{A}}_{4,1}^{1}(\sigma(1),\sigma(2),\sigma(3),\sigma(4))\nonumber\\
&+&\sum_{\rho\in S_{4}/Z_{2}\times Z_{2}}{\rm{Tr}}(a_{\rho(1)}a_{\rho(2)}){\rm{Tr}}(a_{\rho(3)},a_{\rho(4)})\ {\cal{A}}_{4,3}^{1}(\rho(1),\rho(2),\rho(3),\rho(4)),
\end{eqnarray}
where $\sigma$ is the set of orderings inequivalent under cyclic permutations and $\rho$ is the set of orderings inequivalent under cyclic permutations of the two subsets of orderings $\{\rho(1),\rho(2)\}$ and $\{\rho(3),\rho(4)\}$ and the interchange of these subsets. The colour indices of these amplitudes are then contracted with those of the colour charge operators, given in Eq.~\eqref{eq:tfabc}, and conjugate amplitudes to produce the sandwich, as shown in Eqs.~\eqref{eq:mtijm} and~\eqref{eq:mtijtklm}.
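These colour space conventions can be cross-checked numerically. The short script below is an illustrative aside rather than part of the calculation: it constructs the SU(3) structure constants from the Gell-Mann matrices, builds the gluon colour charge of Eq.~\eqref{eq:tfabc}, and verifies that $\bs{T}_{i}^{2}=C_{A}\bs{1}$ in the adjoint representation and $\bs{T}_{i}^{2}=C_{F}\bs{1}$ in the fundamental representation, with the Casimirs quoted above.
\begin{verbatim}
# Illustrative numerical check of the colour algebra (not part of the
# calculation).  The gluon colour charge (T^c)_{ab} = i f_{acb} is built
# from the SU(3) structure constants and squared to recover C_A = N;
# the fundamental generators give C_F = (N^2-1)/(2N).
import numpy as np

N = 3
# Gell-Mann matrices; fundamental generators T_a = lambda_a / 2 with
# normalisation Tr(T_a T_b) = delta_ab / 2.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / np.sqrt(3)
lam[7][2, 2] = -2 / np.sqrt(3)
Tf = lam / 2

# Structure constants from [T_a, T_b] = i f_{abc} T_c,
# i.e. f_abc = -2i Tr([T_a, T_b] T_c); real and totally antisymmetric.
comm = np.einsum('aij,bjk->abik', Tf, Tf) - np.einsum('bij,ajk->abik', Tf, Tf)
f = (-2j * np.einsum('abik,cki->abc', comm, Tf)).real

# Gluon (adjoint) colour charge: Tadj[c, a, b] = i f_{acb}.
Tadj = 1j * np.transpose(f, (1, 0, 2))

CA = sum(Tadj[c] @ Tadj[c] for c in range(8))
CF = sum(Tf[c] @ Tf[c] for c in range(8))
print(np.allclose(CA, N * np.eye(8)))                     # True: C_A = N
print(np.allclose(CF, (N**2 - 1) / (2 * N) * np.eye(3)))  # True: C_F = 4/3
\end{verbatim}
The same explicit representation of the colour charges can be contracted with conjugate and ordinary amplitudes to evaluate sandwiches of the type shown in Eqs.~\eqref{eq:mtijm} and~\eqref{eq:mtijtklm}.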
A result which will prove useful throughout this paper is that the four-parton tree-level single insertion sandwich in Eq.~\eqref{eq:titj} only contributes at leading colour such that, \begin{eqnarray} {\cal{SLC}}\Big(\langle{\cal{A}}_{4}^{0}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal{A}}_{4}^{0}\rangle\Big)&=&0.\label{eq:slczero} \end{eqnarray} Setting $N_F=0$ (according to the all gluon approximation of this paper), the single unresolved integrated dipoles~\cite{Currie:2013vh}, $\bs{J}_{2}^{(1)}$, which dress these colour charges are defined as combinations of integrated antenna functions~\cite{GehrmannDeRidder:2005cm,Daleo:2006xa,Daleo:2009yj,Boughezal:2010mc,GehrmannDeRidder:2012ja,Gehrmann:2011wi} and mass factorisation kernels. The final-final, initial-final and initial-initial gluon-gluon dipoles are given by, \begin{eqnarray} \bs{J}_{2}^{(1)}(1_{g},2_{g})&=&\frac{1}{3}{\cal{F}}_{3}^{0}(s_{{1}{2}}),\label{eq:j21def1}\\ \bs{J}_{2}^{(1)}(\hb{1}_{g},2_{g})&=&\frac{1}{2}{\cal{F}}_{3,g}^{0}(s_{\b{1}{2}})-\frac{1}{2}{\Gamma}_{gg}^{(1)}(x_{1})\delta(1-x_{2}),\label{eq:j21def2}\\ \bs{J}_{2}^{(1)}(\hb{1}_{g},\hb{2}_{g})&=&{\cal{F}}_{3,gg}^{0}(s_{\b{1}\b{2}})-\frac{1}{2}{\Gamma}_{gg;gg}^{(1)}(x_{1},x_{2})\label{eq:j21def3}, \end{eqnarray} where hatted arguments denote initial-state partons and the mass factorization kernels used to define the initial-final and initial-initial dipoles are defined as~\cite{GehrmannDeRidder:2011aa}, \begin{eqnarray} \Gamma_{gg;gg}^{(1)}(x_{1},x_{2})&=&\Gamma_{gg}^{(1)}(x_{1})\delta(1-x_{2})+\Gamma_{gg}^{(1)}(x_{2})\delta(1-x_{1}),\\ \Gamma_{gg}^{(1)}(x_{1})&=&\frac{1}{\epsilon}~p_{gg}^{0}(x_{1}). \end{eqnarray} These integrated dipoles can be stitched together to form an integrated antenna string which contains the poles of an extended string of gluons including, by definition, a correlation between the endpoints of the string due to the cyclical symmetry of the partial amplitudes, \begin{eqnarray} \bs{J}_{n}^{(1)}(1_{g},2_{g},3_{g},\cdots,(n-1)_{g},n_{g})&=&\bs{J}_{2}^{(1)}(1_{g},2_{g})+\bs{J}_{2}^{(1)}(2_{g},3_{g})+\cdots\nonumber\\ &+&\bs{J}_{2}^{(1)}((n-1)_{g},n_{g})+\bs{J}_{2}^{(1)}(n_{g},1_{g}).\label{eq:jsum} \end{eqnarray} The double unresolved integrated dipoles are given by, \begin{eqnarray} \bs{J}_{2}^{(2)}(1_{g},2_{g})&=&\frac{1}{4}{\cal{F}}_{4}^{0}(s_{{1}{2}})+\frac{1}{3}{\cal{F}}_{3}^{1}(s_{{1}{2}})+\frac{1}{3}\frac{b_{0}}{\epsilon}{\cal{F}}_{3}^{0}(s_{{1}{2}})\Big[\biggl(\frac{|s_{{1}{2}}|}{\mu^{2}}\biggr)^{-\epsilon}-1\Big]\nonumber\\ &-&\frac{1}{9}\big[{\cal{F}}_{3}^{0}\otimes{\cal{F}}_{3}^{0}\big](s_{12}),\\ \bs{J}_{2}^{(2)}(\hb{1}_{g},2_{g})&=&\frac{1}{2}{\cal{F}}_{4,g}^{0}(s_{\b{1}{2}})+\frac{1}{2}{\cal{F}}_{3,g}^{1}(s_{\b{1}{2}})+\frac{1}{2}\frac{b_{0}}{\epsilon}{\cal{F}}_{3,g}^{0}(s_{\b{1}{2}})\Big[\biggl(\frac{|s_{\b{1}{2}}|}{\mu^{2}}\biggr)^{-\epsilon}-1\Big]\nonumber\\ &-&\frac{1}{4}\big[{\cal{F}}_{3,g}^{0}\otimes{\cal{F}}_{3,g}^{0}\big](s_{\b{1}2})-\frac{1}{2}\overline{\Gamma}_{gg}^{(2)}(x_{1})\delta(1-x_{2}),\\ \bs{J}_{2}^{(2)}(\hb{1}_{g},\hb{2}_{g})&=&{\cal{F}}_{4,gg}^{0,\rm{adj}}(s_{\b{1}\b{2}})+\frac{1}{2}{\cal{F}}_{4,gg}^{0,\rm{n.adj}}(s_{\b{1}\b{2}})+{\cal{F}}_{3,gg}^{1}(s_{\b{1}\b{2}})+\frac{b_{0}}{\epsilon}{\cal{F}}_{3,gg}^{0}(s_{\b{1}\b{2}})\Big[\biggl(\frac{|s_{\b{1}\b{2}}|}{\mu^{2}}\biggr)^{-\epsilon}-1\Big]\nonumber\\ &-&\big[{\cal{F}}_{3,gg}^{0}\otimes{\cal{F}}_{3,gg}^{0}\big](s_{\b{1}\b{2}})-\frac{1}{2}\overline{\Gamma}_{gg;gg}^{(2)}(x_{1},x_{2}), \end{eqnarray} where the relevant mass factorization kernels are defined 
by~\cite{GehrmannDeRidder:2012dg},
\begin{eqnarray}
\overline{\Gamma}_{gg;gg}^{(2)}(x_{1},x_{2})&=&\overline{\Gamma}_{gg}^{(2)}(x_{1})\delta(1-x_{2})+\overline{\Gamma}_{gg}^{(2)}(x_{2})\delta(1-x_{1}),\\
\overline{\Gamma}_{gg}^{(2)}(x_{1})&=&-\frac{1}{2\epsilon}\Big(p_{gg}^{1}(x_{1})+\frac{\beta_{0}}{\epsilon}p_{gg}^{0}(x_{1})\Big).\label{eq:gamma2b}
\end{eqnarray}
Using the integrated dipoles and evaluating the colour charge sandwiches directly allows the pole structure of one- and two-loop contributions to the cross section to be written in terms of single and double unresolved integrated antenna dipoles. The initial-final and initial-initial dipoles contain mass factorization kernels; however, as stated in Eqs.~\eqref{eq:slcmf1} and~\eqref{eq:slcmf2}, the mass factorization contribution vanishes at sub-leading colour, so all mass factorization kernels in the integrated dipoles ultimately cancel in the full subtraction term. The pole cancellation with the relevant subtraction terms can then be achieved in a clear and simple fashion, as we will show in Secs.~\ref{sec:RV} and~\ref{sec:VV}.
\section{Double-real contribution}
\label{sec:RR}
The double real six gluon tree-level amplitude squared is given by,
\begin{eqnarray}
&&\bs{A}_{6}^{0}(\{p\})=g^{8}N^{4}(N^2-1)\bigg\{ \sum_{\sigma\in S_{6}/Z_{6}}A_{6}^{0}(1,\sigma(2),\sigma(3),\sigma(4),\sigma(5),\sigma(6))\nonumber\\
&&+\frac{2}{N^2}\ {\cal A}_{6}^{0\dagger}(1,\sigma(2),\sigma(3),\sigma(4),\sigma(5),\sigma(6)) \Big[{\cal A}_{6}^{0}(1,\sigma(3),\sigma(5),\sigma(2),\sigma(6),\sigma(4))\nonumber\\
&&+{\cal A}_{6}^{0}(1,\sigma(3),\sigma(6),\sigma(4),\sigma(2),\sigma(5)) +{\cal A}_{6}^{0}(1,\sigma(4),\sigma(2),\sigma(6),\sigma(3),\sigma(5))\Big] \bigg\},
\end{eqnarray}
where the sum runs over the group $S_{6}/Z_{6}$ of $5!$ non-cyclic permutations of the six gluons. The leading colour contribution and the double real subtraction term have been discussed in Ref.~\cite{Glover:2010im}. At sub-leading colour, the double real radiation contribution is given by,
\begin{eqnarray}
\lefteqn{{\rm d}\hat\sigma_{NNLO}^{RR}= {\cal N}_{LO} \left(\frac{\alpha_s}{2\pi}\right)^2\frac{\b{C}(\epsilon)^2}{C(\epsilon)^2} {\rm d}\Phi_{4}(p_{3},\ldots,p_{6};p_1;p_2)\, J_{2}^{(4)}(p_{3},\ldots,p_{6}) \, \frac{2}{4!} \sum_{\sigma\in S_{6}/Z_{6}}}\nonumber\\
&\times &{\cal A}_{6}^{0\dagger}(1,\sigma(2),\sigma(3),\sigma(4),\sigma(5),\sigma(6)) \Big[{\cal A}_{6}^{0}(1,\sigma(3),\sigma(5),\sigma(2),\sigma(6),\sigma(4))\nonumber\\
&&+\ {\cal A}_{6}^{0}(1,\sigma(3),\sigma(6),\sigma(4),\sigma(2),\sigma(5)) +{\cal A}_{6}^{0}(1,\sigma(4),\sigma(2),\sigma(6),\sigma(3),\sigma(5))\Big], \label{eq:RR}
\end{eqnarray}
where $\b{C}(\epsilon)=8\pi^2 C(\epsilon)=(4\pi)^\epsilon e^{-\epsilon\gamma}$ and the overall factor is given by,
\begin{eqnarray}
{\cal{N}}_{LO}&=&\frac{1}{2s}\frac{1}{4(N^2-1)^2}(g^2 N)^2(N^2-1).
\end{eqnarray}
In Eq.~\eqref{eq:RR} we can see that the tree-level six gluon squared matrix element at sub-leading colour can be written as three incoherent interferences, summed over permutations.
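The counting of independent interferences can be verified combinatorially. The short script below is an illustrative aside: it lists the orderings of $n$ gluons, written with gluon $1$ first, whose cyclic neighbouring pairs are disjoint from those of the reference ordering $(1,2,\ldots,n)$, identifying orderings related by reversal through the standard reflection property $A_{n}^{0}(1,\ldots,n)=(-1)^{n}A_{n}^{0}(n,\ldots,1)$ of colour ordered amplitudes. For $n=6$ exactly the three orderings quoted below survive, while for $n=5$ a single ordering survives, a fact used for the real-virtual contribution in Section~\ref{sec:RV}.
\begin{verbatim}
# Illustrative counting (not part of the calculation) of orderings that
# share no neighbouring pair of partons with the reference cyclic
# ordering (1, 2, ..., n).  Orderings related by reversal are identified,
# since colour ordered amplitudes satisfy A(1,...,n) = (-1)^n A(n,...,1).
from itertools import permutations

def cyclic_pairs(order):
    """Set of unordered neighbouring pairs of a cyclic ordering."""
    n = len(order)
    return {frozenset((order[i], order[(i + 1) % n])) for i in range(n)}

def independent_orderings(n):
    reference = tuple(range(1, n + 1))
    ref_pairs = cyclic_pairs(reference)
    kept = []
    for perm in permutations(range(2, n + 1)):   # fix gluon 1 in first place
        order = (1,) + perm
        if cyclic_pairs(order) & ref_pairs:
            continue                             # shares a neighbouring pair
        if (1,) + perm[::-1] not in kept:        # drop the reversed partner
            kept.append(order)
    return kept

print(independent_orderings(6))   # the three orderings listed below
print(independent_orderings(5))   # a single ordering (up to reversal)
\end{verbatim}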
The three orderings,
\begin{eqnarray}
&&{\cal{A}}_{6}^{0}(1,\sigma{(3)},\sigma{(5)},\sigma{(2)},\sigma{(6)},\sigma{(4)}),\nonumber\\
&&{\cal{A}}_{6}^{0}(1,\sigma{(3)},\sigma{(6)},\sigma{(4)},\sigma{(2)},\sigma{(5)}),\nonumber\\
&&{\cal{A}}_{6}^{0}(1,\sigma{(4)},\sigma{(2)},\sigma{(6)},\sigma{(3)},\sigma{(5)}),\nonumber
\end{eqnarray}
are the only independent orderings that exist for six gluon scattering which have no common neighbouring pairs of partons with the conjugate amplitude's ordering,
\begin{eqnarray}
&&{\cal{A}}_{6}^{0,\dagger}(1,\sigma{(2)},\sigma{(3)},\sigma{(4)},\sigma{(5)},\sigma{(6)}).\nonumber
\end{eqnarray}
One immediate consequence of this is that the sub-leading colour matrix element does not contain any single, double or triple collinear divergences. With no collinear divergences present in the double real cross section, the only divergences to be removed are those associated with single and double soft gluons. The double real subtraction term can be divided into five distinct contributions,
\begin{eqnarray}
{\rm d} \hat\sigma_{NNLO}^{S}&=&{\rm d} \hat\sigma_{NNLO}^{S,a}+{\rm d} \hat\sigma_{NNLO}^{S,b}+{\rm d} \hat\sigma_{NNLO}^{S,c}+{\rm d} \hat\sigma_{NNLO}^{S,d}+{\rm d} \hat\sigma_{NNLO}^{S,e},
\end{eqnarray}
which will be discussed in the following subsections.
\subsection{Single unresolved subtraction term}
\label{sec:ssRR}
The interferences in Eq.~\eqref{eq:RR} contain no collinear divergences but do contain soft singularities. In the single soft limit the colour ordered partial amplitudes factorize~\cite{Berends:1988zn,Catani:1999ss},
\begin{equation}
{\cal M}_{n+1}^0(\hdots,p_i,p_j,p_k,\hdots)\stackrel{j\to0}{\longrightarrow}{\cal{S}}^{0}(p_i,p_j,p_k){\cal M}_{n}^0(\hdots,p_i,p_k,\hdots),
\end{equation}
where
\begin{eqnarray}
{\cal{S}}^{0}(p_i,p_j,p_k)&=&S_{\mu}^{0}(p_i,p_j,p_k)\epsilon^{\mu}(p_j).
\end{eqnarray}
The tree-level single soft current is given by~\cite{Weinzierl:2003fx},
\begin{equation}
S_{\mu}^{0}(p_i,p_j,p_k)=2\frac{p_{i}^{\rho}F_{\rho\mu\sigma}(p_j)p_{k}^{\sigma}}{s_{ij}s_{jk}},
\end{equation}
and $F_{\rho\mu\sigma}(p)$ is defined by,
\begin{equation}
F_{\rho\mu\sigma}(p)=g_{\rho\mu}p_{\sigma}-p_{\rho}g_{\mu\sigma}\;.
\end{equation}
Summing over polarizations allows the soft limit of the interference to be written in terms of eikonal factors,
\begin{eqnarray}
\lefteqn{{\cal{A}}_{6}^{0,\dagger}(\cdots,a,i,b,\cdots){\cal{A}}_{6}^{0}(\cdots,c,i,d,\cdots)\stackrel{i\to0}{\longrightarrow}}\nonumber\\
&&\frac{1}{2}\Big[S_{aid}+S_{bic}-S_{aic}-S_{bid}\Big]{\cal{A}}_{5}^{0,\dagger}(\cdots,a,{b},\cdots){\cal{A}}_{5}^{0}(\cdots,c,{d},\cdots),\label{eq:fact}
\end{eqnarray}
where the eikonal factor is,
\begin{eqnarray}
S_{ijk}&=&\frac{2s_{ik}}{s_{ij}s_{jk}}.
\end{eqnarray} The eikonal factors have uniquely defined hard radiators and can be immediately promoted to antenna functions with an appropriate momentum mapping~\cite{GehrmannDeRidder:2005cm,Daleo:2006xa} to obtain a candidate subtraction term for the single soft limit of a generic tree-level interference, \begin{eqnarray} \lefteqn{{\cal{A}}_{6}^{0,\dagger}(\cdots,a,i,b,\cdots){\cal{A}}_{6}^{0}(\cdots,c,i,d,\cdots)\stackrel{i\to0}{\approx}}\nonumber\\ \frac{1}{2}&\Big[&X_{3}^{0}(a,i,d)\ {\cal{A}}_{5}^{0,\dagger}(\cdots,\widetilde{(ai)},{b},\cdots){\cal{A}}_{5}^{0}(\cdots,{c},\widetilde{(id)},\cdots)\nonumber\\ &+&X_{3}^{0}(b,i,c)\ {\cal{A}}_{5}^{0,\dagger}(\cdots,{a},\widetilde{(bi)},\cdots){\cal{A}}_{5}^{0}(\cdots,\widetilde{(ic)},{d},\cdots)\nonumber\\ &-&X_{3}^{0}(a,i,c)\ {\cal{A}}_{5}^{0,\dagger}(\cdots,\widetilde{(ai)},{b},\cdots){\cal{A}}_{5}^{0}(\cdots,\widetilde{(ic)},{d},\cdots)\nonumber\\ &-&X_{3}^{0}(b,i,d)\ {\cal{A}}_{5}^{0,\dagger}(\cdots,{a},\widetilde{(bi)},\cdots){\cal{A}}_{5}^{0}(\cdots,{c},\widetilde{(id)},\cdots)\Big].\label{eq:fact2} \end{eqnarray} For convenience the momentum mapping shown in Eq.~\eqref{eq:fact2} is a final-final type but the factorization pattern is true for all mappings. It can be easily seen that the collinear limits of the antennae in this block of terms cancel in Eq.~\eqref{eq:fact2}. The single unresolved subtraction term is given by, \begin{eqnarray} {\rm d}\hat\sigma_{NNLO}^{S,a}&=&{\cal N}_{LO} \left(\frac{\alpha_s }{2\pi}\right)^2 {\rm d}\Phi_{4}(p_{3},\ldots,p_{6};p_1;p_2)\, \, \frac{12}{4!} \sum_{(i,j,k,l)\in P(3,4,5,6)} 2{\rm{Re}}\Big\{\nonumber\\ &-& F_{3}^{0}(\h{1},l,\h{2}) {\cal{A}}_{5}^{0}(\hb{1},\hb{2},\tilde{i},\tilde{j},\tilde{k}){\cal{A}}_{5}^{0,\dagger}(\hb{1},\tilde{j},\hb{2},\tilde{k},\tilde{i})J_{2}^{(3)}(p_{\tilde{i}},p_{\tilde{j}},p_{\tilde{k}})\nonumber\\ &-& f_{3}^{0}(\h{2},l,i){\cal{A}}_{5}^{0}(\h{1},\hb{2},\widetilde{(li)},j,k){\cal{A}}_{5}^{0,\dagger}(\h{1},j,\hb{2},k,\widetilde{(li)})J_{2}^{(3)}(p_j,p_k,p_{\widetilde{(li)}})\nonumber\\ &-&f_{3}^{0}(i,l,j){\cal{A}}_{5}^{0}(\h{1},\h{2},\widetilde{(il)},\widetilde{(lj)},k){\cal{A}}_{5}^{0,\dagger}(\h{1},\widetilde{(lj)},\h{2},k,\widetilde{(il)})J_{2}^{(3)}(p_k,p_{\widetilde{(il)}},p_{\widetilde{(lj)}})\nonumber\\ &-&f_{3}^{0}(j,l,k){\cal{A}}_{5}^{0}(\h{1},\h{2},i,\widetilde{(jl)},\widetilde{(lk)}){\cal{A}}_{5}^{0,\dagger}(\h{1},\widetilde{(jl)},\h{2},\widetilde{(lk)},i)J_{2}^{(3)}(p_i,p_{\widetilde{(jl)}},p_{\widetilde{(lk)}})\nonumber\\ &-& f_{3}^{0}(\h{1},l,k){\cal{A}}_{5}^{0}(\hb{1},\h{2},i,j,\widetilde{(lk)}){\cal{A}}_{5}^{0,\dagger}(\hb{1},j,\h{2},\widetilde{(lk)},i)J_{2}^{(3)}(p_i,p_j,p_{\widetilde{(lk)}})\nonumber\\ &+& f_{3}^{0}(\h{1},l,j){\cal{A}}_{5}^{0}(\hb{1},\h{2},i,\widetilde{(lj)},k){\cal{A}}_{5}^{0,\dagger}(\hb{1},\widetilde{(lj)},\h{2},k,i)J_{2}^{(3)}(p_i,p_k,p_{\widetilde{(lj)}})\nonumber\\ &+& f_{3}^{0}(\h{2},l,j){\cal{A}}_{5}^{0}(\h{1},\hb{2},i,\widetilde{(lj)},k){\cal{A}}_{5}^{0,\dagger}(\h{1},\widetilde{(lj)},\hb{2},k,i)J_{2}^{(3)}(p_i,p_k,p_{\widetilde{(lj)}})\nonumber\\ &+& f_{3}^{0}(\h{2},l,k){\cal{A}}_{5}^{0}(\h{1},\hb{2},i,j,\widetilde{(lk)}){\cal{A}}_{5}^{0,\dagger}(\h{1},j,\hb{2},\widetilde{(lk)},i)J_{2}^{(3)}(p_i,p_j,p_{\widetilde{(lk)}})\nonumber\\ &+&f_{3}^{0}(i,l,k){\cal{A}}_{5}^{0}(\h{1},\h{2},\widetilde{(il)},j,\widetilde{(lk)}){\cal{A}}_{5}^{0,\dagger}(\h{1},j,\h{2},\widetilde{(lk)},\widetilde{(il)})J_{2}^{(3)}(p_j,p_{\widetilde{(il)}},p_{\widetilde{(lk)}})\nonumber\\ &+& 
f_{3}^{0}(\h{1},l,i){\cal{A}}_{5}^{0}(\hb{1},\h{2},\widetilde{(li)},j,k){\cal{A}}_{5}^{0,\dagger}(\hb{1},j,\h{2},k,\widetilde{(li)})J_{2}^{(3)}(p_j,p_k,p_{\widetilde{(li)}})\Big\}. \label{eq:siga2}
\end{eqnarray}
Once analytically integrated, Eq.~\eqref{eq:siga2} is added back as part of the real-virtual subtraction term where it cancels the explicit IR poles of the real-virtual matrix element.
\subsection{Double unresolved subtraction term}
The only double unresolved divergences present are those associated with two simultaneously soft gluons. In the double soft gluon limit, the full squared gluonic matrix element factorizes in the following way~\cite{Catani:1999ss},
\begin{eqnarray}
\bs{A}_{6}^{0}(\{p\})&\stackrel{i,j\to0}{\propto}&\sum_{(a,b)}\sum_{(c,d)}S_{aib}S_{cjd}~\langle{\cal{A}}_{4}^{0}|(\bs{T}_{a}\cdot\bs{T}_{b})(\bs{T}_{c}\cdot\bs{T}_{d})|{\cal{A}}_{4}^{0}\rangle\nonumber\\
&-&N~\sum_{(a,b)}S_{ab}(i,j)~\langle{\cal{A}}_{4}^{0}|\bs{T}_{a}\cdot\bs{T}_{b}|{\cal{A}}_{4}^{0}\rangle,\label{eq:2gsoftfac}
\end{eqnarray}
where the four-parton double soft function $S_{ab}(i,j)$~\cite{Catani:1999ss} is related to the double soft function $S_{aijb}$ derived in Ref.~\cite{Campbell:1997hg}. The last term in Eq.~\eqref{eq:2gsoftfac} is proportional to the sandwich $\langle{\cal{A}}_{4}^{0}|\bs{T}_{a}\cdot\bs{T}_{b}|{\cal{A}}_{4}^{0}\rangle$, which, as stated in Eq.~\eqref{eq:slczero}, does not contribute to the sub-leading colour contribution. Accordingly, we find that the double soft factorization pattern involves only eikonal factors, e.g. in the limit where gluons five and six go simultaneously soft,
\begin{eqnarray}
\lefteqn{{\cal{SLC}}\Big(\bs{A}_{6}^{0}(\{p\})\Big)\stackrel{5,6\to0}{\longrightarrow}}\nonumber\\
\frac{1}{4}&\Big[&\Big(S_{153}+S_{254}-S_{154}-S_{253}\Big)\Big(S_{163}+S_{264}-S_{162}-S_{364}\Big) A_{4}^{0}(\hat{1},\hat{2},3,4)\nonumber\\
&+&\Big(S_{153}+S_{254}-S_{154}-S_{253}\Big)\Big(S_{162}+S_{364}-S_{164}-S_{263}\Big) A_{4}^{0}(\hat{1},\hat{2},4,3)\nonumber\\
&+&\Big(S_{152}+S_{354}-S_{154}-S_{253}\Big)\Big(S_{162}+S_{364}-S_{163}-S_{264}\Big)A_{4}^{0}(\hat{1},3,\hat{2},4)\Big].\label{eq:2gsoftfac2}
\end{eqnarray}
We can also study the double soft limit of the partial amplitudes directly~\cite{Berends:1988zn,Catani:1999ss},
\begin{equation}
{\cal M}_{n+2}^0(\hdots,p_i,p_j,p_k,p_l,\hdots)\stackrel{j,k\to0}{\longrightarrow}{\cal{S}}^{0}(p_i,p_j,p_k,p_l){\cal M}_{n}^0(\hdots,p_i,p_l,\hdots),
\end{equation}
where
\begin{eqnarray}
{\cal{S}}^{0}(p_i,p_j,p_k,p_l)&=&S_{\mu\nu}(p_i,p_j,p_k,p_l)\epsilon^{\mu}(p_j)\epsilon^{\nu}(p_k),
\end{eqnarray}
and the double soft current can be written as~\cite{Weinzierl:2003fx},
\begin{eqnarray}
\lefteqn{S_{\mu\nu}(p_i,p_j,p_k,p_l)=}\nonumber\\
&&4\Big[\frac{p_{i}^{\rho}F_{\rho\mu}^{\sigma}(p_j)F_{\sigma\nu\tau}(p_k)p_{l}^{\tau}}{s_{ij}s_{jk}s_{kl}} -\frac{p_{i}^{\rho}F_{\rho\mu}^{\sigma}(p_j)F_{\sigma\nu\tau}(p_k)p_{i}^{\tau}}{s_{ij}s_{jk}(s_{ij}+s_{ik})} -\frac{p_{l}^{\rho}F_{\rho\mu}^{\sigma}(p_j)F_{\sigma\nu\tau}(p_k)p_{l}^{\tau}}{s_{jk}s_{kl}(s_{jl}+s_{kl})}\Big]\;. \label{eq:dsoftcurr}
\end{eqnarray}
When taking the double soft limit of Eq.~\eqref{eq:RR}, we encounter contractions between single and double soft currents. Summing over all colour orderings, we obtain symmetric sums of double soft currents which can be rewritten using the identity~\cite{Berends:1988zn},
\begin{equation}
S_{\mu\nu}(p_i,p_j,p_k,p_l)+S_{\nu\mu}(p_i,p_k,p_j,p_l)=S_{\mu}(p_i,p_j,p_l)S_{\nu}(p_i,p_k,p_l)\;.
\label{eq:softcur1}
\end{equation}
Any remaining terms involving the double soft current can be eliminated using,
\begin{eqnarray}
-S_{\mu\nu}(p_i,p_j,p_k,p_l)&+&S_{\mu\nu}(p_i,p_j,p_k,p_a)+S_{\nu\mu}(p_l,p_k,p_j,p_c)-S_{\nu\mu}(p_a,p_k,p_j,p_c)\nonumber\\
&=&-S_{\mu}(p_i,p_j,p_k)S_{\nu}(p_i,p_k,p_l)+S_{\mu}(p_i,p_j,p_k)S_{\nu}(p_i,p_k,p_a)\nonumber\\
&&-S_{\mu}(p_k,p_j,p_c)S_{\nu}(p_c,p_k,p_l)+S_{\mu}(p_k,p_j,p_c)S_{\nu}(p_c,p_k,p_a),\label{eq:softcur2}
\end{eqnarray}
so that the double soft limit of the sub-leading colour matrix element can be written purely in terms of eikonal factors, as described in Eq.~\eqref{eq:2gsoftfac2}. To the best of our knowledge, the relation in Eq.~\eqref{eq:softcur2} has not previously appeared in the literature; it can be confirmed analytically using Eq.~\eqref{eq:dsoftcurr}. The resulting subtraction terms can be obtained by promoting each eikonal factor to a three-parton tree-level antenna, as outlined in Sec.~\ref{sec:ssRR}.
At leading colour, the double unresolved subtraction term is partitioned into three terms depending on the \emph{colour connection} of the unresolved partons:
\begin{itemize}
\item Colour connected, where the unresolved partons, $i,j$, are colour connected to each other and a single pair of neighbouring hard radiators, $a,b$, corresponding to the colour ordering $(\cdots,a,i,j,b,\cdots)$.
\item Almost colour connected, where the unresolved partons, $i,j$, are not colour connected but are each colour connected to a neighbouring pair of hard radiators, $a,c$ and $c,b$, with one hard radiator in common, corresponding to the colour ordering $(\cdots,a,i,c,j,b,\cdots)$.
\item Colour disconnected, where the unresolved partons, $i,j$, are not colour connected and have no hard radiating neighbours in common, corresponding to the colour ordering $(\cdots,a,i,b,\cdots,c,j,d,\cdots)$.
\end{itemize}
This classification is particularly useful for leading colour calculations because, due to colour coherence, the leading colour cross section is formed from squared partial amplitudes, each with a definite colour ordering. For sub-leading colour calculations this distinction is less clear-cut. An incoherent interference generally produces a complicated factorization pattern, particularly in the soft limits, and does not have a simple connection with the colour ordering inherent to squared partial amplitudes. For such interferences, what is colour connected, almost colour connected or colour disconnected is not immediately obvious. We have already established that the double unresolved subtraction term consists of iterated three-parton antennae. For the remainder of this paper we therefore use the slightly looser definitions:
\begin{itemize}
\item ${\rm d} \hat\sigma_{NNLO}^{S,b}$, the antennae have repeated hard radiators, $\sim X_{3}^{0}(a,i,b)X_{3}^{0}(A,j,B)$,
\item ${\rm d} \hat\sigma_{NNLO}^{S,c}$, the antennae have one repeated radiator, $\sim X_{3}^{0}(a,i,c)X_{3}^{0}(C,j,b)$,
\item ${\rm d} \hat\sigma_{NNLO}^{S,d}$, the antennae have no repeated radiators, $\sim X_{3}^{0}(a,i,b)X_{3}^{0}(c,j,d)$,
\end{itemize}
where $A, B, C$ denote the repeated hard radiators which have composite momenta resulting from the appropriate momentum mapping fixed by the primary antenna, e.g. $(a,i,b)\to(A,B)$.
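The fact that only soft-type antennae are required can also be illustrated numerically. The sketch below is an illustrative aside (the light-like test momenta are chosen by hand and need not satisfy momentum conservation): as gluon $i$ becomes collinear to the hard momentum $p_{a}$, the individual eikonal factors $S_{aid}$ and $S_{aic}$ develop the same $1/s_{ai}$ pole, which therefore cancels in the combination $S_{aid}+S_{bic}-S_{aic}-S_{bid}$ of Eq.~\eqref{eq:fact}, in line with the absence of collinear divergences in the sub-leading colour interferences.
\begin{verbatim}
# Illustrative check (not part of the calculation) that the eikonal
# combination of Eq. (fact) contains no collinear pole.  The momenta are
# arbitrary light-like test vectors; momentum conservation is not needed
# just to study the behaviour of the eikonal factors.
import math

def dot(p, q):
    """Minkowski product with metric (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def momentum(E, theta, phi):
    """Massless momentum of energy E along the direction (theta, phi)."""
    return (E, E*math.sin(theta)*math.cos(phi),
               E*math.sin(theta)*math.sin(phi), E*math.cos(theta))

def S(pi, pj, pk):
    """Eikonal factor S_ijk = 2 s_ik / (s_ij s_jk), s_xy = 2 p_x.p_y."""
    return 2.0*(2.0*dot(pi, pk)) / ((2.0*dot(pi, pj))*(2.0*dot(pj, pk)))

pa = momentum(50.0, 0.0, 0.0)            # hard momentum along the z axis
pb = momentum(40.0, 2.2, 1.0)
pc = momentum(30.0, 1.3, 2.5)
pd = momentum(60.0, 2.8, 4.0)

for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    pi = momentum(20.0, delta, 0.3)      # gluon i at angle delta to p_a
    s_ai = 2.0*dot(pa, pi)
    combo = S(pa, pi, pd) + S(pb, pi, pc) - S(pa, pi, pc) - S(pb, pi, pd)
    # s_ai*S_aid and s_ai*S_aic approach the same constant, so the
    # residue of the 1/s_ai pole cancels: s_ai*combo tends to zero.
    print(delta, s_ai*S(pa, pi, pd), s_ai*S(pa, pi, pc), s_ai*combo)
\end{verbatim}
The soft $1/E_{i}^{2}$ behaviour, by contrast, survives in the combination and is precisely what the promoted antennae of Eq.~\eqref{eq:fact2} subtract.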
The subtraction terms, ${\rm d} \hat\sigma_{NNLO}^{S,b}$ and ${\rm d} \hat\sigma_{NNLO}^{S,c}$ are integrated and added back to the real-virtual cross section while ${\rm d} \hat\sigma_{NNLO}^{S,d}$ can be simultaneously integrated over both unresolved partons and so is added back in the double virtual subtraction term. The sum of the three double unresolved subtraction terms is given by, \begin{eqnarray} \lefteqn{{\rm d} \hat\sigma_{NNLO}^{S,b}+{\rm d} \hat\sigma_{NNLO}^{S,c}+{\rm d} \hat\sigma_{NNLO}^{S,d}={\cal N}_{LO} \left(\frac{\alpha_s}{2\pi}\right)^2\frac{\b{C}(\epsilon)^2}{C(\epsilon)^2} {\rm d}\Phi_{4}(p_{3},\ldots,p_{6};p_1;p_2)\frac{12}{4!}\sum_{(i,j,k,l)\in P(3,4,5,6)}}\nonumber\\ \Big\{&-&\frac{1}{2}F_3^0(\h{1},l,\h{2})F_3^0(\h{\b{1}},\tilde{k},\h{\b{2}}) A_4^0(\h{\bb{1}},\tilde{\tilde{j}},\h{\bb{2}},\tilde{\tilde{i}})J_{2}^{(2)}(p_{\tilde{\tilde{j}}},p_{\tilde{\tilde{i}}})\nonumber\\ &-&f_3^0(\h{1},l,i)f_3^0(\h{\b{1}},k,\widetilde{(li)}) A_4^0(\h{\bb{1}},\h{2},\widetilde{(k\widetilde{(li)})},j)J_{2}^{(2)}(p_{\widetilde{(k\widetilde{(li)})}},p_{j})\nonumber\\ &-&f_3^0(2,l,i)f_3^0(\h{\b{2}},k,\widetilde{(li)}) A_4^0(\h{1},\h{\bb{2}},j,\widetilde{(k\widetilde{(li)})})J_{2}^{(2)}(p_{j},p_{\widetilde{(k\widetilde{(li)})}})\nonumber\\ &-&\frac{1}{2}f_3^0(i,l,j)f_3^0(\widetilde{(il)},k,\widetilde{(lj)})A_4^0(\h{1},\widetilde{(\widetilde{(il)}k)},\h{2},\widetilde{(k\widetilde{(lj)})}) J_{2}^{(2)}(p_{\widetilde{(\widetilde{(il)}k)}},p_{\widetilde{(k\widetilde{(lj)})}})\nonumber\\\nonumber\\ &+&\frac{1}{2}F_3^0(\h{1},l,\h{2})f_3^0(\h{\b{1}},\tilde{k},\tilde{j})\nonumber\\ &&\times\Big[A_4^0(\h{\bb{1}},\h{\b{2}},\widetilde{(\tilde{k}\tilde{j})},\tilde{i})+A_4^0(\h{\bb{1}},\widetilde{(\tilde{k}\tilde{j})},\h{\b{2}},\tilde{i})- A_4^0(\h{\bb{1}},\h{\b{2}},\tilde{i},\widetilde{(\tilde{k}\tilde{j})})\Big]J_{2}^{(2)}(p_{\tilde{i}},p_{\widetilde{(\tilde{k}\tilde{j})}})\nonumber\\ &+&\frac{1}{2}F_3^0(\h{1},l,\h{2})f_3^0(\h{\b{2}},\tilde{k},\tilde{j})\nonumber\\ &&\times\Big[A_4^0(\h{\b{1}},\h{\bb{2}},\tilde{i},\widetilde{(\tilde{k}\tilde{j})})+A_4^0(\h{\b{1}},\widetilde{(\tilde{k}\tilde{j})},\h{\bb{2}},\tilde{i})-A_4^0(\h{\b{1}},\h{\bb{2}},\widetilde{(\tilde{k}\tilde{j})},\tilde{i})\Big]J_{2}^{(2)}(p_{\widetilde{(\tilde{k}\tilde{j})}},p_{\tilde{i}})\nonumber\\ &+&\frac{1}{2}f_3^0(\h{1},l,i)f_3^0(j,k,\widetilde{(li)})\nonumber\\ &&\times\Big[A_4^0(\h{\b{1}},\h{2},\widetilde{(k\widetilde{(li)})},\widetilde{(jk)})+A_4^0(\h{\b{1}},\widetilde{(jk)},\h{2},\widetilde{(k\widetilde{(li)})})-A_4^0(\h{\b{1}},\h{2},\widetilde{(jk)},\widetilde{(k\widetilde{(li)})})\Big]J_{2}^{(2)}(p_{\widetilde{(jk)}},p_{\widetilde{(k\widetilde{(li)})}})\nonumber\\ &+&\frac{1}{2}f_3^0(\h{1},l,i)f_3^0(\h{2},k,\widetilde{(li)})\nonumber\\ &&\times\Big[A_4^0(\h{\b{1}},\h{\b{2}},j,\widetilde{(k\widetilde{(li)})})+A_4^0(\h{\b{1}},\h{\b{2}},\widetilde{(k\widetilde{(li)})},j)-A_4^0(\h{\b{1}},\widetilde{(k\widetilde{(li)})},\h{\b{2}},j)\Big]J_{2}^{(2)}(p_{\widetilde{(k\widetilde{(li)})}},p_{j})\nonumber\\ &+&\frac{1}{2}f_3^0(\h{1},l,i)f_3^0(\h{\b{1}},k,j)\nonumber\\ &&\times\Big[A_4^0(\h{\bb{1}},\h{2},\widetilde{(kj)},\widetilde{(li)})+A_4^0(\h{\bb{1}},\h{2},\widetilde{(li)},\widetilde{(kj)})-A_4^0(\h{\bb{1}},\widetilde{(kj)},\h{2},\widetilde{(li)})\Big]J_{2}^{(2)}(p_{\widetilde{(kj)}},p_{\widetilde{(li)}})\nonumber\\ &+&\frac{1}{2}f_3^0(\h{1},l,i)F_3^0(\h{\b{1}},k,\h{2})\nonumber\\ 
&&\times\Big[A_4^0(\h{\bb{1}},\h{\b{2}},\widetilde{(\widetilde{(li)})},\tilde{j})+A_4^0(\h{\bb{1}},\widetilde{(\widetilde{(li)})},\h{\b{2}},\tilde{j})-A_4^0(\h{\bb{1}},\h{\b{2}},\tilde{j},\widetilde{(\widetilde{(li)})})\Big]J_{2}^{(2)}(p_{\tilde{j}},p_{\widetilde{(\widetilde{(li)})}})\nonumber\\
&+&\frac{1}{2}f_3^0(2,l,i)f_3^0(j,k,\widetilde{(li)})\nonumber\\
&&\times\Big[A_4^0(\h{1},\h{\b{2}},\widetilde{(jk)},\widetilde{(k\widetilde{(li)})})+A_4^0(\h{1},\widetilde{(jk)},\h{\b{2}},\widetilde{(k\widetilde{(li)})})-A_4^0(\h{1},\h{\b{2}},\widetilde{(k\widetilde{(li)})},\widetilde{(jk)})\Big]J_{2}^{(2)}(p_{\widetilde{(k\widetilde{(li)})}},p_{\widetilde{(jk)}})\nonumber\\
&+&\frac{1}{2}f_3^0(2,l,i)f_3^0(\h{1},k,\widetilde{(li)}) \nonumber\\
&&\times\Big[A_4^0(\h{\b{1}},\h{\b{2}},j,\widetilde{(k\widetilde{(li)})})+A_4^0(\h{\b{1}},\h{\b{2}},\widetilde{(k\widetilde{(li)})},j)-A_4^0(\h{\b{1}},\widetilde{(k\widetilde{(li)})},\h{\b{2}},j)\Big]J_{2}^{(2)}(p_{\widetilde{(k\widetilde{(li)})}},p_{j})\nonumber\\
&+&\frac{1}{2}f_3^0(2,l,i)f_3^0(\h{\b{2}},k,j) \nonumber\\
&&\times\Big[A_4^0(\h{1},\h{\bb{2}},\widetilde{(kj)},\widetilde{(li)})+A_4^0(\h{1},\h{\bb{2}},\widetilde{(li)},\widetilde{(kj)})-A_4^0(\h{1},\widetilde{(kj)},\h{\bb{2}},\widetilde{(li)})\Big]J_{2}^{(2)}(p_{\widetilde{(kj)}},p_{\widetilde{(li)}})\nonumber\\
&+&\frac{1}{2}f_3^0(2,l,i)F_3^0(\h{1},k,\h{\b{2}}) \nonumber\\
&&\times\Big[A_4^0(\h{\b{1}},\h{\bb{2}},\tilde{j},\widetilde{(\widetilde{(li)})})+A_4^0(\h{\b{1}},\widetilde{(\widetilde{(li)})},\h{\bb{2}},\tilde{j})-A_4^0(\h{\b{1}},\h{\bb{2}},\widetilde{(\widetilde{(li)})},\tilde{j})\Big]J_{2}^{(2)}(p_{\tilde{j}},p_{\widetilde{(\widetilde{(li)})}})\nonumber\\
&+&\frac{1}{4}f_3^0(i,l,j)f_3^0(\h{1},k,\widetilde{(il)})\nonumber\\
&&\times\Big[A_4^0(\h{\b{1}},\h{2},\widetilde{(k\widetilde{(il)})},\widetilde{(lj)})+A_4^0(\h{\b{1}},\widetilde{(k\widetilde{(il)})},\h{2},\widetilde{(lj)})-A_4^0(\h{\b{1}},\h{2},\widetilde{(lj)},\widetilde{(k\widetilde{(il)})})\Big]J_{2}^{(2)}(p_{\widetilde{(k\widetilde{(il)})}},p_{\widetilde{(lj)}})\nonumber\\
&+&\frac{1}{4}f_3^0(i,l,j)f_3^0(\h{1},k,\widetilde{(lj)})\nonumber\\
&&\times\Big[A_4^0(\h{\b{1}},\h{2},\widetilde{(k\widetilde{(lj)})},\widetilde{(il)})+A_4^0(\h{\b{1}},\widetilde{(il)},\h{2},\widetilde{(k\widetilde{(lj)})})-A_4^0(\h{\b{1}},\h{2},\widetilde{(il)},\widetilde{(k\widetilde{(lj)})})\Big]J_{2}^{(2)}(p_{\widetilde{(il)}},p_{\widetilde{(k\widetilde{(lj)})}})\nonumber\\
&+&\frac{1}{4}f_3^0(i,l,j)f_3^0(\h{2},k,\widetilde{(il)})\nonumber\\
&&\times\Big[A_4^0(\h{1},\h{\b{2}},\widetilde{(lj)},\widetilde{(k\widetilde{(il)})})+A_4^0(\h{1},\widetilde{(k\widetilde{(il)})},\h{\b{2}},\widetilde{(lj)})-A_4^0(\h{1},\h{\b{2}},\widetilde{(k\widetilde{(il)})},\widetilde{(lj)})\Big]J_{2}^{(2)}(p_{\widetilde{(k\widetilde{(il)})}},p_{\widetilde{(lj)}})\nonumber\\
&+&\frac{1}{4}f_3^0(i,l,j)f_3^0(\h{2},k,\widetilde{(lj)})\nonumber\\
&&\times\Big[A_4^0(\h{1},\h{\b{2}},\widetilde{(il)},\widetilde{(k\widetilde{(lj)})})+A_4^0(\h{1},\widetilde{(il)},\h{\b{2}},\widetilde{(k\widetilde{(lj)})})-A_4^0(\h{1},\h{\b{2}},\widetilde{(k\widetilde{(lj)})},\widetilde{(il)})\Big]J_{2}^{(2)}(p_{\widetilde{(k\widetilde{(lj)})}},p_{\widetilde{(il)}})\nonumber\\
\nonumber\\
&-&F_3^0(\h{1},l,\h{2})f_3^0(\tilde{j},\tilde{k},\tilde{i}) A_4^0(\h{\b{1}},\widetilde{(\tilde{j}\tilde{k})},\h{\b{2}},\widetilde{(\tilde{k}\tilde{i})})J_{2}^{(2)}(p_{\widetilde{(\tilde{j}\tilde{k})}},p_{\widetilde{(\tilde{k}\tilde{i})}})\nonumber\\
&-&f_3^0(\h{1},l,i)f_3^0(\h{2},k,j)
A_4^0(\h{\b{1}},\h{\b{2}},\widetilde{(li)},\widetilde{(kj)})J_{2}^{(2)}(p_{\widetilde{(li)}},p_{\widetilde{(kj)}})\Big\}. \end{eqnarray} In single soft limits, the single and double unresolved subtraction terms ${\rm d} \hat\sigma_{NNLO}^{S,a,b,c,d}$ over-subtract the divergences of the matrix element, and so a large angle soft subtraction term is introduced to compensate for this over-subtraction, denoted by ${\rm d} \hat\sigma_{NNLO}^{S,e}$, \begin{eqnarray} &&{\rm d} \hat\sigma_{NNLO}^{S,e}={\cal N}_{LO} \left(\frac{\alpha_s}{2\pi}\right)^2\frac{\b{C}(\epsilon)^2}{C(\epsilon)^2} {\rm d}\Phi_{4}(p_{3},\ldots,p_{6};p_1;p_2)\frac{12}{4!} \sum_{(i,j,k,l)\in P(3,4,5,6)}\Big\{\nonumber\\ &&\frac{1}{4} \Big(-S_{1l\widetilde{(il)}}+S_{1\tilde{l}\widetilde{(\widetilde{(il)})}}-S_{2l\widetilde{(il)}}+S_{2\tilde{l}\widetilde{(\widetilde{(il)})}}-S_{1l\widetilde{(lj)}}+S_{1\tilde{l}\widetilde{(\widetilde{(lj)})}}-S_{2l\widetilde{(lj)}}+S_{2\tilde{l}\widetilde{(\widetilde{(lj)})}}\nonumber\\ &&~-2 S_{1\tilde{l}2}+2 S_{1l2}\Big)~F_3^0(\h{1},k,\h{2}) A_4^0(\h{\b{1}},\widetilde{(\widetilde{(il)})},\h{\b{2}},\widetilde{(\widetilde{(lj)})})J_{2}^{(2)}(p_{\widetilde{(\widetilde{(il)})}},p_{\widetilde{(\widetilde{(lj)})}})\nonumber\\ &-&\frac{1}{2}\Big(S_{1l\widetilde{(il)}}-S_{1\tilde{l}\widetilde{(\widetilde{(il)})}}-S_{2l\widetilde{(il)}}+S_{2\tilde{l}\widetilde{(\widetilde{(il)})}}-S_{1l\widetilde{(lj)}}+S_{1\tilde{l}\widetilde{(\widetilde{(lj)})}}+S_{2l\widetilde{(lj)}}-S_{2\tilde{l}\widetilde{(\widetilde{(lj)})}}\Big)\nonumber\\ &&\times~F_3^0(\h{1},k,\h{2}) A_4^0(\h{\b{1}},\h{\b{2}},\widetilde{(\widetilde{(il)})},\widetilde{(\widetilde{(lj)})})J_{2}^{(2)}(p_{\widetilde{(\widetilde{(il)})}},p_{\widetilde{(\widetilde{(lj)})}})\nonumber\\ &-&\frac{1}{2} \Big(S_{\widetilde{(k\widetilde{(il)})}l\widetilde{(jl)}}-S_{\widetilde{(il)}l\widetilde{(lj)}}-S_{2l\widetilde{(k\widetilde{(il)})}}+S_{2l\widetilde{(il)}}\Big)\nonumber\\ &&\times~f_3^0(\h{1},k,\widetilde{(il)}) A_4^0(\h{\b{1}},2,\widetilde{(lj)},\widetilde{(k\widetilde{(il)})})J_{2}^{(2)}(p_{\widetilde{(lj)}},p_{\widetilde{(k\widetilde{(il)})}})\nonumber\\ &+&\frac{1}{2}\Big(S_{\widetilde{(k\widetilde{(il)})}l\widetilde{(lj)}}-S_{\widetilde{(il)}l\widetilde{(lj)}}-S_{2l\widetilde{(k\widetilde{(il)})}}+S_{2l\widetilde{(il)}}\Big)\nonumber\\ &&\times~f_3^0(\h{1},k,\widetilde{(il)}) A_4^0(\h{\b{1}},\widetilde{(k\widetilde{(il)})},2,\widetilde{(lj)})J_{2}^{(2)}(p_{\widetilde{(lj)}},p_{\widetilde{(k\widetilde{(il)})}})\nonumber\\ &+&\frac{1}{2}\Big(S_{\widetilde{(k\widetilde{(il)})}l\widetilde{(lj)}}-S_{\widetilde{(il)}l\widetilde{(lj)}}-2 S_{1l\widetilde{(k\widetilde{(il)})}}+S_{2l\widetilde{(k\widetilde{(il)})}}+2 S_{1l\widetilde{(il)}}-S_{2l\widetilde{(il)}}\Big)\nonumber\\ &&\times~f_3^0(\h{1},k,\widetilde{(il)}) A_4^0(\h{\b{1}},2,\widetilde{(k\widetilde{(il)})},\widetilde{(lj)})J_{2}^{(2)}(p_{\widetilde{(lj)}},p_{\widetilde{(k\widetilde{(il)})}})\nonumber\\ &+& \frac{1}{2}\Big (S_{\widetilde{(k\widetilde{(jl)})}l\widetilde{(li)}}-S_{\widetilde{(jl)}l\widetilde{(li)}}-S_{1l\widetilde{(k\widetilde{(jl)})}}+S_{1l\widetilde{(jl)}}\Big)\nonumber\\ &&\times~f_3^0(\h{2},k,\widetilde{(jl)}) A_4^0(\h{1},\widetilde{(k\widetilde{(jl)})},\h{\b{2}},\widetilde{(li)})J_{2}^{(2)}(p_{\widetilde{(li)}},p_{\widetilde{(k\widetilde{(jl)})}})\nonumber\\ &-&\frac{1}{2}\Big(-S_{\widetilde{(k\widetilde{(jl)})}l\widetilde{(li)}}+S_{\widetilde{(jl)}l\widetilde{(li)}}-S_{1l\widetilde{(k\widetilde{(jl)})}}+2 S_{2l\widetilde{(k\widetilde{(jl)})}}+S_{1l\widetilde{(jl)}}-2 
S_{2l\widetilde{(jl)}}\Big)\nonumber\\ &&\times~f_3^0(\h{2},k,\widetilde{(jl)}) A_4^0(\h{1},\h{\b{2}},\widetilde{(li)},\widetilde{(k\widetilde{(jl)})}) J_{2}^{(2)}(p_{\widetilde{(li)}},p_{\widetilde{(k\widetilde{(jl)})}})\nonumber\\ &-&\frac{1}{2}\Big(S_{\widetilde{(k\widetilde{(jl)})}l\widetilde{(li)}}-S_{\widetilde{(jl)}l\widetilde{(li)}}-S_{1l\widetilde{(k\widetilde{(jl)})}}+S_{1l\widetilde{(jl)}}\Big)\nonumber\\ &&\times~f_3^0(\h{2},k,\widetilde{(jl)}) A_4^0(\h{1},\h{\b{2}},\widetilde{(k\widetilde{(jl)})},\widetilde{(li)})J_{2}^{(2)}(p_{\widetilde{(li)}},p_{\widetilde{(k\widetilde{(jl)})}})\nonumber\\ &+&\frac{1}{4}\Big(S_{1l\widetilde{(\widetilde{(il)}k)}}-S_{2l\widetilde{(\widetilde{(il)}k)}}-S_{1l\widetilde{(il)}}+S_{2l\widetilde{(il)}}-S_{1l\widetilde{(k\widetilde{(lj)})}}+S_{2l\widetilde{(k\widetilde{(lj)})}}+S_{1l\widetilde{(lj)}}-S_{2l\widetilde{(lj)}}\Big)\nonumber\\ &&\times~f_3^0(\widetilde{(il)},k,\widetilde{(lj)}) A_4^0(\h{1},\h{2},\widetilde{(\widetilde{(il)}k)},\widetilde{(k\widetilde{(lj)})})J_{2}^{(2)}(p_{\widetilde{(\widetilde{(il)}k)}},p_{\widetilde{(k\widetilde{(lj)})}})\nonumber\\ &-&\frac{1}{4} \Big(S_{1l\widetilde{(\widetilde{(il)}k)}}-S_{2l\widetilde{(\widetilde{(il)}k)}}-S_{1l\widetilde{(il)}}+S_{2l\widetilde{(il)}}-S_{1l\widetilde{(k\widetilde{(lj)})}}+S_{2l\widetilde{(k\widetilde{(lj)})}}+S_{1l\widetilde{(lj)}}-S_{2l\widetilde{(lj)}}\Big)\nonumber\\ &&\times~f_3^0(\widetilde{(il)},k,\widetilde{(lj)}) A_4^0(\h{1},\h{2},\widetilde{(k\widetilde{(lj)})},\widetilde{(\widetilde{(il)}k)})J_{2}^{(2)}(p_{\widetilde{(\widetilde{(il)}k)}},p_{\widetilde{(k\widetilde{(lj)})}})\nonumber\\ &+&\frac{1}{4} \Big(-2 S_{\widetilde{(\widetilde{(il)}k)}l\widetilde{(k\widetilde{(lj)})}}+2 S_{\widetilde{(il)}l\widetilde{(lj)}}+S_{1l\widetilde{(\widetilde{(il)}k)}}+S_{2l\widetilde{(\widetilde{(il)}k)}}-S_{1l\widetilde{(il)}}-S_{2l\widetilde{(il)}}+S_{1l\widetilde{(k\widetilde{(lj)})}}\nonumber\\ &&~+S_{2l\widetilde{(k\widetilde{(lj)})}}-S_{1l\widetilde{(lj)}}-S_{2l\widetilde{(lj)}}\Big)f_3^0(\widetilde{(il)},k,\widetilde{(lj)}) A_{4}^{0}(\h{1},\widetilde{(\widetilde{(il)}k)},\h{2},\widetilde{(k\widetilde{(lj)})}) J_{2}^{(2)}(p_{\widetilde{(\widetilde{(il)}k)}},p_{\widetilde{(k\widetilde{(lj)})}})\Big\}.\nonumber\\ \end{eqnarray} The large angle soft subtraction term is integrated analytically and added back to the real-virtual subtraction term. \section{Real-virtual contribution} \label{sec:RV} The real-virtual matrix element is given by the interference of the one-loop amplitude with the tree, \begin{eqnarray} \bs{A}_{5}^{1}(\{p\})&=&\langle{\cal{A}}_{5}^{0}|{\cal{A}}_{5}^{1}\rangle+\langle{\cal{A}}_{5}^{1}|{\cal{A}}_{5}^{0}\rangle. 
\end{eqnarray}
In a particular colour ordered basis, the one-loop amplitude can be decomposed into partial amplitudes~\cite{Bern:1993mq,Bern:1990ux},
\begin{eqnarray}
\lefteqn{\bs{\cal{A}}_{5}^{1,\{a\}}(\{p\})=\langle\bs{a}|{\cal{A}}_{5}^{1}(\{p\})\rangle}\nonumber\\
&=&\sum_{\sigma\in S_{5}/Z_{5}}N\ {\rm{Tr}}(a_{\sigma(1)},a_{\sigma(2)},a_{\sigma(3)},a_{\sigma(4)},a_{\sigma(5)})\ {\cal{A}}_{5,1}^{1}(\sigma(1),\sigma(2),\sigma(3),\sigma(4),\sigma(5))\nonumber\\
&+&\ \ \sum_{\rho\in S_{5}/Z_{4}}\ \ \ \ {\rm{Tr}}(a_{\rho(1)}){\rm{Tr}}(a_{\rho(2)},a_{\rho(3)},a_{\rho(4)},a_{\rho(5)})\ {\cal{A}}_{5,2}^{1}(\rho(1),\rho(2),\rho(3),\rho(4),\rho(5))\nonumber\\
&+&\sum_{\tau\in S_{5}/Z_{2}\times Z_{3}}\ \ {\rm{Tr}}(a_{\tau(1)}a_{\tau(2)}){\rm{Tr}}(a_{\tau(3)},a_{\tau(4)},a_{\tau(5)})\ {\cal{A}}_{5,3}^{1}(\tau(1),\tau(2),\tau(3),\tau(4),\tau(5)),\label{eq:m51decom}
\end{eqnarray}
where $\sigma$ is the set of orderings inequivalent under cyclic permutations. $\rho$ is the set of orderings inequivalent under cyclic permutations of the subset of four elements, $\{\rho(2),\rho(3),\rho(4),\rho(5)\}$. $\tau$ is the set of orderings inequivalent under cyclic permutations of the two subsets of orderings $\{\tau(1),\tau(2)\}$ and $\{\tau(3),\tau(4),\tau(5)\}$. The colour factors of the terms proportional to ${\cal{A}}_{5,2}^{1}$ in Eq.~\eqref{eq:m51decom} are identically zero. The sub-leading colour partial amplitude, ${\cal{A}}_{5,3}^{1}$, can be written in terms of the leading colour partial amplitudes using the decoupling identities~\cite{Bern:1990ux},
\begin{eqnarray}
{\cal{A}}_{5,3}^{1}(1,2,3,4,5)&=&-{\cal{A}}_{5,2}^{1}(2,1,3,4,5)-{\cal{A}}_{5,2}^{1}(2,1,4,5,3)-{\cal{A}}_{5,2}^{1}(2,1,5,3,4),\label{eq:m53decoup}\\
{\cal{A}}_{5,2}^{1}(1,2,3,4,5)&=&-{\cal{A}}_{5,1}^{1}(1,2,3,4,5)-{\cal{A}}_{5,1}^{1}(1,3,4,5,2)\nonumber\\
&&-{\cal{A}}_{5,1}^{1}(1,4,5,2,3)-{\cal{A}}_{5,1}^{1}(1,5,2,3,4),\label{eq:m52decoup}
\end{eqnarray}
which leads to an expression for the real-virtual cross section in terms of interferences of leading colour one-loop partial amplitudes with tree-level amplitudes. There are many ways to write the one-loop cross section due to the decoupling identities between partial amplitudes. It was shown in Sec.~\ref{sec:RR} that the double real cross section can be written in terms of the three independent interferences with no common neighbouring partons.
In the case of five gluon one-loop scattering there is only one independent ordering containing no common neighbouring partons such that,
\begin{eqnarray}
\lefteqn{{\cal{SLC}}\Big(\bs{A}_{5}^{1}(\{p\})\Big)=12g^8 N^2(N^2-1)}\nonumber\\
&&\sum_{\sigma\in S_{5}/Z_{5}}2{\rm{Re}}\Big({\cal{A}}_{5}^{0,\dagger}(1,\sigma{(2)},\sigma{(3)},\sigma{(4)},\sigma{(5)}){\cal{A}}_{5,1}^{1}(1,\sigma{(4)},\sigma{(2)},\sigma{(5)},\sigma{(3)})\Big),
\end{eqnarray}
and therefore the sub-leading colour one-loop five gluon cross section can be written in the compact form,
\begin{eqnarray}
{\rm d} \hat\sigma_{NNLO}^{RV}&=&{\cal N}_{LO}\left(\frac{\alpha_s}{2\pi}\right)^2\frac{\bar{C}(\epsilon)^2}{C(\epsilon)}\frac{12}{3!}\sum_{\sigma\in S_{5}/Z_{5}} \int \frac{{\rm d}x_1}{x_1}\frac{{\rm d}x_2}{x_2}{\rm d}\Phi_{3}(p_3,\hdots,p_5;\bar{p}_1,\bar{p}_2)\;\nonumber\\
2{\rm{Re}}&\bigg\{&{\cal{A}}_{5}^{0,\dagger}(\h{\b{1}},\sigma(2),\sigma(3),\sigma(4),\sigma(5))\;{\cal{A}}_{5,1}^{1}(\h{\b{1}},\sigma(4),\sigma(2),\sigma(5),\sigma(3))\ \bigg\}\nonumber\\
&&\times J_{2}^{(3)}(p_i,p_j,p_k).\label{eq:rvcompact1}
\end{eqnarray}
This form for the sub-leading colour contribution to the five-gluon one-loop matrix element is equivalent to the expressions found in Eqs.~(9.12) and (9.13) in~\cite{Bern:1990ux} and greatly simplifies the construction of the real-virtual subtraction term. We have cross-checked our numerical implementation of the sub-leading colour matrix element in Eq.~\eqref{eq:rvcompact1} against the numerical package {\tt NJET}~\cite{Badger:2012pg} and we find complete agreement between the two. By fixing the position of the second initial-state parton explicitly, the permutation sum reduces to a sum over final-state partons,
\begin{eqnarray}
{\rm d} \hat\sigma_{NNLO}^{RV}&=&{\cal N}_{LO}\left(\frac{\alpha_s}{2\pi}\right)^2\frac{\bar{C}(\epsilon)^2}{C(\epsilon)}\frac{24}{3!}\sum_{(i,j,k)\in P(3,4,5)} \int \frac{{\rm d}x_1}{x_1}\frac{{\rm d}x_2}{x_2}{\rm d}\Phi_{3}(p_3,\hdots,p_5;\bar{p}_1,\bar{p}_2)\;\nonumber\\
2{\rm{Re}}&\bigg\{&{\cal{A}}_{5}^{0,\dagger}(\hb{1},\hb{2},i,j,k)\;{\cal{A}}_{5,1}^{1}(\h{\b{1}},j,\hb{2},k,i)-{\cal{A}}_{5}^{0,\dagger}(\hb{1},j,\hb{2},k,i)\;{\cal{A}}_{5,1}^{1}(\h{\b{1}},\hb{2},i,j,k)\ \bigg\}\nonumber\\
&&\times J_{2}^{(3)}(p_i,p_j,p_k).\label{eq:rvcompact2}
\end{eqnarray}
It should be noted that Eq.~\eqref{eq:rvcompact2} is simply a rearrangement of the sum in Eq.~\eqref{eq:rvcompact1} and is also free from collinear divergences. The real-virtual subtraction term can be divided into three distinct contributions,
\begin{eqnarray}
{\rm d} \hat\sigma_{NNLO}^{T}&=&{\rm d} \hat\sigma_{NNLO}^{T,a}+{\rm d} \hat\sigma_{NNLO}^{T,b}+{\rm d} \hat\sigma_{NNLO}^{T,c},
\end{eqnarray}
which will be explained in detail in the following subsections.
\subsection{Explicit singularity subtraction}
\label{sec:dsigta}
The poles of a one-loop interference can be written in terms of integrated dipoles~\cite{Currie:2013vh},
\begin{eqnarray}
{\cal P}oles\Big[2{\rm{Re}}\Big({\cal{A}}_{5}^{0,\dagger}(\sigma){\cal{A}}_{5,1}^{1}(\rho)\Big)\Big]&=&\sum_{{\rm{adj.pairs}}(i,j)\in\rho}-\frac{1}{2}\bs{J}_{2}^{(1)}(i,j)\ 2{\rm{Re}}\Big({\cal{A}}_{5}^{0,\dagger}(\sigma){\cal{A}}_{5}^{0}(\rho)\Big),\label{eq:m51poles}
\end{eqnarray}
where the choice of dipole (final-final, initial-final or initial-initial) depends on the kinematics of the radiators in the dipole.
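Written out explicitly using Eq.~\eqref{eq:jsum}, the integrated antenna strings associated with the two orderings appearing in Eq.~\eqref{eq:rvcompact2} are,
\begin{eqnarray}
\bs{J}_{5}^{(1)}(\h{\b{1}},\hb{2},i,j,k)&=&\bs{J}_{2}^{(1)}(\h{\b{1}},\hb{2})+\bs{J}_{2}^{(1)}(\hb{2},i)+\bs{J}_{2}^{(1)}(i,j)+\bs{J}_{2}^{(1)}(j,k)+\bs{J}_{2}^{(1)}(k,\h{\b{1}}),\nonumber\\
\bs{J}_{5}^{(1)}(\h{\b{1}},j,\hb{2},k,i)&=&\bs{J}_{2}^{(1)}(\h{\b{1}},j)+\bs{J}_{2}^{(1)}(j,\hb{2})+\bs{J}_{2}^{(1)}(\hb{2},k)+\bs{J}_{2}^{(1)}(k,i)+\bs{J}_{2}^{(1)}(i,\h{\b{1}}),\nonumber
\end{eqnarray}
so that their difference involves ten distinct integrated dipoles: one initial-initial, six initial-final and three final-final. These are in one-to-one correspondence with the ten integrated antennae, carrying the coefficients $1$, $\tfrac{1}{2}$ and $\tfrac{1}{3}$ respectively, that appear in Eq.~\eqref{eq:dsigta} below.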
Substituting Eq.~\eqref{eq:m51poles} into Eq.~\eqref{eq:rvcompact2} gives an expression for the poles of the full one-loop interference in terms of integrated dipoles,
\begin{eqnarray}
{\cal P}oles\bigg({\rm d} \hat\sigma_{NNLO}^{RV}\bigg)&=&{\cal N}_{LO}\left(\frac{\alpha_s}{2\pi}\right)^2\frac{\bar{C}(\epsilon)^2}{C(\epsilon)}\frac{12}{3!}\sum_{(i,j,k)\in P(3,4,5)} \int \frac{{\rm d}x_1}{x_1}\frac{{\rm d}x_2}{x_2}{\rm d}\Phi_{3}(p_3,\hdots,p_5;\bar{p}_1,\bar{p}_2)\;\nonumber\\
2{\rm{Re}}\bigg\{&\bigg(&\bs{J}_{5}^{(1)}(\h{\b{1}},\hb{2},i,j,k)-\bs{J}_{5}^{(1)}(\h{\b{1}},j,\hb{2},k,i)\bigg)~{\cal{A}}_{5}^{0,\dagger}(\hb{1},\hb{2},i,j,k)\;{\cal{A}}_{5}^{0}(\h{\b{1}},j,\hb{2},k,i)\bigg\}\nonumber\\
&&\times J_{2}^{(3)}(p_i,p_j,p_k).\label{eq:rvpoles}
\end{eqnarray}
Eq.~\eqref{eq:rvpoles} can be written in terms of ten integrated dipoles using Eq.~\eqref{eq:jsum}. It should be noted that the mass factorization kernels in Eq.~\eqref{eq:rvpoles} cancel and so the poles of the one-loop matrix element are given purely in terms of integrated antennae. These ten dipoles correspond to the ten antennae in the single unresolved subtraction term in Eq.~\eqref{eq:siga2}. Explicitly carrying out the integration of the single unresolved subtraction term, we find that,
\begin{eqnarray}
{\rm d} \hat\sigma_{NNLO}^{T,a}&=&-\int_{1}{\rm d} \hat\sigma_{NNLO}^{S,a},\nonumber\\
&=&{\cal N}_{LO}\left(\frac{\alpha_s}{2\pi}\right)^2\frac{\bar{C}(\epsilon)^2}{C(\epsilon)}\frac{12}{3!}\sum_{(i,j,k)\in P(3,4,5)} \int \frac{{\rm d}x_1}{x_1}\frac{{\rm d}x_2}{x_2}{\rm d}\Phi_{3}(p_3,\hdots,p_5;\bar{p}_1,\bar{p}_2)\;\nonumber\\
\times2{\rm{Re}}&\bigg\{&\Big({\cal F}_{3}^{0}(s_{\b{1}\b{2}})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})+\frac{1}{3}{\cal F}_{3}^{0}(s_{jk})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}k})\nonumber\\
&&-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}k})-\frac{1}{3}{\cal F}_{3}^{0}(s_{ik})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})\Big) \nonumber\\
&\times&~{\cal{A}}_{5}^{0,\dagger}(\hb{1},\hb{2},i,j,k)\;{\cal{A}}_{5}^{0}(\h{\b{1}},j,\hb{2},k,i)\bigg\}J_{2}^{(3)}(p_i,p_j,p_k).\label{eq:dsigta}
\end{eqnarray}
Eq.~\eqref{eq:dsigta} clearly matches the form for the poles of the one-loop matrix element in Eq.~\eqref{eq:rvpoles}, such that,
\begin{eqnarray}
{\cal P}oles\bigg({\rm d} \hat\sigma_{NNLO}^{RV}-{\rm d} \hat\sigma_{NNLO}^{T,a}\bigg)&=&0.
\end{eqnarray}
\subsection{Implicit singularity subtraction}
In single unresolved regions of phase space the jet function allows the real-virtual matrix element to develop implicit divergences. In order to be able to integrate the real-virtual cross section numerically, a single unresolved subtraction term is constructed to remove these implicit singularities. The form of the cross section in Eq.~\eqref{eq:rvcompact1} makes it particularly clear that the total cross section contains no divergent collinear limits at sub-leading colour; this leaves only soft limits to consider.
In the single soft limit, the one-loop colour ordered amplitudes factorize in the following way, \begin{eqnarray} {\cal{A}}_{5,1}^{1}(\cdots,a,i,b,\cdots)&\stackrel{i\to0}{\longrightarrow}&{\cal{S}}^{0}(p_{a},p_{i},p_{b}){\cal{A}}_{4,1}^{1}(\cdots,a,b,\cdots)\nonumber\\ &+&{\cal{S}}^{1}(p_{a},p_{i},p_{b}){\cal{A}}_{4}^{0}(\cdots,a,b,\cdots),\label{eq:m51softfac} \end{eqnarray} where the colour stripped one-loop soft function\cite{Bern:1998sc,Bern:1999ry,Catani:2000pi} can be written in the form, \begin{eqnarray} {\cal{S}}^{1}(p_{a},p_{i},p_{b})&=&-{\cal{S}}^{0}(p_{a},p_{i},p_{b})\cdot{\cal{S}}^{{\rm{sing}}}(a,i,b)\label{eq:s1}, \end{eqnarray} and the singular function, ${\cal{S}}^{{\rm{sing}}}$, is given by, \begin{eqnarray} {\cal{S}}^{{\rm{sing}}}(a,i,b)&=&\b{C}(\epsilon)\frac{1}{\epsilon^2}\bigg(-\frac{\mu^{2}s_{ab}}{s_{ai}s_{ib}}\bigg)^{\epsilon}. \end{eqnarray} Substituting Eqs.~\eqref{eq:m51softfac} and~\eqref{eq:s1} into the sub-leading colour contribution shown in Eq.~\eqref{eq:rvcompact2}, yields an expression where each term containing a one-loop soft current is purely imaginary and so does not contribute to the matrix element. This shows that the one-loop soft gluon current does not contribute to the single unresolved limit of the sub-leading colour one-loop matrix element and so no one-loop antennae are required in the real-virtual subtraction term. Similarly in the colour space approach, the one-loop soft gluon current is proportional to the sandwich $\langle{\cal{A}}_{4}^{0}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal{A}}_{4}^{0}\rangle$, which vanishes at sub-leading colour, as stated in Eq.~\eqref{eq:slczero}. Promoting the eikonal factors of the soft limit to three-parton antennae leads us to the following subtraction term for a generic one-loop interference, \begin{eqnarray} \lefteqn{{\cal{A}}_{5}^{0,\dagger}(\cdots,a,i,b,\cdots){\cal{A}}_{5}^{1}(\cdots,c,i,d,\cdots)\stackrel{i\to0}{\approx}}\nonumber\\ &+&X_{3}^{0}(a,i,c)\ {\cal{A}}_{4}^{0,\dagger}(\cdots,\widetilde{(ai)},{b},\cdots){\cal{A}}_{4,1}^{1}(\cdots,\widetilde{(ic)},{d},\cdots)\nonumber\\ &+&X_{3}^{0}(b,i,d)\ {\cal{A}}_{4}^{0,\dagger}(\cdots,{a},\widetilde{(bi)},\cdots){\cal{A}}_{4,1}^{1}(\cdots,{c},\widetilde{(id)},\cdots)\nonumber\\ &-&X_{3}^{0}(a,i,d)\ {\cal{A}}_{4}^{0,\dagger}(\cdots,\widetilde{(ai)},{b},\cdots){\cal{A}}_{4,1}^{1}(\cdots,{c},\widetilde{(id)},\cdots)\nonumber\\ &-&X_{3}^{0}(b,i,c)\ {\cal{A}}_{4}^{0,\dagger}(\cdots,{a},\widetilde{(bi)},\cdots){\cal{A}}_{4,1}^{1}(\cdots,\widetilde{(ic)},{d},\cdots),\label{eq:1loopfact} \end{eqnarray} where once again, the explicit form of $X_{3}^{0}$ depends on the kinematics of the hard radiators. 
Applying Eq.~\eqref{eq:1loopfact} to the real-virtual cross section in Eq.~\eqref{eq:rvcompact2} and simplifying the result yields the single unresolved subtraction term, \begin{eqnarray} &&{\rm d} \hat\sigma_{NNLO}^{T,b}={\cal N}_{LO}\left(\frac{\alpha_s}{2\pi}\right)^2\frac{\bar{C}(\epsilon)^2}{C(\epsilon)}\frac{12}{3!}\sum_{(i,j,k)\in P(3,4,5)} \int \frac{{\rm d}x_1}{x_1}\frac{{\rm d}x_2}{x_2}{\rm d}\Phi_{3}(p_3,\hdots,p_5;\bar{p}_1,\bar{p}_2)\;2{\rm{Re}}\Big\{\nonumber\\ &&+2F_3^0(\hb{1},k,\hb{2}) \Big[{\cal{A}}_4^{0\dagger}(\hbb{1},\hbb{2},\tilde{i},\tilde{j})\Big( {\cal{A}}_4^1(\hbb{1},\tilde{i},\hbb{2},\tilde{j})\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hbb{1},\tilde{i},\hbb{2},\tilde{j}){\cal{A}}_{4}^{0}(\hbb{1},\tilde{i},\hbb{2},\tilde{j})\Big)\nonumber\\ &&\hspace{2.0cm}-\, {\cal{A}}_4^{0\dagger}(\hbb{1},\tilde{i},\hbb{2},\tilde{j}) \Big({\cal{A}}_4^1(\hbb{1},\hbb{2},\tilde{i},\tilde{j})\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hbb{1},\hbb{2},\tilde{i},\tilde{j}){\cal{A}}_{4}^{0}(\hbb{1},\hbb{2},\tilde{i},\tilde{j})\Big)\Big] J_{2}^{(2)}(p_{\tilde{i}},p_{\tilde{j}})\nonumber\\ &&+2f_3^0(\hb{1},k,i)\times\nonumber\\ &&\ \ \Big[ {\cal{A}}_4^{0\dagger}(\hbb{1},\hb{2},j,\widetilde{(ki)})\Big( {\cal{A}}_4^1(\hbb{1},\hb{2},\widetilde{(ki)},j)\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hbb{1},\hb{2},\widetilde{(ki)},j){\cal{A}}_{4}^{0}(\hbb{1},\hb{2},\widetilde{(ki)},j)\Big)\nonumber\\ &&\ \ - {\cal{A}}_4^{0\dagger}(\hbb{1},\hb{2},\widetilde{(ki)},j)\Big( {\cal{A}}_4^1(\hbb{1},\hb{2},j,\widetilde{(ki)})\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hbb{1},\hb{2},j,\widetilde{(ki)}){\cal{A}}_{4}^{0}(\hbb{1},\hb{2},j,\widetilde{(ki)})\Big)\nonumber\\ &&\ \ + {\cal{A}}_4^{0\dagger}(\hbb{1},\widetilde{(ki)},\hb{2},j)\Big( {\cal{A}}_4^1(\hbb{1},\hb{2},\widetilde{(ki)},j)\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hbb{1},\hb{2},\widetilde{(ki)},j){\cal{A}}_{4}^{0}(\hbb{1},\hb{2},\widetilde{(ki)},j)\Big)\nonumber\\ &&\ \ - {\cal{A}}_4^{0\dagger}(\hbb{1},\hb{2},\widetilde{(ki)},j)\Big( {\cal{A}}_4^1(\hbb{1},\widetilde{(ki)},\hb{2},j)\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hbb{1},\widetilde{(ki)},\hb{2},j){\cal{A}}_{4}^{0}(\hbb{1},\widetilde{(ki)},\hb{2},j)\Big)\Big]J_{2}^{(2)}(p_{\widetilde{(ki)}},p_{j})\nonumber\\ &&+2f_3^0(\h{2},k,i) \times\nonumber\\ &&\ \ \Big[{\cal{A}}_4^{0\dagger}(\hb{1},\hbb{2},\widetilde{(ki)},j)\Big( {\cal{A}}_4^1(\hb{1},\hbb{2},j,\widetilde{(ki)})\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hb{1},\hbb{2},j,\widetilde{(ki)}){\cal{A}}_{4}^{0}(\hb{1},\hbb{2},j,\widetilde{(ki)})\Big)\nonumber\\ &&\ \ - {\cal{A}}_4^{0\dagger}(\hb{1},\hbb{2},j,\widetilde{(ki)})\Big( {\cal{A}}_4^1(\hb{1},\hbb{2},\widetilde{(ki)},j)\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hb{1},\hbb{2},\widetilde{(ki)},j){\cal{A}}_{4}^{0}(\hb{1},\hbb{2},\widetilde{(ki)},j)\Big)\nonumber\\ &&\ \ + {\cal{A}}_4^{0\dagger}(\hb{1},\widetilde{(ki)},\hbb{2},j)\Big( {\cal{A}}_4^1(\hb{1},\hbb{2},j,\widetilde{(ki)})\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hb{1},\hbb{2},j,\widetilde{(ki)}){\cal{A}}_{4}^{0}(\hb{1},\hbb{2},j,\widetilde{(ki)})\Big)\nonumber\\ &&\ \ - {\cal{A}}_4^{0\dagger}(\hb{1},\hbb{2},j,\widetilde{(ki)})\Big( {\cal{A}}_4^1(\hb{1},\widetilde{(ki)},\hbb{2},j)\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hb{1},\widetilde{(ki)},\hbb{2},j){\cal{A}}_{4}^{0}(\hb{1},\widetilde{(ki)},\hbb{2},j)\Big)\Big] J_{2}^{(2)}(p_j,p_{\widetilde{(ki)}})\nonumber\\ &&+f_3^0(i,k,j)\times\nonumber\\ &&\ \ 
\Big[\;{\cal{A}}_4^{0\dagger}(\hb{1},\hb{2},\widetilde{(ik)},\widetilde{(kj)})\Big( {\cal{A}}_4^1(\hb{1},\widetilde{(ik)},\hb{2},\widetilde{(kj)})\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hb{1},\widetilde{(ik)},\hb{2},\widetilde{(kj)}){\cal{A}}_{4}^{0}(\hb{1},\widetilde{(ik)},\hb{2},\widetilde{(kj)})\Big)\nonumber\\ &&\ \ -{\cal{A}}_4^{0\dagger}(\hb{1},\widetilde{(ik)},\hb{2},\widetilde{(kj)})\Big( {\cal{A}}_4^1(\hb{1},\hb{2},\widetilde{(ik)},\widetilde{(kj)})\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hb{1},\hb{2},\widetilde{(ik)},\widetilde{(kj)}){\cal{A}}_{4}^{0}(\hb{1},\hb{2},\widetilde{(ik)},\widetilde{(kj)})\Big)\nonumber\\ &&\ \ +{\cal{A}}_4^{0\dagger}(\hb{1},\hb{2},\widetilde{(kj)},\widetilde{(ik)})\Big( {\cal{A}}_4^1(\hb{1},\widetilde{(ik)},\hb{2},\widetilde{(kj)})\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hb{1},\widetilde{(ik)},\hb{2},\widetilde{(kj)}){\cal{A}}_{4}^{0}(\hb{1},\widetilde{(ik)},\hb{2},\widetilde{(kj)})\Big)\nonumber\\ &&\ \ -{\cal{A}}_4^{0\dagger}(\hb{1},\widetilde{(ik)},\hb{2},\widetilde{(kj)})\Big( {\cal{A}}_4^1(\hb{1},\hb{2},\widetilde{(kj)},\widetilde{(ik)})\delta_1 \delta_{2}+\frac{1}{2}\bs{J}_{4}^{(1)}(\hb{1},\hb{2},\widetilde{(kj)},\widetilde{(ik)}){\cal{A}}_{4}^{0}(\hb{1},\hb{2},\widetilde{(kj)},\widetilde{(ik)})\Big)\Big]\nonumber\\ &&\ \ \times J_{2}^{(2)}(p_{\widetilde{(ik)}},p_{\widetilde{(kj)}})\Big\},\label{eq:dstb} \end{eqnarray} where $\delta_{1,2}=\delta(1-x_{1,2})$. The integrated antenna strings, $\bs{J}_{4}^{(1)}$, in Eq.~\eqref{eq:dstb} are introduced to remove the explicit IR poles of the reduced four gluon one-loop amplitudes. Once again, any mass factorization kernels in the integrated dipoles cancel. For ease of exposition in later sections, we will refer to those terms in Eq.~\eqref{eq:dstb} that are proportional to the one-loop amplitudes as ${\rm d} \hat\sigma_{NNLO}^{T,b_{1}}$ and those proportional to the integrated dipoles as ${\rm d} \hat\sigma_{NNLO}^{T,b_{2}}$. Both of these terms have to be integrated analytically and added to the double virtual subtraction term. \subsection{Spurious singularity subtraction} \label{sec:dstc} There are additional double real subtraction terms, ${\rm d} \hat\sigma_{NNLO}^{S,b}$, ${\rm d} \hat\sigma_{NNLO}^{S,c}$ and ${\rm d} \hat\sigma_{NNLO}^{S,e}$, that are added to the real-virtual cross section after analytic integration. It is useful to consider the spurious singularity subtraction term as a sum of two contributions, \begin{eqnarray} {\rm d} \hat\sigma_{NNLO}^{T,c}&=&{\rm d} \hat\sigma_{NNLO}^{T,c_{1}}+{\rm d} \hat\sigma_{NNLO}^{T,c_{2}}, \end{eqnarray} where ${\rm d} \hat\sigma_{NNLO}^{T,c_{1}}$ consists of the terms inherited directly from the double real subtraction terms after analytic integration, \begin{eqnarray} {\rm d} \hat\sigma_{NNLO}^{T,c_{1}}&=&-\int_{1}\Big[{\rm d} \hat\sigma_{NNLO}^{S,b}+{\rm d} \hat\sigma_{NNLO}^{S,c}+{\rm d} \hat\sigma_{NNLO}^{S,e}\Big].\label{eq:dstc1def} \end{eqnarray} The subtraction term in Eq.~\eqref{eq:dstc1def} produces explicit poles and implicit divergences in the real-virtual contribution. Since all explicit and implicit singularities in the real-virtual cross section have already been removed by ${\rm d} \hat\sigma_{NNLO}^{T,a}$ and ${\rm d} \hat\sigma_{NNLO}^{T,b}$, the singularities introduced by Eq.~\eqref{eq:dstc1def} must be explicitly cancelled by an additional subtraction term, ${\rm d} \hat\sigma_{NNLO}^{T,c_2}$. 
Following~\cite{GehrmannDeRidder:2011aa} we find that the spurious singularity subtraction term is given by, \begin{eqnarray} \lefteqn{{\rm d} \hat\sigma_{NNLO}^{T,c}={\cal N}_{LO}\left(\frac{\alpha_s}{2\pi}\right)^2\frac{\bar{C}(\epsilon)^2}{C(\epsilon)}\frac{12}{3!}\sum_{(i,j,k)\in P(3,4,5)} \int \frac{{\rm d}x_1}{x_1}\frac{{\rm d}x_2}{x_2}{\rm d}\Phi_{3}(p_3,\hdots,p_5;\bar{p}_1,\bar{p}_2)\Big\{}\nonumber\\ +\frac{1}{2}&\Big[&{\cal F}_{3}^{0}(s_{\b{1}i})-{\cal F}_{3}^{0}(s_{\bb{1}(ki)})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{1}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}(ki)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+{\cal F}_{3}^{0}(s_{\bb{1}\b{2}})\nonumber\\ &-&{\cal F}_{3}^{0}(s_{\b{1}\b{2}})+\frac{1}{3}{\cal F}_{3}^{0}(s_{(ki)j})-\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})+2{\cal{S}}(s_{\b{1}\widetilde{(ki)}},s_{ij})-2{\cal{S}}(s_{\b{1}i},s_{ij})+{\cal{S}}(s_{ij},s_{ij})\nonumber\\ &-&{\cal{S}}(s_{\widetilde{(ki)j}},s_{ij})+{\cal{S}}(s_{\b{2}i},s_{ij})-{\cal{S}}(s_{\b{2}\widetilde{(ki)}},s_{ij})\Big]~ f_{3}^{0}(\hb{1},k,i)~A_{4}^{0}(\hbb{1},\hb{2},\widetilde{(ki)},j)~J_{2}^{(2)}(p_{j},p_{\widetilde{(ki)}})\nonumber\\\nonumber\\ +\frac{1}{2}&\Big[&{\cal F}_{3}^{0}(s_{\b{2}j})-{\cal F}_{3}^{0}(s_{\bb{2}(kj)})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}(kj)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{2}i})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+{\cal F}_{3}^{0}(s_{\b{1}\bb{2}})\nonumber\\ &-&{\cal F}_{3}^{0}(s_{\b{1}\b{2}})+\frac{1}{3}{\cal F}_{3}^{0}(s_{i(kj)})-\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})+2{\cal{S}}(s_{\b{2}\widetilde{(kj)}},s_{ij})-2{\cal{S}}(s_{\b{2}j},s_{ij})+{\cal{S}}(s_{ij},s_{ij})\nonumber\\ &-&{\cal{S}}(s_{\widetilde{i(kj)}},s_{ij})+{\cal{S}}(s_{\b{1}j},s_{ij})-{\cal{S}}(s_{\b{1}\widetilde{(kj)}},s_{ij})\Big]~f_{3}^{0}(\hb{2},k,j)~A_{4}^{0}(\hb{1},\hbb{2},i,\widetilde{(kj)})~J_{2}^{(2)}(p_{i},p_{\widetilde{(kj)}})\nonumber\\\nonumber\\ -\frac{1}{2}&\Big[&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{1}\tilde{i}})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{2}\tilde{j}})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{1}\tilde{j}})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{2}\tilde{i}})\nonumber\\ &-&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+{\cal{S}}(s_{\bb{1}\tilde{i}},s_{\tilde{i}\tilde{j}})-{\cal{S}}(s_{\b{1}i},s_{ij})+{\cal{S}}(s_{\bb{2}\tilde{j}},s_{\tilde{i}\tilde{j}})-{\cal{S}}(s_{\b{2}j},s_{ij})-{\cal{S}}(s_{\bb{1}\tilde{j}},s_{\tilde{i}\tilde{j}})\nonumber\\ &+&{\cal{S}}(s_{\b{1}j},s_{ij})-{\cal{S}}(s_{\bb{2}\tilde{i}},s_{\tilde{i}\tilde{j}})+{\cal{S}}(s_{\b{2}i},s_{ij})\Big]~F_{3}^{0}(\hb{1},k,\hb{2})~A_{4}^{0}(\hbb{1},\hbb{2},\tilde{i},\tilde{j})~J_{2}^{(2)}(p_{\tilde{i}},p_{\tilde{j}})\nonumber\\\nonumber\\ -\frac{1}{4}&\Big[&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}(ik)})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}(kj)})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}(kj)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})\nonumber\\ &+&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}(ik)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+{\cal{S}}(s_{\b{1}\widetilde{(ik)}},s_{ij})-{\cal{S}}(s_{\b{1}i},s_{ij})+{\cal{S}}(s_{\b{2}\widetilde{(kj)}},s_{ij})\nonumber\\ &-&{\cal{S}}(s_{\b{2}j},s_{ij})+{\cal{S}}(s_{\b{1}j},s_{ij})-{\cal{S}}(s_{\b{1}\widetilde{(kj)}},s_{ij})+{\cal{S}}(s_{\b{2}i},s_{ij})-{\cal{S}}(s_{\b{2}\widetilde{(ki)}},s_{ij})\Big]\nonumber\\ 
&&\times~f_{3}^{0}(i,k,j)~A_{4}^{0}(\hb{1},\hb{2},\widetilde{(ik)},\widetilde{(kj)})~J_{2}^{(2)}(p_{\widetilde{(ik)}},p_{\widetilde{(kj)}})\nonumber\\\nonumber\\ +\frac{1}{4}&\Big[&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}(ik)})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}(kj)})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}(kj)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})\nonumber\\ &+&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}(ik)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+{\cal{S}}(s_{\b{1}\widetilde{(ik)}},s_{ij})-{\cal{S}}(s_{\b{1}{i}},s_{ij})+{\cal{S}}(s_{\b{2}\widetilde{(kj)}},s_{ij})\nonumber\\ &-&{\cal{S}}(s_{\b{2}{j}},s_{ij})+{\cal{S}}(s_{\b{1}{j}},s_{ij})-{\cal{S}}(s_{\b{1}\widetilde{(kj)}},s_{ij})+{\cal{S}}(s_{\b{2}{i}},s_{ij})-{\cal{S}}(s_{\b{2}\widetilde{(ik)}},s_{ij})\Big]\nonumber\\ &&\times~f_{3}^{0}(i,k,j)~A_{4}^{0}(\hb{1},\hb{2},\widetilde{(kj)},\widetilde{(ik)})~J_{2}^{(2)}(p_{\widetilde{(ik)}},p_{\widetilde{(kj)}})\nonumber\\\nonumber\\ +\frac{1}{2}&\Big[&{\cal F}_{3}^{0}(s_{\b{1}\b{2}})-{\cal F}_{3}^{0}(s_{\bb{1}\b{2}})+\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})-\frac{1}{3}{\cal F}_{3}^{0}(s_{(ki)j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{1}j})\nonumber\\ &-&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}\widetilde{(ki)}})+{\cal{S}}(s_{\widetilde{(ki)}j},s_{ij})-{\cal{S}}(s_{ij},s_{ij})+{\cal{S}}(s_{\b{2}i},s_{ij})-{\cal{S}}(s_{\b{2}\widetilde{(ki)}},s_{ij})\Big]\nonumber\\ &&\times~f_{3}^{0}(\hb{1},k,i)~A_{4}^{0}(\hbb{1},\hb{2},j,\widetilde{(ki)})~J_{2}^{(2)}(p_{j},p_{\widetilde{(ki)}})\nonumber\\\nonumber\\ +\frac{1}{2}&\Big[&{\cal F}_{3}^{0}(s_{\b{1}\b{2}})-{\cal F}_{3}^{0}(s_{\b{1}\bb{2}})+\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})-\frac{1}{3}{\cal F}_{3}^{0}(s_{i(kj)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}(kj)})\nonumber\\ &-&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{2}i})+{\cal{S}}(s_{i\widetilde{(kj)}},s_{ij})-{\cal{S}}(s_{ij},s_{ij})+{\cal{S}}(s_{\b{1}j},s_{ij})-{\cal{S}}(s_{\b{1}\widetilde{(kj)}},s_{ij})\Big]\nonumber\\ &&\times~f_{3}^{0}(\hb{2},k,j)~A_{4}^{0}(\hb{1},\hbb{2},\widetilde{(kj)},i)~J_{2}^{(2)}(p_{i},p_{\widetilde{(kj)}})\nonumber\\\nonumber\\ +\frac{1}{4}&\Big[&2{\cal F}_{3}^{0}(s_{\b{1}\b{2}})-2{\cal F}_{3}^{0}(s_{\bb{1}\bb{2}})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}{j}})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{1}\tilde{j}})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}{i}})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{2}\tilde{i}})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}{i}})\nonumber\\ &+&\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{1}\tilde{i}})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}{j}})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{2}\tilde{j}})+2{\cal{S}}(s_{\bb{1}\bb{2}},s_{\tilde{i}\tilde{j}})-2{\cal{S}}(s_{\b{1}\b{2}},s_{ij})-{\cal{S}}(s_{\bb{1}\tilde{j}},s_{\tilde{i}\tilde{j}})+{\cal{S}}(s_{\b{1}{j}},s_{ij})\nonumber\\ &-&{\cal{S}}(s_{\bb{2}\tilde{i}},s_{\tilde{i}\tilde{j}})+{\cal{S}}(s_{\b{2}{i}},s_{ij})-{\cal{S}}(s_{\bb{1}\tilde{i}},s_{\tilde{i}\tilde{j}})+{\cal{S}}(s_{\b{1}{i}},s_{ij})-{\cal{S}}(s_{\bb{2}\tilde{j}},s_{\tilde{i}\tilde{j}})+{\cal{S}}(s_{\b{2}{j}},s_{ij})\Big]\nonumber\\ &&\times~F_{3}^{0}(\hb{1},k,\hb{2})~A_{4}^{0}(\hbb{1},\tilde{i},\hbb{2},\tilde{j})~J_{2}^{(2)}(p_{\tilde{i}},p_{\tilde{j}})\nonumber\\\nonumber\\ +\frac{1}{4}&\Big[&\frac{2}{3}{\cal F}_{3}^{0}(s_{ij})-\frac{2}{3}{\cal F}_{3}^{0}(s_{(ik)(kj)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})+\frac{1}{2}{\cal 
F}_{3}^{0}(s_{\b{1}(kj)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}(ik)})\nonumber\\ &-&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}(ik)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}(kj)})+2{\cal{S}}(s_{\widetilde{(ik)}\widetilde{(kj)}},s_{ij})-2{\cal{S}}(s_{ij},s_{ij})\nonumber\\ &-&{\cal{S}}(s_{\b{1}\widetilde{(kj)}},s_{ij})+{\cal{S}}(s_{\b{1}j},s_{ij})-{\cal{S}}(s_{\b{2}\widetilde{(ik)}},s_{ij})+{\cal{S}}(s_{\b{2}i},s_{ij})-{\cal{S}}(s_{\b{1}\widetilde{(ik)}},s_{ij})+{\cal{S}}(s_{\b{1}i},s_{ij})\nonumber\\ &-&{\cal{S}}(s_{\b{2}\widetilde{(kj)}},s_{ij})+{\cal{S}}(s_{\b{2}j},s_{ij})\Big]~f_{3}^{0}(i,k,j)~A_{4}^{0}(\hb{1},\widetilde{(ik)},\hb{2},\widetilde{(kj)})~J_{2}^{(2)}(p_{\widetilde{(ik)}},p_{\widetilde{(kj)}})\nonumber\\\nonumber\\ -\frac{1}{2}&\Big[&{\cal F}_{3}^{0}(s_{\b{1}\b{2}})-{\cal F}_{3}^{0}(s_{\bb{1}\b{2}})+\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})-\frac{1}{3}{\cal F}_{3}^{0}(s_{(ki)j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{1}j})\nonumber\\ &-&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}(ki)})+{\cal{S}}(s_{\widetilde{(ki)}j},s_{ij})-{\cal{S}}(s_{ij},s_{ij})+{\cal{S}}(s_{\b{2}i},s_{ij})-{\cal{S}}(s_{\b{2}\widetilde{(ki)}},s_{ij})\Big]\nonumber\\ &&\times~f_{3}^{0}(\hb{1},k,i)~A_{4}^{0}(\hbb{1},\widetilde{(ki)},\hb{2},j)~J_{2}^{(2)}(p_{j},p_{\widetilde{(ki)}})\nonumber\\\nonumber\\ -\frac{1}{2}&\Big[&{\cal F}_{3}^{0}(s_{\b{1}\b{2}})-{\cal F}_{3}^{0}(s_{\b{1}\bb{2}})+\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})-\frac{1}{3}{\cal F}_{3}^{0}(s_{i(kj)})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}(kj)})\nonumber\\ &-&\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\bb{2}i})+{\cal{S}}(s_{i\widetilde{(kj)}},s_{ij})-{\cal{S}}(s_{ij},s_{ij})+{\cal{S}}(s_{\b{1}j},s_{ij})-{\cal{S}}(s_{\b{1}\widetilde{(kj)}},s_{ij})\Big]\nonumber\\ &&\times~f_{3}^{0}(\hb{2},k,j)A_{4}^{0}(\hb{1},i,\hbb{2},\widetilde{(kj)})J_{2}^{(2)}(p_{i},p_{\widetilde{(kj)}})\Big\}.\label{eq:dstc} \end{eqnarray} In Eq.~\eqref{eq:dstc}, the terms corresponding to ${\rm d} \hat\sigma_{NNLO}^{T,c_{2}}$ are those proportional to integrated antennae with mapped momentum arguments. These terms are integrated analytically over the remaining unresolved phase space and added back into the double virtual cross section. All other terms constitute ${\rm d} \hat\sigma_{NNLO}^{T,c_{1}}$ and terminate in the real-virtual cross section. At this point we have fully constructed the subtraction terms which are used to remove all explicit IR poles and implicit IR divergences from ${\rm d} \hat\sigma_{NNLO}^{RV}$. The pattern of singularity cancellation can be summarised as follows: \begin{itemize} \item The explicit poles of ${\rm d} \hat\sigma_{NNLO}^{RV}$ are cancelled by ${\rm d} \hat\sigma_{NNLO}^{T,a}$. \item The implicit divergences of ${\rm d} \hat\sigma_{NNLO}^{RV}$ are removed by ${\rm d} \hat\sigma_{NNLO}^{T,b_{1}}$. \item The explicit poles of ${\rm d} \hat\sigma_{NNLO}^{T,b_{1}}$ are cancelled by ${\rm d} \hat\sigma_{NNLO}^{T,b_{2}}$. \item The implicit divergences of ${\rm d} \hat\sigma_{NNLO}^{T,b_{2}}$ cancel against those of ${\rm d} \hat\sigma_{NNLO}^{T,a}$. \item ${\rm d} \hat\sigma_{NNLO}^{T,c}$ is free from poles in $\epsilon$ and finite in all unresolved limits. 
\end{itemize} \section{Double virtual contribution} \label{sec:VV} The poles of the full colour double virtual matrix element can be expressed in terms of single and double unresolved integrated dipoles according to the formula~\cite{Currie:2013vh}, \begin{eqnarray} \lefteqn{{\cal P}oles\bigg({\rm d} \hat\sigma_{NNLO}^{VV}\bigg)=-{\cal N}_{LO}\left(\frac{\alpha_s}{2\pi}\right)^2\bar{C}(\epsilon)^2 \int \frac{{\rm d}z_1}{z_1}\frac{{\rm d}z_2}{z_2}~{\rm d}\Phi_{3}(p_3,p_4;\bar{p}_1,\bar{p}_2)}\nonumber\\ 2&\bigg\{&\sum_{(i,j)}\bs{J}_{2}^{(1)}(i,j)\ 2{\rm{Re}}\langle{\cal A}_{4}^{0}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal A}_{4}^{1}\rangle\nonumber\\ &-&\sum_{(i,j)}\sum_{(k,l)}\big[\bs{J}_{2}^{(1)}(i,j)\otimes\bs{J}_{2}^{(1)}(k,l)\big]\ \langle{\cal{A}}_{4}^{0}|(\bs{T}_{i}\cdot\bs{T}_{j})(\bs{T}_{k}\cdot\bs{T}_{l})|{\cal A}_{4}^{0}\rangle\nonumber\\ &+&\sum_{(i,j)}N\bs{J}_{2}^{(2)}(i,j)\ \langle{\cal A}_{4}^{0}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal A}_{4}^{0}\rangle\bigg\}~J_{2}^{(2)}(p_{3},p_{4}).\label{eq:vvpoles} \end{eqnarray} This expression has been confirmed by comparing to the analytic formulae for the two-loop interferences in~\cite{Glover:2001af,Glover:2001rd}. The task of this section is then to demonstrate that the double virtual subtraction term matches this form for the double virtual pole structure. The fact that the poles of the two-loop matrix element are written in terms of integrated antennae makes this demonstration particularly transparent. The sub-leading colour double virtual subtraction term has three contributions, \begin{eqnarray} {\rm d} \hat\sigma_{NNLO}^{U}&=&{\rm d} \hat\sigma_{NNLO}^{U,a}+{\rm d} \hat\sigma_{NNLO}^{U,b}+{\rm d} \hat\sigma_{NNLO}^{U,c}. \end{eqnarray} The subtraction term ${\rm d} \hat\sigma_{NNLO}^{U,a}$ corresponds to the first line of Eq.~\eqref{eq:vvpoles} containing the sandwiches involving one-loop amplitudes, while ${\rm d} \hat\sigma_{NNLO}^{U,b}$ corresponds to the second line containing double colour charge insertions into tree-level sandwiches. The last term, ${\rm d} \hat\sigma_{NNLO}^{U,c}$, corresponds to the final line of Eq.~\eqref{eq:vvpoles} containing the double unresolved integrated dipole, $\bs{J}_{2}^{(2)}$. This term is proportional to the sandwich $\langle{\cal A}_{4}^{0}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal A}_{4}^{0}\rangle$, which has no contribution at sub-leading colour according to Eq.~\eqref{eq:slczero}, and so, \begin{eqnarray} {\rm d} \hat\sigma_{NNLO}^{U,c}&=&0. \end{eqnarray} The double unresolved integrated dipole, $\bs{J}_{2}^{(2)}$, is the only contribution that contains the integrated four-parton, ${\cal{X}}_{4}^{0}$, and one-loop, ${\cal{X}}_{3}^{1}$, antennae. Its absence from the sub-leading colour double virtual subtraction term implies that neither of these types of antennae is present, in unintegrated form, in the double real or real-virtual subtraction terms, respectively. This is indeed what was found when explicitly constructing the double real and real-virtual subtraction terms in Secs.~\ref{sec:RR} and \ref{sec:RV}.
\subsection*{Single operator insertions into one-loop sandwiches} The two-loop contribution contains a subset of poles which can be written in terms of colour charge insertions to the one-loop interferences of the type, \begin{eqnarray} \bs{J}_{2}^{(1)}(i,j)\ 2{\rm{Re}}\langle{\cal{A}}_{4}^{0}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal{A}}_{4}^{1}\rangle.\label{eq:m41sandwich} \end{eqnarray} To evaluate these sandwiches explicitly we perform the colour algebra to yield the expression, \begin{eqnarray} \lefteqn{{\cal{SLC}}\Big(\sum_{(i,j)}\bs{J}_{2}^{(1)}(i,j)\ 2{\rm{Re}}\langle{\cal{A}}_{4}^{0}|\bs{T}_{i}\cdot\bs{T}_{j}|{\cal{A}}_{4}^{1}\rangle\Big)=N^2(N^2-1)\frac{12}{2!}\sum_{(i,j)\in P(3,4)}}\nonumber\\ 2{\rm{Re}}&\Big\{&\Big(\bs{J}_{2}^{(1)}(\hb{1},i)+\bs{J}_{2}^{(1)}(\hb{2},j)-\bs{J}_{2}^{(1)}(\hb{1},j)-\bs{J}_{2}^{(1)}(\hb{2},i)\Big){\cal{A}}_{4}^{0\dagger}(\hb{1},\hb{2},i,j){\cal{A}}_{4,1}^{1}(\hb{1},\hb{2},j,i)\nonumber\\ &+&\Big(\bs{J}_{2}^{(1)}(\hb{1},i)+\bs{J}_{2}^{(1)}(\hb{2},j)-\bs{J}_{2}^{(1)}(\hb{1},\hb{2})-\bs{J}_{2}^{(1)}(i,j)\Big){\cal{A}}_{4}^{0\dagger}(\hb{1},\hb{2},i,j){\cal{A}}_{4,1}^{1}(\hb{1},i,\hb{2},j)\nonumber\\ &+&\Big(\bs{J}_{2}^{(1)}(\hb{1},\hb{2})+\bs{J}_{2}^{(1)}(i,j)-\bs{J}_{2}^{(1)}(\hb{1},i)-\bs{J}_{2}^{(1)}(\hb{2},j)\Big){\cal{A}}_{4}^{0\dagger}(\hb{1},i,\hb{2},j){\cal{A}}_{4,1}^{1}(\hb{1},\hb{2},i,j)\Big\}.\nonumber\\\label{eq:m42poles1} \end{eqnarray} Once again, the mass factorization kernels used to define the integrated dipoles cancel. The piece of the double virtual subtraction term proportional to the one-loop four gluon amplitudes is obtained by the analytic integration of ${\rm d} \hat\sigma_{NNLO}^{T,b_{1}}$, \begin{eqnarray} {\rm d} \hat\sigma_{NNLO}^{U,a}&=&-\int_{1}{\rm d} \hat\sigma_{NNLO}^{T,b_{1}}\nonumber\\ &=&-{\cal N}_{LO}\left(\frac{\alpha_s}{2\pi}\right)^2\bar{C}(\epsilon)^2\int \frac{{\rm d}z_1}{z_1}\frac{{\rm d}z_2}{z_2}{\rm d}\Phi_{3}(p_3,p_4;\bar{p}_1,\bar{p}_2)\;J_{2}^{(2)}(p_{3},p_{4})\ \frac{24}{2!}\sum_{(i,j)\in P(3,4)}\nonumber\\ \times2{\rm{Re}}&\Big\{&\Big(\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})\Big){\cal{A}}_{4}^{0\dagger}(\hb{1},\hb{2},i,j){\cal{A}}_{4,1}^{1}(\hb{1},\hb{2},j,i)\nonumber\\ &+&\Big(\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})-{\cal F}_{3}^{0}(s_{\b{1}\b{2}})-\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})\Big){\cal{A}}_{4}^{0\dagger}(\hb{1},\hb{2},i,j){\cal{A}}_{4,1}^{1}(\hb{1},i,\hb{2},j)\nonumber\\ &+&\Big({\cal F}_{3}^{0}(s_{\b{1}\b{2}})+\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})\Big){\cal{A}}_{4}^{0\dagger}(\hb{1},i,\hb{2},j){\cal{A}}_{4,1}^{1}(\hb{1},\hb{2},i,j)\Big\}.\nonumber\\\label{eq:dsigua} \end{eqnarray} The poles of Eq.~\eqref{eq:dsigua} match those of Eq.~\eqref{eq:m42poles1}. \subsection*{Double operator insertions into tree-level sandwiches} The second subset of poles contained in the two-loop interferences is written in terms of double charge operator insertions carrying poles given by convolutions of integrated dipoles, \begin{eqnarray} \sum_{(i,j)}\sum_{(k,l)}\big[\bs{J}_{2}^{(1)}(i,j)\otimes\bs{J}_{2}^{(1)}(k,l)\big]\ \langle{\cal{A}}_{4}^{0}|(\bs{T}_{i}\cdot\bs{T}_{j})(\bs{T}_{k}\cdot\bs{T}_{l})|{\cal{A}}_{4}^{0}\rangle. 
\end{eqnarray} Evaluating the colour sums explicitly and keeping only the sub-leading colour contribution yields, \begin{eqnarray} \lefteqn{\hspace{-3cm}{\cal{SLC}}\Big(\sum_{(i,j)}\sum_{(k,l)}\big[\bs{J}_{2}^{(1)}(i,j)\otimes\bs{J}_{2}^{(1)}(k,l)\big]\ \langle{\cal{A}}_{4}^{0}|(\bs{T}_{i}\cdot\bs{T}_{j})(\bs{T}_{k}\cdot\bs{T}_{l})|{\cal{A}}_{4}^{0}\rangle\Big)=N^2(N^2-1)}\nonumber\\ \frac{12}{2!}\sum_{(i,j)\in P(3,4)}\frac{1}{2}&\bigg[&\Big(\bs{J}_{2}^{(1)}(\hb{1},i)+\bs{J}_{2}^{(1)}(\hb{2},j)-\bs{J}_{2}^{(1)}(\hb{1},j)-\bs{J}_{2}^{(1)}(\hb{2},i)\Big)\nonumber\\ &&\otimes\Big(\bs{J}_{2}^{(1)}(\hb{1},i)+\bs{J}_{2}^{(1)}(\hb{2},j)-\bs{J}_{2}^{(1)}(\hb{1},\hb{2})-\bs{J}_{2}^{(1)}(i,j)\Big)A_{4}^{0}(\hb{1},\hb{2},i,j)\nonumber\\ &+&\frac{1}{2}\Big(\bs{J}_{2}^{(1)}(\hb{1},\hb{2})+\bs{J}_{2}^{(1)}(i,j)-\bs{J}_{2}^{(1)}(\hb{1},j)-\bs{J}_{2}^{(1)}(\hb{2},i)\Big)\nonumber\\ &&\otimes\Big(\bs{J}_{2}^{(1)}(\hb{1},\hb{2})+\bs{J}_{2}^{(1)}(i,j)-\bs{J}_{2}^{(1)}(\hb{1},i)-\bs{J}_{2}^{(1)}(\hb{2},j)\Big)A_{4}^{0}(\hb{1},i,\hb{2},j)\bigg].\label{eq:JxJpoles} \end{eqnarray} The relevant piece of the double virtual subtraction term is constructed from the analytic integration of the real-virtual subtraction terms, ${\rm d} \hat\sigma_{NNLO}^{T,b_{2}}$ and ${\rm d} \hat\sigma_{NNLO}^{T,c_{2}}$ and the double real subtraction term ${\rm d} \hat\sigma_{NNLO}^{S,d}$, \begin{eqnarray} {\rm d} \hat\sigma_{NNLO}^{U,b}&=&-\int_{1}\Big[{\rm d} \hat\sigma_{NNLO}^{T,b_{2}}+{\rm d} \hat\sigma_{NNLO}^{T,c_{2}}\Big]-\int_{2}{\rm d} \hat\sigma_{NNLO}^{S,d}\nonumber\\ &=&-{\cal N}_{LO}\left(\frac{\alpha_s}{2\pi}\right)^2\bar{C}(\epsilon)^2 \int \frac{{\rm d}z_1}{z_1}\frac{{\rm d}z_2}{z_2}~{\rm d}\Phi_{3}(p_3,p_4;\bar{p}_1,\bar{p}_2)\;J_{2}^{(2)}(p_{3},p_{4})\frac{24}{2!}\sum_{(i,j)\in P(3,4)}\nonumber\\ \frac{1}{2}&\Big\{&\Big(\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})\Big)\nonumber\\ &&\otimes\Big(\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})+\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})-{\cal F}_{3}^{0}(s_{\b{1}\b{2}})-\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})\Big)A_{4}^{0}(\hb{1},\hb{2},i,j)\nonumber\\ &+&\frac{1}{2}\Big({\cal F}_{3}^{0}(s_{\b{1}\b{2}})+\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}j})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}i})\Big)\nonumber\\ &&\otimes\Big({\cal F}_{3}^{0}(s_{\b{1}\b{2}})+\frac{1}{3}{\cal F}_{3}^{0}(s_{ij})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{1}i})-\frac{1}{2}{\cal F}_{3}^{0}(s_{\b{2}j})\Big)A_{4}^{0}(\hb{1},i,\hb{2},j)\Big\}.\label{eq:dsub} \end{eqnarray} It can be easily seen that the poles of Eq.~\eqref{eq:dsub} match those of Eq.~\eqref{eq:JxJpoles}. At this point, we have shown that all explicit poles of the two-loop matrix elements cancel against ${\rm d} \hat\sigma_{NNLO}^{U,a}$ and ${\rm d} \hat\sigma_{NNLO}^{U,b}$ and there are no further contributions from the analytic integration from either the double real or real-virtual subtraction terms, \begin{eqnarray} {\cal P}oles\bigg({\rm d} \hat\sigma_{NNLO}^{VV}-{\rm d} \hat\sigma_{NNLO}^{U}\bigg)&=&0. \end{eqnarray} \section{Numerical evaluation of the differential cross section} In Secs.~\ref{sec:RR},~\ref{sec:RV} and~\ref{sec:VV} the double real, real-virtual and double virtual subtraction terms were constructed and, where appropriate, the explicit pole cancellation against one and two-loop matrix elements at sub-leading colour was carried out. 
The remaining task is to numerically integrate each of these partonic channels over the appropriate phase space to obtain the physical cross section. Our numerical studies for proton-proton collisions at centre-of-mass energy $\sqrt{s}=8$ TeV concern the single jet inclusive cross section (where every identified jet in an event that passes the selection cuts contributes, such that a single event potentially enters the distributions multiple times) and the two-jet exclusive cross section (where events with exactly two identified jets contribute). In our default setup we use the anti-$k_t$ jet algorithm~\cite{Cacciari:2008gp} with resolution parameter $R=0.7$ to reconstruct the final state jets; jets are accepted in the rapidity range $|y| < 4.4$ and ordered in transverse momentum. An event is retained if the leading jet has $p_{T1} > 80$ GeV. For the dijet invariant mass distribution, a second jet must be observed with $p_{T2} > 60$ GeV. All calculations are carried out with the MSTW08NNLO gluon distribution function~\cite{Martin:2009bu}, including the evaluation of the LO and NLO contributions.\footnote{Note that the evolution of the gluon distribution within the PDF set together with the value of $\alpha_s$ intrinsically includes contributions from the light quarks. The NNLO calculation presented here is ``gluons-only" in the sense that only gluonic matrix elements are involved.} This choice of parameters allows us to quantify the size of the genuine NNLO contributions to the parton-level subprocess. As the default value, we set $\mu$ equal to the transverse momentum of the leading jet so that $\mu = p_{T1}$. \begin{figure}[t] \centering \includegraphics[width=0.71\textwidth]{newfig1}\\\vspace{-0.2cm} \caption{The percentage contribution of the sub-leading colour to the full colour NNLO correction, $\delta$, for the single jet inclusive transverse energy distribution as a function of $p_{T}$.} \label{fig:subratio} \end{figure} The cross section can be written as, \begin{equation} {\rm{d}}\sigma = \alpha_s^2 A +\alpha_s^3 B +\alpha_s^4 C, \end{equation} where the coefficients $A$, $B$ and $C$ depend on the PDF, the scale choice and the observable. The NNLO coefficient $C$ can be further subdivided into leading and sub-leading colour contributions, \begin{equation} C = C^{LC} + C^{SLC}. \end{equation} To quantify the size of the sub-leading colour NNLO corrections, Fig.~\ref{fig:subratio} shows the ratio, $$\delta = \frac{C^{SLC}}{C}$$ as a percentage for the single jet inclusive transverse energy distribution. We see that $\delta$ is roughly 10\% as expected from naive power counting of colours ($1/N^2$), but exhibits a $p_{T}$ dependence, rising from 8\% at low $p_{T}$ to 15\% at high $p_{T}$. \begin{figure}[t] \centering \includegraphics[width=0.71\textwidth]{etj0-1} \caption{Inclusive jet transverse energy distribution, $d\sigma/dp_T$, for jets constructed with the anti-$k_T$ algorithm with $R=0.7$ and with $p_T > 80$~GeV, $|y| < 4.4$ and $\sqrt{s} = 8$~TeV at NNLO (blue), NLO (red) and LO (dark-green). The lower panel shows the ratios of NNLO, NLO and LO cross sections.} \label{fig:dsdet} \end{figure} In Fig.~\ref{fig:dsdet} we present the inclusive jet cross section for the anti-$k_T$ algorithm with $R=0.7$ and with $p_T > 80$~GeV, $|y| < 4.4$ as a function of the jet $p_{T}$ at LO, NLO and NNLO, for the central scale choice $\mu = p_{T1}$, retaining the full dependence on the number of colours. The NNLO/NLO $k$-factor shows the ratio of the NNLO and NLO cross sections in each bin.
For this scale choice we see that the NNLO/NLO $k$-factor across the $p_{T}$ range corresponds to a 16-26\% increase compared to the NLO cross section. \begin{figure}[h!] \centering \begin{minipage}[b]{0.45\linewidth} \includegraphics[width=0.9\textwidth]{etj0-3} \end{minipage} \quad \begin{minipage}[b]{0.45\linewidth} \includegraphics[width=1.1\textwidth]{etj0-4}\\ \end{minipage} \caption{The left panel shows the doubly differential inclusive jet transverse energy distribution, $d^2\sigma/dp_T d|y|$, at $\sqrt{s} = 8$~TeV for the anti-$k_T$ algorithm with $R=0.7$ and for $p_T > 80$~GeV and various $|y|$ slices at NNLO. The right panel shows the ratios of NNLO, NLO and LO cross sections for three rapidity slices: $|y| < 0.3$, $0.3 < |y| < 0.8$ and $0.8 < |y| < 1.2$.} \label{fig:d2sdetslice} \end{figure} \begin{figure}[h!] \centering \begin{minipage}[b]{0.45\linewidth} \includegraphics[width=0.9\textwidth]{mjj0-1} \end{minipage} \quad \begin{minipage}[b]{0.45\linewidth} \includegraphics[width=1.1\textwidth]{mjj0-2}\\ \end{minipage} \caption{The left panel shows the doubly differential exclusive dijet invariant mass distribution, $d^2\sigma/dm_{jj} dy^*$, at $\sqrt{s} = 8$~TeV for the anti-$k_T$ algorithm with $R=0.7$ and for $p_{T1} > 80$~GeV, $p_{T2}>60$~GeV and various $y^*=|y_{1}-y_{2}|/2$ slices at NNLO. The right panel shows the ratios of NNLO, NLO and LO cross sections for three rapidity slices: $y^* < 0.5$, $0.5 < y^* < 1.0$ and $1.0 < y^* < 1.5$.} \label{fig:d2sdmjjslice} \end{figure} In Fig.~\ref{fig:d2sdetslice} we present the inclusive jet cross section in double differential form. The inclusive jet cross section is computed in jet $p_{T}$ and rapidity bins over the range 0.0-4.4 covering central and forward jets. To quantify the impact of the NNLO correction we present the double differential $k$-factors containing ratios of NNLO, NLO and LO cross sections in the same figure. We observe that the NNLO correction increases the cross section by between 26\% at low $p_{T}$ and 14\% at high $p_{T}$ with respect to the NLO calculation. This behaviour is similar for each of the three rapidity slices presented. As a final observable, we computed the exclusive dijet cross section at NNLO. For this cross section we require two jets in the final state, from which we reconstruct the invariant mass of the dijet system and compute the double differential dijet cross section in bins of invariant mass $m_{jj}$ and $y^*=|y_{1}-y_{2}|/2$ slices over the range 0.0-4.5. The results at NNLO are presented in Fig.~\ref{fig:d2sdmjjslice}. The exclusive dijet events are a subset of the inclusive jet events and we observe that the NNLO/NLO $k$-factor is approximately flat across the $m_{jj}$ range, corresponding to a 16-21\% increase when compared to the NLO cross section. \section{Summary} In this paper we have computed the full colour contributions to jet production from gluon scattering at NNLO. Previous work~\cite{Glover:2010im,GehrmannDeRidder:2011aa,GehrmannDeRidder:2012dg,GehrmannDeRidder:2013mf} focussed on the leading colour contribution. The new element is the inclusion of the sub-leading colour effects which contribute first at NNLO. Unlike at leading colour, the double real and real-virtual contributions cannot be written in terms of squared partial amplitudes, but appear as interferences of different colour ordered amplitudes.
To isolate the soft singularities we used the antenna subtraction technique which required no significant alterations or new ingredients in order to deal with the incoherent interferences of partial amplitudes. We found that the single and double unresolved limits of the double real matrix element at sub-leading colour could be fully described using just three-parton tree-level antennae and soft factors, without the need for four-parton antenna functions. Similarly, the single unresolved limits of the real-virtual matrix element did not require the one-loop three-parton antenna and could be described with only tree-level three-parton antennae to remove all explicit and implicit singularities. In the process, we found a very compact form for the real-virtual matrix element which we believe to be a new addition to the literature. The double virtual subtraction term, generated by integrating the remaining double real and real-virtual subtraction terms, also involves incoherent interferences of four-parton one-loop and tree-level amplitudes. We showed that it analytically cancels the explicit poles present in the formula for the two-loop matrix elements \cite{Glover:2001af,Glover:2001rd}. With the double real, real-virtual and double virtual subtraction terms in place, the matrix elements are free from explicit poles in $\epsilon$ and finite in all unresolved regions of phase space and so can be numerically integrated in four dimensions to produce finite corrections to the physical distributions. This work provides the first quantitative estimate for the size of sub-leading colour contributions to jet production relative to the leading-colour approximation. The corrections are found to be in line with prior expectations, providing approximately a 10\% correction to the NNLO leading colour contribution. This completes the study of jet production at NNLO in the all-gluon approximation; future work will move beyond this approximation and include scattering processes involving light quarks. \acknowledgments We thank Thomas Gehrmann for useful discussions, constructive comments and reading of the manuscript. This research was supported by the Swiss National Science Foundation (SNF) under contract PP00P2-139192, in part by the European Commission through the `LHCPhenoNet' Initial Training Network PITN-GA-2010-264564, in part by the UK Science and Technology Facilities Council through grant ST/G000905/1 and in part by the National Science Foundation under Grant No. PHY11-25915. AG would like to thank the Kavli Institute for Theoretical Physics, Santa Barbara, for its kind hospitality, where part of this work was carried out. EWNG gratefully acknowledges the support of the Wolfson Foundation and the Royal Society and thanks the Institute for Theoretical Physics at the ETH and the Pauli Center for Theoretical Studies for their kind hospitality during the completion of this work. \bibliographystyle{JHEP}
\section*{Result} {\it Experiment.} High-quality epitaxial Bi$_{2}$Se$_{3}$ and $\rm(Bi_{1-x}In_{x})_{2}Se_{3}$ thin films were grown on Al$_{2}$O$_{3}$ and SiO$_{2}$/Si substrates using the MBE method \cite{bansal2011epitaxial}. The optical transmittance T($\omega$) was measured from the far-infrared to the UV using a Fourier transform infrared (FTIR) spectrometer in combination with a spectroscopic ellipsometer. For the gate-dependent optical measurements a gate voltage $V_{\rm{G}}$ was applied between the $\rm{Bi_{2}Se_{3}}$ film and the Si of the substrate. The optical conductivity $\sigma_{1}(\omega)$ was calculated from the transmission data through a rigorous Kramers-Kronig transformation using RefFit\cite{kuzmenko2005kramers}. The experimental details are described in Supplemental Material 1\cite{suppley} and references therein. \newline {\it Results.} Figure 1 shows the wide-range optical conductivity $\sigma_{1}(\omega)$ of the 50QL-thick $\rm{Bi_{2}Se_{3}}$ film. In the far-infrared region, $\sigma_{1}(\omega)$ consists of a Drude absorption and an optical phonon peak, both coming from the bulk; the former arises from the Se-vacancy-driven carriers \cite{post2013thickness,di2012optical}. For $\hbar\omega$$>$0.25eV, the interband (IB) transition VB$\rightarrow$CB leads to the rapid rise of $\sigma_{1}(\omega)$. Note that there is an absorption peak at $\hbar\omega$=1eV ($\equiv$Peak-A hereafter) to which we will pay particular attention. \newline In Figure 2 we show the optical conductivity measured for a series of In-substituted $\rm{(Bi_{1-x}In_{x})_{2}Se_{3}}$ films. The In concentration $x$ was varied over the range 0$\leq$x$\leq$0.9. Previous studies showed that as Bi is replaced by the light element In, the spin-orbit interaction is reduced and the topological property of Bi$_{2}$Se$_{3}$ becomes weaker \cite{brahlek2012topological, salehi2016finite, wu2013sudden, sim2015tunable}. At a critical concentration x$_{c}$, a phase transition from the TI to the non-TI (NTI) phase occurs, where the bulk band gap is closed and the CB and VB begin to reinvert. The x$_{c}$ lies between x=0.04 and x=0.06 depending on the film thickness, and at x$\geq$x$_{c}$ the topological SS has completely vanished \cite{brahlek2012topological,salehi2016finite}. The $\sigma_{1}(\omega)$ shows that Peak-A becomes weaker as $x$ increases. For a quantitative analysis of this behavior we isolate Peak-A by removing the background conductivity from $\sigma_{1}(\omega)$ as $\sigma_{1}^{A}(\omega)$=$\sigma_{1}(\omega)$-$\sigma_{1}^{BG}(\omega)$, as illustrated in the inset of Figure 2d (a polynomial function was used for $\sigma_{1}^{BG}$), and calculate the strength of Peak-A as $S$=$\int\sigma_{1}^{A}(\omega)d\omega$. Figure 2d shows that $S$ is quenched at x=0.06. To double-check this behavior we performed an independent analysis of Peak-A: we calculate the second derivative $\frac{d^{2}\sigma_{1}}{dE^{2}}$ and measure the distance (w) and depth ($d$) of the extrema pattern, which allows determination of the strength $S$ (=$\frac{1}{12}$$\sqrt{\frac{\pi}{6}}$$\cdot$dw$^{3}$) as well as the width (= $\frac{1}{\sqrt{3}}$$\cdot$w) and height (=$\frac{1}{12}$ $\cdot$dw$^{2}$) of Peak-A (see Supplemental Fig. S1 for details\cite{suppley}). The $S$ is quenched at x=0.06 again, which confirms the $S$=$\int\sigma_{1}^{A}(\omega)d\omega$ analysis. Importantly, x=0.06 is the critical x$_{c}$ for the TI$\rightarrow$NTI transition for the thickness $d$=50QL of our films: that is, the SS vanishes at this x.
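These conversion factors are consistent with a Gaussian line shape; we note this only as a consistency check, not as part of the measurement procedure. Assuming $\sigma_{1}^{A}(E)=h\,e^{-(E-E_{0})^{2}/2\sigma_{G}^{2}}$, the extrema of the second derivative are separated by $w=2\sqrt{3}\,\sigma_{G}$ and the central minimum has depth $d=h/\sigma_{G}^{2}$, so that $$\frac{1}{12}\sqrt{\frac{\pi}{6}}\,d\,w^{3}=\sqrt{2\pi}\,h\,\sigma_{G}=\int\sigma_{1}^{A}\,dE,\qquad \frac{w}{\sqrt{3}}=2\sigma_{G},\qquad \frac{1}{12}\,d\,w^{2}=h.$$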
This correlation of Peak-A with the TI$\rightarrow$NTI transition strongly suggests that Peak-A is related to the topological SS of Bi$_{2}$Se$_{3}$. We emphasize that this behavior is strikingly different from that of the other optical absorption features: In Supplemental Fig.S2\cite{suppley}, we show that the Drude absorption vanishes at x$\sim$0.5, the phonon peak splits at x=0.12, and the IB survives up to x=0.9. (For x=1, $\rm{In_{2}Se_{3}}$ is a large-gap band insulator with $E_{\rm{g}}$ $>$ 1.5eV.) Note that none of these features are correlated with x$_{c}$. In contrast, Peak-A manifests a clear correlation with x$_{c}$ and is the only absorption feature of this kind. \newline Given the correlation of Peak-A with the SS, one can propose possible pictures of how Peak-A is created. Specifically, Peak-A can arise when (1) an SS electron is excited into an empty state lying 1eV above, or alternatively (2) an electron lying 1eV below is excited into the SS. In both scenarios Peak-A becomes extinct when the SS is suppressed at x$_{c}$. To find out which scenario is correct, we performed an electrical gating experiment on the Bi$_{2}$Se$_{3}$ film ($d$=8QL). For this a Bi$_{2}$Se$_{3}$ film was grown on a SiO$_{2}$/Si substrate and the optical transmission was measured while a gate voltage $V_{\rm{G}}$ was applied between the film and the Si. In this back-gate configuration, the Fermi energy E$_{F}$ shifts down (up) for negative (positive) $V_{\rm{G}}$ due to electron depletion (accumulation) in the film. Fig.3(b) shows that T($V_{\rm{G}}$)/T(0) changes in the Far-IR, in the mid-IR, and at 1eV. Figure 3(c) shows that Peak-A becomes stronger (weaker) for negative (positive) $V_{\rm{G}}$. Such a change supports scenario (2) over (1) for the following reason: For $V_{\rm{G}}$$<$0 the electron occupation of the SS is reduced and more empty SS states become available, which strengthens transition (2), in agreement with the increase of the Peak-A strength. This relation is visualized in Fig.4. In scenario (1), on the other hand, the number of surface electrons is decreased and the peak would become weaker, opposite to the observed behavior of Peak-A. Therefore, the $V_{\rm{G}}$-dependent result demonstrates that Peak-A arises most likely from the excitation of electrons from a state lying 1eV below into the SS. In this transition the electron occupation of the SS is increased; equivalently, the SS is optically populated by illuminating Bi$_{2}$Se$_{3}$ with $\hbar\omega$=1eV. Here we remark that the $V_{\rm{G}}$-dependent change in Figure 3c is very small, less than 0.1\%. Nevertheless the Peak-A change is successfully measured, demonstrating the superior sensitivity and stability of our experiment. For later analysis we calculate the $V_{\rm{G}}$-dependent $\sigma_{1}(\omega)$ from the T($V_{\rm{G}}$)/T(0) data \cite{yu2016infrared, yu2019infrared,yu2019gate} and show it in Figure 3(d). \newline With the nature of Peak-A identified, the next question to be addressed is the origin of the initial state. Before we discuss this issue, we give further thought to the gate-dependent growth of Peak-A. Figure 3 implies that the SS population would become stronger if E$_{F}$ could be brought down further. The latter would be possible if $V_{\rm{G}}$ were applied to high values beyond the limit of our measurement; such high gating was in fact demonstrated experimentally \cite{bansal2014robust}. Here we will consider how large Peak-A will grow in the strong-gating regime. In Figure 4a, we show the bulk interband transition in the mid-IR range.
The onset energy ($\equiv$E$_{op}$) of this transition corresponds to the thick arrow in Fig.4(b). The E$_{op}$ increases when the Fermi level shifts up. Fig.4(c) shows that the increase rate is dE$_{op}$/d$V_{\rm{G}}$ =1.74$\times10^{-4}$[eV/V]$\equiv$$a$. Meanwhile, the $S$=$\int\sigma_{1}^{A}(\omega)d\omega$ calculated from Figure 3(d) decreases at the rate d$s$/d$V_{\rm{G}}$ =-6.06$\times10^{-4}$[1/V]$\equiv$$b$, where $s$=S/S(0V) is $S$ normalized by the ungated $S(0)$. Given $a$ and $b$ we can eliminate $V_{\rm{G}}$ and obtain the $S$-change against E$_{F}$ as d$s$/dE$_{F}$ = (d$s$/d$V_{\rm{G}}$)$\cdot$(d$V_{\rm{G}}$/dE$_{F}$)= $b$/($a$/2)=6.96[1/eV] in magnitude. Here d$V_{\rm{G}}$/dE$_{F}$=$(a/2)^{-1}$ was derived by utilizing the Burstein-Moss relation\cite{burstein1954anomalous,moss1954interpretation,hamberg1984band}, namely, dE$_{F}$/d$V_{G}$= $\frac{1}{2}$$\times$dE$_{op}$/d$V_{\rm{G}}$=$\frac{a}{2}$, where the factor $\frac{1}{2}$ comes from $\frac{1}{m_{CV}^*}=\frac{1}{m_{CB}^*}+\frac{1}{m_{VB}^*}$ and $m_{CB}^*$=$m_{VB}^*$\cite{analytis2010bulk}. This result, d$s$/dE$_{F}$=6.96[1/eV], enables us to estimate $S$ at high gating: For pristine, electron-doped Bi$_{2}$Se$_{3}$ films like ours, the Fermi level typically lies at $E_{F}$$\sim$0.1eV from the CB bottom (CBB). If the gating shifts $E_{F}$ down to the CBB, i.e., $\Delta$E$_{F}$=0.1eV, then $s$ will increase approximately by $\Delta$$s$ $\approx$ [$\frac{ds}{d\rm{E}_{F}}$]$\cdot$ $\Delta$E$_{F}$=0.69. That is, Peak-A grows by $\sim$70$\%$ compared with the ungated strength. If E$_{F}$ is shifted further to the Dirac point, the latter lying $\sim$0.2eV from the CBB, we have $s$=$s(0)$+$\Delta$$s$$\approx$1+[$\frac{ds}{d\rm{E}_{F}}$]$\cdot$(0.1+0.2)=3.09. That is, Peak-A grows by a factor of about three. (Here we assumed $\frac{ds}{d\rm{E}_{F}}$ is constant for simplicity, neglecting its $E_{F}$-dependence.) This estimate shows that a substantial increase of the peak will occur at high gating. From this exercise we also learn that, if $S$ is precisely characterized as a function of E$_{F}$, it could be used to determine the location of the Fermi level in Bi$_{2}$Se$_{3}$ films, which otherwise requires the more demanding ARPES measurement. \newline {\it Discussion.} We now search for a possible candidate for the initial state of Peak-A. For this we refer to the band structure of Bi$_{2}$Se$_{3}$ reported in the experimental \cite{hsieh2009tunable,nechaev2013evidence,hasan2010colloquium,pan2011electronic,wray2010observation,dubroka2017interband,piot2016hole} and theoretical \cite{aguilera2013g,guo2016tuning,forster2015ab,hermanowicz2019iodine} literature, and schematically redraw it in Figure 5. Note in Figure 5 that there is an energy branch lying $\sim$1eV below the SS. Interestingly, this branch E(k) runs nearly parallel to the SS. If we consider the optical transition from this E(k) branch (=i) to the SS (=f), their parallel dispersion $\nabla_{k}\rm{E}^{i}(k)$$\cong$$\nabla_{k}\rm{E}^{f}(k)$ leads to strong absorption because the transition strength S$\sim$$\int \frac{M_{fi}}{|\nabla_{k}\rm{E}^{f}(k)-\nabla_{k}\rm{E}^{i}(k)|} d^{2}k$ becomes divergent. This yields a pronounced absorption at $\hbar\omega$=E$_{f}$-E$_{i}$=1eV, which agrees with the profile of Peak-A. (Here the transition matrix element M$_{fi}$ is assumed to be constant for simplicity.) Therefore this 1eV-E(k) branch is a plausible candidate for the i-state.
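As a worked illustration of this argument (with M$_{fi}$ constant), consider the idealized case of an initial branch exactly parallel to the SS, $\mathrm{E}^{i}(k)=\mathrm{E}^{f}(k)-\Delta$ with $\Delta$=1eV. The corresponding joint density of states then collapses onto a single energy, $$\int d^{2}k\;\delta\big(\mathrm{E}^{f}(k)-\mathrm{E}^{i}(k)-\hbar\omega\big)=\delta(\Delta-\hbar\omega)\int d^{2}k,$$ which is formally divergent at $\hbar\omega$=$\Delta$; a small deviation from exact parallelism broadens this divergence into a finite peak centred near $\hbar\omega$=$\Delta$.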
We think that this assignment can be confirmed when a theoretical calculation of $\sigma_{1}(\omega)$, not currently available, is performed. We remark that the optical transitions of Bi$_{2}$Se$_{3}$ between bulk and surface states in particular have so far been poorly studied, with a rare exception\cite{li2015optical}. As another candidate, native defects such as Se vacancies can produce defect levels below $E_{F}$. However, their energy locations are not well known, and generally such localized, dispersionless levels do not fulfill the $\nabla_{k}\rm{E}^{i}(k)$$\cong$$\nabla_{k}\rm{E}^{f}(k)$ condition. We emphasize that, while supporting work should follow to definitively identify the 1eV bulk E(k) branch as the origin of the i-state, the occurrence of the optical population of the SS in Bi$_{2}$Se$_{3}$ is evident from the properties of Peak-A we have unveiled, regardless of the origin of the i-state. As a further remark on the T($V_{\rm{G}}$)/T(0) data, Fig.3(b) shows that gate-dependent changes occur in the Far-IR and mid-IR ranges as well. A similar change was reported for bulk-insulating $\rm{(Bi_{1-x}Sb_{x})_{2}Te_{3}}$ films\cite{whitney2016gate}. While for $\rm{(Bi_{1-x}Sb_{x})_{2}Te_{3}}$ the mid-IR modulation peaks at $\sim$0.3eV, the modulation for Bi$_{2}$Se$_{3}$ occurs at higher energy, peaking at 0.45 eV. This difference is attributed to the fact that the interband transition, taking place at E$_{op}$=E$_{g}$+2E$_{F}$, is higher for Bi$_{2}$Se$_{3}$, where E$_{F}$ is significant ($\sim$0.1 eV), compared with the insulating $\rm{(Bi_{1-x}Sb_{x})_{2}Te_{3}}$, where E$_{F}$ is considered to be much lower. Also, while the modulations in the mid-IR and far-IR inevitably contain contributions from both bulk and surface states, the modulation strength of the 1eV feature is weaker, which may further support the surface-related origin. Further quantitative analysis and comparison will be published separately. \newline In conclusion, we performed broadband optical absorption measurements on pristine Bi$_{2}$Se$_{3}$ and In-substituted $\rm{(Bi_{1-x}In_{x})_{2}Se_{3}}$ thin films, as well as on electrically gated Bi$_{2}$Se$_{3}$. The absorption Peak-A that occurs at $\hbar\omega$=1eV showed a clear correlation with the In-driven TI-NTI phase transition: it is activated at x$<$x$_{c}$ (TI phase) but completely vanishes for x$>$x$_{c}$ (NTI) along with the quenching of the topological surface state. Furthermore, Peak-A becomes stronger/weaker upon electron depletion/injection into the Bi$_{2}$Se$_{3}$ in the electrical gating measurement. The two experimental results provide convincing evidence that Peak-A arises from the population of the SS, i.e., the optical excitation of electrons from 1eV below into the SS. This optical SS population increases the surface electron density and can thus enhance the topological electrical conduction, which promotes TI device applications. Similar optically driven SS population may be realized in other TI materials as well, which should be investigated in the future. {\it Note added:} For our $\rm{(Bi_{1-x}In_{x})_{2}Se_{3}}$ films, the bulk transition E$_{op}$ shows a different x-dependent behavior from Ref.\cite{wu2013sudden}. This may be because the carrier doping due to Se vacancies is sample-dependent for these TI films. See Supplemental Fig.S3\cite{suppley}. \newline {\it Acknowledgments.} This work was supported by the 2016 sabbatical year research grant of the University of Seoul.
JM and SO are supported by the Gordon and Betty Moore Foundation’s EPiQS Initiative (GBMF4418) and National Science Foundation (NSF) grant EFMA-1542798.
\section{Introduction} \label{chap:ADiT-sec:Intro} Top-$k$ queries retrieve the $k$ tuples of a query result which score best for a given objective function. Top-$k$ queries help to overcome the problems of overly large query results on the one hand and of too low recall if the query is more constrained on the other, and are therefore a promising technology for improving and accelerating search in various data collections, e.g. the search for suitable samples in biobanks \cite{Eder:2009:ISF:1616930.1616937}, our main application area. Top-$k$ queries are also popular for providing users with ranked search results, as they are used to from web search engines. Top-$k$ queries, in particular the optimization of top-$k$ query processing for central databases, have received a lot of attention \cite{BPA2,Bruno:2002:TKS:568518.568519,Dabringer:2011:ETR:1982185.1982414,Dabringer:2011:FTQ:2033546.2033563,962155,1325952,1391730,1272749,MS_TopK,OPT}. Optimizing top-$k$ queries in distributed environments, in particular in highly distributed networks of federated or peer to peer databases, still has significant research needs. Current distributed top-$k$ query processing approaches focus either on reducing the number of transmitted queries \cite{Akbarinia:2006:RNT:1136637.1136639,Hagihara:2009:MPM:1590953.1590977}, on keeping the number of transported objects low \cite{Balke2005,Conner2007,Fang:2010:BPA:1917832.1918852,Ryeng:2011:EDT:1997251.1997277}, or on reducing the communication costs~\cite{fang2014efficient}. However, both the transmitted objects and messages affect the \textit{system effort} and \textit{query response time} in a peer to peer system. Therefore, we introduce an adaptive distributed top-$k$ (short ADiT) query processing approach considering both. To the best of our knowledge ADiT is the first approach using the number of messages \textit{and} the number of transmitted objects to measure \textit{system effort} and \textit{query response time}. Processing a top-$k$ query in a p2p network with horizontal partitioning involves sending a top-$k$ query to each peer. The optimization problem now is to determine a proper $k_p$ for each peer $p$, i.e. how many objects should be fetched from which peer. If this $k_p$ is too large, it results in unnecessary computation at the peers' sites and unnecessary traffic. If it is too low, additional queries have to be sent to the peers. In our approach several parameters are used to calculate a proper $k_p$: \begin{itemize} \item size of the peer to peer network \item number $k$ of searched objects \item network capabilities of each peer, i.e. the transmission rate \item number of objects stored on each connected peer \item speed of a peer, i.e. the searching performance of that peer \end{itemize} In the following we show the general architecture for processing top-$k$ queries in a p2p environment and derive some heuristics based on the parameters outlined above. We describe the implementation of the ADiT approach and show the performance gains of this approach in an extensive set of experiments. \section{ADiT in General} \label{chap:ADiT-sec:General} \underline{A}daptive \underline{Di}stributed \underline{T}op-K query processing (short ADiT) is able to process distributed top-$k$ queries over horizontally partitioned data exactly. ADiT assumes a dynamic peer to peer network. Each peer has variable bandwidth capabilities and individual message costs.
In contrast to other approaches \cite{Ryeng:2011:EDT:1997251.1997277,Vlachou:2008:ETQ:1376616.1376692} ADiT does not rely on caching techniques. Thus the performance does not depend on stable data or on recurring queries. The aim of ADiT is to achieve a low overall system effort as well as a fast query response time. The first parameter, the overall system effort, is defined as the sum over all peers of the time needed for (1) sending requests to other peers in the network to obtain further objects, (2) searching objects, and (3) transmitting objects. The second parameter is the query response time, the time elapsed between submitting a query and the return of the result. Formula \ref{chap:ADiT-eq:syseffort} and formula \ref{chap:ADiT-eq:queryAnswerTime} define the system effort and the query response time, respectively, where $MsgCount_i$ is the total number of messages sent to peer $P_i$ and $n_i$ is the number of objects retrieved from peer $P_i$. We use the following abbreviations throughout this paper: N is the peer to peer network, Q is the top-$k$ query, R is the queried relation, and $P_i$ is a peer in the peer to peer system. \begin{eqnarray} \label{chap:ADiT-eq:syseffort} SE(N, Q, R) & = & \sum_{i=1}^{|P|} CommCosts(N.P_i, MsgCount_i) +\\ & & DBCosts(N.P_i, Q, R, n_i) + \nonumber \\ & & TransCosts(N.P_i, R, n_i) \nonumber \end{eqnarray} \begin{eqnarray} \label{chap:ADiT-eq:queryAnswerTime} QueryAnswerTime (N, Q, R) & = & max(CommCosts(N.P_i, MsgCount_i),\\ & & DBCosts(N.P_i, Q, R, n_i), \nonumber \\ & & TransCosts(N.P_i, R, n_i)) \nonumber \end{eqnarray} The unit of system effort as well as of query response time is seconds. Thus the different costs need to be mapped to a time factor. Function \ref{chap:ADiT-eq:CommunicationCosts} defines how sending $MsgCount$ requests to peer $P$ is mapped to a time factor. The number of incoming messages is multiplied by the constant costs that arise when establishing a connection to peer $P$. This gives the amount of time spent on sending $MsgCount$ messages to peer $P$. \begin{eqnarray} \label{chap:ADiT-eq:CommunicationCosts} CommCosts (P, MsgCount) = P_{MsgCosts} * MsgCount \end{eqnarray} Function \ref{chap:ADiT-eq:TransmissionCosts} defines how retrieving $n$ objects from relation $R$ of peer $P$ is mapped to a time factor. The transmission costs are influenced by the size of the objects in relation $R$ on peer $P$ and by the transmission rate of peer $P$. \begin{eqnarray} \label{chap:ADiT-eq:TransmissionCosts} TransCosts (P, R, n) = \frac{(P_{R_{ObjectSize}} * n)} {P_{TransRate}} \end{eqnarray} The database costs ($DBCosts(N.P_i, Q, R, n)$) for searching the best $n$ objects in relation $R$ on peer $P_i$ strongly depend on the top-$k$ approach used on peer $P_i$, the performance of the answering peer $P_i$, and the issued query $Q$, e.g. on the number of restrictions. ADiT assumes that each peer provides an estimate of the time needed to return the \textit{top-$k$} objects for a query with \textit{m} restrictions on a relation with size \textit{N}. There is no assumption about which procedure a peer uses to process top-$k$ queries. ADiT works iteratively and calculates a separate fetch size $k'_p$ for each peer in each iteration. ADiT then broadcasts the query $Q$ \textit{in parallel} and gathers the top-$k'_p$ objects from each peer $p$. Finally, ADiT tries to publish objects and repeats the process if necessary.
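To illustrate the cost model, the following sketch implements formulas \ref{chap:ADiT-eq:syseffort}--\ref{chap:ADiT-eq:TransmissionCosts} in Python for a hypothetical network. It is an illustration only: the database cost is a placeholder, since ADiT merely assumes that each peer supplies such an estimate, and the query response time is interpreted here as the maximum of the per-peer totals because the peers are queried in parallel.
\begin{lstlisting}[caption={A Python sketch of the ADiT cost model (illustrative only)}]
# Illustrative sketch of the ADiT cost model; db_costs is a placeholder
# because ADiT only assumes that each peer supplies such an estimate.
class Peer:
    def __init__(self, msg_costs, trans_rate, object_size, db_time_per_object):
        self.msg_costs = msg_costs            # seconds per request message
        self.trans_rate = trans_rate          # transmission rate in MBit/s
        self.object_size = object_size        # size of an object of R in MBit
        self.db_time_per_object = db_time_per_object  # placeholder estimate

    def comm_costs(self, msg_count):          # CommCosts(P, MsgCount)
        return self.msg_costs * msg_count

    def trans_costs(self, n):                 # TransCosts(P, R, n)
        return (self.object_size * n) / self.trans_rate

    def db_costs(self, n):                    # placeholder for DBCosts
        return self.db_time_per_object * n

def system_effort(peers, msg_counts, ns):     # SE(N, Q, R)
    return sum(p.comm_costs(m) + p.db_costs(n) + p.trans_costs(n)
               for p, m, n in zip(peers, msg_counts, ns))

def query_response_time(peers, msg_counts, ns):
    # interpreted as the maximum per-peer total, since peers work in parallel
    return max(p.comm_costs(m) + p.db_costs(n) + p.trans_costs(n)
               for p, m, n in zip(peers, msg_counts, ns))
\end{lstlisting}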
There are two major possibilities for tuning: choosing an appropriate fetch size $k'_p$ for each peer in each iteration and avoiding contacting peers which cannot contribute to the result. For choosing the fetch size there are two extreme cases: \begin{enumerate} \item Setting $k'_p = 1$ for each peer leads to a minimal number of \textit{transmitted objects} but to a higher number of \textit{transmitted messages}. \item Setting $k'_p = k$ for each peer leads to a minimal number of \textit{transmitted messages} but to a higher number of \textit{transmitted objects}. \end{enumerate} In the rest of this paper we will focus on how to tune this basic distributed top-$k$ query processing approach. \section{Heuristic Fetch Size Calculations} \label{chap:ADiT-sec:LargeTestDerivingHeuristics} By analyzing a large number of queries and varying the influencing factors \cite{phddabringer} we developed two heuristics (basic and enhanced) for choosing a good fetch size $k'_p$ for each individual peer $p$. \noindent \textbf{Basic Heuristics.} The basic heuristics shown in equation \ref{chap:ADiT-eq:fetchSizeHeuristicBasic} only uses the number of relevant peers $N_{Size}$ and the number of searched objects $k$ to derive a common fetch size $f$ for all peers. The basic heuristics does not assume any particular data distribution. Thus it tries to retrieve an equal number of objects from each peer. In case $k$ is larger than $N_{Size}$ the basic heuristics equally distributes $k$ among the available peers. Otherwise the basic heuristics calculates the smallest multiple of $k$ which is greater than or equal to $N_{Size}$ and equally distributes this amount among the available peers. The $consFactor$ is used to increase the fetch size since it is unlikely that each peer will contribute the same number of objects. This increase allows fetching more objects and keeps the number of iterations small. Our initial experiments showed that a $consFactor$ of 2 leads to good results, e.g. few iterations and thus few messages exchanged in the p2p network. If the data is not distributed equally, $consFactor$ should be chosen higher. \begin{eqnarray} \label{chap:ADiT-eq:fetchSizeHeuristicBasic} f & = & min(k, consFactor * \left \lceil \frac {N_{Size}} {k} \right \rceil * \frac{k} {N_{Size}}) \end{eqnarray} \noindent \textbf{Enhanced Heuristics.} The enhanced heuristics calculates the fetch size $k'_p$ for each peer p \textit{separately}. It uses additional parameters to adjust the fetch size for each peer properly: \begin{itemize} \item $ObjectsStored_{p}$: Number of objects stored on peer p. \item $ObjectsStored_{N}$: Number of objects stored in the peer to peer system N, i.e. $sum(ObjectsStored_{p})$ \item $Speed_{p}$: Query processing speed of peer p, e.g. a value between 1 and 10 where 1 is the slowest and 10 the fastest speed. \item $maxSpeed_{N}$: Maximum query processing speed of a peer in the peer to peer system N. \item $TransRate_{p}$: The transmission rate describing how fast the network connection of a certain peer is. This value is given in MBit per second. \item $maxTransRate_{N}$: Maximum transmission rate of a peer in the peer to peer system N. \end{itemize} The knowledge gathered during query processing iterations comprises the following parameters: \begin{itemize} \item $ObjectsRetrieved_{p}$: Number of objects of peer p which have already been retrieved, initially 0. \item $ObjectsPublished_{p}$: Number of objects of peer p which made it into the top-$k$ answers, initially 0.
\item $ObjPub_{N}$: Number of objects already returned to the user, initially 0.
\end{itemize}
All these parameters are used to calculate different weights which influence the enhanced heuristics. Applying the basic heuristics to the large test scenarios showed that the proposed fetch size should be treated as a lower limit. Therefore, the enhanced heuristics uses the different weights to \textit{increase the fetch size} determined with the basic heuristics. To accomplish that, the enhanced heuristics maps its weights to the interval $[1, 2]$. This prevents fetching fewer objects than the basic heuristics suggests. The enhanced heuristics assumes that all previous iterations can be used to reason about the following iterations, i.e. it assumes that peers that contributed more objects in previous iterations will also contribute more objects in the following iterations. This assumption is reflected in weight $w_{pF}$ which is defined in equation \ref{chap:ADiT-eq:weightPubFrac}. The more objects a peer published compared to all other peers, the more objects are gathered from this peer \textit{in the next iteration}.
\begin{eqnarray}
\label{chap:ADiT-eq:weightPubFrac}
w_{pF} = (1 + \frac{ObjectsPublished_{p}} {ObjPub_{N}})
\end{eqnarray}
The enhanced heuristics tries to reduce the number of fetched objects which are not needed. Thus it fetches more objects from peers where the ratio between published and fetched objects is high. Equation \ref{chap:ADiT-eq:weightUsedFrac} shows the definition of weight $w_{uF}$.
\begin{eqnarray}
\label{chap:ADiT-eq:weightUsedFrac}
w_{uF} = (1 + \frac{ObjectsPublished_{p}} {ObjectsRetrieved_{p}})
\end{eqnarray}
The enhanced heuristics assumes that peers which store more objects will contribute more to the final answer. Thus it suggests fetching more objects from larger peers. It uses equation \ref{chap:ADiT-eq:weightDBFrac} to incorporate that fact, namely weight $w_{DBF}$.
\begin{eqnarray}
\label{chap:ADiT-eq:weightDBFrac}
w_{DBF} = (1 + \frac{ObjectsStored_{p}} {ObjectsStored_{N}})
\end{eqnarray}
Since it is cheap to ask a faster peer for more objects, the enhanced heuristics defines $w_{Speed}$ and $w_{TransRate}$. Equation \ref{chap:ADiT-eq:weightSpeed} models the fact that more objects should be fetched from peers which are faster in searching their databases.
\begin{eqnarray}
\label{chap:ADiT-eq:weightSpeed}
w_{Speed} = (1 + \frac{Speed_{p}} {maxSpeed_{N}})
\end{eqnarray}
Equation \ref{chap:ADiT-eq:weightTransRate} deals with the transmission of objects. It reflects that more objects should be fetched from peers which have a higher transmission rate.
\begin{eqnarray}
\label{chap:ADiT-eq:weightTransRate}
w_{TransRate} = (1 + \frac{TransRate_{p}} {maxTransRate_{N}})
\end{eqnarray}
The weights described in equations \ref{chap:ADiT-eq:weightPubFrac}-\ref{chap:ADiT-eq:weightTransRate} are used by the enhanced heuristics to influence the basic heuristics. The weighted fetch size is determined with the heuristic function shown in equation \ref{chap:ADiT-eq:fetchSizeHeuristicWeights}.
\begin{eqnarray}
\label{chap:ADiT-eq:fetchSizeHeuristicWeights}
k'_{p} & = & \min(k - ObjPub_{N}, \left \lceil f * w_{pF} * w_{uF} * w_{DBF} * w_{Speed} * w_{TransRate} \right \rceil) \nonumber \\
& &
\end{eqnarray}
The upper bound for the fetch size $k'_{p}$ is the number of missing objects, namely $k - ObjPub_{N}$; the enhanced heuristics never fetches more than this amount from any peer in the peer to peer system.
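The following listing is a minimal Python sketch of the basic heuristics (equation \ref{chap:ADiT-eq:fetchSizeHeuristicBasic}) and of the enhanced heuristics (equations \ref{chap:ADiT-eq:weightPubFrac}--\ref{chap:ADiT-eq:fetchSizeHeuristicWeights}). The attribute names on \texttt{peer} and \texttt{net} mirror the parameters listed above; rounding the fetch sizes up to integers and the guards against division by zero in the first iteration (when nothing has been retrieved or published yet) are our own assumptions.
\begin{lstlisting}[caption=Python sketch of the basic and enhanced fetch size heuristics (illustrative), label=chap:ADiT-listing:FetchSizeSketch]
import math

def basic_fetch_size(k, n_size, cons_factor=2):
    # f = min(k, consFactor * ceil(N_Size / k) * k / N_Size), rounded up to an integer
    f = cons_factor * math.ceil(n_size / k) * k / n_size
    return max(1, min(k, math.ceil(f)))

def enhanced_fetch_size(peer, net, k, f):
    # Every weight lies in [1, 2], so f acts as a lower limit for the fetch size.
    w_pf  = 1 + peer.objects_published / max(1, net.obj_pub)             # w_pF
    w_uf  = 1 + peer.objects_published / max(1, peer.objects_retrieved)  # w_uF
    w_dbf = 1 + peer.objects_stored / net.objects_stored                 # w_DBF
    w_spd = 1 + peer.speed / net.max_speed                               # w_Speed
    w_tr  = 1 + peer.trans_rate / net.max_trans_rate                     # w_TransRate
    weighted = math.ceil(f * w_pf * w_uf * w_dbf * w_spd * w_tr)
    # never fetch more than the number of still missing objects, k - ObjPub_N
    return min(k - net.obj_pub, weighted)
\end{lstlisting}
In each iteration, $f$ would be computed once from the current number of relevant peers and $k$, and \texttt{enhanced\_fetch\_size} would then be evaluated for every relevant peer separately.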
\subsection{ADiT Processing Iterations}
\label{chap:ADiT-sec:DetailProcessingIterations}
ADiT processes a given distributed top-$k$ query through a number of iterations. Each iteration is used to gather objects from the peers within the system to satisfy the distributed top-$k$ query. In this section we focus on the relevant steps in each iteration. The pseudo-code in listing \ref{chap:ADiT-listing:ADiT} shows how ADiT obtains the best $k$ objects for a list of restrictions. The variables used for storing the maximum remaining score ($maxRemScore$) and all fetched objects ($fetchedObjs$) are assumed to be globally visible to all threads during execution. They are depicted as in-out parameters in all pseudo-codes where they are used. The output produced by the \textit{ADiT}-method is a sorted list of the $k$ objects which score best among all objects in the peer to peer system with respect to the objective function. \textbf{Identify Relevant Peers.} ADiT only distributes the top-$k$ queries to \textit{relevant} peers. A peer $p$ is relevant iff the last delivered object of peer $p$ (i.e. the one with the maximum remaining score on peer $p$) is among the best $k$ of the already fetched objects; otherwise peer $p$ is irrelevant and can be pruned, since it cannot return a better object than its last published object. The set of relevant peers is updated in each iteration. \textbf{Calculating Individual Fetch Sizes.} In each iteration ADiT assigns an individual fetch size $k'_{p}$ to each relevant peer $p$. The fetch size is determined using the enhanced heuristics discussed in section \ref{chap:ADiT-sec:LargeTestDerivingHeuristics}.
\begin{lstlisting}[caption=Pseudo-code for ADiT, label=chap:ADiT-listing:ADiT]
program ADiT (IN string tableName, IN Number k, IN Set<Restriction> restr,
              IN Set<Function> Sim, IN Function Obj,
              I_O Map<Number, object> objects)
var maxRemScore: Number;
var fetchedObjs: Map<Number, object>;
var ObjPublished: Number;
var relPeers: Set<Peer>;
var t: Thread;
begin
  loop
    maxRemScore = 0;
    GetRelevantPeers(objects, I_O relPeers);
    CalcFetchSize(k - objects.count, I_O relPeers);
    -- broadcast
    foreach Peer p in relPeers
      t = new Thread();
      t.start(LocalTopKCall(tableName, restr, Sim, Obj, p,
                            fetchedObjs, maxRemScore));
    end-for;
    -- publish
    PublishObjects(I_O fetchedObjs, I_O maxRemScore, k, relPeers,
                   ObjPublished, objects);
  until ObjPublished == k
end.
\end{lstlisting}
\textbf{Broadcasting Top-K Query.} Within each iteration ADiT gathers objects to satisfy the distributed top-$k$ query. Therefore, ADiT distributes the query throughout the system and obtains $k'_{p}$ objects from each peer \textit{in parallel}. For each relevant peer ADiT starts a separate thread ($LocalTopKCall$) which encapsulates two major tasks: (1) execution of a local top-$k$ query and (2) updating of the maximum remaining score if it changed (listing \ref{chap:ADiT-listing:LocalTopKCall}).
\begin{lstlisting}[caption=Pseudo-code for sending a top-$k$ query to a certain peer, label=chap:ADiT-listing:LocalTopKCall]
program LocalTopKCall (IN string tableName, IN Set<Restriction> restr,
                       IN Set<Function> Sim, IN Function Obj, IN Peer p,
                       I_O Map<Number, object> fetchedObjs,
                       I_O Number maxRemScore)
begin
  p.TQQA(tableName, q = 0, p.k', restr, searchType = AT_MOST, Sim, Obj, p.Objects);
  lock(fetchedObjs, maxRemScore);
    fetchedObjs.AddAll(p.Objects);
    if p.maxScore > maxRemScore then
      maxRemScore = p.maxScore;
    end-if;
  end-lock;
end.
\end{lstlisting}
The first part shows the call of a local $TQQA$ query processor \cite{phddabringer} which is reentrant, i.e. gathering $k'=5$ objects in the first iteration and $k'=10$ objects in the second iteration finally gives the best 15 objects from peer $p$. After the best (or, in each following iteration, the next) $k'$ objects have been retrieved, they are added to a global buffer. Finally, the maximum remaining score is updated in case peer $p$ has a higher maximum score than all other peers. \textbf{Publishing Objects.} The last step in each iteration is the publishing of relevant objects (listing \ref{chap:ADiT-listing:PublishObjects}). Since ADiT is an \textit{exact distributed top-$k$ query processing approach}, it is necessary to wait for all peers to return at least one result. This is indicated with the $waitForAll$ method. After all peers have provided their results, ADiT iterates over the \textit{sorted} map and tests for each object whether its score is greater than or equal to the maximum remaining score. In that case the object can be published. ADiT stops when enough objects have been published.
\begin{lstlisting}[caption=Pseudo-code for the publishing of objects in ADiT, label=chap:ADiT-listing:PublishObjects]
program PublishObjects (I_O Map<Number, object> fetchedObjs,
                        I_O Number maxRemScore, IN Number k,
                        IN Set<Peer> relPeers,
                        I_O Map<Number, object> objects)
begin
  waitForAll(relPeers);
  foreach Element e in fetchedObjs.Elements
    if e.Score >= maxRemScore
      objects[e.Score] = e.Object;
      if objects.count == k then
        break;
      end-if
    else
      break;
    end-if
  end-for
end.
\end{lstlisting}
\section{Prototype and Experiments}
\label{chap:ADiT-sec:ExperimentalResults}
ADiT has been completely implemented in PL-SQL \cite{PLSQL,Feuerstein:1999:OPP:555010} as a set of stored procedures \cite{Owens:1998:BID:272975}. To compare ADiT against a state-of-the-art distributed top-$k$ query processing technique, we also implemented the \textit{algorithm with remainder top-$k$ queries} (short ARTO) \cite{Ryeng:2011:EDT:1997251.1997277} in this database layer.
\subsection{Experimental Setup}
\label{chap:ADiT-sec:ExperimentalSetup}
We performed experiments on two databases: one filled with randomly generated data, and one consisting of a single relation with 68 categorical attributes taken from the \textit{UCI Machine Learning Repository} \cite{UCICensusData,Frank+Asuncion:2010}. This relation contains over 2,400,000 entries, which we distributed among the peers in the network such that the size of the database of each peer varied between 5,000 and 500,000 objects. Within this section we present various diagrams generated from the data produced by the conducted test runs. We primarily focused on the \textit{system effort} caused by a certain query and on the \textit{query response time}.
To make precise statements about ADiT and the enhanced heuristics, we compared the enhanced heuristics against the basic heuristics with a $consFactor$ of 2 and against four other heuristics:
\begin{enumerate}
\item $k'_p = k$
\item $k'_p = 1$
\item $k'_p = \left \lceil \frac{k}{N} \right \rceil$
\item $k'_p = \left \lfloor \frac{k}{N} \right \rfloor$
\item $k'_p = \min(k, 2 * \left \lceil \frac {N_{Size}} {k} \right \rceil * \frac{k} {N_{Size}})$
\end{enumerate}
For an easier comparison of the achieved results we define two ratios: the gain with respect to system effort is defined in equation \ref{chap:ADiT-eq:RatioSysEffort}; the gain achieved for the query response time is shown in equation \ref{chap:ADiT-eq:RatioQAT}. The respective ratios for the comparison with ARTO are defined accordingly.
\begin{eqnarray}
\label{chap:ADiT-eq:RatioSysEffort}
Ratio_{SE} & = & \frac{SystemEffort_{heuristic_i}}{SystemEffort_{heuristic_{enhanced}}}
\end{eqnarray}
\begin{eqnarray}
\label{chap:ADiT-eq:RatioQAT}
Ratio_{QAT} & = & \frac{QueryAnswerTime_{heuristic_i}}{QueryAnswerTime_{heuristic_{enhanced}}}
\end{eqnarray}
\subsection{Discussion of Results}
In figure \ref{chap:ADiT-fig:ratio_qat_19peers_04cons_cens} we can see the $Ratio_{QAT}$ for a query with 4 restrictions. Comparing with figure \ref{chap:ADiT-fig:ratio_qat_49peers_04cons_cens} we can see that all curves get higher in a peer to peer network with 49 peers. Additionally, these first figures already show that the heuristics $k'_p = 1$ is not a good choice since it involves high interaction between the query initiator and the other peers. We can also observe that for the query response time the gain over ARTO increases rapidly when the number of searched objects increases. The ratio grows fast because ARTO needs more sequential message processing when the search amount increases (when the first parallel call was not sufficient).
\begin{figure*}[!ht]
\centering
\parbox{10cm}{\includegraphics[width=10cm]{./ratio_qat_19peers_04cons_cens.pdf}
\caption{Ratio for query response time between \textit{enhanced heuristics}, approximated optimum, ARTO and five different approaches to determine the fetch size $k'_p$ in a peer to peer system with 19 peers and varying search amount K and 4 restrictions on census data.}
\label{chap:ADiT-fig:ratio_qat_19peers_04cons_cens}}
\end{figure*}
In figure \ref{chap:ADiT-fig:ratio_sysEff_49peers_04cons_cens} and figure \ref{chap:ADiT-fig:ratio_qat_49peers_04cons_cens} we can see $Ratio_{SE}$ and $Ratio_{QAT}$ for a query with 4 restrictions in a peer to peer network with 49 peers storing census data. We can observe that the curves of $Ratio_{SE}$ and $Ratio_{QAT}$ are almost identical in shape. They only differ in magnitude, which is a little higher for $Ratio_{QAT}$. This means that the usage of ADiT brings slightly more benefits to a single user than to the whole peer to peer system. This result can be observed over all of the tests. The reason for this behaviour is that ADiT tries to fetch fewer objects from less important peers. Thus these peers do not influence the search process as much as in a setting where all peers contribute the same number of objects. Another reason is that the search time is dominated by the slowest peer. Avoiding high interaction and fetching few objects from such peers can clearly boost query processing. Another observation is that ARTO has a lower system effort for a small search amount.
This can be seen in figure \ref{chap:ADiT-fig:ratio_sysEff_49peers_04cons_cens}. The reason for this is that ARTO can answer queries with fewer messages and fewer transmitted objects. This is because ARTO sequentially asks the peer with the highest remaining score for further objects, which results in less work for the remaining peers. However, in figure \ref{chap:ADiT-fig:ratio_qat_49peers_04cons_cens} we can observe that the query response time is better for ADiT in the same scenario.
\begin{figure*}
\centering
\begin{minipage}{10cm}
\includegraphics[width=10cm]{./ratio_sysEff_49peers_04cons_cens.pdf}
\caption{Ratio for system effort between \textit{enhanced heuristics}, approximated optimum, ARTO and five different approaches to determine the fetch size $k'_p$ in a peer to peer system with 49 peers and varying search amount K and 4 restrictions on census data.}
\label{chap:ADiT-fig:ratio_sysEff_49peers_04cons_cens}
\end{minipage}%
\qquad
\parbox{10cm}{\includegraphics[width=10cm]{./ratio_qat_49peers_04cons_cens.pdf}
\caption{Ratio for query response time between \textit{enhanced heuristics}, approximated optimum, ARTO and five different approaches to determine the fetch size $k'_p$ in a peer to peer system with 49 peers and varying search amount K and 4 restrictions on census data.}
\label{chap:ADiT-fig:ratio_qat_49peers_04cons_cens}}
\end{figure*}
In figure \ref{chap:ADiT-fig:ratio_sysEff_49peers_12cons_cens} and figure \ref{chap:ADiT-fig:ratio_qat_49peers_12cons_cens} we can see the $Ratio_{SE}$ and the $Ratio_{QAT}$ for a query with 12 restrictions in a peer to peer network with 49 peers storing census data. In these two figures we can observe the situation where the enhanced heuristics needs more iterations than the heuristics fetching $k'_p = k$ objects. This situation only occurred once in all of the test cases. Additionally, we see the same effect as in figure \ref{chap:ADiT-fig:ratio_sysEff_49peers_04cons_cens} and figure \ref{chap:ADiT-fig:ratio_qat_49peers_04cons_cens}, i.e. the curves are very similar but the $Ratio_{QAT}$ is a little higher than $Ratio_{SE}$. When comparing figures \ref{chap:ADiT-fig:ratio_sysEff_49peers_12cons_cens} and \ref{chap:ADiT-fig:ratio_qat_49peers_12cons_cens} with figures \ref{chap:ADiT-fig:ratio_sysEff_49peers_04cons_cens} and \ref{chap:ADiT-fig:ratio_qat_49peers_04cons_cens} we can observe that the magnitude of the ratios is almost independent of the number of restrictions. Furthermore, we observe in figure \ref{chap:ADiT-fig:ratio_sysEff_49peers_12cons_cens} and figure \ref{chap:ADiT-fig:ratio_qat_49peers_12cons_cens} that the ratios for ARTO increase at the point where the search amount exceeds the number of peers in the network. This shows that it is better to ask each peer for more than only one object even when calling them sequentially.
\begin{figure*}
\centering
\begin{minipage}{10cm}
\includegraphics[width=10cm]{./ratio_sysEff_49peers_12cons_cens.pdf}
\caption{Ratio for system effort between \textit{enhanced heuristics}, approximated optimum, ARTO and five different approaches to determine the fetch size $k'_p$ in a peer to peer system with 49 peers and varying search amount K and 12 restrictions on census data.}
\label{chap:ADiT-fig:ratio_sysEff_49peers_12cons_cens}
\end{minipage}%
\qquad
\parbox{10cm}{\includegraphics[width=10cm]{./ratio_qat_49peers_12cons_cens.pdf}
\caption{Ratio for query response time between \textit{enhanced heuristics}, approximated optimum, ARTO and five different approaches to determine the fetch size $k'_p$ in a peer to peer system with 49 peers and varying search amount K and 12 restrictions on census data.}
\label{chap:ADiT-fig:ratio_qat_49peers_12cons_cens}}
\end{figure*}
The most important observations gathered through the performed test runs on random and US Census data are:
\begin{enumerate}
\item ADiT is up to 200 times faster than ARTO in case the search amount gets higher than the number of peers in the network.
\item The system effort caused by ADiT is up to 8 times lower than the system effort caused by ARTO in case the search amount gets higher than the number of peers in the network.
\item The query response time of ARTO is in most cases worse than the query response time achieved with any of the presented ADiT heuristics.
\end{enumerate}
Additionally, we found some characteristics appearing in almost all test runs:
\begin{itemize}
\item The enhanced heuristics is close to the approximated optimum gathered through the extensive tests on the \textit{US Census Data (1990) Data Set}.
\item The enhanced heuristics is better than all other presented heuristics, except in one single query (see figure \ref{chap:ADiT-fig:ratio_sysEff_49peers_12cons_cens} and figure \ref{chap:ADiT-fig:ratio_qat_49peers_12cons_cens}).
\item The enhanced heuristics is between 2 and 32 times faster than the heuristics always fetching \textit{1 object} from each peer in parallel.
\item The enhanced heuristics is about 3 to 8 times faster than the heuristics fetching $\left \lceil \frac{k}{N} \right \rceil$ or $\left \lfloor \frac{k}{N} \right \rfloor$ objects from each peer in parallel.
\item The enhanced heuristics is between 1.5 and 2.5 times faster than the heuristics fetching \textit{k objects} from each peer in parallel.
\item The basic heuristics and the heuristics fetching \textit{k objects} from each peer in parallel turned out to be better than the other heuristics.
\end{itemize}
\section{Conclusion}
\label{chap:ADiT-sec:Conclusion}
We discussed distributed top-$k$ query processing from a new perspective. We motivated the need for an adaptive distributed top-$k$ query processing approach (short ADiT) and defined two goal measures, namely (1) the system effort and (2) the query response time. Based on data gathered through extensive experiments we derived a heuristics which can be used to determine a separate fetch size for each peer. We tested the developed heuristics with a large real data set, namely the US Census Data. In these tests we compared the enhanced heuristics against other heuristics and against ARTO \cite{Ryeng:2011:EDT:1997251.1997277}. We could show that ADiT can accelerate the query response time and reduce the consumption of system resources significantly.
Furthermore, we saw that the \textit{enhanced heuristics} is in most cases close to the best system effort and query response time approximately determined upfront. Additionally, we found that a heuristics fetching more objects is usually the better choice since searching and transmitting a few more objects has much lower costs than sending an additional request. Last but not least the gains achieved with ADiT increase with the size of the peer to peer network and the number of requested results $k$. \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} The value of astrophysical sky surveys in various wavelength ranges for their subsequent use in studying the physical and statistical properties of various populations of sources is directly related both to the completeness of the surveys themselves and to the completeness of identifying and determining the nature of the objects detected in them. At present, the most complete all-sky hard X-ray ($>15$ keV) surveys are the INTEGRAL (Krivonos et al. 2010a, 2012; Bird et al. 2010) and Swift (Cusumano et al. 2010; Baumgartner et al. 2013) surveys. These have a very high identification completeness of the detected sources; in particular, this completeness reaches 92\% in the INTEGRAL Galactic survey (Krivonos et al. 2012). Such a high percentage has been reached through the long-term work of several scientific groups in the world (see, e.g., the review by Parisi et al. 2013, and references therein; Masetti et al. 2007, 2010; Tomsick et al. 2008, 2009), including our work (Bikmaev et al. 2006, 2008; Burenin et al. 2008, 2009; Lutovinov et al. 2012a, 2012b; Karasev et al. 2012), using soft X-ray ($<10$ keV), optical, and infrared observations. This work is the next one in our program on the localization and identification of hard X-ray sources from the INTEGRAL and Swift catalogs. Four objects from the INTEGRAL Galactic survey catalog (Krivonos et al. 2012) and the Swift 70- month all-sky catalog (Baumgartner et al. 2013) were included in the sample: IGR J22534+6243, SWIFT J1553.6+2606, SWIFT J1852.2+8424, and SWIFT J1852.8+3002. Apart from the results of our optical observations, we also present the results of our spectral and timing analysis for these sources obtained from INTEGRAL, Swift, ROSAT and Chandra data. \section{Observations and data analysis} \label{sec:data} The objects being investigated here were detected by the IBIS/INTEGRAL (Winkler et al. 2003) and BAT/Swift (Gehrels et al. 2004) telescopes as weak hard X-ray ($>15$ keV) sources. The typical positional accuracy of such objects for the above instruments is $4-7$\arcmin\ (Krivonos et al. 2010b; Tueller et al. 2010), which makes their identification in the optical and infrared wavelength ranges virtually impossible. The sky regions around SWIFT J1553.6+2606, SWIFT J1852.2+8424, and SWIFT J1852.8+3002 were observed by the XRT/Swift telescope, which allowed one to detect them in the soft X-ray ($0.6-10$ keV) energy band and to improve the positional accuracy to a few arcseconds (Baumgartner et al. 2013). IGR J22534+6243 was first detected on the total sky map obtained during the INTEGRAL nine-year Galactic survey (Krivonos et al. 2012). The study of the archival data showed that the sky region around IGR J22534+6243 was previously observed by the ROSAT and Chandra observatories as well as by the XRT/Swift telescope and that a soft X-ray source that can be identified with the objects 1RXS J22535.2+624354, CXOU J225355.1+624336, and 2MASS J22535512+6243368 (Landi et al. 2012; Israel and Rodriguez 2012) is registered at a statistically significant level within the INTEGRAL error circle in all these observations. In addition, based on Chandra and XRT/Swift data, Halpern (2012) detected X-ray pulsations from this object with a period of $\sim46.67$ s. In combination with the properties of the optical star (Masetti et al. 2012), this allowed IGR J22534+6243 to be presumably classified as belonging to the class of X-ray pulsars that are members of high-mass Xray binaries. 
The coordinates of all four objects being studied that we obtained by analyzing the XRT data are given in Table 1; the positional accuracy is $\simeq3.5$\arcsec. \begin{table}[] \centering \footnotesize{ \caption{List and coordinates of sources} \medskip \begin{tabular}{lcc} \hline \hline Name & RA & Dec \\ & (J2000) & (J2000) \\[1mm] \hline SWIFT\,J1553.6+2606 & 15$^h$ 53$^m$ 34.91$^s$ & 26\deg 14\arcmin 41.8\arcsec \\ SWIFT\,J1852.2+8424A& 18$^h$ 50$^m$ 24.76$^s$ & 84\deg 22\arcmin 41.0\arcsec \\ SWIFT\,J1852.2+8424B& 18$^h$ 46$^m$ 50.25$^s$ & 84\deg 25\arcmin 02.2\arcsec \\ SWIFT\,J1852.8+3002 & 18$^h$ 52$^m$ 49.50$^s$ & 30\deg 04\arcmin 27.3\arcsec \\ IGR\,J22534+6243 & 22$^h$ 53$^m$ 55.10$^s$ & 62\deg 43\arcmin 36.8\arcsec \\ \hline \end{tabular} } \end{table} The main optical observations were carried out on the night of July 5, 2012, with the 1.5-m Russian-Turkish (RTT-150) telescope using the \emph{TFOSC}\footnote{http://hea.iki.rssi.ru/rtt150/ru/index.php?page=tfosc} medium- and low-resolution spectrograph. For our spectroscopy, we used grism N15 with a spectral resolution of $\approx12\AA$ (the full width at half maximum that gives the widest wavelength range ($3500 - 9000\AA$) and the highest quantum efficiency; the signal integration time was 900 s for SWIFT J1852.8+3002 and IGR J22534+6243, 1200 s for SWIFT J1852.2+8424A, and 1800 s for SWIFT J1553.6+2606 and SWIFT J1852.2+8424B. Additional optical spectroscopy for IGR J22534+6243 was performed on March 1, 2013, with the AZT-33IK telescope at the Sayansk Observatory of the Institute for Solar-Terrestrial Physics, the Siberian Branch of the Russian Academy of Sciences. For these observations, we used the UAGS spectrograph mounted at the Cassegrain focus of the telescope and equipped with a $1300$ lines/mm grating, which made it possible to take a spectrum in the range $6250-6850\AA$, near the $H\alpha$ line, with a resolution of about $4\AA$. We processed the optical data in a standard way, using the \emph{IRAF}\footnote{http://iraf.noao.edu} software and our own software. For the spectral and timing analysis of the sources in the $0.2-10$ keV energy band, we used XRT/Swift and ROSAT and Chandra (for IGR J22534+6243) observational data. They were processed with the appropriate software\footnote{http://swift.gsfc.nasa.gov and http://cxc.harvard.edu/ciao/} and the FTOOLS 6.11 software package. A hard X-ray ($>20$ keV) spectrum of IGR J22534+6243 was reconstructed from INTEGRAL data using software developed at the Space Research Institute of the Russian Academy of Sciences (for more details, see Krivonos et al. 2010b). \section{Results} \label{sec:res} Most of the objects from Table 1 lie fairly high above the Galactic plane, while IGR J22534+6243, though this source is close to it ($b\simeq3$\deg), is nevertheless far from the central Galactic regions ($l\simeq110$\deg). Therefore, in principle, the positional accuracy ($\simeq3.5$\arcsec) is high enough for soft X-ray sources to be identified at optical wavelengths. On the other hand, the weak objects from the Swift catalog detected by the BAT telescope have a positional accuracy of $\sim6-7$\arcmin\ (Tueller et al. 2010). This can lead to ambiguities even at the stage of their identification with soft X-ray sources. There are two such sources among the four objects from Table 1. 
\bigskip \subsection*{SWIFT\,J1553.6+2606} \begin{figure*} \vbox{ \hbox{ \includegraphics[width=\columnwidth,bb=37 169 574 621,clip]{Lutovinov_2013_Fig1a.ps} \hspace{3mm}\includegraphics[width=\columnwidth,bb=37 169 574 621,clip]{Lutovinov_2013_Fig1b.ps} } \hbox{ \includegraphics[width=1.02\columnwidth,bb=33 167 568 670,clip]{Lutovinov_2013_Fig1v.ps} \includegraphics[width=0.98\columnwidth,bb=140 270 547 680,clip]{Lutovinov_2013_Fig1g.ps} } } \caption{(a) X-ray and (b) optical images of the sky regions around SWIFT J1553.6+2606. The circle indicates the BAT error circle of the object (7\arcmin\ in radius). Numbers 1 and 2 and the arrows mark the positions of its presumed X-ray and optical counterparts. (c) The RTT-150 optical spectrum of the second source. (d) The XRT energy spectra of sources 1 (dots) and 2 (crosses). The solid and dashed lines, respectively, indicate the best fits. For clarity, the spectrum of source 2 was multiplied by 0.1. \label{swiftj15536}} \end{figure*} \begin{figure*} \vbox{ \hbox{ \includegraphics[width=\columnwidth,bb=37 169 574 621,clip]{Lutovinov_2013_Fig2a.ps} \hspace{3mm}\includegraphics[width=\columnwidth,bb=37 169 574 621,clip]{Lutovinov_2013_Fig2b.ps} } \hbox{ \includegraphics[width=\columnwidth,bb=33 167 568 681,clip]{Lutovinov_2013_Fig2v.ps} \includegraphics[width=\columnwidth,bb=33 167 568 681,clip]{Lutovinov_2013_Fig2g.ps} } } \caption{(a) Soft X-ray and (b) optical images of the sky regions around SWIFT J1852.2+8424. The circle indicates the BAT error circle (7\arcmin). Letters A and B indicate the positions of its X-ray and optical counterparts. The RTT-150 optical spectra of sources A (c) and B (d). \label{swiftj18522a}} \end{figure*} In the Swift 70-month catalog (Baumgartner et al. 2013), the object SDSS J155334.73+261441.4, which is a quasar at redshift $z=0.1664\pm0.0013$, is specified as an optical counterpart of SWIFT J1553.6+2606. However, the separation between the BAT position of SWIFT J1553.6+2606 and the quasar's position in the sky is $\simeq9$\arcmin, which exceeds the BAT positional accuracy (Tueller et al. 2010). At the same time, within the Swift/BAT error circle there is another X-ray source with coordinates RA=15$^h$ 53$^m$ 38.12$^s$, Dec=26\deg 04\arcmin 38.8\arcsec\ that can also be associated with SWIFT J1553.6+2606. This fact is illustrated in Fig. 1a, where an XRT X-ray image of the sky region around SWIFT J1553.6+2606 is shown; numbers 1 and 2 indicate the quasar SDSS J155334.73+261441.4 and the second soft Xray source. Figure 1b shows the same sky region at optical wavelengths based on red Palomar Digital Sky Survey plates. Since the nature of the first object is known, we performed spectroscopy only for the second source to determine its nature. Balmer emission lines corresponding to zero redshift and broad absorption lines are clearly seen in our spectrum (Fig. 1c). Such an optical spectrum can correspond to the spectrum of an M dwarf and a star with an active chromosphere. The XRT X-ray spectra of sources 1 and 2 (the observations were carried out several times in June-October 2010, ObsID 41177, the total exposure time is $\sim9.3$ ks) show a striking difference (Fig. 1d). The spectrum of the first of them is typical of quasars and can be fitted by a power-law dependence of the photon flux density on energy, $dN/dE\propto E^{-\Gamma}$, with a photon index $\Gamma=1.7\pm0.4$. 
There is absorption in the source's spectrum that exceeds the interstellar one in this direction ($\sim4\times10^{20}$ cm$^{-2}$; Dickey and Lockman 1990), but the significance of this measurement is not very high, $N_{H}=0.32^{+0.26}_{-0.16}\times10^{22}$ cm$^{-2}$. The X-ray spectrum of the second objects turns out to be considerably softer; no signal is detected above 2 keV and the spectrum itself corresponds to blackbody radiation with a temperature of $\sim2\times10^{6}$ K and a flux of $\simeq10^{-13}$ \flux\ in the $0.5-2$ keV energy band. Such a temperature is typical of the chromospheres of late-type stars, including M dwarfs. Thus, this source (within the BAT error circle) cannot provide the hard X-ray emission from SWIFT J1553.6+2606 and the latter is a quasar at redshift $0.1664$ with a luminosity of $\sim4\times10^{43}$ \ergs\ in the $2-10$ keV energy band. \bigskip \subsection*{SWIFT\,J1852.2+8424} \begin{figure} \centering \includegraphics[width=\columnwidth,bb=140 260 547 672,clip]{Lutovinov_2013_Fig3.ps} \caption{XRT energy spectra of sources A (dots) and B (crosses). The solid and dashed lines, respectively, indicate the power-law best fits. As in Fig. 1, the spectrum of source B was multiplied by 0.1 for clarity. \label{swiftj18522b}} \end{figure} SWIFT J1852.2+8424 is another hard X-ray source from the Swift survey with two possible soft X-ray counterparts located within its BAT error circle (Fig. 2a). In contrast to the preceding object, the intensities of both soft X-ray sources (following Baumgartner et al. 2013, they are designated by letters A and B here) are essentially identical in the $2-10$ keV energy band ($F_{X,A}\simeq7.6\times10^{-13}$ and $F_{X,B}\simeq9.1\times10^{-13}$ \flux, respectively). In the optical wavelength range, they correspond to objects with magnitudes $m_{r,A}\simeq17.4$ and $m_{r,B}\simeq15.6$ and the coordinates given in Table 3 (see also Fig. 2b). The RTT-150 optical spectra also turn out to be very similar (Figs. 2c and 2d). They exhibit sets of emission lines typical of Seyfert 1 galaxies -- broad Balmer hydrogen lines, narrow O[III], 4959, 5007 oxygen lines, etc. The redshifts of the galaxies measured from narrow lines are $z = 0.1828$ and $z = 0.2249$ for sources A and B, respectively. The XRT X-ray spectra are typical of active galactic nuclei -- they are well fitted by a simple power law with photon indices $\Gamma_A=1.75\pm0.07$ and $\Gamma_B=1.67\pm0.07$ (Fig. 3). The luminosities of the galaxies in the 2-10 keV energy band are $L_{X,A}\simeq0.7\times10^{44}$ and $L_{X,B}\simeq1.4\times10^{44}$ \ergs, according to their redshifts. Thus, the X-ray emission from SWIFT J1852.2+8424 detected at energies $>15$ keV is the sum of the emissions from two Seyfert 1 galaxies, with the contribution from each galaxy being approximately the same. \bigskip \subsection*{SWIFT\,J1852.8+3002} \begin{figure*} \vbox{ \hbox{ \includegraphics[width=\columnwidth,bb=50 164 560 620,clip]{Lutovinov_2013_Fig4a.ps} \hspace{3mm}\includegraphics[width=0.95\columnwidth,bb=29 167 568 672,clip]{Lutovinov_2013_Fig4b.ps} } \vspace{3mm} \hbox{ \includegraphics[width=0.93\columnwidth,bb=137 272 547 672,clip]{Lutovinov_2013_Fig4v.ps} \includegraphics[width=1.07\columnwidth,bb=37 169 574 621,clip]{Lutovinov_2013_Fig4g.ps} } } \caption{(a) Optical image of the sky around SWIFT J1852.8+3002. The arrow indicates the position of the optical counterpart. (b) The RTT-150 optical spectrum of the source. (c) The XRT energy spectrum of the source. 
The solid line indicates the power-law best fit with absorption at low energies. (d) An enlarged infrared (2MASS, the H band) image of the sky region around SWIFT J1852.8+3002. The big white circle indicates its XRT error circle, the cross indicates the position of the optical star according to the USNO-B1 catalog, and the small circles indicate the positions of infrared objects from the 2MASS catalog.\label{swiftj18528}} \end{figure*} According to the Palomar Digital Sky Survey plates, a fairly bright star with coordinates (J2000) RA=18$^h$ 52$^m$ 49.590$^s$, Dec=30\deg 04\arcmin 26.48\arcsec\ (Fig. 4a) and a magnitude $m_{r}\simeq12.5$ lies almost at the center of the XRT error circle for SWIFT J1852.8+3002 (the observations in May. June 2010, ObsID 40998, the total exposure time is $\sim9.6$ ks). A set of absorption lines corresponding to $H\alpha$, $H\beta$, the CaII H and K doublet, and the Balmer jump below $\sim3800\AA$ (Fig. 4b) typical of F5 III stars are clearly seen in its RTT-150 optical spectrum. Comparison of the apparent and absolute magnitudes for stars of this type gives an estimate of its distance, $\sim1$kpc. On the other hand, the X-ray spectrum of SWIFT J1852.8+3002 fitted by a simple power law with a slope $\Gamma\simeq1.7$ exhibits significant absorption, $N_{H}\simeq1.6\times10^{22}$ cm$^{-2}$ (Fig. 4c). This value is approximately an order of magnitude higher than the column density of the matter in our Galaxy in this direction, $N_H\simeq1.4\times10^{21}$ cm$^{-2}$ (Dickey and Lockman 1990), suggesting the presence of a substantial amount of matter in the binary itself possibly associated with the stellar wind from the optical companion. The X-ray flux from the source is $\simeq10^{-13}$ \flux\ in the 2-10 keV energy band, which corresponds to its luminosity $L_X\simeq10^{30}$ \ergs\ for the above estimate of the distance to the binary. Such a low luminosity in combination with an optical F5 III companion is rather unusual for Xray binaries, especially if the source's hard X-ray spectrum is taken into account. A study of infrared maps and catalogs for this sky region showed that there are actually two close objects with coordinates RA=18$^h$ 52$^m$ 49.647$^s$, Dec=30\deg 04\arcmin 25.44\arcsec\ and RA=18$^h$ 52$^m$ 49.431$^s$, Dec=30\deg 04\arcmin 27.80\arcsec\ and JHK (2MASS) magnitudes $\simeq12.25, 12.08, 11.82$ and $\simeq12.71, 13.75, 12.11$, respectively, at the position of the bright optical star (Fig. 4d). The first of these objects probably corresponds to the optical star from the Palomar Digital Sky Survey whose spectrum was taken with RTT-150, while the second object whose position is closer to the center of the error circle for the X-ray source SWIFT J1852.8+3002 (Table 1) is its optical counterpart. However, further studies, in particular, infrared spectroscopy, are needed to ultimately answer this question and to determine the object's class. In conclusion, note that SWIFT J1852.8+3002 lies fairly high above the Galactic plane ($b\simeq13$\deg), which is atypical of high mass X-ray binaries whose vertical distribution does not exceed a hundred parsecs (see, e.g., Lutovinov et al. 2013). 
\bigskip \subsection*{IGR\,J22534+6243} \begin{figure*} \vbox{ \hbox{ \includegraphics[width=1.02\columnwidth,bb=50 164 560 625,clip]{Lutovinov_2013_Fig5a.ps} \hspace{3mm}\includegraphics[width=0.98\columnwidth,bb=33 167 568 681,clip]{Lutovinov_2013_Fig5b.ps} } \vspace{3mm} \hbox{ \includegraphics[width=1.015\columnwidth,bb=45 184 583 692,clip]{Lutovinov_2013_Fig5v.ps} \hspace{3mm}\includegraphics[width=0.98\columnwidth,bb=138 272 547 672,clip]{Lutovinov_2013_Fig5g.ps} } } \caption{(a) Optical image of the sky around IGR J22534+6243. The arrow indicates the position of the optical counterpart. (b) The RTT-150 optical spectrum of the source taken in July 2012. (c) Part of the AZT-33IK optical spectrum for the source near $H\alpha$ obtained in March 2013. The solid line indicates the best fit to the line by a Gaussian profile; the dotted line indicates the characteristic profile of atmospheric lines obtained by assuming $\Delta \lambda/\lambda = {\rm const}$. (d) The source's broadband energy spectrum from Chandra (crosses, ObsID. 10811) and INTEGRAL (dots) data. The solid line indicates the best fit by a simple power law with low-energy absorption and a high-energy cutoff. \label{igrj22534}} \end{figure*} The hard X-ray emission from IGR J22534+6243 was detected only on the total map of the Galactic plane constructed from nine-year-long INTEGRAL observations (Krivonos et al. 2012). No significant variations of the source's intensity (in particular, outbursts) were detected on its reconstructed light curve for 2003--2012 and the mean flux was $F_X=(0.6\pm0.1)\times10^{-11}$ \flux\ in the $17-60$ keV energy band. As has already been said above, the study of the archival data showed that the sky region around IGR J22534+6243 previously fell within the XRT/Swift field of view when the afterglow from GRB 060421 was investigated (ObsId. 00206257, April 21--24, 2006, the total exposure time is $\sim72$ ks) and the Chandra field of view in March--April 2009 (ObsIDs. 9919, 9920, 10810, 10811, 10812; the exposure time of each pointing was $23-28$ ks). It should be noted that during the Chandra observations IGR J22534+6243 was almost at the edge of the telescope's field of view, but it was detected at a statistically significant level in all pointings. The source's coordinates measured from these observations closely coincided with those measured from XRT data (see Table 1), which allowed the optical counterpart of IGR J22534+6243 to be determined. It turned out to be a fairly bright object with coordinates (J2000) RA=22$^h$ 53$^m$ 55.130$^s$, Dec=62\deg 43\arcmin 36.90\arcsec\ (Fig. 5a) and a magnitude $m_{r}\simeq13.0$, which is clearly seen in both optical and infrared (2MASS J22535512+6243368) wavelength ranges. The RTT-150 optical spectrum of the object corresponds to a star of early spectral type O-B (Fig. 5b). At the same time, an intense Balmer $H\alpha$ emission line and a weaker $H\beta$ line are clearly seen in this spectrum. The spectrum near the emission $H\alpha$ line was taken at the AZT-33IK telescope with a spectral resolution better than that of RTT-150 (Fig. 5c). The equivalent width of the emission line is $13\pm1\AA$. The $H\alpha$ emission lines with such equivalent widths are commonly observed from Be disks (see, e.g., Clark et al. 2001). Since the AZT-33IK spectrum has a good resolution, we managed to detect a finite width of the $H\alpha$ emission line. 
At an instrumental resolution of our spectroscopic data of about $3.8\AA$ (FWHM), the observed $H\alpha$ line width was about $4.3\AA$. Thus, we may conclude that in the source itself the line is broadened with typical velocities of about 180 km s$^{-1}$, which is also a commonly observed characteristic of the emission lines in Be systems, and is associated with a rotating equatorial disk around a Be star. It should also be noted that the line equivalent width changed by more than a factor of 2 in the half a year that elapsed between the RTT-150 and AZT-33IK observations (it was $\simeq33\AA$ in July 2012), which may be indicative of equatorial disk evolution. Significant detection of the source by different instruments in different time intervals allowed us not only to carry out spectral and timing analysis of its emission but also to trace the evolution of its parameters. In particular, the source's intensity in the $2-10$ keV energy band remained essentially constant during the Swift and Chandra observations at $F_X\simeq(2.5-2.9)\times10^{-12}$ \flux, increasing only once (ObsID. 10810) to $F_X\simeq3.1\times10^{-12}$ \flux. However, given the typical flux measurement error of $\simeq(0.13-0.16)\times10^{-12}$ \flux, such a change may be considered insignificant. The same can also be said about the source's spectrum, which can be well fitted by a simple power law with low-energy absorption. The slope of the spectrum varies insignificantly between $\Gamma=1.35\pm0.14$ and $\Gamma=1.63\pm0.10$, becoming slightly harder, $\Gamma=1.18\pm0.13$, at the beginning of the series of Chandra observations (ObsID. 9920, April 16, 2009). We found no correlations between the variations of the flux from the source and the hardness of its spectrum. The parameters of the spectrum for IGR J22534+6243 determined using data from the ROSAT observatory that observed this sky region on June 18--19, 1993 (ObsID. 500321, the total exposure time is $\sim18.5$ ks) agree with the Swift and Chandra measurements, but the typical measurement errors turned out to be considerably larger than those given above.
\begin{table}[]
\centering
\caption{Evolution of the pulsation period of IGR\,J22534+6243}
\vspace{1mm}
\begin{tabular}{ll}
\hline
\vspace{0.5mm}
Date, MJD & Period, s \\
\hline
\vspace{0.5mm}
49156.08 & $46.4040\pm0.0008$ \\
53846.81 & $46.6148\pm0.0002$ \\
54937.45 & $46.6799\pm0.0031$ \\
54949.29 & $46.6695\pm0.0032$ \\
54954.07 & $46.6718\pm0.0024$ \\
54958.85 & $46.6723\pm0.0042$ \\
54959.14 & $46.6658\pm0.0033$ \\[1mm]
\hline
\end{tabular}
\end{table}
The measured absorption, $N_H=(2.08-2.27)\times10^{22}$ cm$^{-2}$, slightly exceeds the column density of matter in our Galaxy in this direction, $N_H\simeq10^{22}$ cm$^{-2}$ (Dickey and Lockman 1990). This suggests the presence of an additional amount of material in the binary itself possibly associated with the stellar wind from the optical companion. Taking into account the stability of the parameters for the X-ray spectrum of IGR J22534+6243 over a long time, we may extend it to the hard energy region using the INTEGRAL observational data. Such a broadband spectrum is shown in Fig. 5d. Since the source is weak, a relatively significant signal from it can be registered only in broad channels. However, these measurements suggest a cutoff in the spectrum at high energies. The characteristic cutoff energy, $E_{cut}\simeq25-30$ keV, turns out to be slightly higher than that commonly observed in high-mass X-ray binaries with neutron stars (see, e.g., Filippova et al. 2005).
However, it should be noted that such a hard spectrum was recorded recently from low luminosity X-ray pulsars that are members of binaries with Be stars (see, e.g., Tsygankov et al. 2012; Lutovinov et al. 2012c). \begin{figure} \centering \includegraphics[width=\columnwidth,bb=73 270 495 690]{Lutovinov_2013_Fig6.ps} \caption{Pulse profile for the X-ray pulsar IGR J22534+6243 in various energy bands obtained from one of the Chandra observations (ObsID.10811) and folded with the corresponding period. \label{pprofile}} \end{figure} \begin{table*}[] \centering \caption{Optical identification of the hard X-ray sources} \label{tab:2} \vspace{1mm} \begin{tabular}{lccll} \hline Name & RA & Dec & Type & $z$ \\ \hline \hline SWIFT\,J1553.6+2606 & 15$^h$ 53$^m$ 34.734$^s$ & 26\deg 14\arcmin 41.45\arcsec & QSO & 0.1664 \\ SWIFT\,J1852.2+8424A & 18$^h$ 50$^m$ 25.090$^s$ & 84\deg 22\arcmin 44.69\arcsec & Sy1 & 0.1828 \\ SWIFT\,J1852.2+8424B & 18$^h$ 46$^m$ 49.689$^s$ & 84\deg 25\arcmin 05.58\arcsec & Sy1 & 0.2489 \\ SWIFT\,J1852.8+3002 & 18$^h$ 52$^m$ 49.431$^s$ & 30\deg 04\arcmin 27.80\arcsec & XRB/HMXB(?) & \\ IGR\,J22534+6243 & 22$^h$ 53$^m$ 55.130$^s$ & 62\deg 43\arcmin 36.90\arcsec & HMXB, X-ray pulsar & \\ \hline \hline \end{tabular} \end{table*} Swift and Chandra data allowed Halpern (2012) to detect X-ray pulsations with a period of $\sim46.7$ s in the light curve for IGR J22534+6243, suggesting the presence of a neutron star as a compact object in the binary. Pulsations with a similar period were also found in the ROSAT archival data for this sky region (Israel and Rodriguez 2012). We analyzed in detail all observational data and traced the evolution of the source's pulsation period, which is presented in Table 2 (the first measurement is based on ROSAT data, the second measurement is based on XRT data, and the remaining measurements are based on Chandra data). It can be seen from the Table 2 that in the 16-year time interval between the ROSAT and Chandra observations, the neutron star spun down significantly, with the mean spin down rate having remained almost the same over the entire period of ROSAT, Swift, and Chandra observations near $\dot P/P\simeq3.5\times10^{-4}$ yr$^{-1}$, which is typical of X-ray pulsars (see, e.g., Lutovinov et al. 1994; Bildsten et al. 1997). At the same time, during almost a month of Chandra observations, the pulsation period changed insignificantly, remaining mainly near 46.674 s within the measurement errors. The measurement errors themselves were determined by the so-called bootstrap method (for more detail, see Lutovinov et al. 2012c). One of the most important characteristics for an X-ray pulsar is its pulse profile. As a rule, the pulse shape remains fairly stable for each specific pulsar over a long time, although it can depend on its luminosity and energy (see, e.g., Lutovinov and Tsygankov 2009). Our study of the pulse profile for IGR J22534+6243 showed that in all Chandra and Swift observations it has a similar shape (see Fig. 6) --- a complex double-peak, triple-peak structure is seen in the softest $0.6-2.0$ keV energy band, which turns into one broad peak in the $2-6$ keV energy band, with the signatures of this structure remaining in it. As the energy increases, several peaks again begin to manifest themselves in the pulse profile. The latter may be due to a shortage of statistics at energies $>6$ keV, while the multi-peak structure at soft energies can be a consequence of absorption in the binary. 
The pulsed fraction in the $2-6$ keV energy band is $\sim40$\%.
\section{Conclusions}
\label{sec:concl}
We made an optical identification of four X-ray sources from the INTEGRAL and Swift catalogs and determined the nature of three of them. Two of these sources are extragalactic in nature: SWIFT J1553.6+2606 is a quasar at redshift $z\simeq0.1664$; the detected flux from SWIFT J1852.2+8424 is the sum of the fluxes from two Seyfert 1 galaxies of approximately the same intensity at redshifts $z\simeq0.1828$ and $z\simeq0.2489$. The other two objects are located in our Galaxy: IGR J22534+6243 belongs to the class of X-ray pulsars (with a pulsation period of $\simeq46.674$ s) in high-mass X-ray binaries, most likely with a Be companion; SWIFT J1852.8+3002 may also be a high-mass X-ray binary, but infrared spectroscopy is needed for the ultimate answer to the question about its nature. The obtained results are summarized in Table 3, which gives the coordinates of the optical counterparts of the X-ray sources, their types, and (for the extragalactic objects) redshifts. \bigskip ~\bigskip \vspace{10mm} \acknowledgements This work was supported by the Russian Foundation for Basic Research (project nos. 12-02-01265, 11-02-01328), the Presidium of the Russian Academy of Sciences (programs P-21 and OFN-17), the Programs of the President of Russia for Support of Leading Scientific Schools (project NSh-5603.2012.2) and the Ministry of Education and Science (Contracts N8701 and N8629). We wish to thank the TUBITAK National Observatory (TUG, Turkey), the Space Research Institute of the Russian Academy of Sciences, and the Kazan State University for support in using the 1.5-m Russian-Turkish (RTT-150) telescope. We are also grateful to E.M. Churazov, who developed the IBIS/INTEGRAL data analysis methods and provided the software.
\section{Introduction} Decision tree ensembles have proven very successful in various machine learning applications. Indeed, they are often referred to as the best ``off-the-shelf'' learners \cite{friedman2001elements}, as they exhibit several appealing properties such as ease of tuning, robustness to outliers, and interpretability \cite{friedman2001elements, xgboost}. Another natural property in trees is \textsl{conditional computation}, which refers to their ability to route each sample through a small number of nodes (specifically, a single root-to-leaf path). Conditional computation can be broadly defined as the ability of a model to activate only a small part of its architecture in an input-dependent fashion \cite{DBLP:journals/corr/BengioBPP15}. This can lead to both computational benefits and enhanced statistical properties. On the computation front, routing samples through a small part of the tree leads to substantial training and inference speed-ups compared to methods that do not route samples. Statistically, conditional computation offers the flexibility to reduce the number of parameters used by each sample, which can act as a regularizer \cite{Breiman1983ClassificationAR, friedman2001elements, DBLP:journals/corr/BengioBPP15}. However, the performance of trees relies on feature engineering, since they lack a good mechanism for representation learning \cite{bengio2013representation}. This is an area in which neural networks (NNs) excel, especially in speech and image recognition applications \cite{bengio2013representation, he2015delving, yu2016automatic}. However, NNs do not naturally support conditional computation and are harder to tune. In this work, we combine the advantages of neural networks and tree ensembles by designing a hybrid model. Specifically, we propose the \textsl{Tree Ensemble Layer (TEL)} for neural networks. This layer is an additive model of differentiable decision trees, can be inserted anywhere in a neural network, and is trained along with the rest of the network using gradient-based optimization methods (e.g., SGD). While differentiable trees in the literature show promising results, especially in the context of neural networks, e.g., \citet{criminisi2016deep, FrosstH17}, they do not offer true conditional computation. We equip TEL with a novel mechanism to perform conditional computation, during both training and inference. We make this possible by introducing a new sparse activation function for sample routing, along with specialized forward and backward propagation algorithms that exploit sparsity. Experiments on 23 real datasets indicate that TEL achieves over $10$x speed-ups compared to the current differentiable trees, without sacrificing predictive performance. Our algorithms pave the way for jointly optimizing over both wide and deep tree ensembles. Here joint optimization refers to updating all the trees simultaneously (e.g., using first-order methods like SGD). This has been a major computational challenge prior to our work. For example, jointly optimizing over classical (non-differentiable) decision trees is a hard combinatorial problem \cite{friedman2001elements}. Even with differentiable trees, the training complexity grows exponentially with the tree depth, making joint optimization difficult \cite{criminisi2016deep}. 
A common approach is to train tree ensembles using greedy ``stage-wise'' procedures, where only one tree is updated at a time and never updated again---this is a main principle in gradient boosted decision trees (GBDT) \cite{friedman2001greedy}\footnote{\textcolor{black}{There are follow-up works on GBDT which update the leaves of all trees simultaneously, e.g., see \citet{johnson2013learning}. However, our approach allows for updating both the internal node and leaf weights simultaneously.}}. We hypothesize that joint optimization yields more compact and expressive ensembles than GBDT. Our experiments confirm this, indicating that TEL can achieve over $20$x reduction in model size. This can have important implications for interpretability, latency and storage requirements during inference. \textbf{Contributions: } Our contributions can be summarized as follows: \textbf{(i)} We design a new differentiable activation function for trees which allows for routing samples through small parts of the tree (similar to classical trees). \textbf{(ii)} We realize conditional computation by developing specialized forward and backward propagation algorithms that exploit sparsity to achieve an optimal time complexity. Notably, the complexity of our backward pass can be independent of the tree depth and is generally better than that of the forward pass---this is not possible in backpropagation for neural networks. \textbf{(iii)} We perform experiments on a collection of 26 real datasets, which confirm TEL as a competitive alternative to current differentiable trees, GBDT, and dense layers in CNNs. \textbf{(iv)} We provide an open-source TensorFlow implementation of TEL along with a Keras interface\footnote{\url{https://github.com/google-research/google-research/tree/master/tf_trees} \label{footnote:url}}. \paragraph{Related Work: } Table \ref{table:related} summarizes the most relevant related work. \begin{table}[htbp] \centering \caption{Related work on conditional computation} \label{table:related} \footnotesize \begin{tabular}{@{}lllll@{}} \toprule Paper & CT & CI & DO & Model/Optim \\ \midrule \citet{criminisi2016deep} & N & N & Y & Soft tree/Alter \\ \citet{ioannou2016decision} & N & H & Y & Tree-NN/SGD \\ \citet{FrosstH17} & N & H & Y & Soft tree/SGD \\ \citet{DBLP:journals/corr/ZoranLB17} & N & H & N & Soft tree/Alter \\ \citet{DBLP:journals/corr/ShazeerMMDLHD17} & H & Y & N & Tree-NN/SGD \\ \citet{tanno2018adaptive} & N & H & Y & Soft tree/SGD \\ \citet{Biau2019} & H & N & Y & Tree-NN/SGD \\ \citet{hehn2019end} & N & H & Y & Soft tree/SGD \\ Our method & Y & Y & Y & Soft tree/SGD \\ \bottomrule \multicolumn{5}{l}{ \footnotesize \parbox[t]{.94\linewidth}{ \textit{H} is heuristic (e.g., training model is different from inference), \textit{CT} is conditional training. \textit{CI} is conditional inference. \textit{DO} indicates whether the objective function is differentiable. \textit{Soft tree} refers to a differentiable tree, whereas \textit{Tree-NN} refers to NNs with a tree-like structure. \textit{Optim} stands for optimization (SGD or alternating minimization). }} \end{tabular} \end{table} \raggedbottom \textcolor{black}{Differentiable decision trees (a.k.a. soft trees) are an instance of the Hierarchical Mixture of Experts introduced by \citet{jordan1994hierarchical}}. The internal nodes of these trees act as routers, sending samples to the left and right with different proportions. This framework does not support conditional computation as each sample is processed in all the tree nodes. 
Our work avoids this issue by allowing each sample to be routed through small parts of the tree, without losing differentiability. A number of recent works have used soft trees in the context of deep learning. For example, \citet{criminisi2016deep} equipped soft trees with neural representations and used alternating minimization to learn the feature representations and the leaf outputs. \citet{hehn2019end} extended \citet{criminisi2016deep}'s approach to allow for conditional inference and growing trees level-by-level. \citet{FrosstH17} trained a (single) soft tree using SGD and leveraged a deep neural network to expand the dataset used in training the tree. \citet{DBLP:journals/corr/ZoranLB17} also leveraged a tree structure with a routing mechanism similar to soft trees, in order to equip the k-nearest neighbors algorithm with neural representations. All of these works have observed that computation in a soft tree can be expensive. Thus, in practice, heuristics are used to speed up inference, e.g., \citet{FrosstH17} uses the root-to-leaf path with the highest probability during inference, leading to discrepancy between the models used in training and inference. Instead of making a tree differentiable, \citet{pmlr-v70-jernite17a} hypothesized about properties the best tree should have, and introduced a pseudo-objective that encourages balanced and pure splits. They optimized using SGD along with intermediate processing steps. Another line of work introduces tree-like structure to NNs via some routing mechanism. For example, \citet{ioannou2016decision} employed tree-shaped CNNs with branches as weight matrices with sparse block diagonal structure. \citet{DBLP:journals/corr/ShazeerMMDLHD17} created the Sparsely-Gated Mixture-of-Experts layer where samples are routed to subnetworks selected by a trainable gating network. \citet{Biau2019} represented a decision tree using a 3-layer neural network and combined CART and SGD for training. \citet{tanno2018adaptive} looked into adaptively growing an NN with routing nodes for performing tree-like conditional computations. However, in these works, the inference model is either different from training or the router is not differentiable (but still trained using SGD)---see Table \ref{table:related} for details. \section{The Tree Ensemble Layer} \label{section:tree} TEL is an additive model of differentiable decision trees. In this section, we introduce TEL formally and then discuss the routing mechanism used in our trees. For simplicity, we assume that TEL is used as a standalone layer. Training trees with other layers will be discussed in Section \ref{sec:conditional}. We assume a supervised learning setting, with input space $\mathcal{X} \subseteq \mathbb{R}^p$ and output space $\mathcal{Y} \subseteq \mathbb{R}^{k}$. For example, in the case of regression (with a single output) $k=1$, while in classification $k$ depends on the number of classes. Let $m$ be the number of trees in the ensemble, and let $T^{(j)}: \mathcal{X} \to \mathbb{R}^{k}$ be the $j$th tree in the ensemble. For an input sample $x \in \mathbb{R}^{p}$, the output of the layer is a sum over all the tree outputs: \begin{align} \label{eq:additivetrees} \mathcal{T}(x) = T^{(1)} (x) + T^{(2)} (x) + \dots + T^{(m)} (x). \end{align} \textcolor{black}{The output of the layer, $\mathcal{T}(x)$, is a vector in $\mathbb{R}^{k}$ containing raw predictions}. 
In the case of classification, mapping from raw predictions to $\mathcal{Y}$ can be done by applying a softmax and returning the class with the highest probability. Next, we introduce the key building block of the approach: the differentiable decision tree. \textbf{The Differentiable Decision Tree}: Classical decision trees perform \textsl{hard routing}, i.e., a sample is routed to exactly one direction at every internal node. Hard routing introduces discontinuities in the loss function, making trees unamenable to continuous optimization. Therefore, trees are usually built in a greedy fashion. In this section, we present an enhancement of the soft trees proposed by \citet{jordan1994hierarchical} and utilized in \citet{criminisi2016deep, FrosstH17, hehn2019end}. Soft trees are a variant of decision trees that perform \textsl{soft routing}, where every internal node can route the sample to the left and right simultaneously, with different proportions. This routing mechanism makes soft trees differentiable, so learning can be done using gradient-based methods. Soft trees cannot route a sample exclusively to the left or to the right, making conditional computation impossible. Subsequently, we introduce a new activation function for soft trees, which allows conditional computation while preserving differentiability. We consider a single tree in the additive model \eqref{eq:additivetrees}, and denote the tree by $T$ (we drop the superscript to simplify the notation). Recall that $T$ takes an input sample and returns an output vector (logit), i.e., $T: \mathcal{X} \subseteq \mathbb{R}^{p} \to \mathbb{R}^{k}$. Moreover, we assume that $T$ is a perfect binary tree with depth $d$. We use the sets $\mathcal{I}$ and $\mathcal{L}$ to denote the internal (split) nodes and the leaves of the tree, respectively. For any node $i \in \mathcal{I} \cup \mathcal{L}$, we define $\mathcal{A}(i)$ as its set of ancestors and use the notation $\{ x \to i \}$ for the event that a sample $x \in \mathbb{R}^{p}$ reaches $i$. \textcolor{black}{A summary of the notation used in this paper can be found in Table A.1 in the appendix. } \textbf{Soft Routing: } Internal tree nodes perform soft routing, where a sample is routed left and right with different proportions. We will introduce soft routing using a probabilistic model. While we use probability to model the routing process, we will see that the final prediction of the tree is an expectation over the leaves, making $T$ a deterministic function. \textcolor{black}{Unlike classical decision trees which use axis-aligned splits, soft trees are based on hyperplane (a.k.a. oblique) splits \citep{10.5555/1622826.1622827}, where a linear combination of the features is used in making routing decisions. Particularly, each internal node $i \in \mathcal{I}$ is associated with a trainable weight vector $w_i \in \mathbb{R}^{p}$ that defines the node's hyperplane split}. Let $\mathcal{S}: \mathbb{R} \to [0,1]$ be an activation function. Given a sample $x \in \mathbb{R}^{p}$, the probability that internal node $i$ routes $x$ to the left is defined by $\mathcal{S}(\langle w_i, x \rangle)$. Now we discuss how to model the probability that $x$ reaches a certain leaf $l$. Let $[l \swarrow i]$ (resp. $[i \searrow l]$) denote the event that leaf $l$ belongs to the left (resp. right) subtree of node $i \in \mathcal{I}$. 
Assuming that the routing decision made at each internal node in the tree is independent of the other nodes, the probability that $x$ reaches $l$ is given by: \begin{align} \label{eq:probxtol} P(\{ x \to l \}) = \prod\nolimits_{i \in \mathcal{A}(l)} r_{i,l}(x), \end{align} where $r_{i,l}(x)$ is the probability of node $i$ routing $x$ towards the subtree containing leaf $l$, i.e., $ {r_{i,l}(x) := \mathcal{S}(\langle x, w_i \rangle ) ^{\mathds{1}[l \swarrow i]} (1 - \mathcal{S}(\langle x, w_i \rangle))^{\mathds{1}[i \searrow l]}}. $ Next, we define how the root-to-leaf probabilities in \eqref{eq:probxtol} can be used to make the final prediction of the tree. \textbf{Prediction: } As with classical decision trees, we assume that each leaf stores a weight vector $o_{l} \in \mathbb{R}^{k}$ (learned during training). Note that, during a forward pass, $o_l$ is a constant vector, meaning that it is not a function of the input sample(s). For a sample $x \in \mathbb{R}^p$, we define the prediction of the tree as the expected value of the leaf outputs, i.e., \begin{align} \label{eq:T} T(x) = \sum\nolimits_{l \in \mathcal{L}} P(\{ x \to l \}) o_{l}. \end{align} \textbf{Activation Functions: } In soft routing, the internal nodes use an activation function $\mathcal{S}$ in order to compute the routing probabilities. The logistic (a.k.a. sigmoid) function is the common choice for $\mathcal{S}$ in the literature on soft trees (see \citet{jordan1994hierarchical, criminisi2016deep, FrosstH17, tanno2018adaptive, hehn2019end}). While the logistic function can output arbitrarily small values, it cannot output an exact zero. This implies that any sample $x$ will reach every node in the tree with a positive probability (as evident from \eqref{eq:probxtol}). Thus, computing the output of the tree in \eqref{eq:T} will require computation over every node in the tree, an operation which is exponential in tree depth. We propose a novel \textit{smooth-step activation function}, which can output exact zeros and ones, thus allowing for true conditional computation. Our smooth-step function is S-shaped and continuously differentiable, similar to the logistic function. Let $\gamma$ be a non-negative scalar parameter. The smooth-step function is a cubic polynomial in the interval $[-\gamma/2,\gamma/2]$, $0$ to the left of the interval, and $1$ to the right. More formally, we assume that the function takes the parametric form $\mathcal{S}(t) = a t^3 + b t^2 + ct + d$ for $t \in [-\gamma/2,\gamma/2]$, where $a,b,c,d$ are scalar parameters. We then solve for the parameters under the following continuity and differentiability constraints: (i) $\mathcal{S}(-\gamma/2) = 0$, (ii) $\mathcal{S}(\gamma/2) = 1$, (iii) $\mathcal{S}'(t)|_{t=-\gamma/2} =\mathcal{S}'(t)|_{t=\gamma/2} = 0$. This leads to: \begin{align} \label{eq:smoothstep} \mathcal{S}(t) = \begin{cases} 0 & \text{ if } t \leq -\gamma/2 \\ -\frac{2}{\gamma^{3}}t^3 + \frac{3}{2\gamma}t + \frac{1}{2} & \text{ if } -\gamma/2 \leq t \leq \gamma/2 \\ 1 & \text{ if } t \geq \gamma/2 \end{cases} \end{align} By construction, the smooth-step function in \eqref{eq:smoothstep} is continuously differentiable for any $t \in \mathbb{R}$ (including $-\gamma/2$ and $\gamma/2$). In Figure \ref{fig:smoothstep}, we plot the smooth-step (with $\gamma=1$) and logistic activation functions; the logistic function here takes the form $(1 + e^{-6t})^{-1}$, i.e., it is a rescaled variant of the standard logistic function, so that the two functions are on similar scales. 
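For concreteness, the smooth-step function in \eqref{eq:smoothstep} and its derivative (used during backpropagation) can be implemented in a few lines; the NumPy sketch below is an illustration only, and is not the optimized C++ kernel of our released implementation (the function names are ours).
\begin{verbatim}
import numpy as np

def smooth_step(t, gamma=1.0):
    """Smooth-step activation: exactly 0 for t <= -gamma/2, exactly 1 for
    t >= gamma/2, and the cubic -2/gamma^3 t^3 + 3/(2 gamma) t + 1/2 in between."""
    t = np.asarray(t, dtype=float)
    mid = -2.0 / gamma**3 * t**3 + 1.5 / gamma * t + 0.5
    return np.where(t <= -gamma / 2, 0.0, np.where(t >= gamma / 2, 1.0, mid))

def smooth_step_grad(t, gamma=1.0):
    """Derivative of the smooth-step; it is exactly zero outside [-gamma/2, gamma/2]."""
    t = np.asarray(t, dtype=float)
    mid = -6.0 / gamma**3 * t**2 + 1.5 / gamma
    return np.where(np.abs(t) >= gamma / 2, 0.0, mid)
\end{verbatim}
Note that the derivative vanishes identically outside $[-\gamma/2, \gamma/2]$; this is precisely the sparsity exploited by the conditional forward and backward passes of Section \ref{sec:conditional}.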
The two functions can be very close in the middle of the fractional region. The main difference is that the smooth-step function outputs exact zeros and ones, whereas the logistic function only converges to these asymptotically. \begin{figure}[htbp] \centering \captionsetup[subfloat]{farskip=0pt} \includegraphics[trim={1cm 0.5cm 0.5cm 1cm},clip, width=6.2cm]{Figures/actplot.png} \caption{Smooth-step vs. Logistic $(1 + e^{-6t})^{-1}$.} \label{fig:smoothstep} \end{figure} Outside $[-\gamma/2, \gamma/2]$, the smooth-step function performs hard routing, similar to classical decision trees. The choice of $\gamma$ controls the fraction of samples that are hard routed. A very small $\gamma$ can lead to many zero gradients in the internal nodes, whereas a very large $\gamma$ might limit the extent of conditional computation. In our experiments, we use batch normalization \cite{ioffe2015} before the tree layer so that the inputs to the smooth-step function remain centered and bounded. This turns out to be very effective in preventing the internal nodes from having zero gradients, at least in the first few training epochs. Moreover, we view $\gamma$ as a hyperparameter, which we tune over the range $[10^{-4},1]$. This range works well for balancing training performance and conditional computation across the 26 datasets we used (see Section \ref{label-experiments}). For a given sample $x$, we say that a node $i$ is reachable if $P(\{ x \to i \}) > 0$. The number of reachable leaves directly controls the extent of conditional computation. In Figure \ref{fig:reachable_leaves}, we plot the average number of reachable leaves (per sample) as a function of the training epochs, for a single tree of depth $10$ (i.e., with $1024$ leaves) and different $\gamma$'s. This is for the diabetes dataset \cite{Olson2017PMLB}, using Adam \cite{kingma2014adam} for optimization (see the appendix for details). The figure shows that for small enough $\gamma$ (e.g., $\gamma \le 1$), the number of reachable leaves rapidly converges to $1$ during training (note that the y-axis is on a log scale). We observed this behavior on all the datasets in our experiments. \begin{figure}[htbp] \centering \captionsetup[subfloat]{farskip=0pt} \includegraphics[trim={0cm 0.2cm 0.2cm 0.3cm},clip, width=6.2cm]{Figures/reachable.png} \caption{Number of reachable leaves (per sample) while training a tree of depth $10$.} \label{fig:reachable_leaves} \end{figure} \textcolor{black}{We note that variants of the smooth-step function are popular in computer graphics \citep{ebert2003texturing,rost2009opengl}. However, to our knowledge, the smooth-step function has not been used in soft trees or neural networks. It is also worth mentioning that the cubic polynomial used for interpolation in \eqref{eq:smoothstep} can be substituted with higher-order polynomials (e.g., a polynomial of degree 5, where the first and second derivatives vanish at $\pm \gamma/2$). The algorithms we propose in Section \ref{sec:conditional} directly apply to the case of higher-order polynomials.} In the next section, we show how the sparsity in the smooth-step function and in its gradient can be exploited to develop efficient forward and backward propagation algorithms. \section{Conditional Computation} \label{sec:conditional} We propose using first-order optimization methods (e.g., SGD and its variants) to optimize TEL. A main computational bottleneck in this case is the gradient computation, whose time and memory complexities can grow exponentially in the tree depth.
This has hindered training large tree ensembles in the literature. In this section, we develop efficient forward and backward propagation algorithms for TEL by exploiting the sparsity in both the smooth-step function and its gradient. We show that our algorithms have optimal time complexity and discuss cases where they run significantly faster than standard backpropagation. \textbf{Setup: } We assume a general setting where TEL is a hidden layer. Without loss of generality, we consider only one sample and one tree. Let $x \in \mathbb{R}^{p}$ be the input to TEL and denote the tree output by $T(x) \in \mathbb{R}^{k}$, where $T(x)$ is defined in \eqref{eq:T}. We use the same notation as in Section \ref{section:tree}, and we collect the leaf vectors $o_l$, $l\in\mathcal{L}$ into the matrix $O \in \mathbb{R}^{|\mathcal{L}| \times k}$ and the internal node weights $w_i$, $i \in \mathcal{I}$ into the matrix $W \in \mathbb{R}^{|\mathcal{I}| \times p}$. Moreover, for a differentiable function $h(z)$ which maps $\mathbb{R}^{s} \to \mathbb{R}^{u}$, we denote its Jacobian by $\fracpartial{h}{z} \in \mathbb{R}^{u \times s}$. Let $L$ be the loss function to be optimized (e.g., cross-entropy). Our goal is to efficiently compute the following three gradients: $\fracpartial{L}{O}$, $\fracpartial{L}{W}$, and $\fracpartial{L}{x}$. The first two gradients are needed by the optimizer to update $O$ and $W$. The third gradient is used to continue the backpropagation in the layers preceding TEL. We assume that a backpropagation algorithm has already computed the gradients associated with the layers after TEL and has computed $\fracpartial{L}{T}$. \textbf{Number of Reachable Nodes: } To exploit conditional computation effectively, each sample should reach a relatively small number of leaves. This can be enforced by choosing the parameter $\gamma$ of the smooth-step function to be sufficiently small. When analyzing the complexity of the forward and backward passes below, we will assume that the sample $x$ reaches $U$ leaves and $N$ internal nodes. \subsection{Conditional Forward Pass} Prior to computing the gradients, a forward pass over the tree is required. This entails computing expression \eqref{eq:T}, which is a sum of probabilities over all the root-to-leaf paths in $T$. Our algorithm exploits the following observation: if a certain edge on the path to leaf $l$ has a zero probability, then $P(x \to l )=0$ so there is no need to continue evaluation along that path. Thus, we traverse the tree starting from the root, and every time a node outputs a $0$ probability on one side, we ignore all of its descendants lying on that side. The summation in \eqref{eq:T} is then performed only over the leaves reached by the traversal. We present the conditional forward pass in Algorithm 1, where for any internal node $i$, we denote the left and right children by $left(i)$ and $right(i)$. \begin{algorithm}[htbp] \caption{Conditional Forward Pass} \label{alg:forward} \begin{algorithmic}[1] \STATE {\bfseries Input:} Sample $x \in \mathbb{R}^p$ and tree parameters $W$ and $O$. 
\STATE {\bfseries Output:} $T(x)$ \STATE \COMMENT{For any node $i$, $i.prob$ denotes $P(\{ x \to i \})$.} \STATE \COMMENT{$to\_traverse$ is a stack for traversing nodes.} \STATE $output \gets 0$, $to\_traverse \gets \{ root \}$, $root.prob \gets 1$ \WHILE{$to\_traverse$ is not empty} \STATE Remove a node $i$ from $to\_traverse$ \IF{$i$ is an internal node} \STATE $left(i).prob = i.prob * \mathcal{S}(\langle w_i, x \rangle)$ \STATE $right(i).prob = i.prob * (1-\mathcal{S}(\langle w_i, x \rangle))$ \STATE if $\mathcal{S}(\langle w_i, x \rangle) > 0$, add $left(i)$ to $to\_traverse$ \STATE if $\mathcal{S}(\langle w_i, x \rangle) < 1$, add $right(i)$ to $to\_traverse$ \ELSE \STATE $output \gets output + i.prob * o_i$ \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} \raggedbottom \textbf{Time Complexity: } The algorithm visits each reachable node in the tree once. Every reachable internal node requires $\mathcal{O}(p)$ operations to compute $\mathcal{S}(\langle w_i, x \rangle)$, whereas each reachable leaf requires $\mathcal{O}(k)$ operations to update the output variable. Thus, the overall complexity is $\mathcal{O}(N p + U k)$ (recall that $N$ and $U$ are the numbers of reachable internal nodes and leaves, respectively). This is in contrast to a dense forward pass\footnote{By dense forward pass, we mean evaluating the tree without conditional computation (as in a standard forward pass).}, whose complexity is $\mathcal{O}(2^{d} p + 2^{d} k)$ (recall that $d$ is the depth). As long as $\gamma$ is chosen so that $U$ is sub-exponential\footnote{A function $f(t)$ is sub-exp. in $t$ if $\lim_{t \to \infty} {\log(f(t))}/{t} = 0$.} in $d$, the conditional forward pass has a better complexity than the dense pass (this holds since $N = \mathcal{O}(U d)$, implying that $N$ is also sub-exponential in $d$). \textbf{Memory Complexity: } The memory complexity for inference and training is $\mathcal{O}(d)$ and $\mathcal{O}(d + U)$, respectively. See the appendix for a detailed analysis. This is in contrast to a dense forward pass, whose complexity in training is $\mathcal{O}(2^{d})$. \subsection{Conditional Backward Pass} \label{sec:backward} Here we develop a backward pass algorithm to efficiently compute the three gradients: $\fracpartial{L}{O}$, $\fracpartial{L}{W}$, and $\fracpartial{L}{x}$, assuming that $\fracpartial{L}{T}$ is available from a backpropagation algorithm. In what follows, we will see that as long as $U$ is sufficiently small, the gradients $\fracpartial{L}{O}$ and $\fracpartial{L}{W}$ will be sparse, and $\fracpartial{L}{x}$ can be computed by considering only a small number of nodes in the tree. Let $\mathcal{R}$ be the set of leaves reached by Algorithm 1. The following set turns out to be critical in understanding the sparsity structure of the problem: ${\mathcal{F} := \{i \in \mathcal{I} \ | \ i \in \mathcal{A}(l), \ l \in \mathcal{R}, \ 0 < \mathcal{S}(\langle x, w_i \rangle) < 1 \}}$. In words, $\mathcal{F}$ is the set of those ancestors of the reachable leaves whose activation is fractional. In Theorem 1, we show how the three gradients can be computed by only considering the internal nodes in $\mathcal{F}$ and the leaves in $\mathcal{R}$. Moreover, the theorem presents sufficient conditions under which the gradients are zero; in particular, $\fracpartial{L}{w_i} = 0$ for every internal node $i \in \mathcal{F}^c$ and $\fracpartial{L}{o_l} = 0$ for every leaf $l \in \mathcal{R}^{c}$ (where $A^c$ denotes the complement of a set $A$).
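To make the traversal of Algorithm 1 and the bookkeeping of the sets $\mathcal{R}$ and $\mathcal{F}$ concrete, we include a short Python sketch below. It is purely illustrative (the array layout and names are ours, and the released implementation uses custom C++ kernels); it visits only the reachable nodes and records, on the fly, the reachable leaves together with their fractional ancestors.
\begin{verbatim}
import numpy as np

def smooth_step(t, gamma=1.0):
    # Scalar smooth-step: 0 below -gamma/2, 1 above gamma/2, cubic in between.
    if t <= -gamma / 2.0:
        return 0.0
    if t >= gamma / 2.0:
        return 1.0
    return -2.0 / gamma**3 * t**3 + 1.5 / gamma * t + 0.5

def conditional_forward(x, W, O, gamma=1.0):
    """Conditional forward pass over a perfect binary tree of depth d.

    Nodes are indexed heap-style: the root is node 1 and node i has children
    2*i (left) and 2*i + 1 (right); nodes with index >= 2**d are leaves.
    W: (2**d, p) array; row i holds w_i for internal node i (row 0 unused).
    O: (2**d, k) array; row j holds the leaf vector o_l of leaf 2**d + j.
    Returns T(x), the reachable leaves (with their probabilities), and the
    fractional ancestors (with their activations).
    """
    num_internal = W.shape[0]          # equals 2**d; index 0 is unused
    output = np.zeros(O.shape[1])
    reachable, fractional = {}, {}
    stack = [(1, 1.0)]                 # pairs (node index, P(x -> node))
    while stack:
        i, prob = stack.pop()
        if i >= num_internal:          # leaf: accumulate prob * o_l
            reachable[i] = prob
            output += prob * O[i - num_internal]
            continue
        s = smooth_step(float(np.dot(W[i], x)), gamma)
        if 0.0 < s < 1.0:              # node belongs to the fractional set
            fractional[i] = s
        if s > 0.0:                    # descend left only if reachable
            stack.append((2 * i, prob * s))
        if s < 1.0:                    # descend right only if reachable
            stack.append((2 * i + 1, prob * (1.0 - s)))
    return output, reachable, fractional
\end{verbatim}
Theorem 1, stated next, makes the gradient computations over $\mathcal{F}$ and $\mathcal{R}$ precise.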
\begin{theorem} Define $\mu_1(x,i) = {\fracpartial{\mathcal{S}(\langle x, w_i \rangle)}{\langle x, w_i \rangle}}/{\mathcal{S}(\langle x, w_i \rangle)}$, $\mu_2(x,i) = {\fracpartial{\mathcal{S}(\langle x, w_i \rangle)}{\langle x, w_i \rangle}}/{(1 - \mathcal{S}(\langle x, w_i \rangle))}$, and $g(l) = P(\{ x \to l \}) \langle \fracpartial{L}{T}, o_{l} \rangle$. The gradients needed for backpropagation can be expressed as follows: \begin{align*} & \fracpartial{L}{x} = \sum_{i \in \mathcal{F}} w_i^T \Big[ \mu_1(x,i) \sum_{\mathclap{l \in \mathcal{R} | [l \swarrow i]}} g(l) - \mu_2(x,i) \sum_{\mathclap{l \in \mathcal{R} | [i \searrow l]}} g(l) \Big] \\ & \fracpartial{L}{w_i} = \begin{dcases} 0 & \hspace{-0.3cm} i \in \mathcal{F}^c \\ x^T \Big[ \mu_1(x,i) \sum_{\mathclap{l \in \mathcal{R} | [l \swarrow i]}} g(l) - \mu_2(x,i) \sum_{\mathclap{l \in \mathcal{R} | [i \searrow l]}} g(l) \Big]& \hspace{-0.2cm} \text{o.w.} \end{dcases} \\ & \fracpartial{L}{o_l} = \fracpartial{L}{T} P(\{ x \to l \}), \ \forall \ l \in \mathcal{L} \end{align*} \end{theorem} In Theorem 1, the quantities $\mu_1(x,i)$ and $\mu_2(x,i)$ can be obtained in $\mathcal{O}(1)$ since in Algorithm 1 we store $\langle x, w_i \rangle$ for every $i \in \mathcal{F}$. Moreover, $P(\{ x \to l \})$ is stored in Algorithm 1 for every reachable leaf. However, a direct evaluation of these gradients leads to a suboptimal time complexity because the terms $\sum_{{l \in \mathcal{R} | [l \swarrow i]}} g(l)$ and $ \sum_{{l \in \mathcal{R} | [i \searrow l]}} g(l)$ will be computed from scratch for every node $i \in \mathcal{F}$. Our conditional backward pass traverses a \textsl{fractional tree}, composed of only the nodes in $\mathcal{F}$ and $\mathcal{R}$, while deploying smart bookkeeping to compute these sums during the traversal and avoid recomputation. We define the {fractional tree} below. \begin{definition} \label{def:fractional} Let~$T_{\text{reachable}}$~be the tree traversed by the conditional forward pass (Algorithm 1). We define the fractional tree $T_{\text{fractional}}$ as the result of the following two operations: (i) remove every internal node $i \in \mathcal{F}^c$ from $T_{\text{reachable}}$ and (ii) connect every node with no parent to its closest ancestor. \end{definition} \textcolor{black}{In Section C.1 of the appendix, we provide an example of how the fractional tree is constructed.} \textcolor{black}{$T_{\text{fractional}}$ is a binary tree with $U$ leaves and $|\mathcal{F}|$ internal nodes, \textcolor{black}{each with exactly 2 children}. It can be readily seen that $|\mathcal{F}| = U - 1$; this relation is useful for analyzing the complexity of the conditional backward pass.} Note that $T_{\text{fractional}}$ can be constructed on-the-fly while performing the conditional forward pass (without affecting its complexity). In Algorithm 2, we present the conditional backward pass, which traverses the fractional tree once and returns $\fracpartial{L}{x}$ and any (potentially) non-zero entries in $\fracpartial{L}{O}$ and $\fracpartial{L}{W}$. \begin{algorithm}[htbp] \caption{Conditional Backward Pass} \label{alg:backward} \begin{algorithmic}[1] \STATE {\bfseries Input:} Sample $x \in \mathbb{R}^p$, tree parameters, and $\fracpartial{L}{T}$. \STATE {\bfseries Output:} $\fracpartial{L}{x}$ and (potential) non-zeros in $\fracpartial{L}{W}$ and $\fracpartial{L}{O}$. 
\STATE $\fracpartial{L}{x} = 0$ \STATE \COMMENT{For any node $i$, $i.sum\_g$ denotes $\sum_{l \in \mathcal{R} | i \in \mathcal{A}(l)} g(l)$} \STATE Traverse $T_{\text{fractional}}$ in post order: \begin{ALC@g} \STATE Denote the current node by $i$ \IF {$i$ is a leaf} \STATE $\fracpartial{L}{o_i} = \fracpartial{L}{T} P(\{ x \to i \})$ \STATE $i.sum\_g = g(i)$ \ELSE \STATE $a = \mu_1(x,i)$ $(left(i).sum\_g)$ \STATE $b = \mu_2(x,i)$ $(right(i).sum\_g)$ \STATE $\fracpartial{L}{x} \mathrel{+}= w_i^T (a - b)$ \STATE $\fracpartial{L}{w_i} = x^T (a - b)$ \STATE $i.sum\_g = left(i).sum\_g + right(i).sum\_g$ \ENDIF \end{ALC@g} \end{algorithmic} \end{algorithm} \textbf{Time Complexity: } The worst-case complexity of the algorithm is $\mathcal{O}(Up + Uk)$, whereas the best-case complexity is $\mathcal{O}(k)$ (corresponds to $U=1$), and in the worst case, the number of non-zero entries in the three gradients is $\mathcal{O}(Up + Uk)$---see the appendix for analysis. Thus, the complexity is optimal, in the sense that it matches the number of non-zero gradient entries, in the worst case. The worst-case complexity is generally lower than the $\mathcal{O}(Np + Uk)$ complexity of the conditional forward pass. This is because we always have $U = \mathcal{O}(N)$, and there can be many cases where $N$ grows faster than $U$. For example, consider a tree with only two reachable leaves ($U=2$) and where the root is the (only) fractional node, then $N$ grows linearly with the depth $d$. As long as $U$ is sub-exponential in $d$, Algorithm 2's complexity can be significantly lower than that of a dense backward pass whose complexity is $\mathcal{O}(2^d p + 2^d k)$. \textbf{Memory Complexity: } We store one scalar per node in the fractional tree (i.e., $i.sum\_g$ for every node $i$ in the fractional tree). Thus, the memory complexity is $\mathcal{O}(|\mathcal{F}| + U) = \mathcal{O}(U)$. If $\gamma$ is chosen so that $U$ is upper-bounded by a constant, then Algorithm 2 will require constant memory. \textbf{Connections to Backpropagation: } An interesting observation in our approach is that the conditional backward pass generally has a better time complexity than the conditional forward pass. This is usually impossible in standard backpropagation for NNs, as the forward and backward passes traverse the same computational graph \cite{Goodfellow-et-al-2016}. The improvement in complexity of the backward pass in our case is due to Algorithm 2 operating on the fractional tree, which can contain a significantly smaller number of nodes than the tree traversed by the forward pass. In the language of backpropagation, our fractional tree can be viewed as a ``simplified'' computational graph, where the simplifications are due to Theorem 1. \section{Experiments} We study the performance of TEL in terms of prediction, conditional computation, and compactness. We evaluate TEL as a standalone learner and as a layer in a NN, and compare to standard soft trees, GBDT, and dense layers. \textbf{Model Implementation: } TEL is implemented in TensorFlow 2.0 using custom C++ kernels for forward and backward propagation, along with a Keras Python-accessible interface. \textcolor{black}{The implementation is open source\textsuperscript{\ref{footnote:url}}}. \textbf{Datasets: } We use a collection of 26 classification datasets (binary and multiclass) from various domains (e.g., healthcare, genetics, and image recognition). 
23 of these are from the Penn Machine Learning Benchmarks (PMLB) \cite{Olson2017PMLB}, and the 3 remaining are CIFAR-10 \cite{krizhevsky2009learning}, MNIST \cite{lecun1998gradient}, and Fashion MNIST \cite{xiao2017fashion}. Details are in the appendix. \textbf{Tuning, Toolkits, and Details: } For all the experiments, we tune the hyperparameters using Hyperopt \cite{Bergstra2013} with the Tree-structured Parzen Estimator (TPE). We optimize for either AUC or accuracy with stratified 5-fold cross-validation. NNs (including TEL) were trained using Keras with the TensorFlow backend, using Adam \cite{kingma2014adam} and cross-entropy loss. As discussed in Section \ref{section:tree}, TEL is always preceded by a batch normalization layer. GBDT is from XGBoost \cite{xgboost}, Logistic regression and CART are from Scikit-learn \cite{scikit-learn}. Additional details are in the appendix. \label{label-experiments} \subsection{Soft Trees: Smooth-step vs. Logistic Activation} \label{sec:softexperiment} We compare the run time and performance of the smooth-step and logistic functions using 23 PMLB datasets. \textbf{Predictive Performance: } We fix the TEL architecture to 10 trees of depth 4. We tune the learning rate, batch size, and number of epochs (ranges are in the appendix). We assume the following parametric form for the logistic function $f(t) = (1+e^{-t/\alpha})^{-1}$, where $\alpha$ is a hyperparameter which we tune in the range $[10^{-4}, 10^{4}]$. The smooth-step's parameter $\gamma$ is tuned in the range $[10^{-4}, 1]$. Here we restrict the upper range of $\gamma$ to $1$ to enable conditional computation over the whole tuning range. While $\gamma$'s larger than 1 can lead to slightly better predictive performance in some cases, they can slow down training significantly. For tuning, Hyperopt is run for 50 rounds with AUC as the metric. After tuning, models with the best hyperparameters are retrained. We repeat the training procedure 5 times using random weight initializations. The mean test AUC along with its standard error (SE) are in Table \ref{table:activation}. \begin{table}[htbp] \footnotesize \centering \caption{Test AUC for the smooth-step and logistic functions (fixed TEL architecture). A $*$ indicates statistical significance based on a paired two-sided t-test at a significance level of $0.05$. Best results are in \textbf{bold}. AUCs on the 9 remaining datasets match and are hence omitted.} \label{table:activation} \begin{tabular}{@{}lll@{}} \toprule Dataset & Smooth-step & Logistic \\ \midrule ann-thyroid & $\bm{0.997} \pm 0.0001$ & $0.996 \pm 0.0006$ \\ breast-cancer-w. 
& $0.992 \pm 0.0015$ & $\bm{0.994} \pm 0.0002$ \\ churn & $0.897 \pm 0.0014$ & $\bm{0.898} \pm 0.0014$ \\ crx & $0.916 \pm 0.0025$ & $\bm{0.929^{*}} \pm 0.0021$ \\ diabetes & $\bm{0.832^{*}} \pm 0.0009$ & $0.816 \pm 0.0021$ \\ dna & $0.993 \pm 0.0004$ & $\bm{0.994^{*}} \pm 0.0$ \\ ecoli & $\bm{0.97^{*}} \pm 0.0004$ & $0.952 \pm 0.0038$ \\ flare & $0.78 \pm 0.0027$ & $\bm{0.784} \pm 0.0018$ \\ heart-c & $\bm{0.936} \pm 0.002$ & $0.927 \pm 0.0036$ \\ pima & $\bm{0.828^{*}} \pm 0.0005$ & $0.82 \pm 0.0003$ \\ satimage & $\bm{0.988^{*}} \pm 0.0002$ & $0.987 \pm 0.0002$ \\ solar-flare\_2 & $0.926 \pm 0.0002$ & $\bm{0.927^{*}} \pm 0.0007$ \\ vehicle & $0.956 \pm 0.0015$ & $\bm{0.965^{*}} \pm 0.0007$ \\ yeast & $\bm{0.876^{*}} \pm 0.0014$ & $0.86 \pm 0.0026$ \\ \midrule \textit{\# wins} & \textit{7} & \textit{7} \\ \bottomrule \end{tabular} \end{table} \raggedbottom The smooth-step outperforms the logistic function on 7 datasets (5 are statistically significant). The logistic function also wins on 7 datasets (4 are statistically significant). The two functions match on the rest of the datasets. The differences on the majority of the datasets are small (even when statistically significant), suggesting that using the smooth-step function does not hurt the predictive performance. However, as we will see next, the smooth-step has a significant edge in terms of computation time. \textbf{Training Time: } We measure the training time over $50$ epochs as a function of tree depth for both activation functions. We keep the same ensemble size ($10$) and use $\gamma = 1$ for the smooth-step as this corresponds to the worst-case training time (in the tuning range $[10^{-4}, 1]$), and we fix the optimization hyperparameters (batch size = 256 and learning rate = 0.1). We report the results for three of the datasets in Figure \ref{figure:time}; the results for the other datasets have very similar trends and are omitted due to space constraints. The results indicate a steep exponential increase in training time for the logistic activation after depth $6$. In contrast, the smooth-step has a slow growth, achieving over $10$x speed-up at depth $10$. \begin{figure*}[htbp] \centering \subfloat{{\includegraphics[width=5.8cm]{Figures/diabetes.png} }}% \hspace{-0.5cm} \subfloat{{\includegraphics[width=5.8cm]{Figures/spambase.png} }}% \hspace{-0.5cm} \subfloat{{\includegraphics[width=5.8cm]{Figures/yeast.png} }}% \caption{Training time (sec) vs. tree depth for the smooth-step and logistic functions, averaged over 5 repetitions.}% \label{figure:time} \end{figure*} \subsection{TEL vs. Gradient Boosted Decision Trees} \label{sec:telvsxgb} \paragraph{Predictive Performance: } We compare the predictive performance of TEL and GBDT on the 23 PMLB datasets, and we include L2-regularized logistic regression (LR) and CART as baselines. For a fair comparison, we use TEL as a standalone layer. For TEL and GBDT, we tune over the \# of trees, depth, learning rate, and L2 regularization. For TEL we also tune over the batch size, epochs, and $\gamma \in [10^{-4},1]$. For LR and CART, we tune the L2 regularization and depth, respectively. We use $50$ tuning rounds in Hyperopt with AUC as the metric. We repeat the tuning/testing procedures on 15 random training/testing splits. The results are in Table \ref{tab:auc}. As expected, no algorithm dominates on all the datasets. TEL outperforms GBDT on 9 datasets (5 are statistically significant). GBDT outperforms TEL on 8 datasets (7 of which are statistically significant). 
There were ties on the 6 remaining datasets; these typically correspond to easy tasks where an AUC of (almost) 1 can be attained. LR outperforms both TEL and GBDT on only 3 datasets with very marginal difference. Overall, the results indicate that TEL's performance is competitive with GBDT. Moreover, adding feature representation layers before TEL can potentially improve its performance further, e.g., see Section \ref{sec:telvdense} \begin{table*}[htbp] \centering \caption{Test AUC on 23 PMLB datasets. Averages over 15 random repetitions are reported along with the SE. A star ($\bm{*}$) indicates statistical significance based on a paired two-sided t-test at a significance level of 0.05. Best results are in \textbf{bold}. } \label{tab:auc} \footnotesize \begin{tabular}{@{}lllll@{}} \toprule Dataset & TEL & GBDT & L2 Logistic Reg. & CART \\ \midrule ann-thyroid & $0.996 \pm 0.0$ & $\bm{1.0^{*}} \pm 0.0$ & $0.92 \pm 0.002$ & $0.997 \pm 0.0$ \\ breast-cancer-wisconsin & $\bm{0.995^{*}} \pm 0.001$ & $0.992 \pm 0.001$ & $0.991 \pm 0.001$ & $0.929 \pm 0.004$ \\ car-evaluation & $\bm{1.0} \pm 0.0$ & $\bm{1.0} \pm 0.0$ & $0.985 \pm 0.001$ & $0.981 \pm 0.001$ \\ churn & $0.916 \pm 0.004$ & $\bm{0.92^{*}} \pm 0.004$ & $0.814 \pm 0.003$ & $0.885 \pm 0.004$ \\ crx & $0.911 \pm 0.005$ & $\bm{0.933^{*}} \pm 0.004$ & $0.916 \pm 0.005$ & $0.905 \pm 0.005$ \\ dermatology & $\bm{0.998} \pm 0.001$ & $\bm{0.998} \pm 0.001$ & $\bm{0.998} \pm 0.001$ & $0.962 \pm 0.005$ \\ diabetes & $\bm{0.831^{*}} \pm 0.006$ & $0.82 \pm 0.006$ & $0.824 \pm 0.008$ & $0.774 \pm 0.008$ \\ dna & $0.993 \pm 0.0$ & $\bm{0.994^{*}} \pm 0.0$ & $0.991 \pm 0.0$ & $0.964 \pm 0.001$ \\ ecoli & $0.97^{*} \pm 0.003$ & $0.962 \pm 0.003$ & $\bm{0.972} \pm 0.003$ & $0.902 \pm 0.007$ \\ flare & $0.732 \pm 0.009$ & $\bm{0.738} \pm 0.01$ & $0.736 \pm 0.009$ & $0.717 \pm 0.01$ \\ heart-c & $0.903 \pm 0.006$ & $0.893 \pm 0.008$ & $\bm{0.908} \pm 0.005$ & $0.829 \pm 0.012$ \\ hypothyroid & $0.971 \pm 0.003$ & $\bm{0.987^{*}} \pm 0.002$ & $0.93 \pm 0.005$ & $0.926 \pm 0.011$ \\ nursery & $\bm{1.0} \pm 0.0$ & $\bm{1.0} \pm 0.0$ & $0.916 \pm 0.001$ & $0.996 \pm 0.0$ \\ optdigits & $\bm{1.0} \pm 0.0$ & $\bm{1.0} \pm 0.0$ & $0.998 \pm 0.0$ & $0.958 \pm 0.001$ \\ pima & $0.831 \pm 0.008$ & $0.825 \pm 0.006$ & $\bm{0.832} \pm 0.008$ & $0.758 \pm 0.011$ \\ satimage & $\bm{0.99} \pm 0.0$ & $\bm{0.99} \pm 0.0$ & $0.955 \pm 0.001$ & $0.949 \pm 0.001$ \\ sleep & $0.925 \pm 0.0$ & $\bm{0.927^{*}} \pm 0.0$ & $0.889 \pm 0.0$ & $0.876 \pm 0.001$ \\ solar-flare\_2 & $\bm{0.925} \pm 0.002$ & $0.924 \pm 0.002$ & $0.92 \pm 0.002$ & $0.907 \pm 0.002$ \\ spambase & $0.986 \pm 0.001$ & $\bm{0.989^{*}} \pm 0.001$ & $0.972 \pm 0.001$ & $0.926 \pm 0.002$ \\ texture & $\bm{1.0} \pm 0.0$ & $\bm{1.0} \pm 0.0$ & $\bm{1.0} \pm 0.0$ & $0.974 \pm 0.001$ \\ twonorm & $\bm{0.998^{*}} \pm 0.0$ & $0.997 \pm 0.0$ & $\bm{0.998} \pm 0.0$ & $0.865 \pm 0.002$ \\ vehicle & $\bm{0.953^{*}} \pm 0.003$ & $0.931 \pm 0.002$ & $0.941 \pm 0.002$ & $0.871 \pm 0.004$ \\ yeast & $\bm{0.861} \pm 0.004$ & $0.859 \pm 0.004$ & $0.852 \pm 0.004$ & $0.779 \pm 0.005$ \\ \midrule \textit{\# wins} & \textit{12} & \textit{14} & \textit{6} & \textit{0} \\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[htbp] \centering \subfloat{{\includegraphics[width=5.8cm]{Figures/auc_params_plot_heart-c.png} }}% \hspace{-0.2cm} \subfloat{{\includegraphics[trim={0.7cm 0cm 0cm 0cm},clip,width=5.5cm]{Figures/auc_params_plot_pima.png} }}% \hspace{-0.5cm} 
\subfloat{{\includegraphics[width=5.8cm]{Figures/auc_params_plot_spambase.png} }}% \caption{Mean test AUC vs. \# of trees (15 trials). SE is shaded. TEL and GBDT have (roughly) the same \# of params/tree. }% \label{fig:aucvsnumtrees} \end{figure*} \begin{table*}[htbp] \centering \caption{Average and SE for test accuracy, loss and \# of params for CNN-Dense and CNN-TEL over $5$ random initializations. A star $\bm{*}$ indicates statistical significance based on a paired two-sided t-test at a level of $5\%$. Best values are in \textbf{bold}.} \footnotesize \label{table:densevtel} \begin{tabular}{@{}lcccccc@{}} \toprule & \multicolumn{3}{c}{CNN-Dense} & \multicolumn{3}{c}{CNN-TEL} \\ \midrule Dataset & Accuracy & Loss & \multicolumn{1}{c|}{\# Params} & Accuracy & Loss & \# Params \\ CIFAR10 & $0.7278 \pm 0.0047$ & $1.673 \pm 0.170$ & \multicolumn{1}{c|}{$7,548,362$} & $\bm{0.7296} \pm 0.0109$ & $\bm{1.202^{*}} \pm 0.011$ & $\bm{926,465}$ \\ MNIST & $0.9926 \pm 0.0002$ & $0.03620 \pm 0.00121$ & \multicolumn{1}{c|}{$5,830,538$} & $\bm{0.9930} \pm 9 \times 10^{-5}$ & $\bm{0.03379} \pm 0.00093$ & $\bm{699,585}$ \\ Fashion MNIST & $\bm{0.9299} \pm 0.0012$ & $0.6930 \pm 0.0291$ & \multicolumn{1}{c|}{$5,567,882$} & $0.9297 \pm 0.0012$ & $\bm{0.3247^{*}} \pm 0.0045$ & $\bm{699,585}$ \\ \bottomrule \end{tabular} \end{table*} \textbf{Compactness and Sensitivity: } We compare the number of trees and the sensitivity of TEL and GBDT on datasets from Table \ref{tab:auc} where both models achieve comparable AUCs---namely, the heart-c, pima and spambase datasets. With similar predictive performance, compactness can be an important factor in choosing one model over the other. For TEL, we use the models trained in Table \ref{tab:auc}. As for GBDT, for each dataset, we fix the depth so that the number of parameters per tree in GBDT (roughly) matches that of TEL. We tune over the main parameters of GBDT (50 iterations of Hyperopt, under the same parameter ranges as in Table \ref{tab:auc}). We plot the test AUC versus the number of trees in Figure \ref{fig:aucvsnumtrees}. On all datasets, the test AUC of TEL peaks at a significantly smaller number of trees compared to GBDT. For example, on pima, TEL's AUC peaks at 5 trees, whereas GBDT requires more than 100 trees to achieve comparable performance---this is more than a $20$x reduction in the number of parameters. Moreover, the performance of TEL is less sensitive to changes in the number of trees. These observations can be attributed to the joint optimization performed in TEL, which can lead to more expressive ensembles compared to the stage-wise optimization in GBDT. \subsection{TEL vs. Dense Layers in CNNs} \label{sec:telvdense} We study the potential benefits of replacing dense layers with TEL in CNNs, on the CIFAR-10, MNIST, and Fashion MNIST datasets. We consider 2 convolutional layers, followed by intermediate layers (max pooling, dropout, batch normalization), and finally dense layers; we refer to this as CNN-Dense. We also consider a similar architecture, where the final dense layers are replaced with a single dense layer followed by TEL; we refer to this model as CNN-TEL. We tune over the optimization hyperparameters, the number of filters in the convolutional layers, the number and width of the dense layers, and the different parameters of TEL (see the appendix for details). We run Hyperopt for 25 iterations with classification accuracy as the target metric. After tuning, the models are trained using 5 random weight initializations.
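For reference, the CNN-TEL architecture described above can be sketched in Keras as follows. The layer sizes below are placeholders (the tuned values differ per dataset), and the final dense layer is a stand-in for the TEL Keras layer available in our repository\textsuperscript{\ref{footnote:url}}; we keep a standard-layer stand-in so that the sketch runs without the custom op.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_tel(input_shape=(28, 28, 1), num_classes=10):
    """Schematic CNN-TEL model: two conv layers, intermediate layers,
    a single dense layer, batch normalization, then the tree ensemble layer.
    All layer widths here are illustrative placeholders."""
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.BatchNormalization(),   # TEL is always preceded by batch norm
        # The TEL layer from the tf_trees repository would go here; we use a
        # Dense stand-in so that this sketch runs without the custom C++ op.
        layers.Dense(num_classes),
    ])
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    return model
\end{verbatim}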
The classification accuracy and loss on the test set and the total number of parameters are reported in Table \ref{table:densevtel}. While the accuracies are comparable, CNN-TEL achieves a lower test loss on the three datasets, where the $28\%$ and $53\%$ relative improvements on CIFAR and Fashion MNIST are statistically significant. Since we are using cross-entropy loss, this means that TEL gives higher scores on average, when it makes correct predictions. Moreover, the number of parameters in CNN-TEL is $\sim 8$x smaller than CNN-Dense. This example also demonstrates how representation layers can be effectively leveraged by TEL---GBDT's performance is significantly lower on MNIST and CIFAR-10, e.g., see the comparisons in \citet{ponomareva2017compact}. \vspace{-0.1cm} \textcolor{black}{\section{Conclusion and Future Work}} We introduced the tree ensemble layer (TEL) for neural networks. The layer is composed of an additive model of differentiable decision trees that can be trained end-to-end with the neural network, using first-order methods. Unlike differentiable trees in the literature, TEL supports conditional computation, i.e., each sample is routed through a small part of the tree's architecture. This is achieved by using the smooth-step activation function for routing samples, along with specialized forward and backward passes for reducing the computational complexity. Our experiments indicate that TEL achieves competitive predictive performance compared to gradient boosted decision trees (GBDT) and dense layers, while leading to significantly more compact models. In addition, by effectively leveraging convolutional layers, TEL significantly outperforms GBDT on multiple image classification datasets. One interesting direction for future work is to equip TEL with mechanisms for exploiting feature sparsity, which can further speed up computation. Promising works in this direction include feature bundling \citep{ke2017lightgbm} and learning under hierarchical sparsity assumptions \citep{hazimeh2020learning}. Moreover, it would be interesting to study whether the smooth-step function, along with specialized optimization methods, can be an effective alternative to the logistic function in other machine learning models. \clearpage \section*{Acknowledgements} We would like to thank Mehryar Mohri for the useful discussions. Part of this work was done when Hussein Hazimeh was at Google Research. At MIT, Hussein acknowledges research funding from the Office of Naval Research (ONR-N000141812298). Rahul Mazumder acknowledges research funding from the Office of Naval Research (ONR-N000141812298, Young Investigator Award), the National Science Foundation (NSF-IIS-1718258), and IBM.
\section*{Organization of the Appendix} \begin{itemize} \item[\ref{app:proofs-pd}] \nameref{app:proofs-pd} \item[\ref{app:proof-relaxed}] \nameref{app:proof-relaxed} \item[\ref{app:proof-strict}] \nameref{app:proof-strict} \item[\ref{app:proofs-concentration}] \nameref{app:proofs-concentration} \item[\ref{app:proof-lb}] \nameref{app:proof-lb} \item[\ref{app:est-zeta}] \nameref{app:est-zeta} \end{itemize} \section{Estimating $\zeta$} \label{app:est-zeta} In this section, we show that $\zeta$ can be estimated up to error $0.2\zeta$ using a small number of samples. The formal guarantee is provided in the following theorem. \begin{theorem} There exists an algorithm that, with probability at least $1-\delta$, halts, takes \[ {O}\left(\frac{c_{\max}{\log\left(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)\delta\zeta}\right)}}{(1-\gamma)^3\zeta^2}\right) \] samples per state-action pair, and outputs an estimator $\hat{\zeta}$ such that \[ |\hat{\zeta} - \zeta| \le 0.2\zeta. \] \end{theorem} \begin{proof} Let $\zeta_i = 2^{-i}/(1-\gamma)$ for $i=0, 1, 2, \ldots$, and \[ N_i = \frac{c_{\max}C_i(\delta)}{(1-\gamma)^3\zeta_i^2}, \] where \[ C_i(\delta) = {c'\log\left(2\frac{|\mathcal{S}||\mathcal{A}|i^2}{(1-\gamma)\zeta_i\delta}\right)} \] for some constant $c'$. We start by running the algorithm in \cite{li2020breaking} with $N_i$ samples per state-action pair on the MDP with $c$ as the reward, for $i=0, 1, \ldots$, and stop as soon as the following is satisfied: \begin{align*} |\hat{V}_{c,i}^*(\rho) - b| \ge 9\zeta_i, \end{align*} where $\hat{V}_{c,i}^*$ is the empirical optimal value function obtained using $N_i$ samples. We then output $\hat{\zeta} = |\hat{V}_{c,i}^*(\rho) - b|$. Next we show that the algorithm halts. Let $\mathcal{E}_i$ be the event that, at iteration $i$, \[ |\hat{V}_{c,i}^*(\rho) - {V}_{c}^*(\rho)|\le \zeta_i. \] Thus, by Theorem~1 of \cite{li2020breaking}, \[ \Pr[\mathcal{E}_i] \ge 1-\frac{\delta}{2i^2}. \] Next, let $i^*$ be such that $0.05\zeta\le \zeta_{i^*} < 0.1\zeta$. Hence, if $\mathcal{E}_{i^*}$ happens, then \[ |\hat{V}_{c,i^*}^*(\rho) - {V}_{c}^*(\rho)|\le \zeta_{i^*} \] and \[ |{V}_{c}^*(\rho) - b| - |\hat{V}_{c,i^*}^*(\rho) - {V}_{c}^*(\rho)| \le |\hat{V}_{c,i^*}^*(\rho) - b|. \] Hence, on $\mathcal{E}_{i^*}$, \[ |\hat{V}_{c,i^*}^*(\rho) - b|\ge \zeta - \zeta_{i^*}\ge 0.9\zeta \ge 9\zeta_{i^*}, \] and the algorithm halts no later than iteration $i^*$. Next, suppose the algorithm halts at some $i\le i^*$; then on $\mathcal{E}_{i}$, we have \[ |\hat{V}_{c,i}^*(\rho) - {V}_{c}^*(\rho)|\le \zeta_{i}, \] \[ |\hat{V}_{c,i}^*(\rho) - b| \ge 9 \zeta_{i} \ge 9|\hat{V}_{c,i}^*(\rho) - {V}_{c}^*(\rho)|, \] and \[ |(|\hat{V}_{c,i}^*(\rho) - b| - |{V}_{c}^*(\rho) - b|)| \le |\hat{V}_{c,i}^*(\rho) - {V}_{c}^*(\rho)|\le |\hat{V}_{c,i}^*(\rho) - b|/9. \] Since $\zeta = |{V}_{c}^*(\rho) - b|$, we have \[ |\hat{\zeta} - \zeta| \le \hat{\zeta}/9\implies \zeta \ge (8/9)\hat{\zeta} ~ \text{and}~ |\hat{\zeta} - \zeta| \le {\zeta}/8. \] Thus, on the event $\mathcal{E} = \mathcal{E}_1\cap \mathcal{E}_2\cap \cdots \cap \mathcal{E}_{i^*}$, which happens with probability at least \[ 1-\sum_{i=1}^{i^*}\frac{\delta}{2i^2} \ge 1-\frac{\pi^2\delta}{12}, \] we have $|\hat{\zeta} - \zeta| \le \zeta / 8$, proving correctness. We now consider the overall sample complexity.
Suppose $\mathcal{E}$ happens; then the number of samples consumed is upper bounded by \[ \sum_{i=1}^{i^*} N_i \le \frac{c''c_{\max}C_{i^*}(\delta)}{(1-\gamma)^3\zeta^2} =\frac{c''c'c_{\max}{\log\left(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)\delta\zeta}\right)}}{(1-\gamma)^3\zeta^2}, \] for some constant $c''$, completing the proof. \end{proof} \section{Bounding the concentration terms} \label{sec:concentration} We have seen that proving~\cref{thm:ub-relaxed} and~\cref{thm:ub-strict} requires bounding the concentration terms in~\cref{eq:conc-relaxed} and~\cref{eq:conc-strict} respectively. In this section, we detail the techniques used to achieve these bounds. Our approach requires reasoning about a general unconstrained MDP $M_{\alpha} = (\mathcal{S}, \mathcal{A}, \mathcal{P}, \gamma, \alpha)$ with the same state-action space, transition probabilities and discount factor as the CMDP in~\cref{eq:true-CMDP}, but with rewards equal to $\alpha \in [0, \alpha_{\max}]$. Analogously, we define the empirical MDP $\hat{M}_{\alpha} = (\mathcal{S}, \mathcal{A}, \hat{\mathcal{P}}, \gamma, \alpha)$, where the empirical transition matrix $\hat{\mathcal{P}}$ is the same as that of the empirical CMDP in~\cref{eq:emp-CMDP}. Similarly, we define the MDP $M_{\beta} = (\mathcal{S}, \mathcal{A}, \mathcal{P}, \gamma, \beta)$ (and its empirical counterpart $\hat{M}_{\beta}$) with rewards $\beta \in [0, \beta_{\max}]$. Note that the rewards $\alpha$ and $\beta$ are independent of the sampling of the transition matrix. The corresponding value functions for policy $\pi$ in $M_\alpha$ and $\hat{M}_\alpha$ (and $M_\beta$ and $\hat{M}_\beta$) are denoted as $\rewarda{\pi}$ and $\rewardahat{\pi}$ (and $\rewardb{\pi}$ and $\rewardbhat{\pi}$) respectively, with the optimal value functions denoted as $\rewarda{*}$ and $\rewardahat{*}$ (and $\rewardb{*}$ and $\rewardbhat{*}$) respectively. The action-value function in $M_\alpha$ for policy $\pi$ and state-action pair $(s,a)$ is denoted as ${Q}^\pi_{\alpha}(s,a)$, and analogously for $\hat{M}_\alpha$. For the subsequent technical results, we require that $\hat{M}_\alpha$ satisfy the following gap condition~\citep{li2020breaking}: \begin{definition}[$\iota$-Gap Condition] MDP $\hat{M}_\alpha$ satisfies the $\iota$-gap condition if $\forall s$, $\hat{V}^*_{\alpha}(s) - \max_{a': a' \neq \hat{\pi}^*_{\alpha}(s)}\hat{Q}^*_{\alpha}(s,a') \ge \iota$, where $\hat{\pi}^*_{\alpha} := \argmax_{\pi} \hat{V}^\pi_{\alpha}$ and $\hat{\pi}^*_{\alpha}(s) = \argmax_{a} \hat{Q}^*_{\alpha}(s,a)$ is the optimal action in state $s$. \label{def:gap-condition} \end{definition} Intuitively, the gap condition states that there is a unique optimal action at each state, and that there is a gap between the performance of the best action and that of the second-best action. With this gap condition, we use techniques from~\citet{li2020breaking} to prove the following lemma in~\cref{app:proofs-concentration}. \begin{restatable}{lemma}{main} \label{lemma:main} Define $\hat{\pi}^*_\alpha := \argmax_{\pi} \rewardahat{\pi}$.
If (i) $\mathcal{E}$ is the event that the $\iota$-gap condition in~\cref{def:gap-condition} holds for $\hat{M}_\alpha$, and (ii) for $\delta \in (0,1)$ and $C(\delta) = 72 \log \left( \frac{16 \alpha_{\max} S A \log\left(\nicefrac{e}{1 - \gamma}\right)}{(1 - \gamma)^2 \, \iota \, \delta} \right)$, the number of samples per state-action pair satisfies $N \geq \frac{4 \, C(\delta)}{1-\gamma}$, then with probability at least $\Pr[\mathcal{E}] - \delta/10$, \[ \norminf{\rewardbhat{\hat{\pi}^*_\alpha} - \rewardb{\hat{\pi}^*_\alpha}} \leq \sqrt{ \frac{C(\delta)}{N\cdot (1-\gamma)^3 }} \norminf{\beta}. \] \end{restatable} \vspace{-1ex} Hence, for the policy $\hat{\pi}^*_\alpha$, we can obtain a concentration result in another MDP $\hat{M}_{\beta}$ with an independent reward function $\beta$ and the same empirical transition matrix $\hat{\mathcal{P}}$. We wish to use the above lemma for the unconstrained MDP formed at every iteration of the primal update in~\cref{eq:primal-update}. In particular, for a given $\lambda_t$, we will use~\cref{lemma:main} with $\alpha = r_p + \lambda_t c$ and $\beta = r_p$. Doing so will immediately give us a bound on $\norminf{\rewardrp{\hat{\pi}_t} - \rewardrphat{\hat{\pi}_t}}$ and hence on $\abs{\rewardp{\hat{\pi}_t} - \rewardhatp{\hat{\pi}_t}}$. In order to use~\cref{lemma:main}, we require the unconstrained MDP $\hat{M}_{r_p + \lambda c}$ to satisfy the gap condition in~\cref{def:gap-condition} for any $\lambda \in \Lambda$. This is achieved by the perturbation of the rewards in Line 3 of~\cref{alg:cmdp-generative}. Specifically, using~\citet[Lemma 6]{li2020breaking} with a union bound over $\Lambda$, we prove (in~\cref{lemma:gap} in~\cref{app:proofs-concentration}) that with probability $1 - \delta/10$, $\hat{M}_{r_p + \lambda c}$ satisfies the gap condition in~\cref{def:gap-condition} with $\iota = \frac{\omega \, \delta \, (1-\gamma)}{30 \, |\Lambda||S||A|^2}$ for every $\lambda \in \Lambda$. This allows us to use~\cref{lemma:main} with $\alpha = r_p + \lambda_t c$ for all $t \in [T]$, and with $\beta = r_p$ and $\beta = c$. In the following theorem, we obtain a concentration result for each $\hat{\pi}_t$ and hence for the mixture policy $\bar{\pi}_{T}$. \begin{restatable}{theorem}{mainconcentrationemp} \label{thm:main-concentration-emp} For $\delta\in(0,1)$, $\omega \leq 1$ and $C(\delta) = 72 \log \left(\frac{16 (1 + U + \omega) \, S A \log\left(\nicefrac{e}{1 - \gamma}\right)}{(1 - \gamma)^2 \, \iota \, \delta} \right)$ where $\iota = \frac{\omega \, \delta \, (1-\gamma) \, \varepsilon_{\text{\tiny{l}}}}{30 \, U |S||A|^2}$, if $N \geq \frac{4 \, C(\delta)}{1-\gamma}$, then for the policy $\bar{\pi}_{T}$ output by~\cref{alg:cmdp-generative}, with probability at least $1 - \delta/5$, \[ \abs{\rewardp{\bar{\pi}_{T}} - \rewardhatp{\bar{\pi}_{T}}} \leq 2 \sqrt{ \frac{C(\delta)}{N \cdot (1-\gamma)^3 }} \quad \text{;} \quad \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq \sqrt{ \frac{C(\delta)}{N \cdot (1-\gamma)^3 }}. \] \end{restatable} \cref{eq:conc-relaxed,eq:conc-strict} also require proving concentration bounds for the fixed policies $\pi^*$ and $\pi^*_c$ (which do not depend on the data). This can be done by directly using~\citet[Lemma 1]{li2020breaking}. Specifically, we prove the following lemma in~\cref{app:proofs-concentration}.
\begin{restatable}{lemma}{mainconcentrationopt} \label{lemma:main-concentration-opt} For $\delta \in (0,1)$, $\omega \leq 1$ and $C'(\delta) = 72 \log \left(\frac{4 |S| \log\left(\nicefrac{e}{1-\gamma}\right)}{\delta}\right)$, if $N \geq \frac{4 \, C'(\delta)}{1 - \gamma}$ and $B(\delta,N) := \sqrt{\frac{C'(\delta)}{(1 - \gamma)^3 N}}$, then with probability at least $1 - 3 \delta$, \begin{align*} \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} & \leq 2 B(\delta,N) \, \text{;} \, \abs{\const{\pi^*} - \consthat{\pi^*}} \leq B(\delta,N) \, \text{;} \, \abs{\const{\pi^*_c} - \consthat{\pi^*_c}} \leq B(\delta,N). \end{align*} \end{restatable} Using~\cref{thm:main-concentration-emp} and~\cref{lemma:main-concentration-opt}, we can bound each term in~\cref{eq:conc-relaxed} and~\cref{eq:conc-strict}, completing the proof of~\cref{thm:ub-relaxed} and~\cref{thm:ub-strict} respectively. In the next section, we prove a lower-bound on the sample-complexity in the strict feasibility setting. \section{Discussion} \label{sec:discussion} We proposed a model-based primal-dual algorithm for planning in CMDPs. Via upper and lower bounds, we proved that our algorithm is near minimax optimal for both the relaxed and strict feasibility settings. Our results demonstrate that solving CMDPs is as easy as solving MDPs when small constraint violations are allowed, but inherently more difficult when we demand zero constraint violation. Algorithmically, we required a specific primal-dual approach that involved solving a sequence of MDPs. In contrast, model-based approaches for MDPs~\citep{agarwal2020model,li2020breaking} allow the use of any black-box planner. In the future, we aim to extend our sample complexity results to black-box CMDP solvers. \section{Introduction} \label{sec:introduction} Common reinforcement learning (RL) algorithms focus on optimizing an unconstrained objective, and have found applications in games such as Atari~\citep{mnih2015human} or Go~\citep{silver2016mastering}, robot manipulation tasks~\citep{tan2018sim,zeng2020tossingbot} or clinical trials~\citep{schaefer2005modeling}. However, many applications require the planning agent to satisfy constraints -- for example, in wireless sensor networks~\citep{buratti2009overview} there is a constraint on average power consumption. More generally, in the constrained Markov decision process (CMDP) framework, the goal is to find a policy that maximizes the value associated with a reward function, subject to the policy achieving a return (for a second reward function) that exceeds an a priori determined threshold \citep{altman1999constrained}. There has been substantial work addressing the planning problem to find a near-optimal policy in a known CMDP~\citep{borkar2005actor,borkar2014risk, tessler2018reward,paternain2019constrained,achiam2017constrained, xu2021crpo}. However, since the CMDP is unknown in most practical applications, we consider the problem of finding a near-optimal policy in this more challenging setting. There have been multiple recent approaches to obtain a near-optimal policy in CMDPs in the regret-minimization or PAC-RL settings~\citep{efroni2020exploration, zheng2020constrained,brantley2020constrained,kalagarla2020sample,wachi2020safe, ding2021provably, gattami2021reinforcement,hasan2021model, chen2021primal}. These works tackle the exploration, estimation, and planning problems simultaneously.
On the other hand, recent works~\citep{hasan2021model, wei2021provably,bai2021achieving} consider an easier, but even more fundamental problem of obtaining a near-optimal policy with access to a simulator or \emph{generative model}~\citep{kearns1999finite,kakade2003sample,agarwal2020model}. In particular, these works assume that the transition probabilities in the underlying CMDP are unknown, but the planner has access to a sampling oracle (the generative model) that returns a sample of the next state when given any state-action pair as input. This is the problem setting we consider and \emph{aim to obtain matching upper and lower bounds on the sample complexity of planning in CMDPs with access to a generative model}. Given a target error $\epsilon > 0$, the approximate CMDP objective is to return a policy that achieves a cumulative reward within an $\epsilon$ additive error of the optimal policy in the CMDP. Previous work can be classified into two categories based on how it tackles the constraint -- for the easier problem that we term \emph{relaxed feasibility}, the policy returned by an algorithm is allowed to violate the constraint by at most $\epsilon$. On the other hand, for the more difficult \emph{strict feasibility} problem, the returned policy is required to strictly satisfy the constraint and achieve zero constraint violation. Except for the recent works of~\citet{wei2021provably} and \citet{bai2021achieving}, most provably efficient approaches including those in the regret-minimization and PAC-RL settings consider the relaxed feasibility setting. For this problem, the best model-based algorithm requires $\tilde{O}\left(\frac{S^2 A}{(1 - \gamma)^3 \epsilon^2}\right)$ samples to return an $\epsilon$-optimal policy in an infinite-horizon $\gamma$-discounted CMDP with $S$ states and $A$ actions \citep{hasan2021model}, while the best model-free approach requires $\tilde{O}\left(\frac{S A}{(1 - \gamma)^5 \epsilon^2}\right)$ samples for achieving the objective~\citep{ding2021provably}. On the other hand, the best known upper bounds for the strict feasibility problem are achieved by the model-free algorithm of~\citet{bai2021achieving}. In particular, this algorithm requires $\tilde{O}\left(\frac{L\cdot S A}{(1 - \gamma)^4 \epsilon^2}\right)$ samples~\citep[Theorem 2]{bai2021achieving} that depends on a potentially large parameter $L$ that is not explicitly bounded by~\citet{bai2021achieving}. Importantly, there are no lower bounds characterizing the difficulty of either the relaxed or strict feasibility problems (except in degenerate cases where the constraint is always satisfied and the CMDP problem reduces to an unconstrained MDP). To get an indication of what the optimal bounds might be, it is instructive to compare these results to the unconstrained MDP setting. For unconstrained MDPs with access to a generative model, both model-based~\citep{agarwal2020model,li2020breaking} and model-free approaches~\citep{sidford2018near} can return an $\epsilon$-optimal policy within near-optimal $\tilde{\Theta} \left(\frac{S A}{(1 - \gamma)^3 \epsilon^2}\right)$ sample-complexity~\citep{azar2012sample}. Hence, compared to the sample-complexity for unconstrained MDPs, the best-known upper-bounds for CMDPs are worse for both the relaxed and strict feasibility settings. However, it is unclear whether solving CMDPs is inherently more difficult than unconstrained MDPs. We resolve these questions for both the relaxed and strict feasibility settings, and make the following contributions. 
\textbf{Generic model-based algorithm}: In~\cref{sec:method}, we provide a generic model-based primal-dual algorithm (\cref{alg:cmdp-generative}) that can be used to achieve both the relaxed and strict feasibility objectives (with appropriate parameter settings). The proposed algorithm requires solving a sequence of unconstrained empirical MDPs using any black-box MDP planner. \textbf{Upper-bound on sample complexity under relaxed feasibility}: In~\cref{sec:ub-relaxed}, we prove that with a specific set of parameters,~\cref{alg:cmdp-generative} uses no more than $\tilde{O}\left(\frac{S A}{(1 - \gamma)^3 \epsilon^2}\right)$ samples to achieve the relaxed feasibility objective. This improves upon the bounds of~\citet{hasan2021model} and matches the lower-bound in the easier unconstrained MDP setting, implying that our bounds are near-optimal. Our result indicates that, under relaxed feasibility, solving CMDPs is as easy as solving unconstrained MDPs. To the best of our knowledge, these are the first such bounds. \textbf{Upper-bound on sample-complexity under strict feasibility}: In~\cref{sec:ub-strict}, we prove that with a specific set of parameters,~\cref{alg:cmdp-generative} uses no more than $\tilde{O}\left(\frac{S A}{(1 - \gamma)^5 \zeta^2 \epsilon^2}\right)$ samples to achieve the strict feasibility objective. Here $\zeta \in \left(0, \nicefrac{1}{1 - \gamma} \right]$ is the problem-dependent \emph{Slater constant} that characterizes the size of the feasible region and influences the difficulty of the problem. Unlike~\citet{bai2021achieving}, our bounds do not depend on additional (potentially large) problem-dependent quantities. \textbf{Lower-bound on sample-complexity under strict feasibility}: In~\cref{sec:lb-strict}, we prove a matching problem-dependent $\Omega \left(\frac{SA}{(1 - \gamma)^5 \, \zeta^2 \, \epsilon^2} \right)$ lower bound on the sample-complexity in the strict feasibility setting. Our results thus demonstrate that the proposed model-based algorithm is near minimax optimal. Furthermore, our bounds indicate that, under strict feasibility, (i) solving CMDPs is inherently more difficult than solving unconstrained MDPs, and (ii) the problem hardness (in terms of the sample-complexity) increases as $\zeta$ (and hence the size of the feasible region) decreases. To the best of our knowledge, these are the first results characterizing the difficulty of solving CMDPs with access to a generative model, and they demonstrate a separation between the relaxed and strict feasibility settings. \textbf{Overview of techniques}: For proving the upper bounds, we use a specific primal-dual algorithm that reduces the CMDP planning problem to solving multiple unconstrained MDPs. Specifically, by using a strong-duality argument, we show that we can obtain an optimal CMDP policy by averaging the optimal policies of a specific sequence of MDPs. For each MDP in this sequence, we use the model-based techniques from~\citet{agarwal2020model, li2020breaking} to prove concentration results for data-dependent policies. This allows us to prove concentration for the optimal data-dependent policy in the CMDP, and subsequently bound the sample complexity for both the relaxed and strict feasibility problems. For the lower bound, we modify the MDP hard instances~\citep{azar2013minimax,xiao2021sample} to handle a constraint reward. This makes the resulting gadgets significantly more complex than those required for MDPs, but we show that similar likelihood arguments can be used to prove the lower-bound.
\section{Lower-bound under strict feasibility} \label{sec:lb-strict} \begin{figure}[!ht] \subfigure { \includegraphics[width=0.9\textwidth]{LB.png} } \caption{ The lower-bound instance consists of CMDPs with $S = 2^m - 1$ (for some integer $m > 0$) states and $A$ actions. We consider $SA + 1$ CMDPs -- $M_0$ and $M_{i,a}$ ($i \in \{1,\ldots, S\}$, $a \in \{1,\ldots, A\}$) that share the same structure shown in the figure. For each CMDP, $o_0$ is the fixed starting state and there is a deterministic path of length $m+1$ from $o_0$ to each of the $S + 1$ states -- $s_i$ (for $i \in \{0, 1, \ldots, S\}$). Except for states $\tilde{s}_i$, the transitions in all other states are deterministic. For $i \neq 0$, for action $a \in \mathcal{A}$ in state $\tilde{s}_i$, the probability of staying in $\tilde{s}_i$ is $p_{i,a}$, while that of transitioning to state $z_i$ is $1 - p_{i,a}$. There is only one action $a_0$ in $\tilde{s}_0$ and the probability of staying in $\tilde{s}_0$ is $p_{0,a_0}$, while that of transitioning to state $z_0$ is $1 - p_{0,a_0}$. The CMDPs $M_0$ and $M_{i,a}$ only differ in the values of $p_{i,a}$. The rewards $r$ and constraint rewards $c$ are the same in all CMDPs and are denoted in green and red respectively for each state and action. } \label{fig:lb-main} \end{figure} \vspace{1ex} We define an algorithm to be $(\epsilon, \delta)$-sound if it outputs a policy $\hat{\pi}$ such that with probability $1 - \delta$, $V_r^{*}(\rho) - V_r^{\hat{\pi}}(\rho) \leq \epsilon$ and $V_c^{\hat{\pi}}(\rho) \geq b$, i.e., the algorithm achieves the strict feasibility objective in~\cref{eq:strict-objective}. We prove a lower bound on the number of samples required by any $(\epsilon, \delta)$-sound algorithm on the CMDP instance in~\cref{fig:lb-main}. For this instance, with a specific setting of the rewards and probabilities $p_0 < \bar{p} < p_1$, we prove that any $(\epsilon, \delta)$-sound algorithm requires at least $\Omega \left(\frac{\ln(|\mathcal{S}| |\mathcal{A}|/4 \delta)}{\epsilon^2 \zeta^2 (1- \gamma)^5} \right)$ samples to distinguish between $M_0$ and $M_{i,a}$. In particular, we prove the following theorem in~\cref{app:proof-lb}. \begin{restatable}{theorem}{lbstrict} There exist constants $\gamma_0 \in (1-1/\log(|\mathcal{S}|),1)$, $0\le \epsilon_0 \le \frac{1}{(1 - \gamma)} \, \min\left\{1, \frac{\gamma}{(1 - \gamma) \zeta} \right\}$, $\delta_0\in (0, 1)$, such that, for any $\gamma \in (\gamma_0, 1), \epsilon \in (0, \epsilon_0), \delta \in (0,\delta_0)$, any $(\epsilon, \delta)$-sound algorithm requires $\Omega \left(\frac{SA \ln(1/4 \delta)}{\epsilon^2 \zeta^2 (1- \gamma)^5} \right)$ samples from the generative model in the worst case. \label{thm:lb-strict} \end{restatable} The above lower bound matches the upper bound in~\cref{thm:ub-strict} and proves that~\cref{alg:cmdp-generative} is near minimax optimal in the strict feasibility setting. It also demonstrates that solving CMDPs under strict feasibility is inherently more difficult than solving unconstrained MDPs or CMDPs in the relaxed feasibility setting. Finally, we can conclude that the problem becomes more difficult (requires more samples) as the Slater constant $\zeta$ decreases and the feasible region shrinks. 
\section{Methodology} \label{sec:method} \begin{algorithm}[!t] \caption{Model-based algorithm for CMDPs with generative model} \label{alg:cmdp-generative} \textbf{Input}: $\mathcal{S}$ (state space), $\mathcal{A}$ (action space), $r$ (rewards), $c$ (constraint rewards), $\zeta$ (Slater constant), $N$ (number of samples), $b'$ (constraint RHS), $\omega$ (perturbation magnitude), $U$ (projection upper bound), $\varepsilon_{\text{\tiny{l}}}$ (epsilon-net resolution), $T$ (number of iterations), $\lambda_0 = 0$ (initialization). \\ For each state-action $(s,a)$ pair, collect $N$ samples from $\mathcal{P}(.|s,a)$ and form $\hat{\mathcal{P}}$. \\ Perturb the rewards to form vector $r_p(s,a) := r(s,a) + \xi(s,a)$ where $\xi(s,a) \sim \mathcal{U}[0, \omega]$. \\ Form the empirical CMDP $\hat{M} = \langle \mathcal{S}, \mathcal{A}, \hat{\mathcal{P}}, r_p, c, b', \rho, \gamma \rangle$. \\ Form the epsilon-net $\Lambda = \{0, \varepsilon_{\text{\tiny{l}}}, 2 \varepsilon_{\text{\tiny{l}}}, \ldots, U\}$. \\ \For{$t \leftarrow 0$ \KwTo $T-1$}{ Update the policy by solving an unconstrained MDP: $\hat{\pi}_t = \argmax \hat{V}^{\pi}_{r_p + \lambda_t c}$. \\ Update the dual variable: $\lambda_{t+1} = \mathcal{R}_{\Lambda}\left[\mathbb{P}_{[0,U]} \left[\lambda_t - \eta \, (\consthat{\hat{\pi}_t} - b')\right]\right]$. } \textbf{Output}: Mixture policy $\bar{\pi}_T = \frac{1}{T} \sum_{t = 0}^{T-1} \hat{\pi}_t$. \end{algorithm} We will use a model-based approach~\citep{agarwal2020model,li2020breaking,hasan2021model} for achieving the objectives in~\cref{eq:relaxed-objective} and~\cref{eq:strict-objective}. In particular, for each $(s,a)$ pair, we collect $N$ independent samples from $\mathcal{P}(\cdot|s,a)$ and form an empirical transition matrix $\hat{\mathcal{P}}$ such that $\hat{\mathcal{P}}(s'|s,a) = \frac{N(s'|s,a)}{N}$, where $N(s'|s,a)$ is the number of samples that have transitions from $(s,a)$ to $s'$. These estimated transition probabilities are used to form an empirical CMDP. Due to a technical requirement (which we will clarify in~\cref{sec:concentration}), we add a small random perturbation to the rewards in the empirical CMDP\footnote{Similar to MDPs~\citep{li2020breaking}, we can instead perturb the $Q$ function while planning in the empirical CMDP.}. In particular, for each $s \in \mathcal{S}$ and $a \in \mathcal{A}$, we define the perturbed rewards $r_p(s,a) := r(s,a) + \xi(s,a)$ where $\xi(s,a) \sim \mathcal{U}[0, \omega]$ are i.i.d. uniform random variables. Finally, compared to~\cref{eq:true-CMDP}, we will require solving the empirical CMDP with a constraint right-hand side equal to $b'$. Note that setting $b' < b$ corresponds to loosening the constraint, while $b' > b$ corresponds to tightening the constraint. This completes the specification of the empirical CMDP $\hat{M}$ that is defined by the tuple $\langle \mathcal{S}, \mathcal{A}, \hat{\mathcal{P}}, r_p, c, b', \rho, \gamma \rangle$. For $\hat{M}$, the corresponding reward value function and constraint value function for policy $\pi$ are denoted by $\rewardhatp{\pi}$ and $\consthat{\pi}$ respectively. In order to fully instantiate $\hat{M}$, we require setting the values of $\omega$ (the magnitude of the perturbation) and $b'$ (the constraint right-hand side). This depends on the specific setting (relaxed vs. strict feasibility) and we do this in~\cref{sec:ub-relaxed,sec:ub-strict} respectively. 
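To make the model-estimation step concrete, the following is a minimal sketch (in Python/NumPy) of forming $\hat{\mathcal{P}}$ from $N$ samples per state-action pair and of perturbing the rewards; the \texttt{sample\_next\_state} oracle stands in for the generative model and, like the variable names, is an assumption of this illustration rather than part of the algorithm's specification.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def build_empirical_cmdp(sample_next_state, S, A, N, r, omega):
    """Form P_hat from N generative-model calls per (s, a) and perturb the rewards.

    sample_next_state(s, a) is assumed to return one draw from P(.|s, a)."""
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(N):
                P_hat[s, a, sample_next_state(s, a)] += 1.0
    P_hat /= N                                  # P_hat(s'|s,a) = N(s'|s,a) / N
    xi = rng.uniform(0.0, omega, size=(S, A))   # i.i.d. U[0, omega] perturbations
    r_p = r + xi                                # r_p(s,a) = r(s,a) + xi(s,a)
    return P_hat, r_p

# Example oracle for a known tabular model P (purely for illustration):
# sample_next_state = lambda s, a: rng.choice(S, p=P[s, a])
\end{verbatim}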
We compute the optimal policy for the empirical CMDP $\hat{M}$ as follows: \begin{align} \hat{\pi}^* \in \argmax \rewardhatp{\pi} \, \text{s.t.} \, \consthat{\pi} \geq b' \label{eq:emp-CMDP} \end{align} In contrast to~\citet{agarwal2020model,li2020breaking}, who consider model-based approaches for unconstrained MDPs and can solve the resulting empirical MDP using any black-box approach, we require solving~\cref{eq:emp-CMDP} using a specific primal-dual approach that we outline next. Using this algorithm enables us to prove optimal sample complexity bounds under both relaxed and strict feasibility. First, observe that~\cref{eq:emp-CMDP} can be written as an equivalent saddle-point problem -- $\max_{\pi} \min_{\lambda \geq 0} \left[\rewardhatp{\pi} + \lambda \left(\consthat{\pi} - b' \right) \right]$, where $\lambda \in \mathbb{R}$ corresponds to the Lagrange multiplier for the constraint. The solution to this saddle-point problem is $(\hat{\pi}^*, \lambda^*)$ where $\hat{\pi}^*$ is the optimal empirical policy and $\lambda^*$ is the optimal Lagrange multiplier. We solve the above saddle-point problem iteratively, by alternately updating the policy (primal variable) and the Lagrange multiplier (dual variable). If $T$ is the total number of iterations of the primal-dual algorithm, we define $\hat{\pi}_t$ and $\lambda_t$ to be the primal and dual iterates for $t \in \{0, 1, \dots, T-1\}$ (we also write $[T] := \{1,\dots,T\}$). The primal update at iteration $t$ is given as: \begin{align} \hat{\pi}_t = \argmax \left[\rewardhatp{\pi} + \lambda_t \consthat{\pi} \right] = \argmax \hat{V}^{\pi}_{r_p + \lambda_t c}. \label{eq:primal-update} \end{align} Hence, iteration $t$ of the algorithm requires solving an unconstrained MDP with a reward equal to $r_p + \lambda_t c$. This can be done using any black-box MDP solver such as policy iteration. The algorithm updates the Lagrange multiplier using a gradient descent step and requires projecting and rounding the resulting dual iterates. In particular, the dual iterates are first projected onto the $[0,U]$ interval, where $U$ is chosen to be an upper-bound on $|\lambda^*|$. After the projection, the resulting iterates are rounded to the closest element in the set $\Lambda = \{0, \varepsilon_{\text{\tiny{l}}}, 2 \varepsilon_{\text{\tiny{l}}}, \ldots, U\}$, a one-dimensional epsilon-net (with resolution $\varepsilon_{\text{\tiny{l}}}$) over the dual variables. In~\cref{sec:concentration}, we will see that constructing such an $\varepsilon_{\text{\tiny{l}}}$-net will enable us to prove concentration results for all $\lambda \in \Lambda$. Formally, the dual update at iteration $t$ is given as: \begin{align} \lambda_{t+1} = \mathcal{R}_{\Lambda}\left[\mathbb{P}_{[0,U]} \left[\lambda_t - \eta \, (\consthat{\hat{\pi}_t} - b')\right]\right] \, , \label{eq:dual-update} \end{align} where $\mathbb{P}_{[0,U]} [\lambda] = \argmin_{p \in [0,U]} \abs{\lambda - p}$ projects $\lambda$ onto the $[0,U]$ interval and $\mathcal{R}_{\Lambda}[\lambda] = \argmin_{p \in \Lambda } \abs{\lambda - p}$ rounds $\lambda$ to the closest element in $\Lambda$. Since $\Lambda$ is an epsilon-net, for all $\lambda \in [0,U]$, $\vert \lambda - \mathcal{R}_{\Lambda}[\lambda] \vert \leq \varepsilon_{\text{\tiny{l}}}$. Finally, $\eta$ in~\cref{eq:dual-update} corresponds to the step-size for the gradient descent update. The above primal-dual updates are similar to the dual-descent algorithm proposed in~\citet{paternain2019constrained}. 
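The primal-dual loop of~\cref{eq:primal-update} and~\cref{eq:dual-update} can be sketched as follows (Python; \texttt{solve\_mdp} is a placeholder for an arbitrary black-box MDP planner and \texttt{constraint\_value} for the empirical constraint value function $\consthat{\pi}$ -- both names are assumptions of this illustration).
\begin{verbatim}
import numpy as np

def primal_dual(solve_mdp, constraint_value, r_p, c, b_prime, U, eps_l, eta, T):
    """Best-response primal update and projected, rounded dual update.

    solve_mdp(reward) is assumed to return an optimal policy of the empirical
    MDP with the given reward; constraint_value(pi) returns the empirical
    constraint value of pi."""
    Lambda = np.arange(0.0, U + eps_l / 2, eps_l)       # epsilon-net {0, eps_l, ..., U}
    lam, policies = 0.0, []
    for t in range(T):
        pi_t = solve_mdp(r_p + lam * c)                 # primal best response
        policies.append(pi_t)
        grad = constraint_value(pi_t) - b_prime         # dual (sub)gradient
        lam = np.clip(lam - eta * grad, 0.0, U)         # projection onto [0, U]
        lam = Lambda[np.argmin(np.abs(Lambda - lam))]   # rounding to the epsilon-net
    return policies      # the mixture policy averages these uniformly
\end{verbatim}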
The pseudo-code summarizing the entire model-based algorithm is given in~\cref{alg:cmdp-generative}. We note that although~\cref{alg:cmdp-generative} requires the knowledge of $\zeta$, this is not essential and we can instead use an estimate of $\zeta$. In~\cref{app:est-zeta}, we show that we can estimate $\zeta$ to within a factor of 2 using $\tilde{O}\left(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3\zeta^2}\right)$ additional queries. Next, we show that the primal-dual updates in~\cref{alg:cmdp-generative} can be used to solve the empirical CMDP $\hat{M}$. Specifically, we prove the following theorem (proof in~\cref{app:proofs-pd}) that bounds the average optimality gap (in the reward value function) and constraint violation for the mixture policy returned by~\cref{alg:cmdp-generative}. \begin{restatable}[Guarantees for the primal-dual algorithm]{theorem}{pdguarantees} \label{thm:pd-guarantees} For a target error $\varepsilon_{\text{\tiny{opt}}} > 0$ and the primal-dual updates in~\cref{eq:primal-update}-\cref{eq:dual-update} with $U > |\lambda^*|$, $T = \frac{4 U^2}{\varepsilon_{\text{\tiny{opt}}}^2 \, (1 - \gamma)^2} \left[1 + \frac{1}{(U - \lambda^*)^2} \right]$, $\eta = \frac{U (1 - \gamma)}{\sqrt{T}}$ and $\varepsilon_{\text{\tiny{l}}} = \frac{\varepsilon_{\text{\tiny{opt}}}^2 (1 - \gamma)^2 \, (U - \lambda^*)}{6 U}$, the mixture policy $\bar{\pi}_{T} := \frac{1}{T} \sum_{t = 0}^{T-1} \hat{\pi}_t$ satisfies, \begin{align*} \rewardhatp{\bar{\pi}_{T}} & \geq \rewardhatp{\hat{\pi}^*} - \varepsilon_{\text{\tiny{opt}}} \quad \text{;} \quad \consthat{\bar{\pi}_{T}} \geq b' - \varepsilon_{\text{\tiny{opt}}}. \end{align*} \end{restatable} Hence, with $T = O(\nicefrac{1}{\varepsilon_{\text{\tiny{opt}}}^2})$ and $\varepsilon_{\text{\tiny{l}}} = O(\varepsilon_{\text{\tiny{opt}}}^2)$, the algorithm outputs a policy $\bar{\pi}_{T}$ that achieves a reward $\varepsilon_{\text{\tiny{opt}}}$ close to that of the optimal empirical policy $\hat{\pi}^*$, while violating the constraint by at most $\varepsilon_{\text{\tiny{opt}}}$. Thus, with a sufficient number of iterations $T$ and by choosing a sufficiently small resolution $\varepsilon_{\text{\tiny{l}}}$ for the epsilon-net, we can use the above primal-dual algorithm to approximately solve the problem in~\cref{eq:emp-CMDP}. In order to completely instantiate the primal-dual algorithm, we require setting $U > |\lambda^*|$. We will subsequently do this for the relaxed and strict feasibility settings in~\cref{sec:ub-relaxed,sec:ub-strict} respectively. We note that in contrast to~\citet[Theorem 3]{paternain2019constrained}, which bounds the Lagrangian,~\cref{thm:pd-guarantees} provides explicit bounds on both the reward suboptimality and constraint violation. We conclude this section by making some observations about the primal-dual algorithm -- while the subsequent bounds for both settings heavily depend on using the ``best-response'' primal update in~\cref{eq:primal-update}, the algorithm does not require using the specific form of the dual updates in~\cref{eq:dual-update}. Indeed, when used in conjunction with the projection and rounding operations in~\cref{eq:dual-update}, we can use any method to update the dual variables (not necessarily gradient descent) provided that it results in an $O\left(T^{a} + \varepsilon_{\text{\tiny{l}}} T^{b} \right)$ (for $a < 1$) bound on the dual regret (see the proof of~\cref{thm:pd-guarantees} for the definition). 
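For concreteness, the parameter choices of~\cref{thm:pd-guarantees} can be instantiated as in the following sketch; the numerical values of $U$, $\lambda^*$, $\gamma$ and $\varepsilon_{\text{\tiny{opt}}}$ below are placeholders, not quantities prescribed by the theorem.
\begin{verbatim}
import numpy as np

def pd_parameters(U, lam_star, gamma, eps_opt):
    """T, eta and the epsilon-net resolution from the primal-dual guarantees."""
    T = int(np.ceil(4 * U**2 / (eps_opt**2 * (1 - gamma)**2)
                    * (1 + 1 / (U - lam_star)**2)))
    eta = U * (1 - gamma) / np.sqrt(T)
    eps_l = eps_opt**2 * (1 - gamma)**2 * (U - lam_star) / (6 * U)
    return T, eta, eps_l

# Placeholder values, with U chosen strictly above |lambda^*|:
print(pd_parameters(U=2.0, lam_star=1.0, gamma=0.9, eps_opt=0.1))
\end{verbatim}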
Next, we specify the values of $N, b'$, $\omega$, $\varepsilon_{\text{\tiny{l}}}$, $T$, $U$ in~\cref{alg:cmdp-generative} to achieve the objective in~\cref{eq:relaxed-objective}. \section{Problem Formulation} \label{sec:problem} We consider an infinite-horizon discounted constrained Markov decision process (CMDP)~\citep{altman1999constrained} denoted by $M$, and defined by the tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{P}, r, c, b, \rho, \gamma \rangle$ where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the action set, $\mathcal{P} : \mathcal{S} \times \mathcal{A} \rightarrow \Delta_\mathcal{S}$ is the transition probability function, $\rho \in \Delta_{\mathcal{S}}$ is the initial distribution of states and $\gamma \in [0, 1)$ is the discount factor. The primary reward to be maximized is denoted by $r : \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$, whereas the constraint reward is denoted by $c: \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$\footnote{These ranges for $r$ and $c$ are chosen for simplicity. Our results can be easily extended to handle other ranges.}. If $\Delta_{\mathcal{A}}$ denotes the simplex over the action space, the expected discounted return or \emph{reward value function} of a stationary, stochastic policy\footnote{The performance of an optimal policy in a CMDP can always be achieved by a stationary, stochastic policy~\citep{altman1999constrained}. On the other hand, for an MDP, it suffices to only consider stationary, deterministic policies~\citep{puterman2014markov}.} $\pi: \mathcal{S} \rightarrow \Delta_{\mathcal{A}}$ is defined as $\reward{\pi} = \mathbb{E}_{s_0, a_0, \ldots} \Big[\sum_{t=0}^\infty \gamma^t r(s_t, a_t)\Big]$, where $s_0 \sim \rho, a_t \sim \pi( \cdot | s_t),$ and $s_{t+1} \sim \mathcal{P}( \cdot | s_t, a_t)$. For each state-action pair $(s,a)$ and policy $\pi$, the reward action-value function is defined as $\rewardq{\pi}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, and satisfies the relation: $V_{r}^{\pi}(s) = \langle \pi(\cdot | s), \rewardq{\pi}(s,\cdot) \rangle$, where $V_{r}^{\pi}(s)$ is the reward value function when the starting state is equal to $s$. Analogously, the \emph{constraint value function} and constraint action-value function of policy $\pi$ are denoted by $\const{\pi}$ and $\constq{\pi}$ respectively. The CMDP objective is to return a policy that maximizes $\reward{\pi}$, while ensuring that $\const{\pi} \geq b$. Formally, \begin{align} \max_{\pi} \reward{\pi} \quad \text{s.t.} \quad \const{\pi} \geq b. \label{eq:true-CMDP} \end{align} The optimal stochastic policy for the above CMDP is denoted by $\pi^*$ and the corresponding reward value function is denoted by $\reward{*}$. We also define the problem-dependent quantity $\zeta := \max_{\pi} \const{\pi} - b$, referred to as the Slater constant~\citep{ding2021provably,bai2021achieving}. The Slater constant is a measure of the size of the feasible region and determines the difficulty of solving~\cref{eq:true-CMDP}. For simplicity of exposition, we assume that the rewards $r$ and constraint rewards $c$ are known, but the transition matrix $\mathcal{P}$ is unknown and needs to be estimated. We note that assuming the knowledge of the rewards does not affect the leading terms of the sample complexity since learning the rewards is an easier problem than learning the transition matrix~\citep{azar2013minimax, sidford2018near}. 
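As a concrete illustration of these definitions, the following sketch (Python/NumPy; the tabular CMDP below is a randomly generated, hypothetical example) evaluates $\reward{\pi}$ and $\const{\pi}$ for a fixed policy via the identity $V^{\pi} = (I - \gamma P_{\pi})^{-1} r^{\pi}$ used later in the analysis, and computes the Slater constant by enumerating deterministic policies (which suffices since $\max_\pi \const{\pi}$ is an unconstrained MDP objective).
\begin{verbatim}
import itertools
import numpy as np

S, A, gamma, b = 3, 2, 0.9, 4.0                 # a tiny, purely illustrative CMDP
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))      # P[s, a] is a distribution over s'
r = rng.uniform(size=(S, A))                    # rewards in [0, 1]
c = rng.uniform(size=(S, A))                    # constraint rewards in [0, 1]
rho = np.ones(S) / S                            # initial state distribution

def value(pi, reward):
    """V^pi(rho) for a stochastic policy pi[s, a] via (I - gamma P_pi)^{-1} reward^pi."""
    P_pi = np.einsum('sa,san->sn', pi, P)       # state-to-state transitions under pi
    r_pi = np.einsum('sa,sa->s', pi, reward)    # expected one-step reward under pi
    return rho @ np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

# Slater constant zeta = max_pi V_c^pi(rho) - b; the max is attained by a
# deterministic policy, so enumeration is exact for this tiny example.
best_Vc = max(value(np.eye(A)[list(d)], c) for d in itertools.product(range(A), repeat=S))
print("zeta =", best_Vc - b)
\end{verbatim}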
We assume access to a \emph{generative model} or simulator that allows the agent to obtain samples from the $\mathcal{P}(\cdot|s,a)$ distribution for any $(s,a)$. Assuming access to such a generative model, our aim is to characterize the sample complexity required to return a near-optimal policy $\hat{\pi}$ in $M$. Given a target error $\epsilon > 0$, we can characterize the performance of policy $\hat{\pi}$ in two ways: \textbf{Relaxed feasibility}: We require $\hat{\pi}$ to achieve an approximately optimal reward value, while allowing it to have a small constraint violation in $M$. Formally, we require $\hat{\pi}$ s.t. \begin{align} \reward{\hat{\pi}} \geq V_r^{*}(\rho) - \epsilon, \text{ and } \const{\hat{\pi}} \geq b -\epsilon. \label{eq:relaxed-objective} \end{align} \textbf{Strict feasibility}: We require $\hat{\pi}$ to achieve an approximately optimal reward value, while simultaneously demanding zero constraint violation in $M$. Formally, we require $\hat{\pi}$ s.t. \begin{align} \reward{\hat{\pi}} \geq V_r^{*}(\rho) - \epsilon, \text{ and } \const{\hat{\pi}} \geq b. \label{eq:strict-objective} \end{align} Next, we describe a general model-based algorithm to handle both these cases, and subsequently instantiate the algorithm for the relaxed feasibility (\cref{sec:ub-relaxed}) and strict feasibility (\cref{sec:ub-strict}) settings. \section{Concentration proofs} \label{app:proofs-concentration} \begin{thmbox} \main* \end{thmbox} \begin{proof} Since the policy $\hat{\pi}_\alpha^{*}$ depends on the sampling, we cannot directly apply the standard concentration results to bound $\norminf{\hat{V}_{\beta}^{\hat{\pi}_\alpha^{*}} - V_{\beta}^{\hat{\pi}_\alpha^{*}}}$. We thus seek to apply a critical lemma established in~\citet{li2020breaking}. The argument begins by introducing a sequence of vectors for a general data-dependent policy $\pi$ and reward $\beta$, defined recursively as \begin{align*} V_{\beta}^{\pi,(0)} = (I-\gamma P^{\pi})^{-1}\beta^{\pi} \quad\text{and} \quad V_{\beta}^{\pi, (l)} = (I-\gamma P^{\pi})^{-1}\sqrt{\mathrm{Var}_{P_{\pi}}(V_{\beta}^{\pi, (l-1)})}, ~\forall l\ge 1. \end{align*} In their Lemma~2 (restated below), they show that if a certain concentration relation can be established between the empirical and ground-truth MDPs, then $\norminf{\hat{V}_{\beta}^{\pi} - V_{\beta}^{\pi}}$ can be bounded. \begin{lemma}[Lemma~2 of \cite{li2020breaking}] \label{lemma:bern2con} For a data-dependent policy $\pi$, suppose there exists a $\nu_1\ge 0$ such that $\{V_{\beta}^{\pi, (l)}\}$ obeys \begin{align*} \left| (\hat{\mathcal{P}}_{\pi} - \mathcal{P}_{\pi})V^{\pi,(l)}_{\beta} \right|\le \sqrt{\frac{\nu_1}{N}}\sqrt{\mathrm{Var}_{\mathcal{P}_{\pi}}[V_{\beta}^{{\pi},(l)}]} +\frac{\nu_1\norminf{V_{\beta}^{\pi, (l)}}}{N}, \text{ for all } 0\le l \le \log\left( \frac{e}{1-\gamma}\right). \end{align*} Suppose that $N\ge \frac{16e^2}{1-\gamma}\nu_1$. Then \[ \norminf{\hat{V}_{\beta}^{\pi} - V_{\beta}^{\pi}} \le \frac{6}{1-\gamma}\cdot \sqrt{\frac{\nu_1}{N(1-\gamma)}} \norminf{\beta}. \] \end{lemma} To use this lemma for $\pi = \hat{\pi}_\alpha^{*}$, we will need to establish Bernstein-type bounds on $(\mathcal{P}_{s,a}-\hat{\mathcal{P}}_{s,a}) V_{\beta}^{\hat{\pi}_\alpha^{*}, (l)}$ for all $(s,a)$ and integer $0\le l \le \log(e/(1-\gamma))$. Since $\hat{\pi}_\alpha^{*}$ depends on the sampling, a direct concentration bound is not possible. 
Instead, we will first bound $(\mathcal{P}_{s,a}-\hat{\mathcal{P}}_{s,a})V_{\beta}^{\pi, (l)}$ for all $\pi\in \Pi_{s,a}$, where $\Pi_{s,a}$ is a random set independent of $\hat{\mathcal{P}}_{s,a}$, and then show that $\hat{\pi}_\alpha^{*}\in \Pi_{s,a}$ with good probability. First, we describe the construction of $\Pi_{s,a}$. We will follow the ideas in~\citet{agarwal2020model} and \citet{li2020breaking}, and construct an absorbing empirical MDP $\hat{M}_{s,a}$, which is the same as the original empirical MDP, except that the state-action pair $(s,a)$ is made absorbing, i.e., $\hat{\mathcal{P}}'(s^\prime | s, a) = 1$ if and only if $s^\prime = s$. The reward for $(s,a)$ is equal to $u$. We define $\hat{V}_{s,a,\alpha, u}^{\pi}$ and $\hat{Q}_{s,a,\alpha, u}^{\pi}$ to be the value function and $Q$-function of policy $\pi$ for $\hat{M}_{s,a}$ with reward function $\alpha$, and define $\hat{\pi}_{s,a,\alpha, u}^*$ to be the optimal policy, i.e., $\hat{\pi}_{s,a, \alpha, u}^* = \argmax \hat{V}_{s,a,\alpha, u}^{\pi}$. We will use the shorthand -- $\hat{V}_{\alpha, u}^{*} := \max \hat{V}_{s,a,\alpha, u}^{\pi}$ and $\hat{Q}_{\alpha, u}^{*} := \max \hat{Q}_{s,a,\alpha, u}^{\pi}$. We consider a grid, \[U_{s,a} = \{0, \pm\iota(1-\gamma)/2, \pm2\iota(1-\gamma)/2, \pm3\iota(1-\gamma)/2, \ldots, \pm \alpha_{\max}\},\] and define $\Pi_{s,a} = \{\hat{\pi}_{s,a,\alpha, u}^*: u\in U_{s,a}\}$. Then $\Pi_{s,a}$ is a random set independent of $\hat{\mathcal{P}}_{s,a}$. Let $L = \{0, 1, \ldots, \lceil\log(e/(1-\gamma))\rceil\}$. Then, by Lemma~\ref{lemma:bern}, we have, with probability at least $1-\delta/(|S||A|)$, for all $\pi \in \Pi_{s,a}$ and $l\in L$, \begin{align*} \left|( \mathcal{P}_{s,a}- \hat{\mathcal{P}}_{s,a}) \cdot V_{\beta}^{\pi, (l)} \right| & \leq \sqrt{\frac{2 \log(4|U_{s,a}||L||S||A|/\delta)}{N}} \sqrt{\Var{P_{s,a}}{V_{\beta}^{\pi, (l)}}} + \frac{2 \log(4|U_{s,a}||S||A||L|/\delta)\norminf{V_{\beta}^{\pi, (l)}}}{3 N}, \end{align*} which we denote as event $\mathcal{E}_{s,a}$. Next, we show that if $u^* =\hat{Q}_{\alpha}^{*}(s,a) - \gamma \hat{V}_{\alpha}^{*}(s) $, then $\hat{\pi}_{s,a,\alpha, u^*}^* = \hat{\pi}_{\alpha}^*$: \\ (1) If $\hat{\pi}_\alpha^{*}(s) = a$, it is straightforward to verify that \[ \hat{V}_{\alpha}^{*}(s) = \hat{Q}_{\alpha}^{*}(s,a)= u^* + \gamma \hat{V}_{\alpha}^{*}(s) \ge r(s, a') + \hat{\mathcal{P}}(\cdot|s, a')^\top \hat{V}_{\alpha}^{*}, \quad \forall a'\neq a. \] \\ (2) If $\hat{\pi}_\alpha^{*}(s) \neq a$, then \[ \hat{V}_{\alpha}^{*}(s) = \max_{a'}\hat{Q}_{\alpha}^{*}(s,a') = \max_{a'}(r(s, a') + \hat{\mathcal{P}}(\cdot|s, a')^\top \hat{V}_{\alpha}^{*}) = r(s, \hat{\pi}_\alpha^{*}(s)) + \hat{\mathcal{P}}(\cdot|s, \hat{\pi}_\alpha^{*}(s))^\top \hat{V}_{\alpha}^{*}.\] (3) For $s'\neq s$, we have \[\hat{V}_{\alpha}^{*}(s') = \hat{Q}_{\alpha}^*(s',\hat{\pi}_\alpha^{*}(s')) = r(s', \hat{\pi}_\alpha^{*}(s')) + \hat{\mathcal{P}}(\cdot|s', \hat{\pi}_\alpha^{*}(s'))^\top \hat{V}_{\alpha}^{*} = \max_{a'}(r(s', a') + \hat{\mathcal{P}}(\cdot|s', a')^\top \hat{V}_{\alpha}^{*}).\] Therefore, $\hat{Q}_{\alpha}^*(s',a')$ and $\hat{\pi}_\alpha^{*}$ satisfy the Bellman optimality equations in the absorbing MDP; consequently, we have $\hat{Q}_{s,a,\alpha, u^*}^{\hat{\pi}_\alpha^{*}}=\hat{Q}_{\alpha, u^*}^{*} =\hat{Q}_{\alpha}^*$ and $\hat{V}_{s,a,\alpha, u^*}^{\hat{\pi}_\alpha^{*}} = \hat{V}_{\alpha, u^*}^{*} =\hat{V}_{\alpha}^{\hat{\pi}_\alpha^{*}}$, and $\hat{\pi}_\alpha^{*}$ is an optimal policy in the absorbing MDP. Moreover, if event $\mathcal{E}$ happens, then $\hat{Q}_{\alpha}^{*}$ satisfies the $\iota$-gap condition. 
By Lemma~\ref{policy-gap}, for all $|u-u^*|\le \iota(1-\gamma)/2$, we have \[ \hat{\pi}_\alpha^{*} = \hat{\pi}_{s,a,\alpha, u^*}^* =\hat{\pi}_{s,a,\alpha, u}^*. \] Thus, if $\mathcal{E}$ happens, then $\hat{\pi}_\alpha^{*} = \hat{\pi}_{s,a,\alpha, u}^*$ for some $u\in U_{s,a}$ and thus $\hat{\pi}_\alpha^{*}\in \Pi_{s,a}$. On $\mathcal{E}\cap \mathcal{E}_{s,a}$, we have, for all $l\in L$, \begin{align*} \left|( \mathcal{P}_{s,a}- \hat{\mathcal{P}}_{s,a}) \cdot V_{\beta}^{\hat{\pi}_\alpha^{*}, (l)} \right| & \leq \sqrt{\frac{2 \log(4|U_{s,a}||L||S||A|/\delta)}{N}} \sqrt{\Var{P_{s,a}}{V_{\beta}^{\hat{\pi}_\alpha^{*}, (l)}}} + \frac{2 \log(4|U_{s,a}||S||A||L|/\delta)\norminf{V_{\beta}^{\hat{\pi}_\alpha^{*}, (l)}}}{3 N}. \end{align*} By a union bound over all $(s,a)$, the event $\mathcal{E}\cap \left(\cap_{s,a}\mathcal{E}_{s,a}\right)$ holds with probability at least $\Pr[\mathcal{E}]-\delta$, and on this event, for all $(s,a)$ and $l\in L$, \begin{align*} \left|( \mathcal{P}_{s,a}- \hat{\mathcal{P}}_{s,a}) \cdot V_{\beta}^{\hat{\pi}_\alpha^{*}, (l)} \right| & \leq \sqrt{\frac{2 \log(4|U_{s,a}||L||S||A|/\delta)}{N}} \sqrt{\Var{P_{s,a}}{V_{\beta}^{\hat{\pi}_\alpha^{*}, (l)}}} + \frac{2 \log(4|U_{s,a}||S||A||L|/\delta)\norminf{V_{\beta}^{\hat{\pi}_\alpha^{*}, (l)}}}{3 N}. \end{align*} Letting $\nu_1 = 2 \log(4 |U_{s,a}||S||A||L|/\delta)$ and applying Lemma~\ref{lemma:bern2con}, we arrive at \[ \norminf{\hat{V}_{\beta}^{\hat{\pi}_\alpha^{*}} - V_{\beta}^{\hat{\pi}_\alpha^{*}}} \le \frac{6}{1-\gamma}\cdot \sqrt{\frac{\nu_1}{N(1-\gamma)}} \norminf{\beta} \] provided $N\ge \frac{16e^2}{1-\gamma}\nu_1$. For instantiating $\nu_1$, note that $|U_{s,a}| = \frac{4 \alpha_{\max}}{(1 - \gamma)^2 \iota}$, $|L| = \log\left(\nicefrac{e}{1 - \gamma}\right)$. Hence, $\nu_1 = 2 \log \left(\frac{16 \alpha_{\max} S A \log\left(\nicefrac{e}{1 - \gamma}\right)}{(1 - \gamma)^2 \, \iota \, \delta} \right)$. \end{proof} \begin{thmbox} \begin{lemma} For any policy $\pi$, we have \[ \norminf{\reward{\pi} - \rewardp{\pi}} \leq \frac{\omega}{1-\gamma} \quad \text{;} \quad \norminf{\rewardhat{\pi} - \rewardhatp{\pi}} \leq \frac{\omega}{1-\gamma} \] \label{lemma:perturbation-error} \end{lemma} \end{thmbox} \begin{proof} For policy $\pi$, $\reward{\pi} = \left(I - \gamma P_{\pi}\right)^{-1} r^{\pi}$ and $\rewardp{\pi} = \left(I - \gamma P_{\pi}\right)^{-1} r_p^{\pi}$. \begin{align*} \reward{\pi} - \rewardp{\pi} &= \left(I - \gamma P_{\pi}\right)^{-1} [r^{\pi} - r_p^{\pi}] \\ \implies \norminf{\reward{\pi} - \rewardp{\pi}} & \leq \|\left(I - \gamma P_{\pi}\right)^{-1}\|_{\infty} \norminf{r^\pi - r^\pi_p} \\ \intertext{Since $\|\left(I - \gamma P_{\pi}\right)^{-1}\|_{\infty} \leq \frac{1}{1 - \gamma}$ and $\norminf{r^\pi - r^\pi_p} \leq \omega$,} \norminf{\reward{\pi} - \rewardp{\pi}} & \leq \frac{\omega}{1 - \gamma}. \end{align*} The same argument can be used to bound $\norminf{\rewardhat{\pi} - \rewardhatp{\pi}}$, completing the proof. \end{proof} \mainconcentrationemp* \begin{proof} \begin{align*} \abs{\rewardp{\bar{\pi}_{T}} - \rewardhatp{\bar{\pi}_{T}}} &= \abs{\frac{1}{T} \sum_{t = 0}^{T-1} \left[\rewardp{\pi_t} - \rewardhatp{\pi_t}\right]} \leq \frac{1}{T} \sum_{t = 0}^{T-1} \abs{\rewardp{\pi_t} - \rewardhatp{\pi_t}} \\ & \leq \frac{1}{T} \sum_{t = 0}^{T-1} \norminf{V_{r_p}^{\pi_t} - \hat{V}_{r_p}^{\pi_t}} \end{align*} Recall that $\hat{M}_{r_p + \lambda_t c}$ satisfies the gap condition with $\iota = \frac{\omega \, \delta \, (1 - \gamma)}{30 \, |\Lambda| |S||A|^2}$ for every $\lambda_t \in \Lambda$. 
Since $|\Lambda| = \frac{U}{\varepsilon_{\text{\tiny{l}}}}$, $\iota = \frac{\omega \, \delta (1 - \gamma) \, \varepsilon_{\text{\tiny{l}}}}{30 \, U |S||A|^2}$. Since $\pi_t := \argmax \hat{V}_{r_p + \lambda_t c}^{\pi}$, we use~\cref{lemma:main} with $\alpha = r_p + \lambda_t c$ and $\beta = r_p$, and obtain the following result. For $N \geq \frac{4 \, C(\delta)}{1-\gamma}$, for each $t \in [T]$, with probability at least $1 - \delta/5$, \begin{align*} \norminf{V_{r_p}^{\pi_t} - \hat{V}_{r_p}^{\pi_t}} & \leq \sqrt{\frac{C(\delta)}{N \cdot (1-\gamma)^3 }} \, (1 + \omega) \leq 2 \sqrt{\frac{C(\delta)}{N \cdot (1-\gamma)^3 }} \end{align*} Using the above relations, \begin{align*} \abs{\rewardp{\bar{\pi}_{T}} - \rewardhatp{\bar{\pi}_{T}}} & \leq 2 \sqrt{\frac{C(\delta)}{N \cdot (1-\gamma)^3 }} \end{align*} Similarly, invoking~\cref{lemma:main} with $\alpha = r_p + \lambda_t c$ and $\beta = c$ gives the bound on $\abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}}$. \end{proof} \begin{thmbox} \mainconcentrationopt* \end{thmbox} \begin{proof} Since $\pi^*$ and $\pi^*_c$ are fixed policies independent of the sampling, we can directly use~\citet[Lemma 1]{li2020breaking}. \end{proof} \subsection{Helper lemmas} \label{app:common-helper} \begin{thmbox} \begin{lemma} With probability at least $1 - \delta/10$, for every $\lambda \in \Lambda$, $\hat{M}_{r_p + \lambda c}$ satisfies the gap condition in~\cref{def:gap-condition} with $\iota = \frac{\omega \, \delta \, (1-\gamma)}{30 \, |\Lambda||S||A|^2}$. \label{lemma:gap} \end{lemma} \end{thmbox} \begin{proof} Using~\cref{lemma:gapli} with a union bound over $\Lambda$ gives the desired result. \end{proof} \begin{thmbox} \begin{lemma} \label{policy-gap} Let $\pi^{*}_{\alpha}$ and $\pi^{*}_{\alpha'} $ be two optimal policies for an MDP with rewards $\alpha$ and $\alpha'$ respectively. Suppose $Q_\alpha^*$ satisfies the $\iota$-gap condition. Then, for all $\alpha'$ with $\norminf{\alpha' - \alpha} < \iota(1-\gamma)/2$, we have \[ \pi^{*}_{\alpha} = \pi^{*}_{\alpha'}. \] \label{lemma:abs-lipschitz-policy} \end{lemma} \end{thmbox} \begin{proof} Since $\norminf{\alpha' - \alpha} < \iota(1-\gamma)/2$, we have \[ \|{Q}^*_{\alpha} - {Q}^*_{\alpha'}\|_{\infty} < \frac{\iota(1-\gamma)}{2(1-\gamma)} = \frac{\iota}{2}. \] Note that, for all $s$, \begin{align*} {Q}^*_{\alpha}(s,{\pi}^*_{\alpha'}(s)) &> {Q}^*_{\alpha'}(s,{\pi}^*_{\alpha'}(s)) - \frac{\iota}{2}\\ &\ge {Q}^*_{\alpha'}(s, {\pi}^*_{\alpha}(s)) - \frac{\iota}{2}\\ &> {Q}^*_{\alpha}(s, {\pi}^*_{\alpha}(s)) - {\iota}\\ &\ge {Q}^*_{\alpha}(s, a') \quad\forall a'\neq {\pi}^*_{\alpha}(s). \end{align*} Hence, ${\pi}^*_{\alpha'}(s)\not\in\{a': a'\neq {\pi}^*_{\alpha}(s)\}$ for all $s\in S$, and consequently ${\pi}^*_{\alpha} = {\pi}^*_{\alpha'}$. \end{proof} \begin{thmbox} \begin{lemma}[Lemma~6 of~\citet{li2020breaking}] \label{lemma:gapli} Consider the MDP $M=(S,A, P, \gamma, r_p)$ with randomly perturbed rewards $r_p$ ($r_p(s,a) = r(s,a) + \xi(s,a)$ where $\xi(s,a) \sim \mathcal{U}[0,\omega]$ and $r(s,a)\in \mathbb{R}$). If the optimal $Q$-function is denoted by $Q^*_{r_p}$ and $\pi_{r_p}^* := \argmax V^\pi_{r_p}$ is the optimal deterministic policy, then, for any $\delta\in(0,1]$, with probability at least $1-\delta$, we have, for all $s$ and $a\neq \pi_{r_p}^*(s)$, \[ Q^{*}_{r_p}(s,\pi_{r_p}^*(s)) \ge Q^{*}_{r_p}(s, a) + \frac{\omega \delta (1-\gamma)}{3|S||A|^2} \, , \] \end{lemma} i.e., $M$ satisfies the gap condition in~\cref{def:gap-condition} with $\iota = \frac{\omega \delta (1-\gamma)}{3|S||A|^2}$. 
\end{thmbox} \begin{thmbox} \begin{lemma}[Bernstein inequality] \label{lemma:bern} Fix a state $s$ and an action $a$, and let $\delta \in (0,1]$. Then, for any fixed vector $V$, with probability greater than $1-\delta$, it holds that \begin{align*} \left|( \mathcal{P}_{s,a}- \hat{\mathcal{P}}_{s,a}) \cdot V \right| & \leq \sqrt{\frac{2 \log(4/\delta)}{N}} \sqrt{\Var{P_{s,a}}{V}} + \frac{2 \log(4/\delta)\norminf{V}}{ 3 N}. \end{align*} \label{lemma:bernstein} \end{lemma} \end{thmbox} \section{Proof of~\cref{thm:lb-strict}} \label{app:proof-lb} \lbstrict* \begin{figure}[!ht] \subfigure { \includegraphics[width=0.9\textwidth]{LB.png} } \caption{ The lower-bound instance consists of CMDPs with $S = 2^m - 1$ (for some integer $m > 0$) states and $A$ actions. We consider $SA + 1$ CMDPs -- $M_0$ and $M_{i,a}$ ($i \in \{1,\ldots, S\}$, $a \in \{1,\ldots, A\}$) that share the same structure shown in the figure. For each CMDP, $o_0$ is the fixed starting state and there is a deterministic path of length $m+1$ from $o_0$ to each of the $S + 1$ states -- $s_i$ (for $i \in \{0, 1, \ldots, S\}$). Except for states $\tilde{s}_i$, the transitions in all other states are deterministic. For $i \neq 0$, for action $a \in \mathcal{A}$ in state $\tilde{s}_i$, the probability of staying in $\tilde{s}_i$ is $p_{i,a}$, while that of transitioning to state $z_i$ is $1 - p_{i,a}$. There is only one action $a_0$ in $\tilde{s}_0$ and the probability of staying in $\tilde{s}_0$ is $p_{0,a_0}$, while that of transitioning to state $z_0$ is $1 - p_{0,a_0}$. The CMDPs $M_0$ and $M_{i,a}$ only differ in the values of $p_{i,a}$. The rewards $r$ and constraint rewards $c$ are the same in all CMDPs and are denoted in green and red respectively for each state and action. } \label{fig:lb} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:lb-strict}] Without loss of generality, we assume $|\mathcal{S}|=2^m - 1$ for some integer $m$. In what follows, we first introduce our hard instance. Note that some of the states of this instance may have fewer than $|\mathcal{A}|$ actions. This is without loss of generality, as one can easily duplicate actions to make each state have exactly $|\mathcal{A}|$ actions. \textbf{Hard Instance.} We will consider the basic gadget defined in Figure~\ref{fig:lb}. We will make $|\mathcal{S}|+1$ copies of this gadget. In the $i$-th gadget, for $i=0,1,\ldots, |\mathcal{S}|$, there is an input state $s_i$ with two actions $a_l, a_r$. Playing $a_l$ deterministically transitions to $\tilde{s}_i$ with reward and constraint reward $0$. Playing $a_r$ goes to $z'_i$ with reward and constraint reward $0$. In state $\tilde{s}_i$, there are $|\mathcal{A}|$ actions (only one action $a_0$ in $\tilde{s}_0$). On playing action $a\in \mathcal{A}$, this state transitions to state $z_i$ with probability $1-p_{i,a}$ and self-loops with probability $p_{i,a}$, where $p_{i,a}$ will be determined shortly. The reward at this state is $1$ and the constraint reward is $u$, to be determined. In state $z_i'$, there is only one action, whose reward is $0$ and whose constraint reward is $(b+\zeta)(1-\gamma)/\gamma^{m+3}$. Lastly, we consider $2|\mathcal{S}|+1$ routing states $o_0, o_1, \ldots$ that form a binary tree, i.e., in each of these routing states, there are two actions $a_0$ and $a_1$. The state-action pair $(o_i, a_j)$ transitions to state $o_{2i + j+1}$ for $i< |\mathcal{S}|$. Each state $o_{|\mathcal{S}|+k}$ transitions deterministically to the input state $s_k$ for $k=0,1,\ldots |\mathcal{S}|$. Rewards and constraint rewards are all $0$ for these states. 
Note that, for any state $s_k$, there is a unique deterministic path of length $m+1$ connecting $o_0$ to $s_k$. We therefore require $\gamma \ge 1 - 1/(cm)$ for some constant $c$, so that $\gamma^{\Theta(m)} = \Theta(1)$. Note that this instance modifies the MDP instance in \cite{feng2019does}. Some of the chosen parameters are also adapted from there. \textbf{Hypotheses.} With the hard instance structure defined above, we now define a family of hypotheses on the transition probabilities. Later, we will show that any sound algorithm is able to test these hypotheses, but requires a large number of samples to do so. Let $q_0, q_1, q_2 \in (0, 1)$ be some parameters to be determined. We define, \begin{itemize} \item \emph{Null case} MDP $M_0$: $p_{0,a_0} = q_1$, and $p_{i, a} = q_0$ for all $i\in [|\mathcal{S}|]$ and $a\in \mathcal{A}$. \item \emph{Alternative case} MDP $M_{i, a}$: $p_{0,a_0} = q_1$, $p_{j, a'} = q_0$ for all $(j, a')\neq (i,a)$, and $p_{i,a} = q_2$. \end{itemize} Note that all these CMDPs $M_0, M_{i,a}$ share exactly the same graph structure and differ only in their transition probabilities. Moreover, $M_{i,a}$ differs from $M_0$ only on state-action pair $(\tilde{s}_i, a)$. \textbf{Optimal Policies.} Now we specify $q_0, q_1$ and $q_2$ and check the optimal policies of each hypothesis. Let $\epsilon' = (1-\gamma)\zeta\epsilon$, \[ {q}_0 = \frac{1-{c_1(1-\gamma)}}{\gamma}, q_1 = {q}_0+\alpha_1, \text{ and }q_2 = {q}_0+\alpha_2, \] and \[ \alpha_1 = \frac{c_2(1-\gamma {q}_0)^2\epsilon'}{\gamma}, \quad\text{and} \quad \alpha_2 = \frac{c_3(1-\gamma {q}_0)^2\epsilon'}{\gamma} \] for some absolute constants $c_1, c_2, c_3 > 0$ chosen such that $\alpha_1/q_0\in (0, 1/2)$, $\alpha_1/(1-q_0)\in (0, 1/2)$, $\alpha_2/q_0\in (0, 1/2)$, and $\alpha_2/(1-q_0)\in (0, 1/2)$. We choose these parameters such that \[ \frac{1}{c_1(1-\gamma)} = \frac{1}{1-\gamma q_0}< \frac{1}{1-\gamma q_1}< \frac{1}{1-\gamma q_2} = \frac{1}{c_1(1-\gamma)} + c_4\epsilon' \] for some constant $c_4>0$, and such that \[ \left|\frac{1}{1-\gamma q_1} - \frac{1}{1-\gamma q_0} \right| = \Theta(\epsilon'), \text{ and } \left|\frac{1}{1-\gamma q_1} - \frac{1}{1-\gamma q_2} \right| = \Theta(\epsilon'). \] Note that, for the reward values, if $\zeta=\Theta(1)$, then these actions only differ by $\Theta(\epsilon')\ll \epsilon$. A correct algorithm would not need to distinguish these actions. Yet, once the constraint is taken into account, we will shortly show that these actions do differ because of their constraint values. With these parameters, we can derive the optimal CMDP policy for $M_{0}$ and each $M_{i, a}$. For any policy $\pi$, we denote the state occupancy measure as $\mu^{\pi}$, i.e., \begin{align} \label{eqn:flux} \mu^{\pi}(s,a) = \sum_{z}\rho(z)\sum_{t=1}^{\infty}\gamma^{t-1} \Pr_{\pi}[s_t=s, a_t=a|s_1=z] \end{align} where $\rho$ is the initial distribution with $\rho(o_0) = 1$ and $\rho(s)=0$ for all $s\neq o_0$. This occupancy measure describes the discounted reachability from $o_0$ to an arbitrary state-action pair. The reward value and constraint value can be written as \[ V_r^{\pi} = \sum_{s,a} \mu^{\pi}(s,a)r(s,a), \quad \text{and}\quad V_{c}^{\pi} = \sum_{s,a}\mu^{\pi}(s,a) c(s,a). \] Note that, given a state-occupancy measure $\mu$, a policy can be specified as $\pi_{\mu}(a|s) = \mu(s,a)/\sum_{a'} \mu(s, a')$. 
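As a complement to the occupancy-measure definition above (and to the LP formulation displayed next), the following is a minimal sketch (Python/SciPy) of solving a small tabular CMDP through its occupancy-measure LP and recovering a policy from the optimal $\mu$; the indexing convention \texttt{P[s, a, s']} and the solver choice are assumptions of this illustration.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def solve_cmdp_lp(P, r, c, rho, gamma, b):
    """Occupancy-measure LP: max <mu, r> s.t. flow constraints, <mu, c> >= b, mu >= 0."""
    S, A = r.shape
    A_eq = np.zeros((S, S * A))                 # flow constraints, one row per state s
    for s in range(S):
        for s2 in range(S):
            for a in range(A):
                A_eq[s, s2 * A + a] = float(s == s2) - gamma * P[s2, a, s]
    res = linprog(-r.reshape(-1),               # maximize <mu, r>
                  A_ub=-c.reshape(1, -1), b_ub=[-b],   # encodes <mu, c> >= b
                  A_eq=A_eq, b_eq=rho,
                  bounds=[(0, None)] * (S * A), method="highs")
    mu = res.x.reshape(S, A)
    # pi(a|s) = mu(s,a) / sum_a' mu(s,a'); unreachable states are handled crudely here.
    pi = mu / np.maximum(mu.sum(axis=1, keepdims=True), 1e-12)
    return pi, -res.fun                         # policy and its reward value
\end{verbatim}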
We can use the LP formulation of the CMDP as follows: \begin{align} \max &\sum_{s,a} \mu(s,a)r(s,a) \quad \text{subject to}\notag\\ &\forall s: ~\sum_{a}\mu(s,a) = \rho(s) + \gamma \sum_{s', a'} P(s|s', a')\mu(s', a'),\notag\\ &\sum_{s,a}\mu(s,a) c(s,a) \ge b,\notag\\ &\mu(s,a)\ge 0. \end{align} Due to the structure of the CMDPs, we can further simplify the structure of the constraints. In particular, we have \begin{align} \forall i:\quad \sum_{a}\mu(\tilde{s}_i, a) = \gamma\mu({s}_i, a_l) + \gamma \sum_{a}p_{i, a}\mu(\tilde{s}_i, a) \end{align} Let $\bar{\nu} = \sum_{i} \mu(s_i, a_r)$. We then have \[ \sum_{i}\mu({s}_i, a_l) + \bar{\nu} = \gamma^{m+2}, \] and \[ \mu(z_i', a) = \frac{\gamma\mu(s_i, a_r)}{1-\gamma}. \] Consider $M_0$ and let $\mu_0 := \mu({s}_0,a_l)$ and $\mu^c_0:= \sum_{i>0} \mu({s}_i,a_l)$. We then have \[ \sum_{i>0, a} \mu(\tilde{s}_i,a) = \frac{\gamma \mu^c_0}{1-\gamma q_0}, \quad \text{and}\quad \mu(\tilde{s}_0,a_0) = \frac{\gamma \mu_0}{1-\gamma q_1}. \] The LP can be rewritten as \begin{align*} &\max \quad \frac{\gamma\mu_0}{1-\gamma q_1} + \frac{\gamma\mu_0^c}{1-\gamma q_0} \\ & \text{ s.t. } \frac{\gamma\mu_0 \cdot u}{1-\gamma q_1} + \frac{\gamma\mu_0^c \cdot u}{1-\gamma q_0} + \frac{ \bar{\nu}\cdot (b+\zeta)}{\gamma^{m+2}} \ge b, ~ \mu_0 +\mu_0^c +\bar{\nu} = \gamma^{m+2}, \text{ and } \mu(s,a)\ge 0, ~\forall (s,a). \end{align*} Here, we specify the value of $u$ as \[ u = \frac{(1-\gamma q_0) (b-x)}{\gamma^{m+3}}, \] for some $x=\Theta(\zeta)$, such that \[ \frac{\gamma^{m+3}u}{1-\gamma q_0} = b-x< \frac{\gamma^{m+3}u}{1-\gamma q_1} = b-x + \epsilon'_1< \frac{\gamma^{m+3}u}{1-\gamma q_2} = b-x+\epsilon'_2< b, \] where $c'\epsilon'\le \epsilon'_1 \le \epsilon'_1 + c'' \epsilon' \le \epsilon'_2 \le c'''\epsilon'$ for some positive constants $c', c'', c'''$ determined by $c_1, c_2, c_3$. For this value of $u$, the maximum value of the constraint value function $\max_{\pi} V_c^\pi$ is $b + \zeta$, implying that $\zeta$ is the Slater constant for all these CMDPs. Thus, for $M_0$, the solution is \[ \mu_0 = \frac{\gamma^{m+2}\zeta}{\zeta + x - \epsilon'_1},\quad \mu_0^c= 0, \quad\text{and}\quad \bar{\nu} = \frac{(x-\epsilon'_1)\gamma^{m+2}}{\zeta+x - \epsilon'_1}. \] Note that this implies the policy deterministically chooses a path to reach state $s_0$, and then plays action $a_l$ with probability $\mu_0/\gamma^{m+2}$. The optimal value in this case is \[ V_{M_0}^*(o_0) = \frac{\zeta}{\zeta + x -\epsilon'_1}\cdot \frac{\gamma^{m+3}}{1-\gamma q_1}. \] Similarly, for $M_{i, a}$ with $i\ge 1$, let $\mu_{i,a} := \mu({s}_i, a)$ and $\mu_{i,a}^c := \sum_{i'>0, (i', a')\neq (i,a)}\mu(s_{i'}, a')$. The LP can be written as \begin{align*} &\max \quad \frac{\gamma\mu_0}{1-\gamma q_1} + \frac{\gamma\mu_{i,a}^c}{1-\gamma q_0} + \frac{\gamma\mu_{i,a}}{1-\gamma q_2} \\ & \text{ s.t. } \frac{\gamma\mu_0u}{1-\gamma q_1} + \frac{\gamma\mu_{i,a}^cu}{1-\gamma q_0} + \frac{\gamma\mu_{i,a}u}{1-\gamma q_2} + \frac{ \bar{\nu}\cdot (b+\zeta)}{\gamma^{m+2}} \ge b, ~ \mu_0 +\mu_{i,a}^c + \mu_{i,a} +\bar{\nu} = \gamma^{m+2}, \\ &\text{ and } \mu(s,a)\ge 0, ~\forall (s,a). \end{align*} The solution is \[ \mu_{i,a}=\frac{\zeta\gamma^{m+2}}{\zeta+x-\epsilon'_2},~ \mu_{i,a}^c = \mu_0 = 0, \quad\text{and}\quad \bar{\nu} =\frac{(x-\epsilon'_2)\gamma^{m+2}}{\zeta+x - \epsilon'_2}, \] i.e., the policy chooses a path to reach state $s_i$ deterministically, with optimal value \[ V_{M_{i,a}}^*(o_0) = \frac{\zeta}{\zeta+x-\epsilon'_2}\cdot\frac{\gamma^{m+3}}{1-\gamma q_2}. 
\] Lastly, we check the gap between the optimal value functions. \[ \left|V_{M_{i,a}}^*(o_0) - V_{M_{0}}^*(o_0)\right| \ge \left(\frac{\zeta}{\zeta+x-\epsilon'_2} -\frac{\zeta}{\zeta+x-\epsilon'_1}\right)\cdot \frac{\gamma^{m+3}}{1-\gamma q_1} = \frac{\epsilon''}{\zeta + x} \cdot \frac{\gamma^{m+3}}{1-\gamma q_1} \ge c_7 \epsilon, \] where $\epsilon''\ge c_8 \epsilon'$ for some constants $c_7, c_8$ determined by $c_1, c_2, c_3$ and $x$. Thus the error in $V_c^{\pi}$ is amplified by a factor of $\Theta[(1-\gamma)^{-1}\zeta^{-1}]$. \textbf{Implications of Soundness: Near-Optimal Policies.} Let $\mathcal{K}$ be an $(\epsilon, \delta)$-sound algorithm, i.e., given any CMDP accessed through a generative model, it outputs a policy that, with probability at least $1-\delta$, is $\epsilon$-optimal and satisfies the constraint. We thus define the event \[ \mathcal{E}_{0} = \left\{\mathcal{K} \text{ outputs policy $\pi$ such that } \mu^{\pi}({s}_0, a_l) \ge \frac{\zeta\gamma^{m+2}}{(\zeta + x - \epsilon_1'/2)}\right\}, \] i.e., this event requires the output policy to reach $s_0$ and play action $a_l$ with sufficiently high probability. We now measure the probability of $\mathcal{E}_{0}$ on different input CMDPs. Due to the soundness, it must be the case that \[ \Pr_{M_{i,a}}[\mathcal{E}_{0}] < \delta. \] If not, then on $\mathcal{E}_{0}$, we have $\mu^{\pi}({s}_0, a_l)\ge \frac{\zeta \gamma^{m+2}}{\zeta + x - \epsilon_1'/2}$, and we can compute the best possible $V^{\pi}_{M_{i,a}}(o_0)$ by solving the following LP: \begin{align*} &\max \quad \frac{\gamma\mu_0}{1-\gamma q_1} + \frac{\gamma\mu_{i,a}^c}{1-\gamma q_0} + \frac{\gamma\mu_{i,a}}{1-\gamma q_2} \\ & \text{ s.t. } \frac{\gamma\mu_0u}{1-\gamma q_1} + \frac{\gamma\mu_{i,a}^cu}{1-\gamma q_0} + \frac{\gamma\mu_{i,a}u}{1-\gamma q_2} + \frac{ \bar{\nu}\cdot (b+\zeta)}{\gamma^{m+2}}\ge b, ~ \mu_0 +\mu_{i,a}^c + \mu_{i,a} +\bar{\nu} = \gamma^{m+2}, \\ & \mu_0 \ge \frac{\zeta\gamma^{m+2}}{(\zeta + x - \epsilon_1'/2)} \text{ and } \mu(s,a)\ge 0, ~\forall (s,a). \end{align*} Plugging in the values of $p_{i,a}$, and due to $q_0 < q_1 < q_2$, we obtain $\mu_0 = \frac{\zeta\gamma^{m+2}}{\zeta+x-\epsilon_1'/2}$, $\mu_{i,a}^c = 0$ and, \begin{align*} \mu_{i,a}\cdot (b-x + \epsilon'_2) + \mu_0\cdot (b-x + \epsilon'_1) + \bar{\nu}(b+\zeta) = b\gamma^{m+2}, \quad \mu_{i,a}+ \mu_0 + \bar{\nu} = \gamma^{m+2}. \end{align*} Hence, \[ \mu_{i,a} = \gamma^{m+2}\cdot\frac{\zeta - \mu_0 (\zeta+x-\epsilon'_1)}{\zeta+x-\epsilon_2'} \le \frac{c_9\epsilon_1'\zeta\cdot \gamma^{m+2}}{2(\zeta+x-\epsilon_1'/2)(\zeta+x-\epsilon_2')} \] for some constant $c_9$ depending on $c_1, c_2, c_3, x$, and \begin{align*} V^{\pi}(o_0) & \le \frac{\zeta}{\zeta+x-\epsilon_1'/2}\cdot\frac{\gamma^{m+3}}{1-\gamma q_1} + \frac{c_9\epsilon_1'\zeta}{2(\zeta+x-\epsilon_1'/2)(\zeta+x-\epsilon_2')}\cdot\frac{\gamma^{m+3}}{1-\gamma q_2} \end{align*} Note that \[ V^*_{M_{i,a}}(o_0) = \frac{\zeta}{\zeta + x-\epsilon_2'}\cdot \frac{\gamma^{m+3}}{1-\gamma q_2} \] and \[ V^{*}_{M_{i,a}}(o_0) - V^{\pi}_{M_{i,a}}(o_0) \ge \left(\frac{\zeta}{\zeta+x-\epsilon'_2} -\frac{\zeta}{\zeta+x-\epsilon'_1/2}-\frac{c_9\epsilon_1'\zeta}{2(\zeta+x-\epsilon_1'/2)(\zeta+x-\epsilon_2')}\right)\cdot \frac{\gamma^{m+3}}{1-\gamma q_2} \ge \epsilon \] for some appropriately chosen $c_1, c_2, c_3, x$, which contradicts the $(\epsilon, \delta)$-soundness of $\mathcal{K}$. \textbf{Implications of Soundness: Expectation on Null Hypothesis.} Let $N_{i,a}$ be the number of samples the algorithm $\mathcal{K}$ takes on state-action pair $(\tilde{s}_i, a)$. Next, we show that $\mathbb{E}[N_{i,a}]$ has to be large under $M_0$. 
\begin{thmbox} \begin{lemma} Let $t_* = \frac{c_{10}\log\delta^{-1}}{(1-\gamma)^3{\epsilon'}^2}$ for some constant $c_{10}$. For any $(\epsilon, \delta)$-sound algorithm $\mathcal{K}$, for any $(i,a)$, we have \[ \mathbb{E}_{M_0}(N_{i,a}) \ge t_*. \] \end{lemma} \end{thmbox} \begin{proof} Suppose $\mathbb{E}_{M_0}(N_{i,a})<t_*$, then we aim to show a contradiction: $\Pr_{M_{i,a}}[\mathcal{E}_0]\ge \delta$. Similar to the proof above, since $\mathcal{K}$ is $(\epsilon, \delta)$-sound, it must be the case that \[ \Pr_{M_{0}}[\mathcal{E}_{0}]\ge 1-\delta. \] We now consider the likelihood ratio \[ \Pr_{M_{i,a}}[\mathcal{E}_{0}] / \Pr_{M_{0}}[\mathcal{E}_{0}]. \] For any realization of the empirical samples, consider the samples the algorithm takes as $\tau = \{(s_{i,a}^{(1)}, s_{i,a}^{(2)}, \ldots, s_{i,a}^{(N_{i,a})}): (i,a)\in [|\mathcal{S}|]\times [\mathcal{A}]\}$. Let us define $N_{s', s, a}$ as the number of samples from $(s, a)\to s'$. By Markov property, since the only difference of the probability matrix between $M_{i,a}$ and $M_{0}$ is on $p_{i, a}$, we have \begin{align*} \frac{\Pr_{M_{i,a}}[\tau]}{\Pr_{M_{0}}[\tau]} &= \frac{q_2^{N_{\tilde{s}_i, \tilde{s}_i,a}}(1-q_2)^{N_{z_i, \tilde{s}_i,a}}}{q_0^{N_{\tilde{s}_i, \tilde{s}_i,a}}(1-q_0)^{N_{z_i, \tilde{s}_i,a}}}= \left(\frac{q_2}{q_0}\right)^{N_{\tilde{s}_i, \tilde{s}_i,a}}\cdot\left(\frac{1-q_2}{1-q_0}\right)^{N_{z_i, \tilde{s}_i,a}}\\ &= \left(1+\frac{\alpha_2}{q_0}\right)^{Nq_0-\Delta}\cdot \left(1-\frac{\alpha_2}{1-q_0}\right)^{N(1-q_0)+\Delta} \end{align*} where $\Pr_{M}[\tau]$ denotes the probability of $\mathcal{K}$ taking the samples $\tau$ in CMDP $M$, $\Delta = N_{i,a}{q}_0 - N_{\tilde{s}_i, \tilde{s}_i, a}$, and $N=N_{i,a}$. By a similar derivation of Lemma~5 in \cite{feng2019does} (page 15-19), on the following event, \[ \mathcal{E}'_{i,a} = \left\{N_{i,a} \le 10t_*, \text{ and } |N_{\tilde{s}_i, \tilde{s}_i,a} - N_{i,a}q_0| \le \sqrt{20(1-q_0)q_0N_{i,a}}\right\} \] we have \[ \frac{\Pr_{M_{i,a}}[\tau]}{\Pr_{M_{0}}[\tau]} \ge 4\delta \] provided appropriately chosen $c_1, c_2, c_3, x, c_{10}$. By Markov inequality and Doob's inequality (e.g. Lemma~7-8 of \cite{feng2019does}), we have \[ \Pr_{M_0}[\mathcal{E}'_{i,a}] \ge 1 - \frac{1}{10} - \frac{1}{10} = \frac{4}{5}. \] We are able to compute the probability of $\mathcal{E}_{0}$ on $M_{i,a}$ as follows: \begin{align*} \Pr_{M_{i,a}}[\mathcal{E}_{0}] &= \sum_{\tau \in \mathcal{E}_{0}} \Pr_{M_{i,a}}[\tau] \ge \sum_{\tau \in \mathcal{E}_{0}\cap \mathcal{E}_{i,a}'} \Pr_{M_{i,a}}[\tau] \\ &= \sum_{\tau \in \mathcal{E}_{0}\cap \mathcal{E}_{i,a}'} \frac{\Pr_{M_{i,a}}[\tau]}{\Pr_{M_{0}}[\tau]}\cdot\Pr_{M_{0}}[\tau] \ge 4\delta \sum_{\tau \in \mathcal{E}_{0}\cap \mathcal{E}_{i,a}'} \Pr_{M_{0}}[\tau] \ge 3\delta, \end{align*} provided $\delta \le c_{11}$ for some absolute constant $c_{11}$, hence a contradiction of soundness. \end{proof} \paragraph{Wrapping up.}Hence, if the algorithm is $(\epsilon, \delta)$-sound for all $\{M_{i,a}\}$, it must be the case that \[ \mathbb{E}_{M_0}[N_{i,a}] \ge t_*, \forall (i,a)\in [|\mathcal{S}|]\times \mathcal{A}. \] By linearity of expectation, we have \[ \mathbb{E}_{M_0}\left[\sum_{i,a}N_{i,a}\right] \ge |\mathcal{S}||\mathcal{A}|t_*. \] Since $\epsilon' = \epsilon (1 - \gamma) \zeta$, $\mathbb{E}_{M_0}\left[\sum_{i,a}N_{i,a}\right] \geq \frac{c_{10} |\mathcal{S}| |\mathcal{A}| \, \log\delta^{-1}}{(1-\gamma)^5 \zeta^2 {\epsilon}^2}$, which completes the proof. 
\end{proof} \section{Proofs for primal-dual algorithm} \label{app:proofs-pd} \pdguarantees* \begin{proof} We define the dual regret with respect to $\lambda$ as the quantity $R^{d}(\lambda, T)$: \begin{align} R^{d}(\lambda, T) & := \sum_{t = 0}^{T-1} (\lambda_t - \lambda) \, (\consthat{\hat{\pi}_t} - b') \label{eq:dual-regret-def} \end{align} Using the primal update in~\cref{eq:primal-update}, for any $\pi$, \begin{align} \rewardhatp{\hat{\pi}_t} + \lambda_t \consthat{\hat{\pi}_t} \geq \rewardhatp{\pi} + \lambda_t \consthat{\pi} \end{align} Substituting $\pi = \hat{\pi}^*$, we have, \begin{align} \rewardhatp{\hat{\pi}^*} - \rewardhatp{\hat{\pi}_t} & \leq \lambda_t [\consthat{\hat{\pi}_t} - \consthat{\hat{\pi}^*}] \end{align} Since $\hat{\pi}^*$ is a solution to the empirical CMDP, $\consthat{\hat{\pi}^*} \geq b'$, and since $\lambda_t \geq 0$, we have \begin{align} \rewardhatp{\hat{\pi}^*} - \rewardhatp{\hat{\pi}_t} & \leq \lambda_t [\consthat{\hat{\pi}_t} - b'] \label{eq:pd-guarantees-inter} \\ \intertext{Averaging from $t = 0$ to $T - 1$, and using the definition of the dual regret in~\cref{eq:dual-regret-def},} \frac{1}{T} \left[\sum_{t = 0}^{T-1} [\rewardhatp{\hat{\pi}^*} - \rewardhatp{\hat{\pi}_t}] \right] & \leq \frac{R^{d}(0, T)}{T} \label{eq:pd-guarantees-inter2} \end{align} This allows us to get a handle on the average optimality gap in terms of the average dual regret. For the second part, starting from the definition of the dual regret, \begin{align*} \sum_{t = 0}^{T-1} (\lambda_t - \lambda) \, (\consthat{\hat{\pi}_t} - b') & = R^d(\lambda, T) \\ \intertext{Dividing by $T$, and using~\cref{eq:pd-guarantees-inter},} \frac{1}{T} \sum_{t = 0}^{T-1} \left[ \rewardhatp{\hat{\pi}^*} - \rewardhatp{\hat{\pi}_t} \right] + \frac{\lambda}{T} \underbrace{\sum_{t = 0}^{T-1} (b' - \consthat{\hat{\pi}_t})}_{:= A} & \leq \frac{R^d(\lambda, T)}{T} \end{align*} The above statement holds for all $\lambda \geq 0$. If $A \leq 0$, it implies that $\frac{1}{T} \left[\sum_{t = 0}^{T-1} (b' - \consthat{\hat{\pi}_t}) \right] \leq 0$, which is the desired bound on the constraint violation. In this case, we can set $\lambda = 0$ and this recovers~\cref{eq:pd-guarantees-inter2}. If $A > 0$, we will set $\lambda = U$. We define $\bar{\pi}_T = \frac{1}{T} \sum_{t = 0}^{T-1} \hat{\pi}_t$ as a mixture policy. Using the linearity of $\rewardhatp{\pi}$ and $\consthat{\pi}$, i.e., $\frac{1}{T} \sum_{t = 0}^{T-1} \rewardhatp{\hat{\pi}_t} = \rewardhatp{\bar{\pi}_T}$, and $\frac{1}{T} \sum_{t = 0}^{T-1} \consthat{\hat{\pi}_t} = \consthat{\bar{\pi}_T}$, we obtain \begin{align*} \left[ \rewardhatp{\hat{\pi}^*} - \rewardhatp{\bar{\pi}_T} \right] + U \, (b' - \consthat{\bar{\pi}_T}) & \leq \frac{R^d(U, T)}{T} \\ \intertext{Since $(b' - \consthat{\bar{\pi}_T}) > 0$,} \left[ \rewardhatp{\hat{\pi}^*} - \rewardhatp{\bar{\pi}_T} \right] + U \, \left[b' - \consthat{\bar{\pi}_T} \right]_{+} & \leq \frac{R^d(U, T)}{T} \\ \intertext{where $[x]_{+} = \max\{x,0\}$.} \end{align*} Using~\cref{lemma:lag-constraint} for $U > \lambda^*$, \begin{align} \left[b' - \consthat{\bar{\pi}_T} \right]_{+} & \leq \frac{R^d(U, T)}{T (U - \lambda^*)} \label{eq:pd-guarantees-inter3} \end{align} It remains to bound the dual regret for the gradient descent procedure for updating $\lambda_t$. For this, we bound the distance $|\lambda_{t+1} - \lambda|$ for a general $\lambda \in [0,U]$. 
Defining $\lambda_{t+1}' := \mathbb{P}_{[0,U]}[\lambda_t - \eta \, (\consthat{\hat{\pi}_t} - b')]$, \begin{align*} |\lambda_{t+1} - \lambda| & = |\mathcal{R}_{\Lambda}[\lambda_{t+1}'] - \lambda| = |\mathcal{R}_{\Lambda}[\lambda_{t+1}'] - \lambda_{t+1}' + \lambda_{t+1}' - \lambda| \leq |\mathcal{R}_{\Lambda}[\lambda_{t+1}'] - \lambda_{t+1}'| + |\lambda_{t+1}' - \lambda| \\ & \leq \varepsilon_{\text{\tiny{l}}} + |\lambda_{t+1}' - \lambda| \tag{Since $\vert \lambda - \mathcal{R}_{\Lambda}[\lambda] \vert \leq \varepsilon_{\text{\tiny{l}}}$ for all $\lambda \in [0,U]$ because of the epsilon-net.} \\ \intertext{Squaring both sides,} |\lambda_{t+1} - \lambda|^2 & \leq \varepsilon_{\text{\tiny{l}}}^2 + |\lambda_{t+1}' - \lambda|^2 + 2 \varepsilon_{\text{\tiny{l}}} \, |\lambda_{t+1}' - \lambda| \leq \varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U + |\lambda_{t+1}' - \lambda|^2 \tag{Since $\lambda, \lambda_{t+1}' \in [0,U]$} \\ & \leq \varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U + |\lambda_t - \eta \, (\consthat{\hat{\pi}_t} - b') - \lambda|^2 \tag{Since projections are non-expansive} \\ & = \varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U + |\lambda_t - \lambda|^2 - 2 \eta \, (\lambda_t - \lambda) \, (\consthat{\hat{\pi}_t} - b') + \eta^2 (\consthat{\hat{\pi}_t} - b')^2 \\ & \leq \varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U + |\lambda_t - \lambda|^2 - 2 \eta \, (\lambda_t - \lambda) \, (\consthat{\hat{\pi}_t} - b') + \frac{\eta^2}{(1 - \gamma)^2} \\ \intertext{Rearranging and dividing by $2 \eta$,} (\lambda_t - \lambda) \, (\consthat{\hat{\pi}_t} - b') & \leq \frac{\varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U}{2 \eta} + \frac{|\lambda_t - \lambda|^2 - |\lambda_{t+1} - \lambda|^2}{2 \eta} + \frac{\eta}{2 (1 - \gamma)^2} \\ \intertext{Summing from $t = 0$ to $T-1$ and using the definition of the dual regret,} R^{d}(\lambda, T) & \leq T \, \frac{\varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U}{2 \eta} + \frac{1}{2 \eta} \sum_{t = 0}^{T-1} [|\lambda_t - \lambda|^2 - |\lambda_{t+1} - \lambda|^2] + \frac{\eta T}{2 (1 - \gamma)^2} \\ \intertext{Telescoping and bounding $|\lambda_0 - \lambda|$ by $U$,} & \leq T \, \frac{\varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U}{2 \eta} + \frac{U^2}{2 \eta} + \frac{\eta T}{2 (1 - \gamma)^2} \end{align*} Setting $\eta = \frac{U (1 - \gamma)}{\sqrt{T}}$, \begin{align} R^{d}(\lambda, T) & \leq T^{3/2} \, \frac{\varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U}{2 U (1- \gamma)} + \frac{U \sqrt{T}}{1 - \gamma} \label{eq:dual-regret-bound} \end{align} Using~\cref{eq:dual-regret-bound} with the expressions for the optimality gap (\cref{eq:pd-guarantees-inter2}) and constraint violation (\cref{eq:pd-guarantees-inter3}), we obtain the following bounds. 
For the reward optimality gap, since $\lambda = 0 \in [0,U]$, \begin{align*} \rewardhatp{\hat{\pi}^*} - \rewardhatp{\bar{\pi}_{T}} & \leq \sqrt{T} \, \frac{\varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U}{2 U (1- \gamma)} + \frac{U}{(1 - \gamma) \sqrt{T}} < \sqrt{T} \, \frac{3 \varepsilon_{\text{\tiny{l}}}}{2 (1- \gamma)} + \frac{U}{(1 - \gamma) \sqrt{T}} \tag{Since $\varepsilon_{\text{\tiny{l}}} < U$} \end{align*} For the constraint violation, since $U \in [0,U]$, \begin{align*} \left[b' - \consthat{\bar{\pi}_T} \right] \leq \left[b' - \consthat{\bar{\pi}_T} \right]_{+} & \leq \sqrt{T} \, \frac{\varepsilon_{\text{\tiny{l}}}^2 + 2 \varepsilon_{\text{\tiny{l}}} U}{2 U (1- \gamma) \, (U - \lambda^*)} + \frac{U}{(U - \lambda^*) \, (1 - \gamma) \sqrt{T}} \\ & < \sqrt{T} \, \frac{3 \varepsilon_{\text{\tiny{l}}}}{2 (1- \gamma) \, (U - \lambda^*)} + \frac{U}{(U - \lambda^*) \, (1 - \gamma) \sqrt{T}} \tag{Since $\varepsilon_{\text{\tiny{l}}} < U$} \end{align*} Let us set $T$ such that the second term in both quantities is bounded by $\frac{\varepsilon_{\text{\tiny{opt}}}}{2}$, \begin{align*} T & = T_0 := \frac{4 U^2}{\varepsilon_{\text{\tiny{opt}}}^2 \, (1 - \gamma)^2} \left[1 + \frac{1}{(U - \lambda^*)^2} \right] \end{align*} With $T = T_0$, the above expressions can be simplified as: \begin{align*} \rewardhatp{\hat{\pi}^*} - \rewardhatp{\bar{\pi}_{T}} & \leq \frac{2 U}{(1 - \gamma) \varepsilon_{\text{\tiny{opt}}}} \, \left(1 + \frac{1}{U - \lambda^*} \right) \frac{3 \varepsilon_{\text{\tiny{l}}}}{2 (1- \gamma)} + \frac{\varepsilon_{\text{\tiny{opt}}}}{2} \\ \left[b' - \consthat{\bar{\pi}_T} \right] & \leq \frac{2 U}{(1 - \gamma) \varepsilon_{\text{\tiny{opt}}}} \, \left(1 + \frac{1}{U - \lambda^*} \right) \frac{3 \varepsilon_{\text{\tiny{l}}}}{2 (1- \gamma) \, (U - \lambda^*)} + \frac{\varepsilon_{\text{\tiny{opt}}}}{2} \end{align*} We want to set $\varepsilon_{\text{\tiny{l}}}$ such that the first term in both quantities is also bounded by $\frac{\varepsilon_{\text{\tiny{opt}}}}{2}$, \begin{align*} \varepsilon_{\text{\tiny{l}}} & = \frac{\varepsilon_{\text{\tiny{opt}}}^2 (1 - \gamma)^2 \, (U - \lambda^*)}{6 U} \end{align*} With these values, the algorithm ensures that, \begin{align*} \rewardhatp{\hat{\pi}^*} - \rewardhatp{\bar{\pi}_{T}} & \leq \varepsilon_{\text{\tiny{opt}}} \quad \text{;} \quad \left[b' - \consthat{\bar{\pi}_T} \right] \leq \varepsilon_{\text{\tiny{opt}}}. \end{align*} \end{proof} \begin{thmbox} \begin{lemma}[Bounding the dual variable] The objective~\cref{eq:emp-CMDP} satisfies strong duality. Define $\pi^*_c := \argmax \const{\pi}$ and consider two cases: (1) If $b' = b - \epsilon'$ for $\epsilon' > 0$ and event $\mathcal{E}_1 = \left\{\abs{\consthat{\pi^*_c} - \const{\pi^*_c}} \leq \frac{\epsilon'}{2} \right\}$ holds, then $\lambda^* \leq \frac{2 (1 + \omega)}{\epsilon' (1 - \gamma)}$ and (2) If $b' = b + \Delta$ for $\Delta \in \left(0,\frac{\zeta}{2}\right)$ and event $\mathcal{E}_2 = \left\{\abs{\consthat{\pi^*_c} - \const{\pi^*_c}} \leq \frac{\zeta}{2} - \Delta \right\}$ holds, then $\lambda^* \leq \frac{2 (1 + \omega)}{\zeta (1 - \gamma)}$. 
\label{lemma:dual-bound} \end{lemma} \end{thmbox} \begin{proof} Writing the empirical CMDP in~\cref{eq:emp-CMDP} in its Lagrangian form, \begin{align*} \rewardhatp{\hat{\pi}^*} & = \max_{\pi} \min_{\lambda \geq 0} \rewardhatp{\pi} + \lambda [\consthat{\pi} - b'] \intertext{Using the linear programming formulation of CMDPs in terms of the state-occupancy measures $\mu$, we know that both the objective and the constraint are linear functions of $\mu$, and strong duality holds w.r.t $\mu$. Since $\mu$ and $\pi$ have a one-one mapping, we can switch the min and the max~\citep{paternain2019constrained}, implying,} & = \min_{\lambda \geq 0} \max_{\pi} \rewardhatp{\pi} + \lambda [\consthat{\pi} - b'] \intertext{Since $\lambda^*$ is the optimal dual variable for the empirical CMDP in~\cref{eq:emp-CMDP}, } & = \max_{\pi} \rewardhatp{\pi} + \lambda^* \, [\consthat{\pi} - b'] \\ \intertext{Define $\pi^*_c := \argmax \const{\pi}$ and $\hat{\pi}^*_c := \argmax \consthat{\pi}$} & \geq \rewardhatp{\hat{\pi}^*_c} + \lambda^* \, [\consthat{\hat{\pi}^*_c} - b'] \\ & = \rewardhatp{\hat{\pi}^*_c} + \lambda^* \, \left[\left(\consthat{\hat{\pi}^*_c} - \const{\pi^*_c} \right) + (\const{\pi^*_c} - b) + (b - b') \right] \\ \intertext{By definition, $\zeta = \const{\pi^*_c} - b$} & = \rewardhatp{\hat{\pi}^*_c} + \lambda^* \, \left[\left(\consthat{\hat{\pi}^*_c} - \consthat{\pi^*_c} \right) + \left(\consthat{\pi^*_c} - \const{\pi^*_c} \right) + \zeta + (b - b') \right] \intertext{By definition of $\hat{\pi}^*_c$, $\left(\consthat{\hat{\pi}^*_c} - \consthat{\pi^*_c} \right) \geq 0$} \rewardhatp{\hat{\pi}^*} & \geq \rewardhatp{\hat{\pi}^*_c} + \lambda^* \, \left[\zeta + (b - b') - \abs{\consthat{\pi^*_c} - \const{\pi^*_c}} \right] \end{align*} 1) If $b' = b - \epsilon'$ for $\epsilon' > 0$. Hence, \begin{align*} \rewardhatp{\hat{\pi}^*} & \geq \rewardhatp{\hat{\pi}^*_c} + \lambda^* \, \left[\zeta + \epsilon' - \abs{\consthat{\pi^*_c} - \const{\pi^*_c}} \right] \\ \intertext{If the event $\mathcal{E}_1$ holds, $\abs{\consthat{\pi^*_c} - \const{\pi^*_c}} \leq \frac{\epsilon'}{2}$, implying, $\abs{\consthat{\pi^*_c} - \const{\pi^*_c}} < \zeta + \frac{\epsilon'}{2}$, then,} & \geq \rewardhatp{\hat{\pi}^*_c} + \lambda^* \, \frac{\epsilon'}{2} \\ \implies \lambda^* & \leq \frac{2}{\epsilon'} [\rewardhatp{\hat{\pi}^*} - \rewardhatp{\hat{\pi}^*_c}] \leq \frac{2 (1 + \omega)}{\epsilon' (1 - \gamma)} \end{align*} 2) If $b' = b + \Delta$ for $\Delta \in \left(0,\frac{\zeta}{2}\right)$. Hence, \begin{align*} \rewardhatp{\hat{\pi}^*} & \geq \rewardhatp{\hat{\pi}^*_c} + \lambda^* \, \left[\zeta - \Delta - \abs{\consthat{\pi^*_c} - \const{\pi^*_c}} \right] \intertext{If the event $\mathcal{E}_2$ holds, $\abs{\consthat{\pi^*_c} - \const{\pi^*_c}} \leq \frac{\zeta}{2} - \Delta$ for $\Delta < \frac{\zeta}{2}$, then,} & \geq \rewardhatp{\hat{\pi}^*_c} + \lambda^* \, \frac{\zeta}{2} \\ \implies \lambda^* & \leq \frac{2}{\zeta} [\rewardhatp{\hat{\pi}^*} - \rewardhatp{\hat{\pi}^*_c}] \leq \frac{2 (1 + \omega)}{\zeta (1 - \gamma)} \end{align*} \end{proof} \begin{thmbox} \begin{lemma}[Lemma B.2 of~\citet{jain2022towards}] For any $C > \lambda^*$ and any $\tilde{\pi}$ s.t. $\rewardhatp{\hat{\pi}^*} - \rewardhatp{\tilde{\pi}} + C [b - \consthat{\tilde{\pi}}]_{+} \leq \beta$, we have $[b - \consthat{\tilde{\pi}}]_{+} \leq \frac{\beta}{C - \lambda^*}$. 
\label{lemma:lag-constraint} \end{lemma} \end{thmbox} \begin{proof} Define $\nu(\tau) = \max_{\pi} \{\reward{\pi} \mid \const{\pi} \geq b + \tau \}$ and note that by definition, $\nu(0) = \reward{\hat{\pi}^*}$ and that $\nu$ is a decreasing function of its argument. Let $\lag{\pi}{\lambda} = \reward{\pi}+\lambda(\const{\pi}-b)$. Then, for any policy $\pi$ s.t. $\const{\pi} \geq b + \tau$, we have \begin{align} \lag{\pi}{\lambda^*} & \leq \max_{\pi'} \lag{\pi'}{\lambda^*} \nonumber \\ &= \reward{\hat{\pi}^*} &\tag{by strong duality} \\ & = \nu(0) & \tag{from the above relation} \\ \implies \nu(0) - \tau \lambda^* & \geq \lag{\pi}{\lambda^*} - \tau \lambda^* = \reward{\pi} + \lambda^* \underbrace{(\const{\pi} - b - \tau)}_{\text{Non-negative}} \nonumber \\ \implies \nu(0) - \tau \lambda^* & \geq \max_{\pi} \{\reward{\pi} \mid \const{\pi} \geq b + \tau \} = \nu(\tau) \,.\nonumber \\ \implies \tau \lambda^* \leq \nu(0) - \nu(\tau)\,. \label{eq:inter-1} \end{align} Now we choose $\tilde{\tau} = -(b - \const{\tilde{\pi}})_{+}$. \begin{align*} (C - \lambda^*) |\tilde{\tau}| &= \lambda^* \tilde{\tau} + C |\tilde{\tau}| & \tag{since $\tilde{\tau} \leq 0$} \\ & \leq \nu(0) - \nu(\tilde{\tau}) + C |\tilde{\tau}| & \tag{\cref{eq:inter-1}} \\ & = \reward{\hat{\pi}^*} - \reward{\tilde{\pi}} + C |\tilde{\tau}| + \reward{\tilde{\pi}} - \nu(\tilde{\tau}) & \tag{definition of $\nu(0)$} \\ & = \reward{\hat{\pi}^*} - \reward{\tilde{\pi}} + C (b - \const{\tilde{\pi}})_{+} + \reward{\tilde{\pi}} - \nu(\tilde{\tau}) \\ & \leq \beta + \reward{\tilde{\pi}} - \nu(\tilde{\tau})\,. \intertext{Now let us bound $\nu(\tilde{\tau})$:} \nu(\tilde{\tau}) & = \max_{\pi} \{\reward{\pi} \mid \const{\pi} \geq b - (b - \const{\tilde{\pi}})_{+} \} \\ & \geq \max_{\pi} \{\reward{\pi} \mid \const{\pi} \geq \const{\tilde{\pi}} \} & \tag{tightening the constraint} \\ \nu(\tilde{\tau}) & \geq \reward{\tilde{\pi}} \implies (C - \lambda^*) |\tilde{\tau}| \leq \beta \implies (b - \const{\tilde{\pi}})_{+} \leq \frac{\beta}{C - \lambda^*} \end{align*} \end{proof} \section{Proof of~\cref{thm:ub-relaxed}} \label{app:proof-relaxed} \ubrelaxed* \begin{proof} We fill in the details required for the proof sketch in the main paper. Proceeding according to the proof sketch, we first detail the computation of $T$ and $\varepsilon_{\text{\tiny{l}}}$ for the primal-dual algorithm. Recall that $U = \frac{32}{5 \epsilon \, (1 - \gamma)}$ and $\varepsilon_{\text{\tiny{opt}}} = \frac{\epsilon}{4}$. Using~\cref{thm:pd-guarantees}, we need to set \begin{align*} T & = \frac{4 U^2}{\varepsilon_{\text{\tiny{opt}}}^2 \, (1 - \gamma)^2} \left[1 + \frac{1}{(U - \lambda^*)^2} \right] = \frac{64 \, U^2}{\epsilon^2 (1 - \gamma)^2} \left[1 + \frac{1}{(U - \lambda^*)^2} \right] \intertext{Recall that $|\lambda^*| \leq C := \frac{16}{5 \epsilon \, (1 - \gamma)}$ and $U = 2 C$. Simplifying,} & \leq \frac{256}{\epsilon^2 (1 - \gamma)^2} \left[C^2 + 1 \right] < \frac{512}{\epsilon^2 (1 - \gamma)^2} C^2 = \frac{512}{\epsilon^2 (1 - \gamma)^2} \, \frac{256}{25 \epsilon^2 \, (1 - \gamma)^2} \\ \implies T &= O \left(\nicefrac{1}{\epsilon^4 (1 - \gamma)^4}\right).
\end{align*} Using~\cref{thm:pd-guarantees}, we need to set $\varepsilon_{\text{\tiny{l}}}$, \begin{align*} \varepsilon_{\text{\tiny{l}}} & = \frac{\varepsilon_{\text{\tiny{opt}}}^2 (1 - \gamma)^2 \, (U - \lambda^*)}{6 U} = \frac{\epsilon^2 (1 - \gamma)^2 \, (U - \lambda^*)}{96 U} \leq \frac{\epsilon^2 (1 - \gamma)^2}{96} \\ \implies \varepsilon_{\text{\tiny{l}}} &= O \left(\epsilon^2 (1 - \gamma)^2\right). \end{align*} For bounding the concentration terms for $\bar{\pi}_{T}$ in~\cref{eq:conc-relaxed}, we use~\cref{thm:main-concentration-emp} with $U = \frac{32}{5 \epsilon \, (1 - \gamma)}$, $\omega = \frac{\epsilon (1 - \gamma)}{8}$ and $\varepsilon_{\text{\tiny{l}}} = \frac{\epsilon^2 (1 - \gamma)^2}{96}$. In this case, $\iota = \frac{\omega \, \delta \, (1-\gamma) \, \varepsilon_{\text{\tiny{l}}}}{30 \, U |S||A|^2} = O\left(\frac{\delta \epsilon^4 \, (1 - \gamma)^4}{SA^2}\right)$ and \begin{align*} C(\delta) = 72 \log \left(\frac{16 (1 + U + \omega) \, S A \log\left(\nicefrac{e}{1 - \gamma}\right)}{(1 - \gamma)^2 \, \iota \, \delta} \right) = O\left(\log\left(\frac{S^2 A^3}{\delta^2 \epsilon^5 (1 - \gamma)^7}\right)\right). \end{align*} With this value of $C(\delta)$, in order to satisfy the concentration bounds for $\bar{\pi}_{T}$, we require that \begin{align*} 2 \sqrt{\frac{C(\delta)}{N \cdot (1-\gamma)^3 }} \leq \frac{\epsilon}{4} \implies N \geq O\left(\frac{C(\delta)}{(1 - \gamma)^3 \, \epsilon^2}\right) \end{align*} We use the~\cref{lemma:main-concentration-opt} to bound the remaining concentration terms for $\pi^*$ and $\pi^*_c$ in~\cref{eq:conc-relaxed}. In this case, for $C'(\delta) = 72 \log \left(\frac{4 S \log(e/1-\gamma)}{\delta}\right)$, we require that, \begin{align*} 2 \sqrt{\frac{C'(\delta)}{N \cdot (1-\gamma)^3 }} \leq \frac{\epsilon}{4} \implies N \geq O\left(\frac{C'(\delta)}{(1 - \gamma)^3 \, \epsilon^2}\right) \end{align*} Hence, if $N \geq \tilde{O}\left(\frac{\log(1/\delta)}{(1 - \gamma)^3 \, \epsilon^2}\right)$, the bounds in~\cref{eq:conc-relaxed} are satisfied, completing the proof. \end{proof} \begin{thmbox} \begin{lemma}[Decomposing the suboptimality] For $b' = b - \frac{\epsilon - \varepsilon_{\text{\tiny{opt}}}}{2}$, if (i) $\varepsilon_{\text{\tiny{opt}}} < \epsilon$, and (ii) the following conditions are satisfied, \begin{align*} \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq \frac{\epsilon - \varepsilon_{\text{\tiny{opt}}}}{2} \, \text{;} \, \abs{\const{\pi^*} - \consthat{\pi^*}} \leq \frac{\epsilon - \varepsilon_{\text{\tiny{opt}}}}{2} \, \text{;} \, \abs{\consthat{\pi^*_c} - \const{\pi^*_c}} \leq \frac{\zeta}{4} \end{align*} where $\pi^*_c := \argmax \const{\pi}$, then (a) policy $\bar{\pi}_{T}$ violates the constraint by at most $\epsilon$ i.e. 
$\const{\bar{\pi}_{T}} \geq b - \epsilon$ and (b) its optimality gap can be bounded as: \begin{align*} \reward{\pi^*} - \reward{\bar{\pi}_{T}} & \leq \frac{2 \omega}{1 - \gamma} + \varepsilon_{\text{\tiny{opt}}} + \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} + \abs{\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}} \end{align*} \label{lemma:decomposition-relaxed} \end{lemma} \end{thmbox} \begin{proof} From~\cref{thm:pd-guarantees}, we know that, \begin{align*} & \consthat{\bar{\pi}_{T}} \geq b' - \varepsilon_{\text{\tiny{opt}}} \implies \const{\bar{\pi}_{T}} \geq \const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}} + b' - \varepsilon_{\text{\tiny{opt}}} \geq - \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} + b' - \varepsilon_{\text{\tiny{opt}}} \\ \intertext{Since we require $\bar{\pi}_{T}$ to violate the constraint in the true CMDP by at most $\epsilon$, we require $\const{\bar{\pi}_{T}} \geq b - \epsilon$. From the above equation, a sufficient condition for ensuring this is,} & - \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} + b' - \varepsilon_{\text{\tiny{opt}}} \geq b - \epsilon \\ & \text{meaning that we require} \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq (b' - b) - \varepsilon_{\text{\tiny{opt}}} + \epsilon. \end{align*} In the subsequent analysis, we will require $\pi^*$ to be feasible for the constrained problem in~\cref{eq:emp-CMDP}, meaning that we require $\consthat{\pi^*} \geq b'$. Since $\pi^*$ is the solution to~\cref{eq:true-CMDP}, we know that, \begin{align*} & \const{\pi^*} \geq b \implies \consthat{\pi^*} \geq b - \abs{\const{\pi^*} - \consthat{\pi^*}} \intertext{Since we require $\consthat{\pi^*} \geq b'$, using the above equation, a sufficient condition to ensure this is} & b - \abs{\const{\pi^*} - \consthat{\pi^*}} \geq b' \text{meaning that we require} \abs{\const{\pi^*} - \consthat{\pi^*}} \leq b - b'. \intertext{Since $b' = b - \frac{\epsilon - \varepsilon_{\text{\tiny{opt}}}}{2}$, we require the following statements to hold:} & \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq \frac{\epsilon - \varepsilon_{\text{\tiny{opt}}}}{2} \quad \text{;} \quad \abs{\const{\pi^*} - \consthat{\pi^*}} \leq \frac{\epsilon - \varepsilon_{\text{\tiny{opt}}}}{2}. 
\end{align*} Given that the above statements hold, we can decompose the suboptimality in the reward value function as follows: \begin{align*} & \reward{\pi^*} - \reward{\bar{\pi}_{T}} \\ & = \reward{\pi^*} - \rewardp{\pi^*} + \rewardp{\pi^*} - \reward{\bar{\pi}_{T}} \\ &= [\reward{\pi^*} - \rewardp{\pi^*}] + \rewardp{\pi^*} - \rewardhatp{\pi^*} + \rewardhatp{\pi^*} - \reward{\bar{\pi}_{T}} \\ & \leq [\reward{\pi^*} - \rewardp{\pi^*}] + [\rewardp{\pi^*} - \rewardhatp{\pi^*}] + \rewardhatp{\hat{\pi}^*} - \reward{\bar{\pi}_{T}} & \tag{By optimality of $\hat{\pi}^*$ and since we have ensured that $\pi^*$ is feasible for~\cref{eq:emp-CMDP}} \\ & = [\reward{\pi^*} - \rewardp{\pi^*}] + [\rewardp{\pi^*} - \rewardhatp{\pi^*}] + [\rewardhatp{\hat{\pi}^*} - \rewardhatp{\bar{\pi}_{T}}] + \rewardhatp{\bar{\pi}_{T}} - \reward{\bar{\pi}_{T}} \\ & = \underbrace{[\reward{\pi^*} - \rewardp{\pi^*}]}_{\text{Perturbation Error}} + \underbrace{[\rewardp{\pi^*} - \rewardhatp{\pi^*}]}_{\text{Concentration Error}} + \underbrace{[\rewardhatp{\hat{\pi}^*} - \rewardhatp{\bar{\pi}_{T}}]}_{\text{Primal-Dual Error}} + \underbrace{[\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}]}_{\text{Concentration Error}} + \underbrace{[\rewardp{\bar{\pi}_{T}} - \reward{\bar{\pi}_{T}}]}_{\text{Perturbation Error}} \\ \intertext{For a perturbation magnitude equal to $\omega$, we use~\cref{lemma:perturbation-error} to bound both perturbation errors by $\frac{\omega}{1 - \gamma}$. Using~\cref{thm:pd-guarantees} to bound the primal-dual error by $\varepsilon_{\text{\tiny{opt}}}$,} & \reward{\pi^*} - \reward{\bar{\pi}_{T}} \leq \frac{2 \omega}{1 - \gamma} + \varepsilon_{\text{\tiny{opt}}} + \underbrace{[\rewardp{\pi^*} - \rewardhatp{\pi^*}]}_{\text{Concentration Error}} + \underbrace{[\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}]}_{\text{Concentration Error}}. \end{align*} \end{proof} \section{Proof of~\cref{thm:ub-strict}} \label{app:proof-strict} \ubstrict* \begin{proof} We fill in the details required for the proof sketch in the main paper. Proceeding according to the proof sketch, we first detail the computation of $T$ and $\varepsilon_{\text{\tiny{l}}}$ for the primal-dual algorithm. Recall that $U = \frac{8}{\zeta (1 - \gamma)}$, $\Delta = \frac{\epsilon (1- \gamma) \zeta}{40}$ and $\varepsilon_{\text{\tiny{opt}}} = \frac{\Delta}{5}$. Using~\cref{thm:pd-guarantees}, we need to set \begin{align*} T & = \frac{4 U^2}{\varepsilon_{\text{\tiny{opt}}}^2 \, (1 - \gamma)^2} \left[1 + \frac{1}{(U - \lambda^*)^2} \right] = \frac{100 \, U^2}{\Delta^2 (1 - \gamma)^2} \left[1 + \frac{1}{(U - \lambda^*)^2} \right] \intertext{Recall that $|\lambda^*| \leq C := \frac{4}{\zeta (1 - \gamma)}$ and $U = 2 C$. Simplifying,} & \leq \frac{400}{\Delta^2 (1 - \gamma)^2} \left[C^2 + 1 \right] < \frac{800}{\Delta^2 (1 - \gamma)^2} C^2 = \frac{800}{\Delta^2 (1 - \gamma)^2} \, \frac{16}{\zeta^2 \, (1 - \gamma)^2} \\ \implies T &\leq \frac{800 \cdot 1600}{\epsilon^2 \zeta^2 (1 - \gamma)^4} \, \frac{16}{\zeta^2 \, (1 - \gamma)^2} = O \left(\nicefrac{1}{\epsilon^2 \, \zeta^4 \, (1 - \gamma)^6}\right).
\end{align*} Using~\cref{thm:pd-guarantees}, we need to set $\varepsilon_{\text{\tiny{l}}}$, \begin{align*} \varepsilon_{\text{\tiny{l}}} & = \frac{\varepsilon_{\text{\tiny{opt}}}^2 (1 - \gamma)^2 \, (U - \lambda^*)}{6 U} = \frac{\Delta^2 (1 - \gamma)^2 \, (U - \lambda^*)}{150 U} \leq \frac{\Delta^2 (1 - \gamma)^2}{150} \\ \implies \varepsilon_{\text{\tiny{l}}} &\leq \frac{\epsilon^2 \, \zeta^2 \, (1 - \gamma)^4}{150 \cdot 1600} = O \left(\epsilon^2 \, \zeta^2 \, (1 - \gamma)^4 \right). \end{align*} For bounding the concentration terms for $\bar{\pi}_{T}$ in~\cref{eq:conc-strict}, we use~\cref{thm:main-concentration-emp} with $U = \frac{8}{\zeta (1 - \gamma)}$, $\omega = \frac{\epsilon (1 - \gamma)}{10}$ and $\varepsilon_{\text{\tiny{l}}} = \frac{\epsilon^2 \, \zeta^2 \, (1 - \gamma)^4}{150 \cdot 1600}$. In this case, $\iota = \frac{\omega \, \delta \, (1-\gamma) \, \varepsilon_{\text{\tiny{l}}}}{30 \, U |S||A|^2} = O\left(\frac{\delta \epsilon^3 \zeta^3 (1 - \gamma)^7}{SA^2}\right)$ and \begin{align*} C(\delta) = 72 \log \left(\frac{16 (1 + U + \omega) \, S A \log\left(\nicefrac{e}{1 - \gamma}\right)}{(1 - \gamma)^2 \, \iota \, \delta} \right) = O\left(\log\left(\frac{S^2 A^3}{(1 - \gamma)^{10} \delta^2 \epsilon^3 \zeta^4}\right)\right). \end{align*} With this value of $C(\delta)$, in order to satisfy the concentration bounds for $\bar{\pi}_{T}$, we require that \begin{align*} 2 \sqrt{\frac{C(\delta)}{N \cdot (1-\gamma)^3 }} \leq \frac{\Delta}{5} \implies N \geq O\left(\frac{C(\delta)}{(1 - \gamma)^3 \, \Delta^2}\right) \geq O\left(\frac{C(\delta)}{(1 - \gamma)^5 \, \zeta^2 \, \epsilon^2}\right) \end{align*} We use the~\cref{lemma:main-concentration-opt} to bound the remaining concentration terms for $\pi^*$ and $\pi^*_c$ in~\cref{eq:conc-strict}. In this case, for $C'(\delta) = 72 \log \left(\frac{4 S \log(e/1-\gamma)}{\delta}\right)$, we require that, \begin{align*} 2 \sqrt{\frac{C'(\delta)}{N \cdot (1-\gamma)^3 }} \leq \frac{\Delta}{5} \implies N \geq O\left(\frac{C'(\delta)}{(1 - \gamma)^3 \, \Delta^2}\right) \geq O\left(\frac{C'(\delta)}{(1 - \gamma)^5 \, \zeta^2 \, \epsilon^2}\right) \end{align*} Hence, if $N \geq \tilde{O}\left(\frac{\log(1/\delta)}{(1 - \gamma)^5 \, \zeta^2 \, \epsilon^2}\right)$, the bounds in~\cref{eq:conc-strict} are satisfied, completing the proof. \end{proof} \clearpage \begin{thmbox} \begin{lemma}[Decomposing the suboptimality] For a fixed $\Delta > 0$ and $\varepsilon_{\text{\tiny{opt}}} < \Delta$, if $b' = b + \Delta$ and the following conditions are satisfied, \begin{align*} \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq \Delta - \varepsilon_{\text{\tiny{opt}}} \, \text{;} \, \abs{\const{\pi^*} - \consthat{\pi^*}} \leq \Delta \end{align*} then (a) policy $\bar{\pi}_{T}$ satisfies the constraint i.e. $\const{\bar{\pi}_{T}} \geq b$ and (b) its optimality gap can be bounded as: \begin{align*} \reward{\pi^*} - \reward{\bar{\pi}_{T}} & \leq \frac{2 \omega}{1 - \gamma} + \varepsilon_{\text{\tiny{opt}}} + 2 \Delta \lambda^* + \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} + \abs{\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}}. \end{align*} \label{lemma:decomposition-strict} \end{lemma} \end{thmbox} \begin{proof} Compared to~\cref{eq:emp-CMDP}, we define a slightly modified CMDP problem by changing the constraint RHS to $b''$ for some $b''$ to be specified later. We denote its corresponding optimal policy as $\tilde{\pi}^*$.
In particular, \begin{align} \tilde{\pi}^* \in \argmax_{\pi} \rewardhatp{\pi} \, \text{s.t.} \, \consthat{\pi} \geq b'' \label{eq:emp-CMDP-2} \end{align} From~\cref{thm:pd-guarantees}, we know that, \begin{align*} & \consthat{\bar{\pi}_{T}} \geq b' - \varepsilon_{\text{\tiny{opt}}} \implies \const{\bar{\pi}_{T}} \geq \const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}} + b' - \varepsilon_{\text{\tiny{opt}}} \geq - \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} + b' - \varepsilon_{\text{\tiny{opt}}} \\ \intertext{Since we require $\bar{\pi}_{T}$ to satisfy the constraint in the true CMDP, we require $\const{\bar{\pi}_{T}} \geq b$. From the above equation, a sufficient condition for ensuring this is,} & - \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} + b' - \varepsilon_{\text{\tiny{opt}}} \geq b \\ & \text{meaning that we require} \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq (b' - b) - \varepsilon_{\text{\tiny{opt}}}. \end{align*} In the subsequent analysis, we will require $\pi^*$ to be feasible for the constrained problem in~\cref{eq:emp-CMDP-2}. This implies that we require $\consthat{\pi^*} \geq b''$. Since $\pi^*$ is the solution to~\cref{eq:true-CMDP}, we know that, \begin{align*} & \const{\pi^*} \geq b \implies \consthat{\pi^*} \geq b - \abs{\const{\pi^*} - \consthat{\pi^*}} \intertext{Since we require $\consthat{\pi^*} \geq b''$, using the above equation, a sufficient condition to ensure this is} & b - \abs{\const{\pi^*} - \consthat{\pi^*}} \geq b'' \text{meaning that we require} \abs{\const{\pi^*} - \consthat{\pi^*}} \leq b - b''. \intertext{Hence we require the following statements to hold:} & \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq (b' - b) - \varepsilon_{\text{\tiny{opt}}} \quad \text{;} \quad \abs{\const{\pi^*} - \consthat{\pi^*}} \leq b - b''. 
\end{align*} Given that the above statements hold, we can decompose the suboptimality in the reward value function as follows: \begin{align*} \reward{\pi^*} - \reward{\bar{\pi}_{T}} & = \reward{\pi^*} - \rewardp{\pi^*} + \rewardp{\pi^*} - \reward{\bar{\pi}_{T}} \\ & = [\reward{\pi^*} - \rewardp{\pi^*}] + [\rewardp{\pi^*} - \rewardhatp{\pi^*}] + \rewardhatp{\pi^*} - \reward{\bar{\pi}_{T}} \\ & \leq [\reward{\pi^*} - \rewardp{\pi^*}] + [\rewardp{\pi^*} - \rewardhatp{\pi^*}] + \rewardhatp{\tilde{\pi}^*} - \reward{\bar{\pi}_{T}} & \tag{By optimality of $\tilde{\pi}^*$ and since we have ensured that $\pi^*$ is feasible for~\cref{eq:emp-CMDP-2}} \\ & = [\reward{\pi^*} - \rewardp{\pi^*}] + [\rewardp{\pi^*} - \rewardhatp{\pi^*}] + [\rewardhatp{\tilde{\pi}^*} - \rewardhatp{\hat{\pi}^*}] + \rewardhatp{\hat{\pi}^*} - \reward{\bar{\pi}_{T}} \\ & = [\reward{\pi^*} - \rewardp{\pi^*}] + [\rewardp{\pi^*} - \rewardhatp{\pi^*}] + [\rewardhatp{\tilde{\pi}^*} - \rewardhatp{\hat{\pi}^*}] + [\rewardhatp{\hat{\pi}^*} - \rewardhatp{\bar{\pi}_{T}}] \\ & + \rewardhatp{\bar{\pi}_{T}} - \reward{\bar{\pi}_{T}} \\ & = \underbrace{[\reward{\pi^*} - \rewardp{\pi^*}]}_{\text{Perturbation Error}} + \underbrace{[\rewardp{\pi^*} - \rewardhatp{\pi^*}]}_{\text{Concentration Error}} + \underbrace{[\rewardhatp{\tilde{\pi}^*} - \rewardhatp{\hat{\pi}^*}]}_{\text{Sensitivity Error}} + \underbrace{[\rewardhatp{\hat{\pi}^*} - \rewardhatp{\bar{\pi}_{T}}]}_{\text{Primal-Dual Error}} \\ & + \underbrace{[\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}]}_{\text{Concentration Error}} + \underbrace{[\rewardp{\bar{\pi}_{T}} - \reward{\bar{\pi}_{T}}]}_{\text{Perturbation Error}} \\ \intertext{For a perturbation magnitude equal to $\omega$, we use~\cref{lemma:perturbation-error} to bound both perturbation errors by $\frac{\omega}{1 - \gamma}$. Using~\cref{thm:pd-guarantees} to bound the primal-dual error by $\varepsilon_{\text{\tiny{opt}}}$,} & \leq \frac{2 \omega}{1 - \gamma} + \varepsilon_{\text{\tiny{opt}}} + \underbrace{[\rewardp{\pi^*} - \rewardhatp{\pi^*}]}_{\text{Concentration Error}} + \underbrace{[\rewardhatp{\tilde{\pi}^*} - \rewardhatp{\hat{\pi}^*}]}_{\text{Sensitivity Error}} + \underbrace{[\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}]}_{\text{Concentration Error}} \end{align*} Since $b' = b + \Delta$ and setting $b'' = b - \Delta$, we use~\cref{lemma:sensitivity} to bound the sensitivity error term, \begin{align*} \reward{\pi^*} - \reward{\bar{\pi}_{T}} & \leq \frac{2 \omega}{1 - \gamma} + \varepsilon_{\text{\tiny{opt}}} + 2 \Delta \lambda^* + \underbrace{[\rewardp{\pi^*} - \rewardhatp{\pi^*}]}_{\text{Concentration Error}} + \underbrace{[\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}]}_{\text{Concentration Error}} \end{align*} With these values of $b'$ and $b''$, we require the following statements to hold, \begin{align*} \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} & \leq \Delta - \varepsilon_{\text{\tiny{opt}}} \quad \text{;} \quad \abs{\const{\pi^*} - \consthat{\pi^*}} \leq \Delta.
\end{align*} \end{proof} \begin{thmbox} \begin{lemma}[Bounding the sensitivity error] If $b' = b + \Delta$ and $b'' = b - \Delta$ in~\cref{eq:emp-CMDP} and~\cref{eq:emp-CMDP-2} such that, \[ \hat{\pi}^* \in \argmax_{\pi} \rewardhatp{\pi} \, \text{s.t.} \, \consthat{\pi} \geq b + \Delta \] \[ \tilde{\pi}^* \in \argmax_{\pi} \rewardhatp{\pi} \, \text{s.t.} \, \consthat{\pi} \geq b - \Delta \,, \] then the sensitivity error term can be bounded by: \begin{align*} \abs{\rewardhatp{\hat{\pi}^*} - \rewardhatp{\tilde{\pi}^*}} & \leq 2 \Delta \lambda^*. \end{align*} \label{lemma:sensitivity} \end{lemma} \end{thmbox} \begin{proof} Writing the empirical CMDP in~\cref{eq:emp-CMDP} in its Lagrangian form, \begin{align*} \rewardhatp{\hat{\pi}^*} & = \max_{\pi} \min_{\lambda \geq 0} \rewardhatp{\pi} + \lambda [\consthat{\pi} - (b + \Delta)] \\ & = \min_{\lambda \geq 0} \max_{\pi} \rewardhatp{\pi} + \lambda [\consthat{\pi} - (b + \Delta)] & \tag{By strong duality~\cref{lemma:dual-bound}} \\ \intertext{Since $\lambda^*$ is the optimal dual variable for the empirical CMDP in~\cref{eq:emp-CMDP}, } & = \max_{\pi} \rewardhatp{\pi} + \lambda^* \, [\consthat{\pi} - (b + \Delta)] \\ & \geq \rewardhatp{\tilde{\pi}^*} + \lambda^* \, [\consthat{\tilde{\pi}^*} - (b + \Delta)] & \tag{The relation holds for $\pi = \tilde{\pi}^*$.} \\ \intertext{Since $\consthat{\tilde{\pi}^*} \geq b - \Delta$,} \rewardhatp{\hat{\pi}^*} & \geq \rewardhatp{\tilde{\pi}^*} - 2 \lambda^* \Delta \\ \implies \rewardhatp{\tilde{\pi}^*} - \rewardhatp{\hat{\pi}^*} & \leq 2 \Delta \lambda^* \intertext{Since the CMDP in~\cref{eq:emp-CMDP-2} (with $b'' = b - \Delta$) is a less constrained problem than the one in~\cref{eq:emp-CMDP} (with $b' = b + \Delta$), $\rewardhatp{\tilde{\pi}^*} \geq \rewardhatp{\hat{\pi}^*}$, and hence,} \abs{\rewardhatp{\tilde{\pi}^*} - \rewardhatp{\hat{\pi}^*}} & \leq 2 \Delta \lambda^*. \end{align*} \end{proof} \section{Upper-bound under Relaxed Feasibility} \label{sec:ub-relaxed} In order to achieve the objective in~\cref{eq:relaxed-objective} for a target error $\epsilon > 0$, we require setting $N = \tilde{O}\left(\frac{\log(1/\delta)}{(1 - \gamma)^3 \epsilon^2} \right)$, $b' = b - \frac{3 \epsilon}{8}$ and $\omega = \frac{\epsilon (1 - \gamma)}{8}$. This completely specifies the empirical CMDP $\hat{M}$ and the problem in~\cref{eq:emp-CMDP}. In order to specify the primal-dual algorithm, we set $U = O\left(\nicefrac{1}{\epsilon \, (1 - \gamma)}\right)$, $\varepsilon_{\text{\tiny{l}}} = O\left(\epsilon^2 (1 - \gamma)^2 \right)$ and $T = O\left(\nicefrac{1}{(1 - \gamma)^4 \epsilon^4}\right)$. With these choices, we prove the following theorem in~\cref{app:proof-relaxed} and provide a proof sketch below. \begin{restatable}{theorem}{ubrelaxed} For a fixed $\epsilon \in \left(0, \nicefrac{1}{1 - \gamma}\right]$ and $\delta\in(0,1)$, \cref{alg:cmdp-generative} with $N = \tilde{O}\left(\frac{\log(1/\delta)}{(1 - \gamma)^3 \epsilon^2} \right)$ samples, $b' = b - \frac{3 \epsilon}{8}$, $\omega = \frac{\epsilon (1 - \gamma)}{8}$, $U = O\left(\nicefrac{1}{\epsilon \, (1 - \gamma)}\right)$, $\varepsilon_{\text{\tiny{l}}} = O\left(\epsilon^2 (1 - \gamma)^2 \right)$ and $T = O\left(\nicefrac{1}{(1 - \gamma)^4 \epsilon^4}\right)$, returns policy $\bar{\pi}_{T}$ that satisfies the objective in~\cref{eq:relaxed-objective} with probability at least $1 - 4 \delta$. 
\label{thm:ub-relaxed} \end{restatable} \begin{proofsketch} We prove the result for a general primal-dual error $\varepsilon_{\text{\tiny{opt}}} < \epsilon$ and $b' = b - \frac{\epsilon - \varepsilon_{\text{\tiny{opt}}}}{2}$, and subsequently specify $\varepsilon_{\text{\tiny{opt}}}$ and hence $b'$. In~\cref{lemma:decomposition-relaxed} (proved in~\cref{app:proof-relaxed}), we show that if the constraint value functions are sufficiently concentrated (the empirical value function is close to the ground truth value function) for both the optimal policy $\pi^*$ in $M$ and the mixture policy $\bar{\pi}_{T}$ returned by~\cref{alg:cmdp-generative}, i.e., if \begin{align} \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq \frac{\epsilon - \varepsilon_{\text{\tiny{opt}}}}{2} \quad \text{;} \quad \abs{\const{\pi^*} - \consthat{\pi^*}} \leq \frac{\epsilon - \varepsilon_{\text{\tiny{opt}}}}{2}, \label{eq:sampling-const-relaxed} \end{align} then (i) policy $\bar{\pi}_{T}$ violates the constraint in $M$ by at most $\epsilon$, i.e., $\const{\bar{\pi}_{T}} \geq b - \epsilon$, and (ii) its suboptimality in $M$ (compared to $\pi^*$) can be decomposed as: \begin{align} \reward{\pi^*} - \reward{\bar{\pi}_{T}} & \leq \frac{2 \omega}{1 - \gamma} + \varepsilon_{\text{\tiny{opt}}} + \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} + \abs{\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}}. \label{eq:decomposition-relaxed} \end{align} In order to instantiate the primal-dual algorithm, we require a concentration result for policy $\pi^*_c$ that maximizes the constraint value function, i.e. if $\pi^*_c := \argmax \const{\pi}$, then we require $\abs{\const{\pi^*_c} - \consthat{\pi^*_c}} \leq \epsilon + \varepsilon_{\text{\tiny{opt}}}$. In Case 1 of~\cref{lemma:dual-bound} (proved in~\cref{app:proofs-pd}), we show that if this concentration result holds, then we can upper-bound the optimal dual variable $|\lambda^*|$ by $\frac{2 (1 + \omega)}{(\epsilon + \varepsilon_{\text{\tiny{opt}}}) (1 - \gamma)}$. With these results in hand, we can instantiate all the algorithm parameters except $N$ (the number of samples required for each state-action pair). In particular, we set $\varepsilon_{\text{\tiny{opt}}} = \frac{\epsilon}{4}$ and hence $b' = b - \frac{3 \epsilon}{8}$, and $\omega = \frac{\epsilon (1 - \gamma)}{8} < 1$. Setting $U = \frac{32}{5 \epsilon \, (1 - \gamma)}$ ensures that the $U > |\lambda^*|$ condition required by~\cref{thm:pd-guarantees} holds. In order to guarantee that the primal-dual algorithm outputs an $\frac{\epsilon}{4}$-approximate policy, we use~\cref{thm:pd-guarantees} to set $T = O\left(\frac{1}{(1 - \gamma)^4 \epsilon^4}\right)$ iterations and $\varepsilon_{\text{\tiny{l}}} = O\left(\epsilon^2 (1 - \gamma)^2 \right)$.~\cref{eq:decomposition-relaxed} can then be simplified as, \begin{align} \reward{\pi^*} - \reward{\bar{\pi}_{T}} & \leq \frac{\epsilon}{2} + \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} + \abs{\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}}.
\nonumber \end{align} Putting everything together, in order to guarantee an $\epsilon$-reward suboptimality for $\bar{\pi}_{T}$, we require the following results: \begin{align} \abs{\const{\pi^*_c} - \consthat{\pi^*_c}} & \leq \frac{5 \epsilon}{4} \, \text{;} \, \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq \frac{3 \epsilon}{8} \, \text{;} \, \abs{\const{\pi^*} - \consthat{\pi^*}} \leq \frac{3 \epsilon}{8} \nonumber \\ \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} & \leq \frac{\epsilon}{4} \, \text{;} \, \abs{\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}} \leq \frac{\epsilon}{4} \label{eq:conc-relaxed}. \end{align} We control such concentration terms for both the constraint and reward value functions in~\cref{sec:concentration}, and bound the terms in~\cref{eq:conc-relaxed}. In particular, we prove that for a fixed $\epsilon \in \left(0, \nicefrac{1}{1 - \gamma}\right]$, using $N \geq \tilde{O}\left(\frac{\log(1/\delta)}{(1 - \gamma)^3 \, \epsilon^2}\right)$ samples ensures that the statements in~\cref{eq:conc-relaxed} hold with probability $1 - 4 \delta$. This guarantees that $\reward{\pi^*} - \reward{\bar{\pi}_{T}} \leq \epsilon$ and $\const{\bar{\pi}_{T}} \geq b - \epsilon$. \end{proofsketch} Hence, the total sample-complexity of achieving the objective in~\cref{eq:relaxed-objective} is $\tilde{O}\left(\frac{S A \log(1/\delta)}{(1 - \gamma)^3 \epsilon^2} \right)$. This result improves over the $\tilde{O}\left(\frac{S^2 A \log(1/\delta)}{(1 - \gamma)^3 \epsilon^2} \right)$ result in~\citet{hasan2021model}. Furthermore, our result matches the lower-bound in the easier unconstrained setting~\citep{azar2012sample}, implying that our bounds are near-optimal. We conclude that under relaxed feasibility and with access to a generative model, solving constrained MDPs is as easy as solving MDPs. Algorithmically, we do not require constructing an optimistic CMDP like in~\citet{hasan2021model}. Instead, we solve the empirical CMDP in~\cref{eq:emp-CMDP} using specific primal-dual updates~\cref{eq:primal-update,eq:dual-update}. In the next section, we instantiate~\cref{alg:cmdp-generative} for the strict feasibility setting. \section{Upper-bound under Strict Feasibility} \label{sec:ub-strict} In order to achieve the objective in~\cref{eq:strict-objective} for a target error $\epsilon > 0$, we require setting $N = \tilde{O}\left(\frac{\log(1/\delta)}{(1 - \gamma)^5 \zeta^2 \epsilon^2} \right)$,\footnote{Again, we do not need to know $\zeta$ and it can be replaced by the estimator constructed in Section~\ref{app:est-zeta}.} $b' = b + \frac{\epsilon (1 - \gamma) \zeta}{20}$ and $\omega = \frac{\epsilon (1 - \gamma)}{10}$. This completely specifies the empirical CMDP $\hat{M}$ and the problem in~\cref{eq:emp-CMDP}. In order to specify the primal-dual algorithm, we set $U = \frac{4 (1 + \omega)}{\zeta (1 - \gamma)}$, $\varepsilon_{\text{\tiny{l}}} = O\left(\epsilon^2 (1 - \gamma)^4 \zeta^2 \right)$ and $T = O\left(\nicefrac{1}{(1 - \gamma)^6 \zeta^4 \epsilon^2}\right)$. With these choices, we prove the following theorem in~\cref{app:proof-strict}, and provide a proof sketch below.
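Before stating the theorem, the following minimal Python sketch (illustrative only; the constants are those instantiated in the corresponding proofs, the sample size $N$ is omitted since only its leading order matters, and the iteration counts are reported up to constant factors) summarises how the algorithm parameters scale with $\epsilon$, $\zeta$ and $\gamma$ in the two feasibility settings.
\begin{verbatim}
import math

def relaxed_params(eps, gamma):
    # Relaxed feasibility: constants as instantiated in the proofs.
    U       = 32.0 / (5.0 * eps * (1.0 - gamma))
    b_shift = -3.0 * eps / 8.0                      # b' = b + b_shift
    omega   = eps * (1.0 - gamma) / 8.0
    eps_l   = eps**2 * (1.0 - gamma)**2 / 96.0
    T       = math.ceil(1.0 / (eps**4 * (1.0 - gamma)**4))  # up to constant factors
    return U, b_shift, omega, eps_l, T

def strict_params(eps, gamma, zeta):
    # Strict feasibility: constants as instantiated in the proofs.
    omega   = eps * (1.0 - gamma) / 10.0
    U       = 4.0 * (1.0 + omega) / (zeta * (1.0 - gamma))
    Delta   = eps * (1.0 - gamma) * zeta / 40.0     # b' = b + Delta
    eps_l   = eps**2 * zeta**2 * (1.0 - gamma)**4 / (150.0 * 1600.0)
    T       = math.ceil(1.0 / (eps**2 * zeta**4 * (1.0 - gamma)**6))  # up to constant factors
    return U, Delta, omega, eps_l, T
\end{verbatim}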
\begin{restatable}{theorem}{ubstrict} For a fixed $\epsilon \in \left(0, \nicefrac{1}{1 - \gamma}\right]$ and $\delta\in(0,1)$, \cref{alg:cmdp-generative}, with $N = \tilde{O}\left(\frac{\log(1/\delta)}{(1 - \gamma)^5 \epsilon^2 \zeta^2} \right)$ samples, $b' = b + \frac{\epsilon (1- \gamma) \zeta}{20}$, $\omega = \frac{\epsilon (1 - \gamma)}{10}$, $U = \frac{4 (1 + \omega)}{\zeta (1 - \gamma)}$, $\varepsilon_{\text{\tiny{l}}} = O\left(\epsilon^2 (1 - \gamma)^4 \zeta^2 \right)$ and $T = O\left(\nicefrac{1}{(1 - \gamma)^6 \zeta^4 \epsilon^2}\right)$ returns policy $\bar{\pi}_{T}$ that satisfies the objective in~\cref{eq:strict-objective}, with probability at least $1 - 4 \delta$. \label{thm:ub-strict} \end{restatable} \begin{proofsketch} We prove the result for a general $b' = b + \Delta$ for $\Delta > 0$ and primal-dual error $\varepsilon_{\text{\tiny{opt}}} < \Delta$, and subsequently specify $\Delta$ (and hence $b'$) and $\varepsilon_{\text{\tiny{opt}}}$. In~\cref{lemma:decomposition-strict} (proved in~\cref{app:proof-strict}), we prove that if the constraint value functions are sufficiently concentrated (the empirical value function is close to the ground truth value function) for both the optimal policy $\pi^*$ in $M$ and the mixture policy $\bar{\pi}_{T}$ returned by~\cref{alg:cmdp-generative} i.e. if \begin{align} \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} \leq \Delta - \varepsilon_{\text{\tiny{opt}}} \quad \text{;} \quad \abs{\const{\pi^*} - \consthat{\pi^*}} \leq \Delta \label{eq:sampling-const-strict} \end{align} then (i) policy $\bar{\pi}_{T}$ satisfies the constraint in $M$ i.e. $\const{\bar{\pi}_{T}} \geq b$, and (ii) its suboptimality in $M$ (compared to $\pi^*$) can be decomposed as: \begin{align} \reward{\pi^*} - \reward{\bar{\pi}_{T}} & \leq \frac{2 \omega}{1 - \gamma} + \varepsilon_{\text{\tiny{opt}}} + 2 \Delta |\lambda^*| + \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} + \abs{\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}} \label{eq:decomposition-strict} \end{align} In order to upper-bound $|\lambda^*|$, we require a concentration result for policy $\pi^*_c := \argmax \const{\pi}$ that maximizes the constraint value function. In particular, we require $\Delta \in \left(0,\frac{\zeta}{2}\right)$ and $\abs{\const{\pi^*_c} - \consthat{\pi^*_c}} \leq \frac{\zeta}{2} - \Delta$. In Case 2 of~\cref{lemma:dual-bound} (proved in~\cref{app:proofs-pd}), we show that if this concentration result holds, then we can upper-bound the optimal dual variable $|\lambda^*|$ by $\frac{2 (1 + \omega)}{\zeta (1 - \gamma)}$. Using the above bounds to simplify~\cref{eq:decomposition-strict}, \begin{align} \reward{\pi^*} - \reward{\bar{\pi}_{T}} & \leq \frac{2 \omega}{1 - \gamma} + \varepsilon_{\text{\tiny{opt}}} + \frac{4 \Delta (1 + \omega)}{\zeta (1 - \gamma)} + \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} + \abs{\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}} \nonumber. \end{align} With these results in hand, we can instantiate all the algorithm parameters except $N$ (the number of samples required for each state-action pair). In particular, we set $\Delta = \frac{\epsilon \, (1 - \gamma) \, \zeta}{40} < \frac{\zeta}{2}$, $\varepsilon_{\text{\tiny{opt}}} = \frac{\Delta}{5} = \frac{\epsilon \, (1 - \gamma) \, \zeta}{200} < \frac{\epsilon}{5}$, and $\omega = \frac{\epsilon (1 - \gamma)}{10} < 1$. We set $U = \frac{8}{\zeta (1 - \gamma)}$ for the primal-dual algorithm, ensuring that the $U > |\lambda^*|$ condition required by~\cref{thm:pd-guarantees} holds.
In order to guarantee that the primal-dual algorithm outputs an $\frac{\epsilon \, (1 - \gamma) \, \zeta}{200}$-approximate policy, we use~\cref{thm:pd-guarantees} to set $T = O\left(\frac{1}{(1 - \gamma)^6 \zeta^4 \epsilon^2}\right)$ iterations and $\varepsilon_{\text{\tiny{l}}} = O\left(\epsilon^2 (1 - \gamma)^4 \zeta^2 \right)$. With these values, we can further simplify~\cref{eq:decomposition-strict}, \begin{align} \reward{\pi^*} - \reward{\bar{\pi}_{T}} & \leq \frac{3 \epsilon}{5} + \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} + \abs{\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}}. \nonumber \end{align} Putting everything together, in order to guarantee an $\epsilon$-reward suboptimality for $\bar{\pi}_{T}$, we require the following concentration results to hold for $\Delta = \frac{\epsilon (1 - \gamma) \zeta}{40}$, \begin{align} \abs{\const{\bar{\pi}_{T}} - \consthat{\bar{\pi}_{T}}} & \leq \frac{4 \Delta}{5} \, \text{;} \, \abs{\const{\pi^*} - \consthat{\pi^*}} \leq \Delta \, \text{;} \, \abs{\const{\pi^*_c} - \consthat{\pi^*_c}} \leq \frac{19 \Delta}{5} \nonumber \\ \abs{\rewardp{\pi^*} - \rewardhatp{\pi^*}} & \leq \frac{\epsilon}{5} \, \text{;} \, \abs{\rewardhatp{\bar{\pi}_{T}} - \rewardp{\bar{\pi}_{T}}} \leq \frac{\epsilon}{5}. \label{eq:conc-strict} \end{align} We control such concentration terms for both the constraint and reward value functions in~\cref{sec:concentration}, and bound the terms in~\cref{eq:conc-strict}. In particular, we prove that for a fixed $\epsilon \in \left(0, \nicefrac{1}{1 - \gamma}\right]$, using $N \geq \tilde{O}\left(\frac{\log(1/\delta)}{(1 - \gamma)^5 \, \zeta^2 \, \epsilon^2}\right)$ ensures that the statements in~\cref{eq:conc-strict} hold with probability $1 - 4 \delta$. This guarantees that $\reward{\pi^*} - \reward{\bar{\pi}_{T}} \leq \epsilon$ and $\const{\bar{\pi}_{T}} \geq b$. \end{proofsketch} Hence, the total sample-complexity of achieving the objective in~\cref{eq:strict-objective} is $ \tilde{O} \left( \frac{S A \, \log(1/\delta)}{(1 - \gamma)^5 \, \zeta^2 \epsilon^2} \right)$. In this setting, we note that~\citet{bai2021achieving} propose a model-free algorithm with an $ \tilde{O} \left(\frac{S A \, L \, \log(1/\delta)}{(1 - \gamma)^4 \, \zeta^2 \epsilon^2} \right)$ sample complexity. Here, $L$ is a potentially large problem-dependent parameter that is not explicitly bounded by~\citet{bai2021achieving} and depends on the Lipschitz constant of the reward and constraint reward value functions. In~\cref{sec:lb-strict}, we prove a matching lower bound showing that~\cref{alg:cmdp-generative} is minimax optimal in the strict feasibility setting. In the next section, we give more details on bounding the concentration terms in~\cref{thm:ub-relaxed} and~\cref{thm:ub-strict}. \section*{Acknowledgements} We would like to thank Reza Babanezhad and Arushi Jain for helpful feedback on the paper. Csaba Szepesv\'ari gratefully acknowledges the funding from Natural Sciences and Engineering Research Council (NSERC) of Canada, ``Design.R AI-assisted CPS Design'' (DARPA) project and the Canada CIFAR AI Chairs Program for Amii. Lin Yang is supported in part by DARPA grant HR00112190130. This work was partly done while Lin Yang was visiting Deepmind. \bibliographystyle{plainnat}
2,869,038,156,445
arxiv
\section{Introduction} Dozens of terrestrial planets in the habitable zone (HZ) have been found so far. Most of these planets orbit cool host stars, the so-called M dwarfs\footnote{phl.upr.edu/projects/habitable-exoplanets-catalog}. The Transiting Exoplanet Survey Satellite (TESS) is expected to find many more of these systems in our solar neighbourhood in the near future \citep{ricker2014,sullivan2015,barclay2018}. Whether these planets can have surface conditions to sustain (complex) life is still debated \citep[e.g.][]{tarter2007,shields2016}. Because of the close-in HZ, M-dwarf planets are likely tidally locked \citep[e.g.][]{kasting1993,selsis2008}. To allow for habitable surface conditions they require a mechanism to redistribute the heat from the dayside to the nightside \citep[e.g. atmospheric or ocean heat transport;][]{joshi1997,joshi2003,selsis2008,yang2013,hu2014}. \begin{table*} \centering \caption{Stellar parameters for the Sun and all M dwarfs used in this study. Stars labelled with an asterisk are active M dwarfs. Effective temperatures of the stars which are included in the MUSCLES database ($T_{\text{eff}}$[K] MUSC.) are taken from \citet{loyd2016}. } \label{table:stars} \centering \begin{tabular}{l l llllll} \hline\hline Star & Type & $T_{\text{eff}}$[K] Lit. & $T_{\text{eff}}$[K] MUSC. & $R/R_\text{$\odot$}$ & $M/M_\text{$\odot$}$ & $L/L_\text{$\odot$}$ [10$^{-3}$] & $d$ [pc] \\ \hline Sun & G2 & 5772 & - & 1.000 & 1.000 & 1000.00 & 0.00 \\ GJ 832 & M1.5$^a$ & 3657$^b$ & 3816$\pm$250$^c$ & 0.480$^a$ & 0.450$\pm$0.050$^a$ & 26.00$^d$ & 4.93$^a$ \\ GJ 176 & M2$^e$ & 3679$\pm$77$^e$ & 3416$\pm$100$^c$ & 0.453$\pm$0.022$^e$ & 0.450$^e$ & 33.70$\pm$1.80$^e$ & 9.27$^e$ \\ GJ 581 & M2.5$^f$ & 3498$\pm$56$^f$ & 3295$\pm$140$^c$ & 0.299$\pm$0.010$^f$ & 0.300$^f$ & 12.05$\pm$0.24$^f$ & 6.00$^f$ \\ GJ 436 & M3$^g$ & 3416$\pm$56$^g$ & 3281$\pm$110$^c$ & 0.455$\pm$0.018$^g$ & 0.507$\substack{+0.071\\-0.062}$ $^g$ & 25.30$\pm$1.20$^g$ & 10.23$^h$ \\ GJ 644 & M3*$^i$ & 3350$^j$ & - & 0.678$^k$ & 0.416$\pm$0.006$^i$ & 26.06$^j$ & 6.50$^k$ \\ AD Leo & M3.5*$^b$ & 3380$^b$ & - & 0.390$^l$ & 0.420$^l$ & 24.00$^m$ & 4.89$^b$ \\ GJ 667C & M3.5$^n$ & 3350$^o$ & 3327$\pm$120$^c$ & 0.460$^p$ & 0.330$\pm$0.019$^o$ & 13.70$^q$ & 6.80$^q$ \\ GJ 876 & M4$^e$ & 3129$\pm$19$^e$ & 3062$\substack{+120\\-130}$ $^c$& 0.376$\pm$0.006$^e$ & 0.370$^e$ & 12.20$\pm$0.20$^e$ & 4.69$^e$ \\ GJ 1214 & M4.5$^r$ & 3252$\pm$20$^s$ & 2935$\pm$100$^c$ & 0.211$\pm$0.011$^s$ & 0.176$\pm$0.009$^s$ & 4.05$\pm$0.19$^s$ & 14.55$\pm$0.13$^s$ \\ Proxima Cen. 
& M5.5$^t$ & 3054$\pm$79$^t$ & - & 0.141$\pm$0.007$^t$ & 0.118$^t$ & 1.55$\pm$0.02$^t$ & 1.30$^r$ \\ TRAPPIST-1 & M8$^u$ & 2559$\pm$50$^u$ & - & 0.117$\pm$0.004$^u$ & 0.080$\pm$0.007$^u$ & 0.52$\pm$0.03$^u$ & 12.10$\pm$0.40$^u$ \\ \hline \end{tabular} \tablebib{ (a)~\citet{bailey2008}; (b) \citet{gautier2007}; (c) \citet{loyd2016}; (d) \citet{bonfils2013}; (e) \citet{vonbraun2014}; (f) \citet{vonbraun2011}; (g) \citet{vonbraun2012}; (h) \citet{butler2004}; (i) \citet{segransan2000}; (j) \citet{reid1984}; (k) \citet{giampapa1996}; (l) \citet{reiners2009}; (m) \citet{pettersen1981}; (n) \citet{neves2014}; (o) \citet{anglada2013a}; (p) \citet{kraus2011}; (q) \citet{vanleeuwen2007}; (r) \citet{lurie2014}; (s) \citet{anglada2013b}; (t)~\citet{boyajian2012}; (u)~\citet{gillon2017} } \end{table*} Another drawback of a planet lying close to its host star is the high luminosity of the star during its pre-main-sequence phase \citep[e.g.][]{ramirez2014,luger2015,tian2015}; in addition, such a planet might be subject to strong stellar cosmic rays \citep[see e.g.][]{griessmeier2005,segura2010,tabataba2016,scheucher2018}. On the other hand, M-dwarf planets are favourable targets for the characterisation of their atmospheres. The high contrast ratio and transit depth of an Earth-like planet around a cool host star favour the detection of spectral atmospheric features. Hence, planets transiting M dwarfs are prime targets for future telescopes such as the James Webb Space Telescope \citep[JWST;][]{gardner2006,deming2009} and the European Extremely Large Telescope \citep[E-ELT;][]{gilmozzi2007,marconi2016}. \\ Additionally, model simulations show that planets with an Earth-like composition can build up increased amounts of biosignature gases like ozone (O$_3$) and related compounds like water (H$_2$O) and methane (CH$_4$), which further increases their detectability \citep{segura2005,rauer2011,grenfell2013,rugheimer2015}. Knowing the ultraviolet (UV) radiation of the host star is crucial for understanding the photochemical processes in the planetary atmosphere \citep[see e.g.][]{grenfell2014,rugheimer2015}. M dwarfs vary by several orders of magnitude in UV flux from inactive to active stars \citep{buccino2007,france2016}. \\ The approach of this study is to investigate the influence of a range of spectra of active and inactive stars with observed and modelled UV radiation on Earth-like planets in the HZ around M dwarfs. We furthermore aim to determine whether the resulting planetary spectral features could be detectable with JWST. Previous studies showed that multiple transits are needed to detect any spectral feature of an Earth-like planet with transmission or emission spectroscopy \citep{rauer2011,vonparis2011,barstow2016,barstow_irwin2016}. \citet{rauer2011} found that CH$_4$ of an Earth-like planet around AD~Leo (at 4.9~pc) would be detectable by co-adding at least three transits using transmission spectroscopy with JWST. To detect O$_3$ at least ten transits would be required. For the TRAPPIST-1 planets c and d at 12.1~pc, 30 transits are needed to detect an Earth-like concentration of O$_3$ with JWST \citep{barstow_irwin2016}. \\ In this study we first apply a 1D atmosphere model to calculate the global and annual mean temperature profiles and the corresponding chemical profiles of Earth-like planets orbiting M dwarfs. As model input we use 12 stellar spectra including the spectra of the MUSCLES database with observed UV radiation \citep{france2016}.
In contrast to \citet{rugheimer2015} we use additional M-dwarf spectra and calculate transmission spectra of each Earth-like planet around M dwarfs with a radiative transfer model. We concentrate on transmission spectroscopy owing to the low chance of finding detectable absorption bands for temperate Earth-like planets with emission spectroscopy \citep[see][]{rauer2011,rugheimer2015, batalha2018}. We further develop a model to calculate the signal-to-noise ratio (S/N) of planetary spectral features with any kind of telescope and apply this to the modelled spectra and up-to-date JWST specifications. With the S/N we evaluate the potential to characterise the atmosphere of Earth-like exoplanets around M dwarfs. \\ The paper is organised as follows: In Section 2 we describe the atmospheric model, the scenarios, the line-by-line spectral model, and the S/N calculation. In Section 3 we first show the results of the atmospheric modelling and the resulting transmission spectra. Then we show the results of the S/N calculations. Section 4 presents the conclusions of this study. \section{Methods and models} \subsection{Climate-chemistry model} We use a coupled 1D steady-state, cloud-free, radiative-convective photochemical model. The code is based on the climate model of \citet{kasting1984} and the photochemical model of \citet{pavlov2002} and was further developed by \citet{segura2003}, \citet{vonparis2008}, \citet{vonparis2010}, \citet{rauer2011}, \citet{vonparis2015}, \citet{gebauer2017}, \citet{gebauer2018b}, and \citet{gebauer2018}. The atmosphere in the climate module is divided into 52~pressure layers and the chemistry model into 64~equidistant altitude levels.\\ The radiative transfer of the climate module is separated into a short wavelength region from 237.6~nm to 4.545~$\mu$m with 38~wavelength bands for incoming stellar radiation and a long wavelength region from 1~$\mu$m to 500 $\mu$m in 25~bands for planetary and atmospheric thermal radiation \citep{vonparis2015}. We consider Rayleigh scattering by N$_2$, O$_2$, H$_2$O, CO$_2$, CH$_4$, H$_2$, He, and CO using the two-stream radiative transfer method based on \citet{toon1989}. Molecular absorption in the short wavelength range is considered for the major absorbers H$_2$O, CO$_2$, CH$_4$, and O$_3$. Molecular absorption of thermal radiation by H$_2$O, CO$_2$, CH$_4$, and O$_3$ and continuum absorption by N$_2$, H$_2$O, and CO$_2$ are included \citep{vonparis2015}. \\ Our chemistry module includes 55 species with 217 chemical reactions. An update compared to previous model versions, for example used in \citet{keles2018}, is to consider altitude dependent CO$_2$, O$_2$, and N$_2$ profiles instead of using an isoprofile as described in \citet{gebauer2017}. The photolysis rates are calculated within the wavelength range between 121.4 nm and 855~nm. For the effective O$_2$ cross sections in the Schumann-Runge bands we use the values from \citet{murtagh1988} as described in \citet{gebauer2018}. The mean solar zenith angle is set to 54$^\circ$ in the photochemistry module in order to best reproduce the 1976 U.S. Standard Atmosphere \citep{anderson1986}. 
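Schematically, each photolysis rate entering the chemistry module is a wavelength integral of the product of the actinic photon flux, the absorption cross section, and the photodissociation quantum yield. The following minimal sketch shows such a quadrature; it is illustrative only (it is not the implementation used in our model), and it assumes that all input arrays are given on a common wavelength grid.
\begin{verbatim}
import numpy as np

def photolysis_rate(wavelength_nm, actinic_flux, cross_section, quantum_yield):
    # J = integral( sigma * q * F ) d lambda, with a simple trapezoidal rule.
    # Illustrative units: wavelength in nm, actinic flux in photons cm-2 s-1 nm-1,
    # cross section in cm2, quantum yield dimensionless.  Returns J in s-1.
    integrand = cross_section * quantum_yield * actinic_flux
    return np.trapz(integrand, wavelength_nm)
\end{verbatim}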
The water vapour concentrations in the troposphere are calculated using the relative humidity profile of the Earth taken from \citet{manabe1967}.\\ \subsection{Stellar Input} \begin{figure*} \centering \includegraphics[width=17cm]{spectra_fig1rsc_plt001.pdf} \caption{Stellar input spectra, scaled such that the modelled planet reaches a surface temperature of 288.15~K and binned to a resolving power of 200. AD Leo and GJ~644 are taken from the VPL website \citep{segura2003}, TRAPPIST-1 from \citet{omalley2017}, the other M-dwarf spectra from the MUSCLES database \citep{france2016}, and the solar spectrum from \citet{gueymard2004}. Stars labelled with an asterisk are active M dwarfs. Top: UV and IR spectra up to 3~$\mu$m with linear axes. Bottom: UV spectra with logarithmic axes.} \label{figure:spectra} \end{figure*} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{UV_comparison_Mdwarfs_plt004_sr.pdf}} \caption{Mean UV flux of each M dwarf between 170~nm and~240~nm. Mean values are calculated from the stellar spectra, scaled as in Fig.~\ref{figure:spectra}.} \label{figure:uvcomp} \end{figure} An important aim of this study is to investigate the influence of different spectral energy distributions (SEDs) on the temperature and chemical composition profiles of an Earth-like planet in the HZ around an M dwarf. We used eight~observed M-dwarf spectra from the MUSCLES database\footnote{archive.stsci.edu/prepds/muscles/} \citep{france2016}, the observed spectra of AD Leo and GJ~644 from the VPL website\footnote{vpl.astro.washington.edu/spectra/stellar/mstar.htm} \citep{segura2005}, and the constructed stellar spectrum of TRAPPIST-1 with high UV activity using the method of \citet{rugheimer2015} and \citet{omalley2017}. We note that there is no observed spectrum of TRAPPIST-1 in the UV. The spectrum used in this study is therefore an assumption and could be very different from the real spectrum. The Mega-MUSCLES HST Treasury Survey is planning to observe TRAPPIST-1 to construct a representative spectrum of this low-mass M dwarf \citep{froning2018}. \\ The spectra cover a range from M1.5 to M8 spectral type. For comparison we used the solar spectrum from \citet{gueymard2004}. Table \ref{table:stars} lists basic parameters of all stars used in this study. Figure \ref{figure:spectra} compares all M-dwarf spectra and the solar spectrum and Fig. \ref{figure:uvcomp} shows the mean UV flux of each M dwarf between 170 nm and~240~nm. In both plots the flux is scaled so that the surface temperature of the modelled planet reaches a value of 288.15~K. The scaling values are shown in Table \ref{table:planets}. Whereas the MUSCLES stars are classified as inactive \citep{france2016}, AD~Leo and GJ~644 are often cited as active M dwarfs \citep[e.g.][]{segura2005, reiners2009, leitzinger2010, rauer2011, rugheimer2015}. We label the active M dwarfs with an asterisk to separate them from non-active M dwarfs. Since the real TRAPPIST-1 spectrum is unknown, we cannot draw any conclusions regarding the UV activity of this star. The spectrum we adopted for this study has a low mean UV flux; hence we do not label TRAPPIST-1 as active. From the MUSCLES database we used version 22 of the adapted panchromatic SED, binned to a constant resolution of 1~$\AA$. In low S/N regions the spectra were sampled down by the MUSCLES project to avoid negative fluxes by averaging negative flux bins with their neighbours until no negative flux bins remained.
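A simplified stand-in for this kind of down-sampling (not the MUSCLES pipeline itself; equal bin widths are assumed and each merged bin is placed at the mean wavelength of the bins it replaces) could look as follows:
\begin{verbatim}
def average_out_negative_bins(wavelength, flux):
    # Merge any negative-flux bin with its neighbours (replacing them by their
    # mean) and repeat until no negative bins remain.
    wl, fl = list(wavelength), list(flux)
    while any(f < 0.0 for f in fl) and len(fl) > 1:
        i = next(k for k, f in enumerate(fl) if f < 0.0)
        lo, hi = max(i - 1, 0), min(i + 1, len(fl) - 1)
        fl[lo:hi + 1] = [sum(fl[lo:hi + 1]) / (hi - lo + 1)]
        wl[lo:hi + 1] = [sum(wl[lo:hi + 1]) / (hi - lo + 1)]
    return wl, fl
\end{verbatim}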
The spectra of GJ~1214, GJ~436 and Proxima Centauri include zero values for some wavelength bins in the UV. For these bins we linearly interpolated the flux between the two nearest non-zero neighbours to achieve a better estimate of the real flux. We applied the same downsampling and interpolation procedures to the spectra of AD Leo and GJ~644 to avoid negative and zero fluxes.\\ Since the MUSCLES spectra extend only up to a wavelength of 5.5~$\mu$m, we extended the available wavelength range with the NextGen\footnote{phoenix.ens-lyon.fr/Grids/NextGen/} spectra up to 971~$\mu$m \citep{hauschildt1999} to avoid overestimation of the flux when scaling to 1~solar constant. For the MUSCLES spectra we took the values of the effective temperature, log~$g$ and [Fe/H] from \citet{loyd2016}. Since the NextGen spectra have a fixed grid for effective temperature, log~$g$ and [Fe/H], we took the spectra with the most similar log~$g$ and [Fe/H] values and linearly interpolated the temperature grid to the corresponding value. \\ The incoming stellar radiation is used up to 4.545~$\mu$m in the radiative transfer calculations. For a G-type star like the Sun $\sim$ 1~$\%$ of the incoming flux is neglected by the model because of this cut. For TRAPPIST-1 the neglected flux at longer wavelengths increases to 4.6~$\%$ of the total incoming radiation. \begin{table*} \centering \caption{Parameters of the Earth around the Sun and Earth-like planets around M-dwarf host stars. The value $S/S_{\odot}$ is the stellar irradiance received by the planet in units of the total solar irradiance (TSI), scaled to reach a surface temperature of 288.15~K. The transit duration was calculated via Eq.~\ref{eq:td}. We assume circular orbits for all planets. The transit depth, $\delta~=~\frac{R_\text{p}^2}{R_\text{s}^2}$, is calculated using the stellar radii from Table~\ref{table:stars} and the radius of the Earth. The equilibrium temperature (T$_\text{eq}$) was calculated using the corresponding planetary albedo ($\alpha_\text{p}$).} \label{table:planets} \centering \begin{tabular}{l r r r r r r r} \hline\hline Host star & $S/S_\odot$ & $a$ [AU] & $P$ [d] & $t_\text{D}$ [h] & $\delta$ [ppm] & $\alpha_\text{p}$ & $T_\text{eq}$ [K]\\ \hline Sun & 1.000 & 1.000 & 365.3 & 13.10 & 84.1 & 0.201 & 263.4 \\ GJ 832 & 0.886 & 0.205 & 50.4 & 4.28 & 364.8 & 0.117 & 262.0 \\ GJ 176 & 0.915 & 0.192 & 45.9 & 3.92 & 409.6 & 0.095 & 265.8 \\ GJ 581 & 0.920 & 0.114 & 25.8 & 2.47 & 940.1 & 0.086 & 266.8 \\ GJ 436 & 0.918 & 0.166 & 34.8 & 3.45 & 406.0 & 0.086 & 266.6 \\ GJ 644 & 0.888 & 0.242 & 67.6 & 6.81 & 182.8 & 0.096 & 263.7 \\ AD Leo & 0.878 & 0.143 & 30.4 & 3.02 & 552.6 & 0.094 & 263.1 \\ GJ 667C & 0.913 & 0.162 & 41.5 & 4.27 & 397.2 & 0.096 & 265.5 \\ GJ 876 & 0.951 & 0.113 & 22.9 & 2.77 & 594.5 & 0.071 & 270.1 \\ GJ 1214 & 0.958 & 0.068 & 15.6 & 1.78 & 1887.9 & 0.068 & 270.8 \\ Proxima Cen. & 0.986 & 0.040 & 8.4 & 1.13 & 4227.7 & 0.059 & 273.4 \\ TRAPPIST-1 & 1.045 & 0.022 & 4.4 & 0.87 & 6036.4 & 0.053 & 277.9 \\ \hline \end{tabular} \end{table*} \subsection{Model scenarios} We model Earth-like planets around M dwarfs with N$_2$-O$_2$ dominated atmospheric chemical compositions. We use 78 $\%$ nitrogen, 21 $\%$ oxygen, and a modern CO$_2$ volume mixing ratio of 355~ppm as starting conditions. While in previous studies it was assumed that these concentrations are isoprofiles which stay constant with altitude, in this study only the volume mixing ratios (vmr) at the surface are held to these values.
The vmr within the atmosphere are allowed to vary (see \citealt{gebauer2017} for details). The chemistry module uses Earth-like boundary conditions to reproduce mean Earth atmospheric conditions. These are kept constant for each simulation and are further described in \citet{gebauer2017}. We use the Earth's radius and gravity and a surface pressure of 1~bar for all modelled planets. The temperature profile is calculated with the climate module depending on the chemical composition. The tropospheric water concentration is calculated via the assumption of an Earth-like relative humidity profile \citep[see][]{manabe1967}. In contrast to \citet{rauer2011}, but following \citet{segura2005} we scale the stellar spectra from Fig. \ref{figure:spectra} so that we reproduce the Earth's surface temperature of 288.15~K, since the assumption of an Earth-like relative humidity may not be appropriate for higher temperatures \citep[see e.g.][]{leconte2013,godolt2015,godolt2016}. Table \ref{table:planets} shows the scaling values of the stellar insolation for each star. We note that except the Earth, we do not model existing planets in this study. There are several studies investigating the potential atmosphere of Proxima Centauri~b \citep{kreidberg2016,turbet2016,meadows2018}, GJ~1214~b \citep{menou2011,berta2012,charnay2015}, GJ~436~b \citep{lewis2010,beaulieu2011} and the TRAPPIST-1 planets \citep{barstow_irwin2016,dewit2016,omalley2017}. \subsection{Line-by-line spectral model} The Generic Atmospheric Radiation Line-by-line Infrared Code (GARLIC) is used to calculate synthetic spectra between 0.4 $\mu$m and 12 $\mu$m \citep{schreier2014,schreier2018}. The GARLIC model is based on the Fortran 77 code MIRART-SQuIRRL \citep{schreier2001}, which has been used for radiative transfer modelling in previous studies like \citet{rauer2011} and \citet{hedelt2013}. The line parameters over the whole wavelength range are taken from the HITRAN 2016 database \citep{gordon2017}. Additionally the Clough-Kneizys-Davies continuum model \citep[CKD;][]{clough1989} and Rayleigh extinction are considered \citep{murphy1977,clough1989, sneep2005, marcq2011}. The transmission spectra are calculated using the temperature profile and the profiles of the 23 atmospheric species\footnote{OH, HO$_2$, H$_2$O$_2$, H$_2$CO, H$_2$O, H$_2$, O$_3$, CH$_4$, CO, N$_2$O, NO, NO$_2$, HNO$_3$, ClO, CH$_3$Cl, HOCl, HCl, ClONO$_2$, H$_2$S, SO$_2$, O$_2$, CO$_2$, N$_2$}, which HITRAN 2016 and our 1D climate-chemistry model have in common. We note that not all of the species are relevant for transmission spectroscopy of Earth-like atmospheres \citep[see e.g.][]{schreier2018}. \\ The radius of a planet with an atmosphere is wavelength dependent. This effect may be measured for example via transit spectroscopy. The difference between the geometric transit depth (without the contribution of the atmosphere) and the transit depth (considering the atmosphere) is called the effective height $h_\text{e}(\lambda)$. We simulate the transmission spectra $\mathcal{T}(\lambda,z)$ for $L$~=~64 adjacent tangential beams (corresponding to the 64 chemistry levels $z$) through the atmosphere. The effective height of each atmosphere is calculated using \begin{equation} \label{eq:effhei} h_\text{e}(\lambda) = \int_{0}^{\infty} \Big(1-\mathcal{T}(\lambda,z)\Big)~dz. 
\end{equation}
\subsection{Signal-to-noise ratio model}
\label{sec:snr_model}
To calculate the S/N for potential measurements of our synthetic Earth-like planets, we further developed the background-limited S/N model used by \citet{vonparis2011} and \citet{hedelt2013}. We added the readout noise contribution to the model and we calculate the duty cycle (the fraction of time spent on target) to better evaluate the detectability of atmospheric spectral features. The code of \citet{vonparis2011} and \citet{hedelt2013} is based on the photon noise model applied in \citet{rauer2011} as follows:
\begin{equation}
\label{eq:star_signal}
S/N = S/N_\text{s} \cdot \frac{f_\text{A}}{\sqrt{2}} = \frac{F_\text{s} \cdot t_\text{int} \cdot f_\text{A}}{\sigma_\text{total}},
\end{equation}
where S/N$_\text{s}$ is the stellar S/N. The stellar signal, $F_\text{s}$ [photons/s], is calculated with
\begin{equation}
F_\text{s} = \frac{1}{N} \cdot \frac{R_\text{s}^2}{d^2} I_\text{s} \cdot A \cdot q \cdot \Delta \lambda,
\end{equation}
where $R_\text{s}$ is the stellar radius, $I_\text{s}$ the spectral energy flux [W/m$^2$/$\mu$m], $d$ the distance of the star to the telescope, and $A$ the telescope area. The flux is divided by the photon energy $N = h\frac{c}{\lambda}$ ($h$ is the Planck constant and $c$ the speed of light) to convert it into a number of photons. In contrast to \citet{rauer2011} we consider a wavelength dependent total throughput, $q$, of the corresponding instrument. The bandwidth, $\Delta \lambda$ (resolving power, $R = \frac{\lambda}{\Delta \lambda}$), depends on the filter and disperser of the instrument in question. The additional transit depth due to the atmosphere of the planet, $f_\text{A}$, is calculated using $R_\text{s}$, the planetary radius, $R_\text{p}$, and $h_\text{e}$ (Eq.~\ref{eq:effhei}), i.e.
\begin{equation}
\label{eq:atm_td}
f_\text{A} = \frac{(R_\text{p} + h_\text{e})^2}{R_\text{s}^2} - \frac{R_\text{p}^2}{R_\text{s}^2}.
\end{equation}
The total noise contribution, $\sigma_\text{total}$, includes photon noise, $\sigma_\text{photon}$, zodiacal noise, $\sigma_\text{zodi}$, thermal noise, $\sigma_\text{thermal}$, dark noise, $\sigma_\text{dark}$, and readout noise, $\sigma_\text{read}$, i.e.
\begin{equation}
\sigma_\text{total} = \sqrt{2 \cdot (\sigma_\text{photon}^2 + \sigma_\text{zodi}^2 + \sigma_\text{thermal}^2 + \sigma_\text{dark}^2 + \sigma_\text{read}^2)}.
\end{equation}
The additional factor $\sqrt{2}$ accounts for the fact that the planetary signal is obtained from the difference between in-transit and out-of-transit measurements. The individual noise sources are described in more detail in \citet{vonparis2011}. In contrast to \citet{vonparis2011} and \citet{hedelt2013} we calculate the smallest number of pixels, $N_\text{pixel}$, needed to collect at least 99\% of the full stellar signal. This assumption significantly decreases the contribution of zodiacal, thermal, dark, and readout noise by excluding pixels that are dominated by background noise. The number of photons per pixel is given by the point spread function (PSF) of the corresponding instrument, filter, and disperser. We use the pixel of the PSF with the highest fraction of the total signal to calculate the exposure time, $t_\text{exp}$, which is the longest photon integration time before the detector saturates. Possible exposure times are usually predefined by the instrument configuration. The minimum exposure time defines the saturation limits of the instrument.
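As an illustration of how these quantities combine, the short Python sketch below evaluates the stellar signal, the additional transit depth (Eq.~\ref{eq:atm_td}), and the resulting S/N (Eq.~\ref{eq:star_signal}). It is a simplified sketch rather than the actual implementation: the Poisson form assumed for the photon noise term and all input values are illustrative assumptions.
\begin{verbatim}
# Simplified sketch of the S/N model (not the actual implementation).
# The Poisson form of the photon noise and all inputs are assumptions.
import numpy as np

H_PLANCK = 6.626e-34   # [J s]
C_LIGHT  = 2.998e8     # [m/s]

def stellar_signal(lam, I_s, R_s, d, A, q, dlam):
    """Stellar count rate F_s [photons/s] in one bandpass.
    lam [m], I_s [W/m^2/micron], R_s and d [m], A [m^2], dlam [micron]."""
    e_photon = H_PLANCK * C_LIGHT / lam
    return (R_s / d)**2 * I_s * A * q * dlam / e_photon

def additional_transit_depth(R_p, h_e, R_s):
    """f_A: extra transit depth caused by the atmosphere (all lengths in m)."""
    return ((R_p + h_e)**2 - R_p**2) / R_s**2

def planet_snr(F_s, t_int, f_A, sig_zodi=0.0, sig_thermal=0.0,
               sig_dark=0.0, sig_read=0.0):
    """S/N of the atmospheric signal for one transit."""
    sig_photon = np.sqrt(F_s * t_int)      # assumed: Poisson noise of the star
    sig_total = np.sqrt(2.0 * (sig_photon**2 + sig_zodi**2 + sig_thermal**2
                               + sig_dark**2 + sig_read**2))
    return F_s * t_int * f_A / sig_total
\end{verbatim}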
The number of exposures, $N_\text{exp}$, during a single transit is calculated via
\begin{equation}
N_\text{exp} = \Bigl\lfloor\frac{t_\text{D}}{t_\text{exp} + t_\text{read}}\Bigr\rfloor,
\end{equation}
where $t_\text{read}$ is the readout time of the instrument configuration and $t_\text{D}$ is the transit duration. The value $N_\text{exp}$ is rounded down to the next integer. \\
We calculate $t_\text{D}$ via
\begin{equation}
\label{eq:td}
t_\text{D}~=~P/\pi~\cdot~\arcsin\Big(\sqrt{(R_\text{s}~+~R_\text{p})^2~-~(b\,R_\text{s})^2}/{a}\Big),
\end{equation}
with the impact parameter $b~=~a/R_\text{s}~\cos(i)$ in units of the stellar radius. The inclination, $i$, was assumed to be 90$^\circ$ for all planets. The orbital distance, $a$, was calculated by $a~=~\sqrt{\big((R_\text{s}/R_{\odot})^2 \cdot (T_\text{eff}/T_{\text{eff},\odot})^4\big)~/~\big(S/S_{\odot}\big)}$ and the orbital period, $P$, was calculated using Kepler's third law. The full integration time of one transit, $t_\text{int}$, is considered for the noise calculation, as well as for the calculation of the number of photons collected from the star during one transit ($F_\text{s}~\cdot~t_\text{int}$). The value $t_\text{int}$ is calculated via
\begin{equation}
t_\text{int} = t_\text{D} - N_\text{exp} \cdot t_\text{read}.
\end{equation}
The instrumental readout noise, RON, is usually given per pixel and per exposure. To calculate the full readout noise during one transit we use
\begin{equation}
\sigma_\text{read} = N_\text{exp} \cdot N_\text{pixel} \cdot \text{RON}.
\end{equation}
\section{Results and discussion}
\subsection{Atmospheric profiles}
Figures \ref{figure:temperature} and \ref{figure:chemical_profiles} show the results from the climate-chemistry model using the M-dwarf and solar spectra shown in Fig.~\ref{figure:spectra} as incoming flux. Compared to previous studies such as \citet{rauer2011}, \citet{grenfell2014}, and \citet{keles2018}, we use a further developed model and a set of observed and modelled SEDs of active and inactive M dwarfs.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[width=8cm]{Tprofile_Tsurf288_plt002.pdf}}
\caption{Influence of various M-dwarf SEDs on the temperature profile~[K] of an Earth-like planet. The stellar spectra of Fig.~\ref{figure:spectra} are scaled to reproduce a surface temperature of 288.15~K (see Table \ref{table:planets}). Each coloured line represents the temperature profile of the modelled hypothetical planet orbiting a different host star. Weak UV emission of M dwarfs leads to low O$_3$ heating in the middle atmosphere and therefore to a reduced or missing temperature inversion. Enhanced CH$_4$ heating increases the temperature in the middle atmosphere of Earth-like planets around late-type M dwarfs by up to 60~K.}
\label{figure:temperature}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=17cm]{profiles_Tsurf288_plt003_vmr.pdf}
\caption{Influence of various M-dwarf SEDs on some selected chemical atmospheric profiles of an Earth-like planet. The legend is the same as for Fig.~\ref{figure:temperature}.}
\label{figure:chemical_profiles}
\end{figure*}
\subsubsection{Temperature profile}
Figure~\ref{figure:temperature} shows the temperature profiles of the hypothetical Earth-like planets around their corresponding host stars. All planets are placed at a distance to their host star which results in a surface temperature of 288.15~K. Table~\ref{table:planets} shows the scaling factor of the total solar irradiance (TSI) of each planet.
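To make the link between the scaled insolation and the tabulated orbital quantities explicit, the following Python sketch evaluates the orbital distance, period, and transit duration entering Table~\ref{table:planets} (Eq.~\ref{eq:td}), together with the exposure bookkeeping of the S/N model. The stellar parameters in the example are assumed values for a TRAPPIST-1-like host, not the exact inputs of this work.
\begin{verbatim}
# Sketch of the orbital and exposure bookkeeping (illustrative values only).
import numpy as np

AU, R_SUN, R_EARTH = 1.496e11, 6.957e8, 6.371e6   # [m]
T_EFF_SUN, YEAR_D = 5772.0, 365.25

def orbital_distance(R_s, T_eff, S_rel):
    """a [AU] from L/L_sun = (R_s/R_sun)^2 (T_eff/T_eff_sun)^4 and S/S_sun."""
    L_rel = (R_s / R_SUN)**2 * (T_eff / T_EFF_SUN)**4
    return np.sqrt(L_rel / S_rel)

def orbital_period(a_au, M_rel):
    """P [d] from Kepler's third law (stellar mass in solar masses)."""
    return YEAR_D * np.sqrt(a_au**3 / M_rel)

def transit_duration(P_d, a_au, R_s, R_p, inc_deg=90.0):
    """t_D [h], with the impact parameter b in stellar radii (Eq. td)."""
    a = a_au * AU
    b = (a / R_s) * np.cos(np.radians(inc_deg))
    x = np.sqrt((R_s + R_p)**2 - (b * R_s)**2) / a
    return 24.0 * P_d / np.pi * np.arcsin(x)

def integration_per_transit(t_D_h, t_exp, t_read):
    """Number of exposures and effective integration time [s] in one transit."""
    t_D = 3600.0 * t_D_h
    n_exp = int(t_D // (t_exp + t_read))
    return n_exp, t_D - n_exp * t_read

# Example with assumed stellar parameters (roughly TRAPPIST-1-like):
a = orbital_distance(R_s=0.12 * R_SUN, T_eff=2560.0, S_rel=1.045)
P = orbital_period(a, M_rel=0.09)
t_D = transit_duration(P, a, R_s=0.12 * R_SUN, R_p=R_EARTH)
\end{verbatim}
With these assumed stellar parameters the results land close to, but not exactly at, the TRAPPIST-1 entries of Table~\ref{table:planets}, which are based on the stellar parameters listed in Table~\ref{table:stars}.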
Without scaling the spectra, Earth-like planets around most M dwarfs would show an increased surface temperature \citep[see e.g.][]{rauer2011, rugheimer2015}. In part, this is due to the shift of the emission maximum of M dwarfs to the near-IR, which leads to increased absorption by CO$_2$ and H$_2$O and decreased scattering at UV and visible wavelengths in the planetary atmosphere \citep[see e.g.][]{kasting1993}. \\
This effect can be intensified by increased greenhouse heating due to the increased amount of CH$_4$ and H$_2$O in the troposphere of Earth-like planets around M dwarfs \citep[see e.g.][]{rauer2011,grenfell2013,rugheimer2015}. Most M-dwarf cases require a decreased TSI to reach a surface temperature of 288.15~K. However, above a certain CH$_4$ concentration the surface temperature decreases for the same insolation. This suggests that most of the stellar irradiation is then absorbed in the stratosphere, so that less stellar irradiation reaches the troposphere and the surface temperature increases less \citep[see also][]{grenfell2013}. For M1 to M3 type stars the TSI needs to be decreased by about 8-12$\%$. Mid-type M dwarfs show this saturation in greenhouse heating, resulting in a minor downscaling of 1-5$\%$. For the synthetic planet orbiting the late M dwarf TRAPPIST-1, the stellar flux needs to be increased by 5$\%$ to reproduce Earth's surface temperature. Hence, the chemistry feedback counteracts the effect of the SED, which may shift the inner edge of the HZ towards shorter distances to the star for late-type M-dwarf planets compared to climate-only simulations such as \citet{kopparapu2013}. Neglecting the change in composition, i.e. assuming Earth's composition and carrying out climate-only calculations, suggests that the TSI would need to be decreased by 8-12$\%$ for all Earth-like planets around M dwarfs, including TRAPPIST-1 and Proxima~Centauri (not shown). Hence the chemical feedback tends to cool the planets around late-type M dwarfs. This result is consistent with the study of \citet{ramirez2018}, who found that for planets around late M dwarfs a CH$_4$ anti-greenhouse effect can shrink the HZ by about 20\%. \\
Our simulations do not include effects of clouds and hazes on the radiative transfer. This decreases the equilibrium temperature of the Earth owing to the lower planetary albedo (see Table~\ref{table:planets}). The consideration of clouds can decrease or increase the surface temperature depending on the type and height of the clouds \citep[see e.g.][]{kitzmann2010}. Including thick hazes would likely decrease the surface temperature by a few degrees \citep[see e.g.][]{arney2016,arney2017}. Even for climate models of the Earth it is still challenging to simulate the effect of clouds and aerosols consistently \citep[see e.g.][]{ipcc2014}. Hence, we do not consider clouds and hazes in this model study. Effects which would lead to a different surface temperature could be corrected for by up- or downscaling the stellar flux to obtain a surface temperature of 288.15~K. \\
In the middle atmosphere (stratosphere and mesosphere up to 0.01~hPa) heating due to O$_3$ absorption is decreased owing to the weak UV emission of M dwarfs \citep[see e.g.][]{segura2005,grenfell2013,grenfell2014}. Therefore, no temperature inversion is present in the stratosphere, as found by \citet{segura2005}. For active and inactive early M dwarfs the temperature maximum in the middle atmosphere around 1~hPa is decreased compared to the Earth around the Sun.
The enhanced heating rates due to CH$_4$ absorption of stellar irradiation increase the temperature in the middle atmosphere by up to 60~K for late-type M-dwarf planets compared to early-type M-dwarf planets \citep[see also][]{rauer2011,grenfell2014}. Increased temperatures from about 500~hPa up to 1~hPa result in an almost isothermal vertical temperature profile, most pronounced for late M-dwarf planets. For emission spectroscopy the weak temperature contrast between the surface and the atmosphere reduces the spectral features, resulting in a low S/N, as discussed in \citet{rauer2011}.
\subsubsection{Responses of O$_3$}
The production of O$_3$ in the middle atmosphere is sensitive to the UV radiation. In the Schumann-Runge bands and the Herzberg continuum (from about 170~nm to 240~nm) molecular oxygen (O$_2$) is split into atomic oxygen in the ground state (O$^3$P), which reacts with O$_2$ to form O$_3$. The destruction of O$_3$ is mainly driven by absorption in the Hartley (200-310~nm), Huggins (310-400~nm), and Chappuis (400-850~nm) bands. Hydrogen oxides, HO$_x$ (OH, HO$_2$, and HO$_3$), and nitrogen oxides, NO$_x$ (NO, NO$_2$, and NO$_3$), destroy O$_3$ via catalytic loss cycles \citep[e.g.][]{brasseur2006}. All planets around inactive host stars from the MUSCLES database and TRAPPIST-1 show reduced O$_2$ photolysis in the Schumann-Runge bands and the Herzberg continuum compared to the Earth around the Sun \citep[see Fig. \ref{figure:uvcomp} and][]{rugheimer2015}. The decreased destruction of O$_3$ by photolysis at wavelengths longer than $\sim$200~nm \citep[see Fig. \ref{figure:spectra} and][]{grenfell2014} cannot compensate for the decreased production, resulting in lower concentrations of O$_3$ throughout the atmosphere (see Fig. \ref{figure:chemical_profiles}). Earth-like planets around active M dwarfs (AD~Leo and GJ~644) show strong O$_2$ photolysis in the Schumann-Runge bands and the Herzberg continuum (see Fig. \ref{figure:uvcomp}), which results in an increase of O$_3$ concentrations compared to planets around inactive M dwarfs \citep[see also][]{grenfell2014}.
\subsubsection{Responses of CH$_4$}
For an Earth-like atmosphere the destruction of CH$_4$ is mainly driven by the reaction with the hydroxyl radical, OH (CH$_4$~+~OH~$\rightarrow$~CH$_3$~+~H$_2$O). The amount of OH in the atmosphere is closely associated with UV radiation via the source reaction with excited oxygen (O$^1$D): H$_2$O~+~O$^1$D~$\rightarrow$~2OH. O$^1$D is mainly produced by O$_3$ photolysis (see e.g. \citealt{grenfell2007} and \citealt{grenfell2013} for further details). As in previous studies \citep[e.g.][]{rauer2011} we assume a constant CH$_4$ bioflux for all our simulations. No in situ chemical production of CH$_4$ in the atmosphere is included in the model. The effect of different CH$_4$ biomass emissions on planets around M dwarfs was investigated by, for example, \citet{grenfell2014}. Compared to the Earth around the Sun, the UV photolysis of O$_3$ is reduced for all Earth-like planets orbiting M dwarfs \citep[see][]{grenfell2013}. This leads to reduced O$^1$D and therefore OH concentrations, except for active M-dwarf planets, where OH is increased in the upper middle atmosphere (Fig.~\ref{figure:chemical_profiles}). The reduced OH concentrations decrease the destruction of CH$_4$ and lead to enhanced CH$_4$ abundances \citep{segura2005,grenfell2013,grenfell2014,rugheimer2015}.
The planets around the inactive MUSCLES host stars show CH$_4$ concentrations comparable to the results of \citet{rugheimer2015}. The active host stars have higher far-UV radiation, which leads to more O$_3$ production, increased OH concentrations in the upper middle atmosphere, and more CH$_4$ destruction in the atmosphere of Earth-like planets compared to planets orbiting inactive host stars. \\
All our simulated planets around M dwarfs feature a CH$_4$/CO$_2$ ratio higher than 0.2, for which previous model studies suggest that hydrocarbon haze formation is favoured \citep[see][]{trainer2004,arney2016,arney2017}. Our model does not include hydrocarbon haze production or the impact of haze on the radiative transfer. \citet{arney2017} found that the surface temperature of an Earth-like planet around AD~Leo would decrease by 7~K when the effect of hazes is considered. Such a decrease of the surface temperature by hazes would require an increase in stellar flux to obtain a surface temperature of 288.15~K. On the one hand haze production would act as a CH$_4$ sink; on the other hand the lower UV radiation due to UV absorption by organic haze would decrease the CH$_4$ photolysis. The overall effect on the resulting CH$_4$ profile is difficult to estimate. Hence, we refer to \citet{arney2016} and \citet{arney2017}, who investigated the impact of organic haze on Earth-like atmospheres. \\
We based our simulations on the assumption of a constant CH$_4$ bioflux for all planets. \citet{segura2005} and \citet{rugheimer2015} reduced the surface flux of CH$_4$ or set a constant surface mixing ratio to limit the build-up of CH$_4$ resulting from the reduced destruction for Earth-like planets around late M dwarfs. Although the biosphere of an Earth-like planet around an M dwarf is likely to be different from that of the Earth, we cannot exclude that biofluxes might be similar.
\subsubsection{Responses of H$_2$O}
In the middle atmosphere the increased CH$_4$ abundances lead to more H$_2$O production via CH$_4$ oxidation for Earth-like planets around M dwarfs compared to the Earth-Sun case \citep[e.g.][]{segura2005,rauer2011}. In the troposphere the H$_2$O content is related to the temperature profile and is calculated with the assumption of a relative humidity profile. Although all planets are placed at the distance to their host star where the surface temperature reaches 288.15~K, the H$_2$O content in the upper troposphere is increased for M-dwarf planets and is close to the mixing ratio thought to lead to water loss. Here, this high mixing ratio is caused by CH$_4$ oxidation. The temperatures in the troposphere are increased for all Earth-like M-dwarf planets compared to the Earth. This allows the atmosphere to sustain more H$_2$O in the troposphere, even if the same H$_2$O vmr is obtained at the surface.
\begin{figure*}
\centering
\includegraphics[width=17cm]{snr_compare2pandeia_snr004.pdf}
\caption{Stellar signal-to-noise ratio over 1~h integration time for each M dwarf at 10~pc, binned to a resolution of 100. All NIRSpec filters and high resolution dispersers are combined. Solid lines are calculated using the method described in Sec. \ref{sec:snr_model} and dashed lines are calculated using the JWST ETC, Pandeia \citep{pontoppidan2016}.}
\label{figure:comp2pan}
\end{figure*}
\subsubsection{Responses of N$_2$O}
On Earth nitrous oxide (N$_2$O) sources are mainly surface biomass emissions.
Only weak chemical sources in the atmosphere further increase the amount of N$_2$O in the atmosphere \citep[see e.g.][]{grenfell2013}. We assume constant N$_2$O biomass emissions, based on modern Earth, for all our simulations \citep[see e.g.][]{grenfell2011,grenfell2014}. The loss processes of N$_2$O are dominated by photolysis in the UV below 240~nm in the middle atmosphere (via N$_2$O~+~h$\nu$~$\rightarrow$~N$_2$~+~O$^3$P). We use the cross sections of \citet{selwyn1977}, which cover the wavelength range from 173~nm to 240~nm and peak at around 180~nm at room temperature. For Earth-like planets around M dwarfs and the Earth only $\sim$ 5-10$\%$ of N$_2$O is destroyed via the catalytic reaction with O$^1$D \citep[see e.g.][]{grenfell2013}. The resulting N$_2$O abundance is therefore closely related to the SED around 180~nm (see Fig.~\ref{figure:uvcomp}). The SED in the far-UV range is dominated by the activity of the M dwarf and has a lower dependence on the spectral type \citep{france2013}. Since active M dwarfs emit strongly in the far UV, planets orbiting active M dwarfs show reduced N$_2$O abundances in comparison to planets around inactive M dwarfs. Similar to \citet{rugheimer2015} we do not find a dependence of the N$_2$O abundances on the type of the M dwarf when using observed stellar spectra. The amount of N$_2$O in the atmosphere of Earth-like planets mainly depends on the incoming radiation of the host star in the wavelength range between 170~nm and 240~nm (see Fig.~\ref{figure:uvcomp}). For modelled spectra, \citet{rugheimer2015} show that N$_2$O increases for Earth-like planets around late-type M dwarfs.
\subsubsection{Responses of CO$_2$}
The CO$_2$ production is mainly driven by the reaction CO~+~OH~$\rightarrow$~CO$_2$~+~H. The increased CH$_4$ oxidation in the middle atmosphere leads to an increase in CO for Earth-like planets around M dwarfs compared to the Earth-Sun case (Fig.~\ref{figure:chemical_profiles}). CO$_2$ is mainly destroyed by photolysis in the far UV. At wavelengths longer than 120~nm the photolysis of CO$_2$ is strongest between 130~nm and 160~nm \citep[e.g.][]{huestis2011}. Despite the strong emission of AD~Leo in the far UV, \citet{gebauer2017} found a 40~\% increase of CO$_2$ abundances in the upper middle atmosphere of an Earth-like planet around AD~Leo due to an increase of OH and CO at these altitudes compared to the Earth around the Sun. Even for lower abundances of OH in the upper middle atmosphere, we find that CO$_2$ increases in the middle atmosphere for all M-dwarf planets because of the large amounts of CO. This enhancement of the CO$_2$ production was not taken into account in previous studies, such as \citet{rauer2011} and \citet{grenfell2014}, where CO$_2$ was kept constant. At the surface all scenarios show a CO$_2$ volume mixing ratio of 355~ppm.
\subsection{Stellar S/N}
\label{sec:stellar_snr}
Only a fraction of the measured stellar signal, of the order of a few parts per million (ppm), is blocked by the atmosphere of a terrestrial planet during a transit. Hence, to calculate the S/N of the atmosphere of the planet, we first need to know the stellar S/N (S/N$_\text{s}$). In addition to the spectral type, the radius of the star and its distance are also important for the stellar signal (Eq.~\ref{eq:star_signal}). Figure~\ref{figure:comp2pan} shows the S/N$_\text{s}$ for all M dwarfs at 10~pc, observed for an integration time of 1 hour, using all high resolution JWST NIRSpec filters (G140H/F070LP, G140H/F100LP, G235H/F170LP, G395H/F290LP).
The specifications used to calculate the S/N with the NIRSpec filters can be found in Appendix \ref{sec:appendix_nirspec}. We binned the S/N to a constant resolving power of 100 over the entire wavelength range ($\Delta\lambda$ = 10~nm at 1~$\mu$m and $\Delta\lambda$ = 50~nm at 5~$\mu$m). We chose the highest S/N for all overlapping bins of two filters. We note that in practice it is not possible to observe several NIRSpec filters at the same time. Gaps in the wavelength coverage at $\sim$ 1.3~$\mu$m, $\sim$ 2.2~$\mu$m, and $\sim$ 3.8~$\mu$m result from a physical gap between the detectors\footnote{www.cosmos.esa.int/web/jwst-nirspec/bots}.\\
At 10~pc, all M dwarfs except GJ~644 are observable over the entire wavelength range using high resolution spectroscopy (HRS). For the short-wavelength filters GJ~644 reaches the saturation limit at around 11.5~pc. The Sun, placed at 10~pc, is not observable at any wavelength using NIRSpec due to saturation of the detector. The SED of M dwarfs peaks between 0.7~$\mu$m (early M dwarfs) and 1.1~$\mu$m (late M dwarfs). Because of the broader bandwidth at longer wavelengths, the S/N does not decrease significantly towards longer wavelengths. \citet{nielsen2016} showed that the S/N for GJ~1214 reduces by a factor of 3 from 1.5~$\mu$m to 4~$\mu$m when using a constant bandwidth. The early M dwarfs are near the saturation limit, which results in a significant reduction of the S/N due to changes in the detector duty cycle (with shorter exposures) and additional readout noise \citep[see also][]{nielsen2016}. Early-type M dwarfs have a larger S/N than late-type M dwarfs at the same distance owing to the combined effect of effective temperature and stellar radius. \\
The solid lines of Fig.~\ref{figure:comp2pan} show the S/N calculated using the method described in Sec.~\ref{sec:snr_model}. Our S/N model can be applied to any space- or ground-based telescope. In comparison to the previous S/N model version of \citet{hedelt2013} we added the readout noise contribution and saturation limits to the code. We plan to further develop this model and include exozodiacal dust contamination and the influence of star spots on the S/N of transmission and emission spectroscopy. \\
To verify our S/N model we compared the calculations to results from the JWST Exposure Time Calculator (ETC), Pandeia \citep{pontoppidan2016}. The S/N calculated with Pandeia is shown as dashed lines in Fig.~\ref{figure:comp2pan}. For all stars and wavelengths we find that our calculations are in good agreement with the S/N calculated with Pandeia. For both calculations GJ~644 is only observable using G395H/F290LP. This confirms that the saturation limits of the NIRSpec HRS are also in agreement.
\subsection{Transmission spectra}
\label{sec:transmission_spectra}
The top panel of Fig. \ref{figure:transmission_snr} shows the transmission spectra of each simulated hypothetical Earth-like planet around an M dwarf. The transmission spectra are shown as the wavelength dependent effective height (see Eq. \ref{eq:effhei}). We binned the line-by-line effective height of each planet to a resolving power of $R = \frac{\lambda}{\Delta\lambda} = 100$ over the entire wavelength range. We calculated the effective height of the Earth around the Sun for comparison.
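To make these two post-processing steps explicit, the following Python sketch shows one possible way to evaluate Eq.~(\ref{eq:effhei}) from the tangential transmission spectra and to bin the resulting effective height to a constant resolving power of 100. It is an illustrative sketch only: the array names, grid boundaries, and the simple trapezoidal integration are assumptions rather than the exact implementation used here.
\begin{verbatim}
# Illustrative sketch: effective height from tangential transmission spectra
# (Eq. effhei) and binning to constant resolving power. Not the actual code.
import numpy as np

def effective_height(transmission, z):
    """Trapezoidal integral of (1 - T) over the tangent heights z [km].
    transmission has shape (n_lambda, n_levels), z has shape (n_levels,)."""
    integrand = 1.0 - transmission
    dz = np.diff(z)
    return np.sum(0.5 * (integrand[:, 1:] + integrand[:, :-1]) * dz, axis=1)

def bin_constant_R(lam, y, R=100.0, lam_min=0.4, lam_max=12.0):
    """Average y(lam) onto bins whose widths grow as dlam = lam / R."""
    edges = [lam_min]
    while edges[-1] < lam_max:
        edges.append(edges[-1] * (1.0 + 1.0 / R))
    edges = np.array(edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(lam, edges) - 1
    binned = np.array([y[idx == i].mean() if np.any(idx == i) else np.nan
                       for i in range(centres.size)])
    return centres, binned
\end{verbatim}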
Dominant absorption features are H$_2$O at 1.4~$\mu$m and 6.3~$\mu$m, CO$_2$ at 2.8~$\mu$m and 4.3~$\mu$m, and O$_3$ at 9.6~$\mu$m, comparable to observed and modelled transmission spectra of the Earth \citep{palle2009,kaltenegger2009, vidal2010, betremieux2013, yan2015, schreier2018}. \\
\begin{figure*}
\centering
\includegraphics[width=17cm]{Effectiv_height_SNR_JWST_R100_snr002c.pdf}
\caption{Top: Effective height of all hypothetical Earth-like planets orbiting M dwarfs for a fixed resolving power of 100. These data are available at \href{http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/624/A49}{cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/624/A49}. Bottom: combined S/N using all high or medium resolution NIRSpec and MIRI filters up to 11~$\mu$m for all M-dwarf planets at 10~pc, integrated over a single transit and for a resolving power of 100.}
\label{figure:transmission_snr}
\end{figure*}
We do not consider clouds and hazes when calculating the transmissivity of the atmosphere. The consideration of clouds would essentially limit the minimum effective height to the height of the cloud deck \citep[e.g.][]{benneke2013,betremieux2014, Betremieux2017}. For the Earth the cloud layer is low and therefore the extent of the absorption features is affected by only a few kilometres \citep[e.g.][]{schreier2018}. The consideration of hazes in the atmosphere would affect the spectral appearance of the planet \citep[see e.g.][]{kempton2011,kreidberg2014}. For Earth-like planets hazes affect the planetary spectra mostly at wavelengths below about 3.2~$\mu$m \citep[see][]{arney2016,arney2017}. For hazy atmospheres the transit depth is increased between the absorption features of, for example, CH$_4$, H$_2$O, and O$_2$ (atmospheric windows), leading to a lower separability of the individual spectral bands. \\
For Earth-like planets around M dwarfs the amount of H$_2$O and CH$_4$ in the middle atmosphere is increased by several orders of magnitude (Fig. \ref{figure:chemical_profiles}). Since CH$_4$ and H$_2$O are strong absorbers in the near-IR, increased abundances result in a decreased transmissivity and larger effective heights at most wavelengths compared to the Earth around the Sun \citep[see also][]{rauer2011}. In the middle atmosphere CH$_4$ is more abundant than H$_2$O for Earth-like planets around M dwarfs. Hence, the spectral features at 1.1~$\mu$m and 1.4~$\mu$m are dominated by CH$_4$ absorption, as also shown by \citet{barstow2016}. Higher temperatures in the middle atmosphere of planets around late M dwarfs (Fig. \ref{figure:temperature}) lead to an expansion of the atmosphere by more than 10~km compared to planets around early M dwarfs (Table~\ref{table:atmosphere_H_T}). This, together with enhanced absorption by CH$_4$ and H$_2$O, results in an overall increased effective height for mid- to late-type M-dwarf planets. Most spectral features of Earth-like planets around inactive M dwarfs are larger as a consequence of increased concentrations of CH$_4$, H$_2$O, and N$_2$O compared to planets around active M dwarfs. Only the spectral feature of O$_3$ is largest for active M-dwarf planets because of the increased O$_2$ photolysis in the UV, resulting in an enhanced O$_3$ layer. \\
Most other studies have used existing planets and have applied assumed atmospheres to spectral models \citep{shabram2011,barstow2015,barstow2016,barstow_irwin2016,morley2017}.
In comparison to these studies the calculated concentrations of most biosignatures and related compounds are increased owing to the consideration of the chemical response of the atmosphere to M-dwarf spectra. Hence, the effective height of the atmosphere in our study is larger at most wavelengths and the detectability of most spectral features is increased. \begin{table} \centering \caption{Height, $H$, and temperature, $T$, at 100~hPa and at 0.1~hPa for all simulated atmospheres of Earth-like planets around M dwarfs and the Earth. Earth-like planets around late M dwarfs show an expansion of the atmosphere by more than 10~km compared to planets around early M dwarfs and the Earth.} \label{table:atmosphere_H_T} \centering \begin{tabular}{l r r r r} \hline\hline &\multicolumn{2}{c}{100 hPa} & \multicolumn{2}{c}{0.1 hPa} \\ Host star & $H$ [km] & $T$ [K] & $H$ [km] & $T$ [K] \\ \hline Sun & 16.2 & 213.5 & 63.3 & 199.0 \\ GJ 832 & 16.7 & 226.2 & 59.9 & 177.5 \\ GJ 176 & 17.2 & 241.2 & 64.9 & 201.1 \\ GJ 581 & 17.5 & 245.7 & 66.2 & 208.2 \\ GJ 436 & 17.4 & 244.6 & 65.7 & 205.2 \\ GJ 644 & 16.9 & 231.5 & 60.4 & 179.5 \\ AD Leo & 17.0 & 232.2 & 60.8 & 182.4 \\ GJ 667C & 17.2 & 239.7 & 64.2 & 198.4 \\ GJ 876 & 17.9 & 253.7 & 68.4 & 217.0 \\ GJ 1214 & 17.9 & 254.5 & 68.5 & 216.0 \\ Proxima Cen. & 18.2 & 260.2 & 70.0 & 224.1 \\ TRAPPIST-1 & 18.4 & 266.4 & 72.0 & 235.1 \\ \hline \end{tabular} \end{table} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{Effective_height_JWST_selstars_snr001b_ppm_new.pdf}} \caption{Coloured lines show the transit depth of all hypothetical Earth-like planets around their M-host stars for a fixed resolving power of 100. The error bars show two sigma (standard deviation), single transit errors for selected spectral features, and selected M-dwarf planets at 10 parsecs with optimised wavelength bins. In black: calculated using high resolution JWST NIRSpec BOTS disperser and filter. In grey: calculated using NIRCam weak lens filters and high resolution NIRCam grism filters. Central wavelengths and bandwidths are calculated individually to distinguish between spectral absorption features (see Sec.~\ref{sec:features_10pc}).} \label{figure:filter_jwst} \end{figure} \subsection{Planetary S/N} \subsubsection{Signal-to-noise ratio with constant resolution at 10~pc} For the bottom panel of Fig.~\ref{figure:transmission_snr} we combined all available JWST NIRSpec and MIRI filter-disperser combinations to calculate the highest corresponding S/N of the planetary spectral features during one transit. The specifications of all JWST instruments used in this study can be found in Appendix \ref{sec:appendix}. As already discussed in Sec.~\ref{sec:stellar_snr} certain wavelength regions are not observable with low or medium resolution for bright stars due to saturation effects. Scenarios for GJ~1214, Proxima Centauri, and TRAPPIST-1 are observable with NIRSpec medium resolution spectroscopy at 10~pc below 5~$\mu$m. GJ~644 at 10~pc is only observable at wavelengths longer than 3~$\mu$m. All other M dwarfs can also be observed below 5~$\mu$m via high resolution spectroscopy. For wavelengths longer than 5~$\mu$m MIRI specifications are used to calculate the S/N. 
The Sun placed at 10~pc is only observable with MIRI MRS, which has roughly one-third of the quantum efficiency of MIRI LRS up to 10~$\mu$m \citep{rieke2015b,glasse2015}.\\
An increasing stellar radius increases the S/N of the star (Fig.~\ref{figure:comp2pan}) but decreases the transit depth of the planet (Table~\ref{table:planets}). The effective height of the CO$_2$ feature at 4.3~$\mu$m differs by only 10~km between early and late M-dwarf planets owing to the similar CO$_2$ abundances of the simulated atmospheres (see top panel of Fig.~\ref{figure:transmission_snr}). The S/N of this feature is, however, twice as large for late-type M-dwarf planets as for early- to mid-type M-dwarf planets due to the smaller stellar radius. Despite the high stellar signal, the GJ~644 planet has the lowest S/N owing to the exceptionally large radius of the host star. The consideration of the readout noise and the reduction of the total exposure time due to the increased number of detector readouts (shorter duty cycle) have an impact on the S/N of bright targets. For GJ~644 at 10~pc half of the transit time is needed for detector readouts using the filter-disperser combination G395H/F290LP. For fainter targets like TRAPPIST-1 the duty cycle increases. Only 3\% of the transit time is lost for TRAPPIST-1 at 10~pc due to detector readouts. Hence, the detectability of the spectral features is further improved for close mid- to late-type M-dwarf planets. We therefore expect the highest S/N for late-type M-dwarf planets compared to earlier M-dwarf planets \citep[see e.g.][]{dewit2013}. The hypothetical Earth-like planet around the M5.5 star Proxima Centauri, placed at 10~pc, has a higher S/N than the planet around the M8 star TRAPPIST-1, placed at 10~pc, for all spectral features at wavelengths shorter than 3~$\mu$m. Compared to the theoretical radii from \citet{reid2005}, TRAPPIST-1 has a slightly larger radius and Proxima Centauri a smaller radius than expected for their spectral types. The spectra of very low mass stars show strong atomic lines, mainly from \ion{K}{i} and \ion{Ti}{i}, between 1~$\mu$m and 1.3~$\mu$m, water vapour bands in the near-IR, and CO bands at 2.3~$\mu$m to 2.4~$\mu$m \citep{allard2001}. These bands coincide with the absorption features found for the modelled Earth-like atmospheres and decrease the S/N at these wavelengths. We note that M dwarfs later than M6 are affected by dust formation, which increases the uncertainty of the spectral models \citep{allard2012}. \\
As mentioned in Sec.~\ref{sec:transmission_spectra} we do not consider the effect of clouds and hazes on the transmission spectra. Since we use the geometric transit depth (i.e. the planetary radius without atmosphere) as the reference to calculate the S/N, an increase of the effective height from additional absorption, scattering, or reflection would not influence the S/N of the spectral bands of, for example, CH$_4$ or H$_2$O at short wavelengths.\\
The atmospheres of Earth-like planets around mid- to late-type M dwarfs have a much higher S/N at most absorption features compared to early M-dwarf planets. Nevertheless, at 10~pc a single transit is not enough to detect any spectral feature using JWST with a fixed resolution of 100, given the maximum S/N of only around 1.5. In the next section we investigate whether absorption bands would be detectable when taking into account the entire spectral feature.
\subsubsection{S/N for spectral features at 10~pc}
\label{sec:features_10pc}
In Fig.
\ref{figure:filter_jwst} we show the transmission spectra with error bars for optimised wavelength bins, obtained by calculating the bandwidth and central wavelength of each spectral feature individually to improve its detectability. We maximise the lower limit of the error bar (the lower value of the two sigma deviation) by shifting the central wavelength and increasing the bandwidth until the optimal value is found. Increasing the bandwidth decreases the noise contamination, but if the selected band spans a wider wavelength range than the absorption feature, the mean effective height decreases. We note that for an absorption feature without a symmetric, Gaussian-like shape, the central wavelength is not necessarily the wavelength of the peak absorption. The selected bandwidth and central wavelength also depend on the overall S/N. For low S/N it is favourable to integrate over a broad wavelength range, including the wings of the spectral feature, to decrease the noise contamination. If the noise is low, the bandwidth can be narrow to maximise the transit depth of the absorption feature. To detect a spectral feature it needs to be separated from an atmospheric window with a low effective height. We find the bandwidth and central wavelength of atmospheric windows by minimising the upper limit of the error bar (the upper value of the two sigma deviation) between two absorption features. \\
\begin{figure}
\centering
\includegraphics[width=6.5cm]{SNRat10pc_snr006_review.pdf}
\caption{Signal-to-noise ratio of the CH$_4$ feature at 3.4~$\mu$m for a single transit of each M-dwarf planet at 10~pc.}
\label{figure:snr_mtype}
\end{figure}
\begin{table}
\centering
\caption{Mean central wavelengths, $\lambda_c$, bandwidths, $\Delta\lambda$, and resolving power for selected spectral features and an S/N of 5. The NIRSpec HRS and MIRI LRS are used to calculate the individual wavelengths and bandwidths for each M-dwarf planet and feature. The values are nearly independent of the distance to the star but depend on the atmosphere of the planet and the throughput of the filter-disperser combination.}
\label{table:wavlc_delta}
\begin{tabular}{l r r r}
\hline\hline
Spectral feature & $\lambda_c$[$\mu$m] & $\Delta\lambda$[$\mu$m] & $\lambda_c$/$\Delta\lambda$ \\
\hline
H$_2$O at 1.4 $\mu$m & 1.38 & 0.06 & 23.0 \\
CH$_4$ at 2.3 $\mu$m & 2.33 & 0.15 & 15.5 \\
CH$_4$ at 3.4 $\mu$m & 3.37 & 0.32 & 10.5 \\
CO$_2$ at 4.3 $\mu$m & 4.27 & 0.11 & 38.8 \\
H$_2$O at 6.3 $\mu$m & 6.12 & 0.90 & 6.8 \\
O$_3$ at 9.6 $\mu$m & 9.64 & 0.44 & 21.9 \\
\hline
\end{tabular}
\end{table}
Figure~\ref{figure:filter_jwst} shows the detectability of the spectral features using NIRSpec in black and NIRCam in grey. For NIRCam we use five fixed filters in weak lens mode between 1.3~$\mu$m and 2.2~$\mu$m. We omit the narrow-band filters owing to the very low resulting S/N. At longer wavelengths we use the NIRCam grism mode to calculate the error bars. The advantage of using NIRCam would be that observations with one weak lens filter at short wavelengths could be performed simultaneously with one filter-grism combination at longer wavelengths.\\
No feature would be distinguishable from an atmospheric window with a single transit for early M-dwarf planets at 10~pc, except for the planet around GJ~581, whose host star has a comparatively small radius. For mid to late M dwarfs several absorption features could be distinguishable from an atmospheric window within two sigmas with just a single transit.
These are CH$_4$/H$_2$O at 1.15~$\mu$m and 1.4~$\mu$m; the CH$_4$ at 1.8~$\mu$m, 2.3~$\mu$m, and 3.4~$\mu$m; and the H$_2$O/CO$_2$ at 2.7~$\mu$m. The CO$_2$ feature at 4.3~$\mu$m could be separated from one of the atmospheric windows below 1.7~$\mu$m if simultaneous observations of both wavelength ranges were to be provided. \citet{barstow2016} showed that just a few transits would be necessary to detect CO$_2$ at 4.3~$\mu$m for an Earth around an M5 star at 10~pc. For an Earth the detection of the H$_2$O and CO$_2$ feature at 2.7~$\mu$m would require more than ten transits due to the lower abundances of H$_2$O in the Earth's middle atmosphere compared to our calculated amounts.\\ Figure~\ref{figure:snr_mtype} shows that the planets around the later M dwarfs with a distinguishable CH$_4$ feature at 3.4~$\mu$m reach an S/N of at least 3. For real observations we expect additional sources of variability like stellar spots \citep{pont2008,zellem2017,rackham2018} and exozodiacal dust \citep{roberge2012,ertel2018}. For this study we consider an S/N of 5 to be sufficient to detect a spectral feature. \subsubsection{Detectability of spectral features up to 100~pc} \begin{figure*} \centering \includegraphics[width=18.5cm]{DistTint_SNR5_allstars_snr003_review.pdf} \caption{Number of transits needed to reach an S/N of 5 for selected spectral features of M-dwarf planets and the Earth-Sun case and different distances to the star. Central wavelength and bandwidths are shown in Table~\ref{table:wavlc_delta}. The sudden reductions of the needed observation time happen when there are changes in the duty cycle and when a filter-disperser combination with a higher throughput can be used.} \label{figure:tint_dis2star} \end{figure*} \begin{figure*} \centering \includegraphics[width=15cm]{SNRatDis_TessStars_snr007_review.pdf} \caption{Detectability of hypothetical nearby Earth-like planets around M dwarfs with a single transit. Coloured dots show planets with an S/N of at least 5 for the corresponding spectral features (H$_2$O: blue, CH$_4$: green and CO$_2$: red). More than one colour means that multiple spectral features can be detected with a single transit. The sizes of the grey dots indicate the highest S/N of all spectral features. The numbers inside the coloured dots correspond to the following host stars: 1:~Proxima~Centauri, 2:~Barnard's~star, 3:~Wolf~359, 4:~Ross~154, 5:~Ross~248, 6:~Ross~128, 7:~GJ~15~B, 8:~DX~Cancri, 9:~GJ~1061, 10:~YZ~Ceti, 11:~Teegarden's~star, 12:~Wolf~424, 13:~L~1159-16, 14:~GJ~1245~B, 15:~GJ~1245~A, 16:~GJ~412~B, 17:~LHS~2090, 18:~GJ~1116, 19:~LHS~1723, 20:~GJ~1005, 21:~L~43-72, 22:~GJ~3737, 23:~GL~Virginis, 24:~GJ~1128, 25:~LSPM~J2146+3813, 26:~Ross~619, 27:~G~161-7, 28:~GJ~4053, 29:~G~141-36, 30:~SCR~J0740-4257, 31:~GJ~1286, 32:~LHS~1070, 33:~SCR~J0838-5855, 34:~NLTT~40406, 35:~GJ~3146, 36:~LSPM~J0539+4038. } \label{figure:teff_dis2star} \end{figure*} The optimal central wavelengths and bandwidths to reach an S/N of 5 are independent from the distance to the star and differ just by a few nanometres between the M-dwarf planets. Table~\ref{table:wavlc_delta} shows the mean central wavelengths, bandwidths, and resolving power for each spectral feature. \\ In Fig.~\ref{figure:tint_dis2star} we show the number of transits necessary to reach an S/N of 5 for the selected spectral features at the given central wavelengths and resolving power. The saturation limit is reached if the minimum integration time is equal to the frame time. 
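The number of transits shown in Fig.~\ref{figure:tint_dis2star} follows from the single-transit S/N of a feature under the assumption of white noise only, so that co-adding $N$ transits improves the S/N by $\sqrt{N}$ (see also the discussion of the TRAPPIST-1 case below). A minimal sketch of this conversion, with an assumed example value, is given here:
\begin{verbatim}
# Transits needed for a target S/N, assuming white noise only so that the S/N
# grows with the square root of the number of co-added transits (illustrative).
import math

def transits_needed(snr_single, snr_target=5.0):
    """Smallest integer N with snr_single * sqrt(N) >= snr_target."""
    return math.ceil((snr_target / snr_single)**2)

# Example with an assumed single-transit S/N of 1.6:
# transits_needed(1.6) -> 10
\end{verbatim}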
We do not consider partial saturation as proposed by \citet{batalha2018}. The saturation limit to observe the entire wavelength range corresponds to a J-band magnitude of about 6.3. Early M dwarfs reach this limit at around 6-10~pc. Mid- to late-type M dwarfs are fully observable already at 1-3~pc. A G2 star like the Sun is observable at short wavelengths only beyond a distance of about 30~pc. The O$_3$ feature at 9.6~$\mu$m and the H$_2$O feature at 6.3~$\mu$m can be observed at shorter distances using MIRI MRS. We note that the sudden reductions of the needed observation time are due to changes in the duty cycle (more frames per integration), as shown by \citet{nielsen2016}, and to changes of the considered filter-disperser combination. In cases in which a feature can be observed with several filter-disperser combinations, we always plot the lowest number of transits needed for detection.\\
Because of the high saturation limit and the lower S/N (Fig.~\ref{figure:snr_mtype}), none of the spectral features of Earth-like planets around early M dwarfs would be observable with a single transit. For Earth-like planets around late-type M dwarfs lying closer than 4~pc, all selected spectral features except O$_3$ are detectable with a single transit. By co-adding 10 transits, CH$_4$ is detectable between the saturation limit and 10~pc for most M-dwarf planets. \citet{rauer2011} found that at a distance of 4~pc the CH$_4$ absorption band of an Earth-like atmosphere around an M5 star would reach an S/N of 3 using JWST observations. Owing to the consideration of a broader bandwidth and of realistic throughputs, our new analysis shows an improved detectability of CH$_4$. For the planet around the M4.5 star GJ~1214, with an atmosphere comparable to that of the Earth-like planet around the M5 star modelled by \citet{rauer2011}, we find that even at a larger distance of 7~pc a single transit is sufficient to reach an S/N of 5. \\
The O$_3$ feature at 9.6~$\mu$m is more difficult to detect because of the low stellar signal at longer wavelengths and the low abundances of O$_3$ of Earth-like planets around inactive M dwarfs. The planet around the active M dwarf AD~Leo has an O$_3$ column similar to that of the present Earth. If AD~Leo were to have a transiting Earth-like planet in the HZ, around 50 transits or more than three years of total observing period would be required to detect O$_3$ with JWST. This is in agreement with the results of \citet{rauer2011}. Only 10 transits are necessary to detect O$_3$ at 4~pc for the hypothetical Earth-like planets around the late-type M dwarfs Proxima Centauri and TRAPPIST-1, which have a lower O$_3$ concentration than the planet around AD~Leo.\\
All modelled atmospheres of planets around different M dwarfs show an increased amount of H$_2$O in the middle atmosphere compared to the Earth-Sun case (Fig. \ref{figure:chemical_profiles}). The H$_2$O feature at 6.3~$\mu$m is detectable for mid- to late-type M-dwarf planets at 10~pc by co-adding 10 transits. We find that 20 transits are necessary to detect H$_2$O in the atmosphere of the hypothetical Earth-like planet around the M2.5 star GJ~581 at 12~pc (J-band magnitude~8). This is comparable to results from \citet{kopparapu2017}, who showed that a detection of a 10~ppm H$_2$O feature observing an M3 star would require around 30 transits at J-band magnitude~8. \\
Previous studies investigated the potential detectability of the TRAPPIST-1 planets using predefined atmospheres \citep[see e.g.][]{barstow_irwin2016,morley2017,kopparapu2017,krissansen2018}.
We calculate the detectability of an Earth-like planet around TRAPPIST-1 considering the effect of the stellar spectrum on the atmosphere of the planet. At least 9 transits are required to detect CO$_2$ in the atmosphere of the Earth-like planet around TRAPPIST-1 at 12.1~pc. \citet{morley2017} found that a 1~bar Venus-like atmosphere of TRAPPIST-1e (TRAPPIST-1f) can be characterised by co-adding 4 (17) transits. The O$_3$ feature at longer wavelengths requires the observation of more transits for a detection. \citet{barstow_irwin2016} concluded that 30 transits are needed to detect O$_3$ at Earth abundances for TRAPPIST-1c and TRAPPIST-1d. Our model calculation shows that the O$_3$ levels would be reduced for an Earth-like planet around TRAPPIST-1. Furthermore, at longer wavelengths there is a significant contribution of zodiacal noise for faint targets. Hence, we find that 170 transits are needed to detect O$_3$, which is much more than found by \citet{barstow_irwin2016}, who did not consider zodiacal noise. The time span over which the Earth-like planet in the HZ around TRAPPIST-1 would need to be observed for 170 transits is more than two years, assuming white noise only and that each transit improves the S/N perfectly. Hence, it would be difficult to detect O$_3$ with JWST in the atmosphere of an Earth-like planet by transmission spectroscopy, even for a close late-type M dwarf such as TRAPPIST-1. The detectability of CH$_4$, H$_2$O, and CO$_2$ is improved for M-dwarf planets compared to the Earth because of the larger concentrations of these species in their atmospheres. Up to a distance of about 10~pc, CH$_4$, H$_2$O, and CO$_2$ would be detectable with 10 transits in the atmosphere of Earth-like planets orbiting mid- to late-type M dwarfs. In Sec.~\ref{sec:tess_mdwarfs} we show for which potential TESS findings within 15~pc it would be possible to characterise the Earth-like planetary atmospheres.
\subsubsection{Detectability of spectral features of hypothetical TESS findings}
\label{sec:tess_mdwarfs}
The TRAPPIST-1 planets will be among the main future targets for atmospheric characterisation. The TESS satellite will find more transiting exoplanets in the solar neighbourhood \citep{barclay2018}. We used the TESS catalogue of cool dwarf targets \citep{muirhead2018} to calculate the S/N of potential TESS findings. From this catalogue we selected all M dwarfs within 15~pc. The distances themselves are not included in the catalogue. Hence, we took the distances of the M dwarfs in the northern sky from \citet{dittmann2016} and in the southern sky from \citet{winters2014}. We complemented the distances with data from the CARMENES input catalogue \citep{cortes2017} and with data from \citet{lepine2011}. For values which are present in two or more catalogues, we used the most recent reference. For consistency we used the values given in Table~\ref{table:stars} for the M dwarfs listed there. The spectra up to 5.5~$\mu$m were taken from the PHOENIX model\footnote{phoenix.astro.physik.uni-goettingen.de/} \citep{husser2013}. At wavelengths longer than 5.5~$\mu$m we extended the spectra with NextGen \citep{hauschildt1999}.
Since NextGen and PHOENIX have a fixed grid for effective temperature, log~$g$, and [Fe/H], we took the spectra assuming a log~$g$ of 5~cgs and a [Fe/H] of 0.0~dex and linearly interpolated in the temperature grid to the corresponding value.\\
Figure~\ref{figure:teff_dis2star} shows which hypothetical Earth-like planets around M dwarfs within 15~pc would have a detectable feature for a single transit observation using JWST. To calculate the S/N of each spectral feature we used the modelled atmosphere of the planet around the M dwarf with the most similar effective temperature (see Table \ref{table:stars}). For 36 of the 915 hypothetical M-dwarf planets at least one spectral feature would be detectable with a single transit observation. The saturation of the NIRSpec HRS limits the detectability of the planetary atmosphere to stellar effective temperatures lower than about 3300~K and distances larger than about 8~pc. Bright targets can only be observed at longer wavelengths with MIRI. Proxima~Centauri and Barnard's~star, which are the two closest M dwarfs, are too bright for NIRSpec HRS. The H$_2$O feature at 6.3~$\mu$m of a hypothetical transiting Earth-like planet around these stars would be detectable using MIRI. For the planets around the late-type M dwarfs Wolf~359 and DX~Cancri, CO$_2$, H$_2$O, and CH$_4$ would be detectable with a single transit. Because of the large CH$_4$ abundances, the detection of CH$_4$ in the atmosphere of a hypothetical Earth-like planet around mid- to late-type M dwarfs would be possible up to about 10~pc. Three of the 36 M dwarfs have confirmed planets found by radial velocity measurements (Proxima Centauri, Ross~128, and YZ~Ceti). Grey dots show the S/N of the strongest spectral feature. We find 267 hypothetical planets with an S/N of at least 3 for one or more spectral features, including two host stars with confirmed transiting planets (TRAPPIST-1 and GJ~1132) and two early M dwarfs, GJ~581 and Ross~605. For these planets, at least one spectral feature could be detected by co-adding 5 transits. \\
Table~\ref{table:ntr_transit_planets} shows the number of transits required to detect H$_2$O, CH$_4$, CO$_2$, and O$_3$ assuming an Earth-like planet around the 11 closest M dwarfs with a confirmed transiting planet\footnote{exoplanet.eu}. O$_3$ would not be detectable with JWST for any of the synthetic planets owing to the long observation time required. The detection of CH$_4$ would be possible with JWST in a reasonable amount of observation time for an Earth-like planet in the HZ around GJ~1132, TRAPPIST-1, GJ~1214, and LHS~1140.
\begin{table}
\centering
\caption{Number of transits, $N_\text{Tr}$, required to detect H$_2$O, CH$_4$, CO$_2$, and O$_3$ with JWST for an Earth-like planet in the HZ of the corresponding host star.
All values have been rounded up to the next integer.}
\label{table:ntr_transit_planets}
\begin{tabular}{l r r r r}
\hline\hline
Host star & H$_2$O & CH$_4$ & CO$_2$ & O$_3$ \\
\hline
GJ 436 & 31 & 10 & 30 & 209 \\
GJ 1132 & 14 & 4 & 13 & 142 \\
TRAPPIST-1 & 12 & 3 & 10 & 172 \\
GJ 1214 & 17 & 4 & 15 & 171 \\
LHS 1140 & 25 & 5 & 21 & 354 \\
GJ 3470 & 91 & 17 & 80 & 792 \\
NLTT 41135 & 47 & 9 & 41 & 711 \\
K2-18 & 93 & 18 & 80 & 824 \\
LHS 6343 & 167 & 33 & 140 & 1332 \\
Kepler-42 & 168 & 30 & 106 & 2880 \\
K2-25 & 233 & 42 & 184 & 5887 \\
\hline
\end{tabular}
\end{table}
\section{Summary and conclusions}
We use a range of stellar spectra to investigate the influence of the SED on the atmospheres of Earth-like planets in the HZ around M dwarfs and their detectability by JWST via transmission spectroscopy. The simulations of M-dwarf planets show an increase of CH$_4$, H$_2$O, and CO$_2$ in the middle atmosphere compared to the Earth around the Sun. The abundances of O$_3$ increase for planets around active M dwarfs and decrease for planets around inactive M dwarfs. The results of the atmosphere simulations are in agreement with previous model studies \citep{segura2005,rauer2011,grenfell2013,grenfell2014,rugheimer2015}. For Earth-like planets around early- to mid-type M dwarfs the stellar insolation needs to be downscaled to reproduce the surface temperature of the Earth \citep[see also][]{segura2005}. For late M-dwarf planets, like the hypothetical Earth-like planet around TRAPPIST-1, we find that the chemistry feedback requires an upscaling of the stellar insolation to reach the surface temperature of the Earth.\\
One of the main goals of this study was to investigate the detectability of spectral features of the simulated atmospheres. We use a sophisticated S/N model including the readout noise contribution and consideration of the saturation limits and the duty cycle for each target. Although the model is applicable to any current or future telescope, we limit our analysis to transmission spectroscopy with JWST. We verify the results of our S/N model with the JWST ETC, Pandeia \citep{pontoppidan2016}. We calculate the optimal central wavelengths and bandwidths to detect the main spectral features of an Earth-like planet. \\
The detectability of atmospheric features of Earth-like planets around mid- to late-type M dwarfs is much higher than for planets around early M dwarfs. The saturation limits of JWST allow the observation of almost all mid- to late-type M dwarfs in the solar neighbourhood at short wavelengths. For an early-type M dwarf at 10~pc, the effective integration time can be reduced by up to 50\% owing to detector readouts. In comparison, for an observation of TRAPPIST-1 the detector readout time is just a few percent of the full observation time when using NIRSpec MRS or HRS. Another advantage of mid- to late-type M-dwarf planets is the increased abundance of biosignature-related compounds like CH$_4$ and H$_2$O and the extended atmosphere due to warmer temperatures from the upper troposphere to the mesosphere. This allows a detection of CH$_4$ with only two transits and of H$_2$O and CO$_2$ with fewer than ten transits up to a distance of about 10~pc. \\
We find that H$_2$O, CO$_2$, and CH$_4$ could be detected in the atmosphere of a hypothetical Earth-like planet around Wolf 359 and DX Cancri with only a single transit. Within 15~pc there are 267 M dwarfs for which detection of at least one spectral feature would be possible by co-adding just a few transits.
For hypothetical Earth-like planets around GJ~1132, TRAPPIST-1, GJ~1214, and LHS~1140 a detection of CH$_4$ would require less than ten transits. The predictions of \citet{barclay2018} show that terrestrial planets around mid- to late-type M dwarfs within 15~pc are expected to be found with TESS. We conclude that atmospheric features of these planets could be partly characterised with JWST.
\begin{acknowledgements}
This research was supported by DFG projects RA-714/7-1, GO 2610/1-1 and SCHR 1125/3-1. We acknowledge the support of the DFG priority programme SPP 1992 "Exploring the Diversity of Extrasolar Planets (GO 2610/2-1)". We thank the anonymous referee for the helpful and constructive comments.
\end{acknowledgements}
\begin{appendix}
\section{JWST configurations}
\label{sec:appendix}
Currently operating telescopes are likely unable to detect spectral features of an Earth-like planet within a reasonable amount of observation time. This paper aims to investigate whether JWST will be able to detect biosignatures and related compounds within a few transits using transmission spectroscopy. The JWST will provide a telescope aperture of 6.5~m diameter and a collecting area of 25.4~m$^2$ \citep{contos2006}. For our science case we chose the observing modes of NIRCam, NIRSpec, and MIRI, which are applicable to a single bright object.
\subsection{NIRSpec}
\label{sec:appendix_nirspec}
The Bright Object Time Series (BOTS) mode is designed for time-resolved exoplanet transit spectroscopy of bright targets \citep{ferruit2012,ferruit2014}. The NIRSpec BOTS uses a fixed slit (FS) mode with the S1600A1 aperture (1.6~$\times$~1.6 arcsec slit) \citep{beichman2014,birkmann2016a}. For the NIRSpec modes there are three sets of disperser and filter combinations available. For low resolution spectroscopy the PRISM, with a resolving power of $\sim$100, covers the full NIRSpec range of 0.6~-~5.3~$\mu$m. Medium and high resolution spectroscopy each provide four different disperser-filter combinations, with mean resolving powers of $\sim$1,000 and $\sim$2,700, respectively (medium resolution: G140M/F070LP, G140M/F100LP, G235M/F170LP, G395M/F290LP; high resolution: G140H/F070LP, G140H/F100LP, G235H/F170LP, G395H/F290LP). For high resolution observations a physical gap between the detectors results in gaps in the wavelength coverage. A subarray of 512~$\times$~32 pixels (SUB512) is used for the PRISM. For medium and high resolutions a subarray of 2048~$\times$~32 pixels (SUB2048) is used in order to cover the full spectrum. \\
To increase the brightness limits it is possible to observe only half of the spectrum (SUB1024A and SUB1024B). Using SUB2048, M dwarfs have a J-band brightness limit of 6.4~$\pm$~0.5 for the G140H and G235H dispersers and a limit of 5.8~$\pm$~0.5 for the G395H disperser \citep{nielsen2016}. In this study we also consider the 1024~$\times$~32 pixel subarrays when the brightness limit for SUB2048 is reached. \\
Each subarray has a different single frame time, which is the same as the readout time. The frame time for SUB512 is 0.226~s, for SUB1024A and SUB1024B 0.451~s, and for SUB2048 0.902~s. It is recommended to use as many frames as possible per integration because the S/N does not simply scale with the number of frames owing to changes in the detector duty cycle \citep{rauscher2007,nielsen2016}. The duty cycle increases with an increasing number of frames. We use the frame time as the minimum exposure time and do not consider a partial saturation strategy as proposed by \citet{batalha2018}.
If one pixel is saturated in a shorter time than the single frame time, the target is considered not observable for this filter-disperser combination. \\ For detailed information about NIRSpec we refer to \citet{birkmann2016b} and the on-line documentation\footnote{jwst.stsci.edu/instrumentation/nirspec}. The wavelength ranges, resolving powers, and throughputs of each filter-disperser combination can be found in the on-line documentation, in \citet{birkmann2016a} and in \citet{beichman2014}. Other relevant information such as dark current (0.0092~e$^-$/s/pixel), pixel scale (0.1~arcsec/pixel), readout noise (6.6~e$^-$), and full well capacity (55,100~e$^-$) are taken from the on-line documentation. The dispersion curves and PSFs are described in \citet{perrin2012} and were downloaded from the documentation page.
\subsection{NIRCam}
The NIRCam grism time-series mode allows for simultaneous observations at short wavelengths (1.3~$\mu$m to 2.3~$\mu$m) and longer wavelengths (2.4~$\mu$m to 5.0~$\mu$m) \citep{beichman2014}. To observe bright targets at short wavelengths, a weak lens is used to defocus incoming light. The expected saturation limit change for the strongest weak lens with 8 waves of defocus (WLP8) in a subarray of 160~$\times$~160 pixel is 6.5 magnitudes \citep{greene2010}. For the medium filter centred at 1.4~$\mu$m this means a saturation limit of $\sim$4 at K band for a G2V star, assuming 80\% of the full well capacity (105,750~e$^-$). The readout time and the minimum exposure time are 0.27864~s. We use all medium (R~$\approx$~10) and wide (R~$\approx$~4) filters available for the weak lens mode for our S/N calculations (F140M, F150W, F182M, F210M, F200W). Further information about the NIRCam weak lens mode such as dark current (0.0019~e$^-$/s/pixel), pixel scale (0.031 arcsec/pixel), readout noise (16.2~e$^-$), and full well capacity (105,750~e$^-$) is given in \citet{greene2010} and the NIRCam documentation\footnote{jwst.stsci.edu/instrumentation/nircam}. \\ At longer wavelengths a grism is used for slitless spectroscopy between 2.5~$\mu$m and 5.0~$\mu$m. The wide filters (F277W, F322W2, F356W, F444W) have a mean resolving power of $\sim$1,600 and a dispersion of 1~nm/pixel \citep{greene2016,greene2017}. We use the smallest grism subarray of 64~$\times$~2048 pixel and a frame and readout time of 0.34061~s. For an M2V star the brightness limit at K band is $\sim$4. Information on the wavelength dependent filter throughput and resolving power can be found in \citet{greene2017}. The PSFs of \citet{perrin2012} were downloaded from the PSF library\footnote{stsci.edu/$\sim$mperrin/software/psf\_library/}. Other specifications like dark current (0.027~e$^-$/s/pixel), pixel scale (0.063~arcsec/pixel), readout noise (13.5~e$^-$), and full well capacity (83,300~e$^-$) are taken from the on-line documentation.
\subsection{MIRI}
The JWST Mid-Infrared Instrument (MIRI) provides medium resolution spectroscopy (MRS) from 4.9~$\mu$m to 29.8~$\mu$m and low resolution slit and slitless spectroscopy (LRS) from 5~$\mu$m to 12~$\mu$m \citep{wright2015,rieke2015}. The MRS mode has four channels with three sub-bands each. The resolving power is 3,710 at short wavelengths and decreases to 1,330 at long wavelengths \citep{wells2015}. The brightness limits for MIRI MRS are around magnitude 4 at K band for late-type host stars for a two-frame integration \citep{beichman2014,glasse2015}. For MRS there is no subarray available.
The full size is 1024~$\times$~1032 pixels, which results in a frame time and readout time of 2.775~s. \\ The LRS slitless mode provides the possibility to read out a subarray of 416~$\times$~72 pixels. This decreases the frame and readout time to 0.159~s. Hence, despite the much lower resolving power of $\sim$100 compared to MRS \citep{kendrew2015}, the brightness limit is just 2 magnitudes fainter at K band \citep[magnitude 6 at K band;][]{beichman2014,glasse2015}. For targets fainter than magnitude 6 at K band, we always use LRS owing to its better throughput of around a factor of 3 up to 10~$\mu$m compared to MRS \citep[][\footnote{http://ircamera.as.arizona.edu/MIRI/pces.htm}]{rieke2015b,glasse2015}. Other specifications like dark current (0.2~e$^-$/s/pixel), pixel scale (0.11~arcsec/pixel), readout noise (14~e$^-$), and full well capacity (250,000~e$^-$) are taken from the on-line documentation\footnote{https://jwst.stsci.edu/instrumentation/miri} and from references therein.
\end{appendix} \bibliographystyle{aa}
\section{On the Latent Separability of Backdoor Poison Samples} \label{sec:latent_separability_and_adaptive_backdoor_poisoning_attack} \input{sections/assets/visualization_latent_compare_seed_666}
\subsection{Latent Separability} \label{subsec:latent_separability}
One principled idea for detecting backdoor poison samples is to utilize the backdoored models' distinguishable behaviors on poison and clean populations to distinguish between these two populations themselves. \textbf{Arguably, the most popular and successful characteristic is the latent separability phenomenon} first observed by \citet{tran2018spectral}. The basic observation is that backdoored models trained on poisoned datasets tend to learn separable latent representations for poison and clean samples --- thus poison samples and clean samples can be separated via a cluster analysis in the latent representation space of the backdoored models. Commonly used backdoor samples detectors~(e.g. Spectral Signature~\cite{tran2018spectral}, Activation Clustering~\cite{chen2018activationclustering}) and some recently published state-of-the-art work~(e.g. SCAn~\cite{tang2021demon}, SPECTRE~\cite{hayase21a}) are consistently built on this characteristic. Besides latent separability, several other heuristic characteristics~(e.g. \cite{chou2020sentinet,gao2019strip}) have also been reported, but none of them is as successful as latent separability in terms of backdoor samples detection. For example, Sentinet~\cite{chou2020sentinet} is designed only for localized triggers, while Strip~\cite{gao2019strip} is not effective when trigger patterns are more complicated or when backdoor correlations are less dominant~\cite{tang2021demon}. As a sanity check of the latent separability characteristic, we train a group of backdoored models with a diverse set of poison strategies~\cite{gu2017badnets,Chen2017TargetedBA,nguyen2020input,barni2019new,turner2019label,tang2021demon} on CIFAR10, and visualize~(see Appendix~\ref{appendix:visualization_of_latent_space} for detailed configurations) their latent representation space (of the target class) in Figure~\ref{fig:vis_latent_space_compare}. Although some information is lost due to the dimension reduction, we can still see that \textcolor{red}{poison} and \textcolor{blue}{clean} samples consistently exhibit different distributions in all the considered cases from Figure~\ref{fig:vis_badnet} to Figure~\ref{fig:vis_TaCT}. This is consistent with the observations reported by prior arts~\cite{tran2018spectral,chen2018activationclustering,tang2021demon,hayase21a} and also accounts for the effectiveness of backdoor samples detectors built on latent space cluster analysis. On the other hand, one aspect that has not been well studied by previous work is that \textbf{the extent of the latent separation can vary a lot}~(at least in the low-dimensional PCA space) across different poison strategies and training configurations. For example, as can be observed in Figure~\ref{fig:vis_latent_space_compare}, poison samples of BadNet~(Figure~\ref{fig:vis_badnet}) exhibit stronger latent separation~(less fusion) than those of Blend~(Figure~\ref{fig:vis_blend}). Meanwhile, models trained with~(right-hand plot) and without~(left-hand plot) data augmentation also lead to different extents of separation --- models trained with data augmentation usually lead to stronger separation~(e.g.
Figure~\ref{fig:vis_SIG}, \ref{fig:vis_clean_label}, \ref{fig:vis_TaCT}) than models trained without data augmentation; however, the trend can also be reversed~(e.g. Figure~\ref{fig:vis_blend}). Note that, for the visualization results, we are also aware of the randomness that arises from the stochastic nature of model training. To assure readers that the variations across the different cases discussed above are not simply random noise, we repeat the same experiments three times with different random seeds~(see Figure~\ref{fig:vis_latent_space_compare_repeat} in Appendix~\ref{appendix:visualization_of_latent_space}), and all three groups of results generated with different random seeds are qualitatively consistent. This indicates that the variations discussed above are indeed a reflection of inherent properties of these different cases rather than random noise. As we will see later in Section~\ref{sec:experiments}, the varying extent of separation directly results in varying performance of many backdoor samples detectors built on the separation characteristic --- while they can work well against some poison strategies, \textbf{their performance can be less satisfactory against some others}.
\subsection{Adaptive Backdoor Poisoning Attacks}
Motivated by the varying extent of separation, perhaps a more fundamental question to ask is: \textit{\textbf{Is the latent separability between poison and clean populations really an unavoidable characteristic of backdoored models?}} The implication of this question is that motivated adversaries might attempt to design \textit{adaptive backdoor attacks} such that the backdoored models behave indistinguishably on both clean and poison samples. Such \textit{potential} adaptive attacks can pose a fundamental challenge to existing backdoor samples detectors, because they completely overturn the principal assumption~(that distinguishable behaviors should exist) underlying their designs. Answers to this question depend on the specific threat models and defense settings we consider. Under a strong threat model where adversaries can fully control the training process, a series of recent works~\cite{shokri2020bypassing,xia2021statistical,doan2021backdoor,ren2021simtrojan,cheng2020deep,zhong2022imperceptible} show that the latent representations of poison and clean samples can be made indistinguishable by explicitly encoding the indistinguishability objective into the training loss of the backdoored model. On the other hand, for the weaker data-poisoning-based threat model that we consider in this work, the problem appears to be harder, because there is still a huge gap in understanding why a deep model tends to learn separate latent representations for backdoor poison samples. One very recent work by \citet{qi2022circumventing} looks into this problem. Motivated by some heuristic insights, they propose two principled adaptive poisoning strategies that can empirically suppress the separation characteristic in the latent space of victim models. The basic idea underlying their adaptive strategies is to introduce a set of ``cover'' samples~(different from poison samples), which also contain backdoor triggers but are still correctly labeled with their semantic ground truth (rather than the target class). Intuitively, these ``cover'' samples work as regularizers that can penalize models for learning overwhelmingly strong backdoor signals in the latent representation space.
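To make this construction concrete, the following Python sketch illustrates a payload-plus-cover poisoning set in the spirit of these adaptive strategies. It is a simplified illustration under hypothetical names and rates~(\texttt{blend}, \texttt{blend\_trigger}, $\alpha$), not the exact configuration of \citet{qi2022circumventing}:
\begin{verbatim}
# Illustrative construction of an adaptive poisoned set with "cover" samples.
import random

def blend(x, trigger, alpha):
    # simple convex combination of an image and a trigger pattern (float arrays)
    return (1 - alpha) * x + alpha * trigger

def build_adaptive_poison_set(dataset, target_class, poison_rate, cover_rate,
                              blend_trigger, alpha=0.2):
    """dataset: list of (image, label) pairs; returns a poisoned copy."""
    n = len(dataset)
    chosen = random.sample(range(n), int((poison_rate + cover_rate) * n))
    payload = set(chosen[: int(poison_rate * n)])   # trigger + target label
    cover = set(chosen[int(poison_rate * n):])      # trigger + original label

    poisoned = []
    for i, (x, y) in enumerate(dataset):
        if i in payload:
            poisoned.append((blend(x, blend_trigger, alpha), target_class))
        elif i in cover:   # correctly labeled "cover" sample acts as a regularizer
            poisoned.append((blend(x, blend_trigger, alpha), y))
        else:
            poisoned.append((x, y))
    return poisoned
\end{verbatim}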
In this work, we consider two adaptive strategies, namely adaptive-blend and adaptive-k, suggested in their work. Figure~\ref{fig:vis_adaptive_blend} and Figure~\ref{fig:vis_adaptive_k} give a straightforward sense of their ``adaptiveness''. In Section~\ref{sec:experiments}, we will also see that \textbf{the performance of existing detectors can degrade catastrophically against these adaptive backdoor poisoning attacks}.
\section{Full Experiment Results} \label{appendix:full_exp_results}
\subsection{Full Results of the Three Rounds of Evaluations}
Tables~\ref{tab:cifar_confusion_training_666}-\ref{tab:cifar_confusion_training_2333} and Tables~\ref{tab:gtsrb_confusion_training_666}-\ref{tab:gtsrb_confusion_training_2333} show our three repeated experiments on CIFAR10 and GTSRB, respectively. As shown, our Confusion Training consistently removes all (or sufficiently many) poison training samples, and all models retrained on our cleansed sets are backdoor-free. \input{sections/assets/cifar_confusion_training_combined} \input{sections/assets/gtsrb_confusion_training}
\subsection{Effects of Data Augmentation} \label{appendix_subsec:effects_of_data_augmentation}
Tables~\ref{tab:cifar_aug_vs_no_aug_average}-\ref{tab:cifar_aug_vs_no_aug_2333} show the effects of data augmentation on passive defenses that are based on already trained models. Such passive backdoor defenses suffer from instability, whether data augmentation is used or not. Passive defenders do not know in advance whether data augmentation would benefit their defenses --- data augmentation is preferable (for defenders) in some cases, while the opposite holds in other cases. \input{sections/assets/cifar_aug_vs_no_aug}
\section{Conclusion}
In this work, we look into the problem of backdoor poison samples detection as a defense against backdoor poisoning attacks. We analyze limitations of prior arts on this problem, including inconsistent performance across different settings and susceptibility to adaptive poison strategies. We attribute these limitations to the passive defense paradigm that these prior arts follow --- they passively assume backdoored models will naturally have distinguishable behaviors on poison and clean populations. As an alternative, we propose the methodology of active defense, which suggests that one should actively enforce the models trained on the poisoned set to behave differently on the two populations. To illustrate this methodology, we introduce the technique of confusion training as a concrete instance. Confusion training introduces a dynamic confusion set as another strong poison to interrupt models' fitting on clean samples, such that the trained models only fit poison samples while clean samples are underfitted. This naturally induces a separation between poison samples and clean samples. Then, we compare our confusion training pipeline with other baseline defenses across different poison strategies and datasets, and validate the superiority of our method and the active defense methodology underlying its design. With this work, we point out that active defense is a promising direction for mitigating backdoor poisoning attacks. We encourage future work to incorporate this idea into their designs.
\section{Theoretical Formulation}
\subsection{A Theoretical Modeling of Backdoor Poisoning Attacks}
Consider a benign linear model \begin{align} & Y_b = {\bold{x}_b}^T \beta_b + \epsilon,\label{eqn:benign_full} \end{align} where $\beta_b \in \mathbb{R}^p$ is the true parameter of the benign model, and $\epsilon \in \mathbb{R}$ is a homoscedastic noise term that does not depend on $\bold x_b$, with $\mathbb{E}(\epsilon) = 0$. In this regression model, only a part of the covariates in $\bold{x}_b$ contributes to $Y_b$. Formally, we denote by $S\subset \{1,2,...,p\}$ the contributing set and by $S^C = \{1,2,...,p\} \setminus S$ its complement: $\forall j \in S : (\beta_b)_j \ne 0$ and $\forall j \in S^C : (\beta_b)_j = 0$. Without loss of generality, we suppose $S = \{1,2,...,|S|\}$. We use $({\bold x}_b)_S$ to denote the contributing covariates and $(\beta_b)_S$ to denote the corresponding coefficients. Likewise, the non-contributing part can be represented by $({\bold x}_b)_{S^C}$ and $(\beta_b)_{S^C}$. Further, we assume $({\bold x}_b)_{S}$ and $({\bold x}_b)_{S^C}$ are independent and $\mathbb{E}(\bold x_b) = 0$. So, the regression model can be further written as: \begin{align} & Y_b = ({\bold x}_b)^T_S (\beta_b)_S + \epsilon \label{eqn:benign_reduced} \\ & Var({\bold x}_b) = \begin{bmatrix} \Sigma_S & \\ & \Sigma_{S^C}\end{bmatrix},\ \mathbb{E}(\bold x_b) = 0 \label{eqn:benign_cov} \end{align} On the other hand, we construct a poison model where $S^C$ instead is the contributing set: \begin{align} & Y_a = {\bold{x}_a}^T \beta_a + \epsilon = ({\bold x}_a)^T_{S^C}(\beta_a)_{S^C} + \epsilon, \label{eqn:adv_full}\\ & Var({\bold x}_a) = \begin{bmatrix} k^{-1} \Sigma_S &\\ & k \Sigma_{S^C}\end{bmatrix}, \quad k > 1 \label{eqn:adv_cov}\\ & \mathbb{E}(\bold x_a) = 0 \label{eqn:adv_mean} \end{align} \underline{This mimics the last layer of a deep learning model under backdoor attacks}, where the learned representation ${\bold x}_a$ of a poison sample has an amplified backdoor signal $({\bold x}_a)_{S^C}$ while the semantic feature $({\bold x}_a)_{S}$ is suppressed (as commonly observed~\cite{tran2018spectral,chen2018activationclustering}), and the poison data model creates artificial backdoor correlations between the backdoor covariates $({\bold x}_a)_{S^C}$ and the adversarial label $Y_a$. To further model the intuition that the backdoor signal is negligible in benign correlations while dominating in backdoor correlations, we impose the following additional conditions: \begin{itemize} \item \textit{Benign Condition}: $\frac{\mathbb E (\|\bold x_b^T \beta_b\|^2)}{\mathbb E (\|\bold x_b^T \beta_a\|^2)} \ge \mathcal{K}_1$ for some large $\mathcal{K}_1 > 0$, which is equivalent to \begin{align} & \frac{(\beta_b)_S^T \Sigma_S (\beta_b)_S}{(\beta_a)_{S^C}^T \Sigma_{S^C} (\beta_a)_{S^C}} \ge \mathcal{K}_1 \label{eqn:benign_codition} \end{align} \item \textit{Backdoor Condition}: $\frac{\mathbb E (\|\bold x_a^T \beta_a\|^2)}{\mathbb E (\|\bold x_a^T \beta_b\|^2)} \ge \mathcal{K}_2$ for some large $\mathcal{K}_2 > 0$, which is equivalent to \begin{align} & \frac{k\cdot (\beta_a)_{S^C}^T \Sigma_{S^C} (\beta_a)_{S^C}}{k^{-1} \cdot (\beta_b)_{S}^T \Sigma_{S} (\beta_b)_{S}} \ge \mathcal{K}_2 \label{eqn:poison_codition} \end{align} \end{itemize} \begin{lemma}\label{lemma:magnitude_of_k} A poison model that satisfies both the benign condition and the backdoor condition must have $k \ge \sqrt{\mathcal{K}_1 \mathcal{K}_2}$. Proof.
\begin{align} & k^2 \mathop{\ge}_{\text{by eqn~\ref{eqn:poison_codition}}} \mathcal{K}_2 \frac{(\beta_b)_S^T \Sigma_S (\beta_b)_S}{(\beta_a)_{S^C}^T \Sigma_{S^C} (\beta_a)_{S^C}} \mathop{\ge}_{\text{by eqn~\ref{eqn:benign_codition}}} \mathcal{K}_1\mathcal{K}_2 & \square \end{align} \end{lemma} Now, consider a poisoned dataset $\{(X_i,Y_i)\}_{i=1}^n$ with a poison rate of $t$. Without loss of generality, the first $(1-t)n$ samples are benign samples generated by the benign model while the last $tn$ samples are poison samples generated by the poison model. Denote the design matrix $X = [X_1,...,X_n]^T$, the response vector $Y = [Y_1,...,Y_n]^T$ and the independent and homoscedastic noise term $\epsilon = [\epsilon_1, ..., \epsilon_n]^T$. By the definition of the data model, \begin{align} & Y = \begin{bmatrix} X_{1:(1-t)n}\beta_b \\ X_{(1-t)n:n}\beta_a\end{bmatrix} + \epsilon, \label{eqn:generated_y} \end{align} where $X_{p:q}$ denotes the slice of the matrix $X$ from row $p$ to row $q$. Conducting least squares regression on the poisoned dataset, we obtain the following estimator $\hat{\beta}$: \begin{align} & \hat{\beta} = (X^TX)^{-1} X^T Y, \label{eqn:normal_equation} \end{align} \begin{theorem}[Plausibility of the Modeling]\label{theorem:plausibility} When $n$~(the size of the training set) is sufficiently large, least squares regression on the poisoned set gives rise to: \begin{align} & \hat{\beta} \approx \begin{bmatrix} [tk^{-1} + (1-t)]^{-1} (1-t) \cdot (\beta_b)_S \\ [tk+(1-t)]^{-1} k t \cdot (\beta_a)_{S^C}\end{bmatrix}, \label{eqn:least_square_estimator} \end{align} moreover, the poisoned least squares estimator $\hat{\beta}$ satisfies: \begin{itemize} \item \textit{Bounded Clean Performance Drop} \begin{align} & \frac{\mathbb{E}(\| \bold{x}_b^T \hat{\beta} - \bold{x}_b^T \beta_b \|^2)}{Var(\bold x_b^T\beta_b)} \le \Bigg(\frac{t/k}{t/k+(1-t)}\Bigg)^2 + \Bigg(\frac{tk}{tk+(1-t)}\Bigg)^2 \cdot \mathcal{K}_1^{-1} \label{eqn:benign_error_bound} \end{align} \item \textit{Nontrivial Backdoor Property} \begin{align} & \frac{\mathbb{E}(\| \bold{x}_a^T \hat{\beta} - \bold{x}_a^T \beta_a \|^2)}{Var(\bold x_a^T \beta_a)} \le \Bigg(\frac{1-t}{tk+(1-t)}\Bigg)^2 + \Bigg(\frac{1-t}{t/k+(1-t)}\Bigg)^2 \cdot \mathcal{K}_2^{-1} \label{eqn:backdoor_error_bound} \end{align} \end{itemize} (See Appendix~\ref{appendix:proof_of_theorem_1} for the proof.) \end{theorem} Theorem~\ref{theorem:plausibility} shows the plausibility of our modeling of backdoor poisoning attacks above. The bound in inequality~\ref{eqn:benign_error_bound} indicates that the rate of clean performance drop is of magnitude $\mathcal{O}\big( t^2/k^2 + \mathcal{K}_1^{-1}\big)$. Since $\mathcal{K}_1$ is a large positive number by the benign condition~\ref{eqn:benign_codition}, $t$ is a small poison rate, and $k\ge \sqrt{\mathcal{K}_1\mathcal{K}_2}$ is large, $\frac{\mathbb{E}(\| \bold{x}_b^T \hat{\beta} - \bold{x}_b^T \beta_b \|^2)}{Var(\bold x_b^T\beta_b)}$ should be considerably small --- \textbf{thus the fitted estimator $\hat{\beta}$ behaves as well as the oracle clean estimator $\beta_b$ on the benign data}. On the other hand, we know $k\ge \sqrt{\mathcal{K}_1\mathcal{K}_2}$ is considerably large. For $k \ge c(1-t)/t$, similar reasoning on property~\ref{eqn:backdoor_error_bound} shows that $\frac{\mathbb{E}(\| \bold{x}_a^T \hat{\beta} - \bold{x}_a^T \beta_a \|^2)}{Var(\bold x_a^T \beta_a)}$ is bounded by $\frac{1}{(1+c)^2} + \mathcal{K}_2^{-1}$.
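As a numerical sanity check of the approximation in eqn~(\ref{eqn:least_square_estimator}), the following NumPy sketch simulates the two data models with identity block covariances and compares the pooled least squares fit against the predicted coefficients. It is purely illustrative: all parameter values are hypothetical and it is not part of the formal argument.
\begin{verbatim}
# Numerical check of the poisoned least-squares approximation (illustrative).
import numpy as np

rng = np.random.default_rng(0)
p, s = 20, 10                   # total covariates, |S| contributing ones
n, t, k = 200000, 0.01, 50.0    # sample size, poison rate, amplification k

beta_b = np.zeros(p); beta_b[:s] = rng.normal(size=s)            # support S
beta_a = np.zeros(p); beta_a[s:] = 0.1 * rng.normal(size=p - s)  # support S^C

n_poison = int(t * n)
Xb = rng.normal(size=(n - n_poison, p))        # benign covariates, Sigma = I
Xa = rng.normal(size=(n_poison, p))            # poison covariates
Xa[:, :s] /= np.sqrt(k)                        # semantic block shrunk by 1/k
Xa[:, s:] *= np.sqrt(k)                        # backdoor block amplified by k

X = np.vstack([Xb, Xa])
Y = np.concatenate([Xb @ beta_b, Xa @ beta_a]) + 0.1 * rng.normal(size=n)
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

pred_S  = (1 - t) / (t / k + (1 - t)) * beta_b[:s]      # predicted S block
pred_Sc = k * t / (t * k + (1 - t)) * beta_a[s:]        # predicted S^C block
print(np.abs(beta_hat[:s] - pred_S).max(), np.abs(beta_hat[s:] - pred_Sc).max())
\end{verbatim}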
These bounds indicate that \textbf{the model fitted on the poisoned set will capture a nontrivial portion of the backdoor correlations}.
\subsection{Decouple Benign Correlations}
Suppose we have another set of clean samples $\{(\tilde{X}_l, \tilde{Y}_l)\}_{l=1}^L$ i.i.d. sampled from the clean data model; we use it to generate a confusion set $\{(\tilde{X}_l, \tilde{Y}_r): l,r \in \{1,...,L\}\}$. We use $\tilde{X}$, $\tilde{Y}$ to denote the design matrix and response vector of the confusion set. It is easy to verify that covariates in the confusion set still follow the clean distribution: \begin{align} & L^{-2} \tilde{X}^T\tilde{X} = L^{-2}\sum_{l=1}^L\sum_{r=1}^L \tilde{X}_l \tilde{X}_l^T \approx Var(\bold x_b) = \begin{bmatrix} \Sigma_S & \\ & \Sigma_{S^C}\end{bmatrix}, \end{align} however, the correlations between $\bold x$ and $Y$ are thoroughly decoupled: \begin{align} & L^{-2}\tilde{X}^T\tilde{Y} = L^{-2}\sum_{l=1}^L\sum_{r=1}^L \tilde{X}_l \cdot \tilde{Y}_r \approx \mathbb{E}(\bold x_b)\, \mathbb{E}(Y_b) = 0 \end{align} We perform weighted least squares regression jointly on the poisoned set $\{(X_i,Y_i)\}_{i=1}^n$ and the confusion set $\{(\tilde{X}_l, \tilde{Y}_r): l,r \in \{1,...,L\}\}$: \begin{align} & \tilde{\beta} = \mathop{\arg\min}_{\beta}\ (1-w)\cdot n^{-1}\|Y - X\beta\|_2^2 + w\cdot L^{-2}\|\tilde{Y} - \tilde{X}\beta\|_2^2, \end{align} where a weight of $w \in (0,1)$ is assigned to the confusion set and a weight of $(1-w)$ is assigned to the poisoned set. Applying the normal equations, it is easy to derive the resulting estimator (for sufficiently large $n$ and $L$): \begin{align} \tilde{\beta} & = \Bigg( \frac{1-w}{n} X^TX + \frac{w}{L^2} \tilde{X}^T\tilde{X}\Bigg)^{-1} \Bigg(\frac{1-w}{n}X^TY + \frac{w}{L^2}\tilde{X}^T\tilde{Y}\Bigg) \\ & \approx \begin{bmatrix} k^{-1}[t(1-w)(1-k)+k] \Sigma_S & \\ & k[t(1-w)(1-k^{-1})+k^{-1}] \Sigma_{S^C}\end{bmatrix}^{-1} \\ & \qquad \cdot\, (1-w)\Big[ (1-t) Var(\bold x_b) \beta_b + t Var(\bold x_a) \beta_a \Big] \\ & = \begin{bmatrix} \frac{(1-w)(1-t)}{(1-w)[(1-t)+t/k]+w}\, (\beta_b)_S \\ \frac{1}{(1-k^{-1}) + [tk(1-w)]^{-1}}\,(\beta_a)_{S^C} \end{bmatrix} \end{align} Denote $\delta_1:= \frac{(1-w)(1-t)}{(1-w)[(1-t)+t/k]+w}$, $\delta_2 := \frac{1}{(1-k^{-1}) + [tk(1-w)]^{-1}}$ and $\mathcal{C} = {Var(\bold x_b^T \beta_b)}/{Var(\bold x_a^T \beta_a)}$, \begin{align} & \frac{\mathbb{E}(\| \bold{x}_b^T \tilde{\beta} - \bold{x}_b^T \beta_b \|^2)}{\mathbb{E}(\| \bold{x}_a^T \tilde{\beta} - \bold{x}_a^T \beta_a \|^2)} = \frac{ (\tilde{\beta} - \beta_b)^T Var(\bold x_b) (\tilde{\beta} - \beta_b) }{ (\tilde{\beta} - \beta_a)^T Var(\bold x_a) (\tilde{\beta} - \beta_a) }\\ & = \frac{(\delta_1 -1)^2\mathcal{C} + \delta_2^2/k}{k^{-1}\delta_1^2 \mathcal{C} + (\delta_2-1)^2} \approx \frac{(\delta_1 -1)^2}{(\delta_2 -1)^2} \mathcal{C} \end{align} where the last approximation drops the two terms that carry a $k^{-1}$ factor.
\section{Detailed Configurations of Our Experiments} \label{appendix:detailed_configurations}
\subsection{Computation Resources and Datasets} \label{appendix_subsec:computation_resources}
\paragraph{Computation Environment.} All of our experiments are conducted on a workstation with 48 Intel Xeon Silver 4214 CPU cores, 384 GB RAM, and 8 GeForce RTX 2080 Ti GPUs.
\paragraph{Amounts of Computation.} For \underline{baseline evaluations}, two models are trained~(w/ and w/o data augmentation) on the cleansed dataset for each (attack, defense) pair. Thus, for each run, $6\times9\times 2 = 108$ models are trained for CIFAR10 and $6 \times 8 \times 2 = 96$ models are trained for GTSRB.
Since all experiments are repeated three times with different random seeds, in total 612 models are trained for the baseline evaluations. With a single GPU~(GeForce RTX 2080 Ti), roughly 50 minutes are needed for training one model on CIFAR10~(200 epochs), and 15 minutes are needed for GTSRB~(100 epochs) --- about \textbf{342 GPU hours in total}. For \underline{confusion training}, we first launch the defense pipeline for each attack. With a single GPU, training an inference model for the defense roughly takes 50 minutes for CIFAR10 and 35 minutes for GTSRB. Similarly, after datasets are cleansed by the confusion training pipeline, we also train two base models~(w/ and w/o data augmentation) on the cleansed set for each run. The costs for these base models are the same as those for the baseline defenses. In total, to evaluate confusion training, \textbf{about $67.5$ GPU hours are needed for CIFAR10 and $26$ GPU hours for GTSRB}.
\paragraph{Datasets.} We use two public datasets in our evaluations --- CIFAR10~\cite{krizhevsky2009learning} and GTSRB~\cite{stallkamp2012man}. CIFAR10 is under MIT license, while GTSRB is licensed under the CC0 1.0 Universal (CC0 1.0) Public Domain Dedication. For each dataset, we first follow the default training/test set split by Torchvision~\cite{marcel2010torchvision}. Recall that our defender is assumed to have a small reserved clean set at hand~(as defined in Section~\ref{subsec:goal_and_capability_of_our_defense}). To implement this setting, for both datasets, we further randomly pick \underline{2000} samples from the test split to simulate the reserved clean set, and leave the rest of the test split for evaluation.
\subsection{Detailed Configurations for Training Backdoored Models} \label{appendix_subsec:details_training_backdoored_models}
We train models on both poisoned and cleansed datasets to evaluate the attack success rate~(ASR) and clean accuracy. To simulate the setting in which a victim trains a model on a poisoned dataset, we use a vanilla training procedure for all base models. Specifically, for all base models, we use ResNet20~\cite{he2016deep} as the architecture. SGD with a momentum of 0.9, a weight decay of $10^{-4}$, and a batch size of 128 is used for optimization. Initially, we set the learning rate to $0.1$. On CIFAR10, we follow the standard 200-epoch stochastic gradient descent procedure, and the learning rate is multiplied by a factor of $0.1$ at epochs $100$ and $150$. On GTSRB we use 100 epochs of training, and the learning rate is multiplied by $0.1$ at epochs $40$ and $80$.
\subsection{Detailed Configurations for Baseline Attacks} \label{appendix_subsec:details_baseline_attacks}
We consider eight different backdoor poisoning attacks in our evaluation. These attacks correspond to a diverse set of poisoning strategies. BadNet~\cite{gu2017badnets}~(Figure~\ref{fig:vis_badnet}) and Blend~\cite{Chen2017TargetedBA}~(Figure~\ref{fig:vis_blend}) correspond to typical all-to-one dirty-label attacks with a patch-like trigger and a blending-based trigger, respectively. Dynamic~\cite{nguyen2020input}~(Figure~\ref{fig:vis_dynamic}) corresponds to the strategy that uses sample-specific triggers in place of a single universal trigger. SIG~\cite{barni2019new}~(Figure~\ref{fig:vis_SIG}) and CL~\cite{turner2019label}~(Figure~\ref{fig:vis_clean_label}) correspond to two typical clean-label poisoning strategies. TaCT~\cite{tang2021demon}~(Figure~\ref{fig:vis_TaCT}) focuses on source-specific attacks.
Finally, Adaptive-Blend~(Figure~\ref{fig:vis_adaptive_blend}) and Adaptive-K~(Figure~\ref{fig:vis_adaptive_k}) are adaptive poisoning strategies suggested by \citet{qi2022circumventing} that can suppress the latent separability characteristic. On CIFAR10, all eight attacks are implemented and evaluated. On GTSRB, we omit CL since the original paper only releases a poison set for CIFAR10. When implementing these attacks, we follow the protocols and the suggested default configurations of their original papers and open-source implementations. In Table~\ref{tab:attack_config}, we summarize the hyperparameters that are used for each poison strategy. Specifically, \textit{Target Class} is the class that the backdoor trigger is correlated with, and \textit{Poison Rate} denotes the portion of training samples that are stamped with the trigger and relabeled to the target class. In addition, TaCT~\cite{tang2021demon} only chooses samples from a \textit{Source Class} to construct poison samples, and it also randomly picks a portion~(\textit{Cover Rate}) of samples from some \textit{Cover Classes}, plants triggers on them, and keeps them correctly labeled with their semantic labels. Adap-Blend and Adap-K~\cite{qi2022circumventing} similarly keep a portion~(\textit{Cover Rate}) of samples planted with triggers but still correctly labeled. For all the attacks we implement, we use the same trigger patterns as those suggested by their original papers.
\begin{table} \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{llccccccc} \toprule & \multicolumn{1}{c}{BadNet~\cite{gu2017badnets}} & \multicolumn{1}{c}{Blend~\cite{Chen2017TargetedBA}} & \multicolumn{1}{c}{Dynamic~\cite{nguyen2020input}} & \multicolumn{1}{c}{SIG~\cite{barni2019new}} & \multicolumn{1}{c}{CL~\cite{turner2019label}} & \multicolumn{1}{c}{TaCT~\cite{tang2021demon}} & \multicolumn{1}{c}{Adap-Blend~\cite{qi2022circumventing}} & \multicolumn{1}{c}{Adap-K~\cite{qi2022circumventing}} \cr \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \midrule \multirow{5}*{\shortstack{CIFAR10~\cite{krizhevsky2009learning}}} & \multirow{5}*{\shortstack{Target Class = 0\\ Poison Rate = $1\%$}} & \multirow{5}*{\shortstack{Target Class = 0\\ Poison Rate = $1\%$}} & \multirow{5}*{\shortstack{Target Class = 0\\ Poison Rate = $1\%$}} & \multirow{5}*{\shortstack{Target Class = 0\\ Poison Rate = $2\%$}} & \multirow{5}*{\shortstack{Target Class = 0\\ Poison Rate = $0.5\%$}} & \multirow{5}*{\shortstack{Target Class = 0\\ Poison Rate = $2\%$\\ Source Class = 1 \\ Cover Classes = 5,7 \\ Cover Rate = $1\%$}} & \multirow{5}*{\shortstack{Target Class = 0\\ Poison Rate = $0.5\%$\\ Cover Rate = $0.5\%$}} & \multirow{5}*{\shortstack{Target Class = 0\\ Poison Rate = $0.5\%$\\ Cover Rate = $0.5\%$}} \cr \cr \cr \cr \cr \midrule \multirow{5}*{\shortstack{GTSRB~\cite{stallkamp2012man}}} & \multirow{5}*{\shortstack{Target Class = 2\\ Poison Rate = $1\%$}} & \multirow{5}*{\shortstack{Target Class = 2\\ Poison Rate = $1\%$}} & \multirow{5}*{\shortstack{Target Class = 2\\ Poison Rate = $1\%$}} & \multirow{5}*{\shortstack{Target Class = 2\\ Poison Rate = $2\%$}} & \multirow{5}*{/} & \multirow{5}*{\shortstack{Target Class = 2\\ Poison Rate = $2\%$\\ Source Class = 1 \\ Cover Classes = 5,7 \\ Cover Rate = $1\%$}} & \multirow{5}*{\shortstack{Target Class = 2\\ Poison Rate = $0.5\%$\\ Cover Rate = $0.5\%$}} & \multirow{5}*{\shortstack{Target Class = 2\\
Poison Rate = $0.5\%$\\ Cover Rate = $0.5\%$}} \cr \cr \cr \cr \cr \bottomrule \end{tabular} } \caption{ \textbf{Hyperparameters of Baseline Attacks} } \label{tab:attack_config} \end{table}
\subsection{Detailed Configurations for Baseline Defenses} \label{appendix_subsec:details_baseline_defenses}
To illustrate the superiority of active defense, we compare confusion training with five prior arts that also work on the task of backdoor poison samples detection, including three typical techniques~(\cite{tran2018spectral,chen2018activationclustering,gao2019strip}) that are commonly considered in the literature and two very recent state-of-the-art methods~\cite{tang2021demon,hayase21a}. As has been mentioned at the beginning of this work, these prior arts all belong to the paradigm of passive defense --- all of them passively rely on certain ``distinguishable behaviors'' of backdoored models that they do not control. Specifically, Strip~\cite{gao2019strip} assumes that backdoored models' predictions on poison samples have less entropy under intentional perturbation, while the other four methods all rely on the latent separability characteristic that we introduce in Section~\ref{sec:latent_separability_and_adaptive_backdoor_poisoning_attack}. Spectral Signature~\cite{tran2018spectral} removes $1.5\cdot\rho_p$ suspected samples from every class, while SPECTRE~\cite{hayase21a} removes $1.5\cdot\rho_p$ suspected samples only from the class with the highest QUE score. Activation Clustering~\cite{chen2018activationclustering} cleanses classes with silhouette scores over a threshold (0.15 for CIFAR10 and 0.25 for GTSRB). SCAn~\cite{tang2021demon} is similar, with a threshold of $e$. Strip~\cite{gao2019strip} first estimates the entropy distribution of clean samples on a validation set, selects an entropy threshold with a 10\% false positive rate, and eventually removes all training samples with entropy below this threshold. Note that all these baseline defenses start from a base backdoored model~(generated as described in Appendix~\ref{appendix_subsec:details_training_backdoored_models}) and identify poison samples according to the behaviors of this base backdoored model.
\subsection{Detailed Configurations for Confusion Training} \label{appendix_subsec:details_confusion_training}
As mentioned in Appendix~\ref{appendix:confusion_training_protocol}, our defense pipeline works by iteratively running confusion training for multiple rounds, and in each round we rule out some samples from the training set to increase the density of poison samples. In our implementation, there are 4 rounds in total. In the first round, we use the full poisoned set for confusion training. In the second round, we only take the top $50\%$ of samples with the smallest loss values. In the third round, we only take the top $25\%$. In the final round, we only take the top $2000$ samples from the poisoned training set. Formally, in Algorithm~\ref{alg:inference_model_generation}, we take $r_1 = 0.5, r_2 = 0.25, r_3 = 2000/N$. For each round of confusion training, we initialize the model with the weights generated by the previous round. The model is then first roughly pretrained on the distilled poisoned set for 60 epochs. Intuitively, this sets up a good prior for the backdoor correlation. Then, another 4000 iterations of confusion training are applied to decouple the benign correlations. A large weight is assigned to the confusion batch, such that it dominates the learning of benign features; specifically, we take $\lambda = 24$ for CIFAR10 and $\lambda = 14$ for GTSRB.
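For concreteness, the following PyTorch-style sketch illustrates one confusion-training update and the loss-based filtering between rounds. It is a simplified illustration under hypothetical names~(\texttt{model}, \texttt{poisoned\_loader}, \texttt{clean\_pool}), not our exact pipeline, which is given in Algorithm~\ref{alg:inference_model_generation}:
\begin{verbatim}
# Simplified sketch of a confusion-training round (illustrative only).
import torch
import torch.nn.functional as F

def confusion_batch(clean_pool, batch_size, num_classes, batch_id, device):
    """Clean inputs paired with deliberately wrong labels (re-drawn every batch)."""
    idx = torch.randint(len(clean_pool), (batch_size,))
    x = torch.stack([clean_pool[int(i)][0] for i in idx]).to(device)
    y = torch.tensor([clean_pool[int(i)][1] for i in idx], device=device)
    shift = batch_id % num_classes or 1        # never shift by 0: labels stay wrong
    return x, (y + shift) % num_classes

def confusion_training_round(model, optimizer, poisoned_loader, clean_pool,
                             lam, num_classes, device):
    model.train()
    for b, (px, py) in enumerate(poisoned_loader):
        cx, cy = confusion_batch(clean_pool, px.size(0), num_classes, b, device)
        loss = F.cross_entropy(model(px.to(device)), py.to(device)) \
             + lam * F.cross_entropy(model(cx), cy)   # confusion batch dominates
        optimizer.zero_grad(); loss.backward(); optimizer.step()

@torch.no_grad()
def keep_low_loss_fraction(model, poisoned_set, ratio, device):
    """Distillation step: keep the fraction of samples the model fits best."""
    model.eval()
    losses = torch.tensor([F.cross_entropy(model(x.unsqueeze(0).to(device)),
                                           torch.tensor([y], device=device)).item()
                           for x, y in poisoned_set])
    kept = losses.argsort()[: int(ratio * len(poisoned_set))]
    return [poisoned_set[int(i)] for i in kept]
\end{verbatim}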
\section{Discussion} \label{sec:discussion}
In Section~\ref{subsec:exp_performance}, we witness a clear win of our confusion training pipeline over prior arts across different datasets and poison strategies. Nevertheless, \textbf{we remain conservative and do not want to oversell the security of the confusion training technique itself}. After all, there is still no strict guarantee that the technique would be reliable in all settings. \underline{Rather}, we deem the success of confusion training as strong evidence for the \textbf{superiority of the active defense methodology} underlying the design of our technique. As we have discussed in Section~\ref{sec:latent_separability_and_adaptive_backdoor_poisoning_attack}, illustrated in Figure~\ref{fig:vis_latent_space_compare}, and also empirically validated in Section~\ref{subsec:exp_performance}, it is suboptimal to just passively hope that some properties will automatically arise in backdoored models and then build defenses that heavily rely on such properties. Although this conclusion is very straightforward, it is surprising to see that a lot of prior works, including very recently published state-of-the-art methods, are still built on such a passive paradigm. With this work, we want to promote awareness of the active defense methodology, and we encourage future work on backdoor samples detection to incorporate this idea into their designs. Finally, we also note that, since we reveal many failure cases of prior defenses, this information might potentially be used to threaten systems built on these defenses.
\section{Background and Related Work}
\paragraph{Backdoor Poisoning Attacks.} A deep neural network~(DNN) model is said to be backdoored if it has encoded certain anomalous rules~(i.e. backdoors) that are exclusively known to some adversaries, while it still behaves normally under standard evaluations. DNN models can be backdoored in various ways, including data poisoning~\cite{gu2017badnets,Chen2017TargetedBA}, transfer learning~\cite{yao2019latent,shen2021backdoor}, and even direct tampering with model weights~\cite{qi2021towards,rakin2020tbt}. In this work, we focus on data-poisoning-based backdoor attacks, namely \textit{\textbf{backdoor poisoning attacks}}, which are the most commonly considered setting in the literature on backdoor attacks against DNNs. To the best of our knowledge, backdoor poisoning attacks were first demonstrated by \citet{gu2017badnets}. They manipulate a small number of images in clean training sets, stamp them with a fixed patch pattern and mislabel them to a target class, and observe that models trained on such datasets consistently learn a dominant backdoor correlation between the patch pattern and the target class. Subsequent works propose different poison strategies to further improve the practicality of the attacks. These improvements include different types of triggers~\cite{Chen2017TargetedBA, nguyen2020input}, clean-label attacks~\cite{turner2019label,barni2019new}, adaptive attacks~\cite{qi2022circumventing} that can evade certain defenses, etc.
\paragraph{Backdoor Defenses.} By exploiting certain properties of some backdoor attacks, various backdoor defenses~\cite{chen2018activationclustering,du2019robust,gao2019strip,kolouri2020universal,li2021neural,liu2018fine,liu2017neural,tran2018spectral,udeshi2019model,villarreal2020confoc,wang2019neural,xu2019Meta,huang2022backdoor} have also been developed. As typical examples, Neural Cleanse~\cite{wang2019neural} and SentiNet~\cite{chou2020sentinet} propose to reverse-engineer backdoor triggers by searching for universal adversarial perturbations. Fine-pruning~\cite{liu2018fine} suggests eliminating model backdoors via network pruning. \citet{du2019robust} and \citet{li2021anti} propose to apply additional interventions during model training to suppress the influence of poison samples. \citet{xu2019Meta} propose to directly train meta classifiers to predict whether a model is backdoored. \citet{liu2017neural} and \citet{li2021neural} attempt to eliminate model backdoors by fine-tuning the model on a small clean dataset. \citet{udeshi2019model,villarreal2020confoc} introduce input preprocessing to suppress the effectiveness of backdoor triggers during the inference stage. There are also some certified defenses~\cite{wang2020certifying,weber2020rab} that adapt randomized smoothing~\cite{cohen-certified} for the setting of backdoor attacks and achieve nontrivial certified accuracy against backdoor attacks with small $\ell_p$-bounded triggers. Besides, another line of prior arts~\cite{tran2018spectral,chen2018activationclustering,gao2019strip,tang2021demon,hayase21a} looks into \textbf{\textit{backdoor poison samples detection}}. Finally, it is also important to note that some empirical defenses~(e.g. \cite{wang2019neural,chou2020sentinet,gao2019strip}) are already known to fail against more sophisticated poison strategies, while certified defenses lack utility because of low certifiable accuracy and the restriction to $\ell_p$-bounded triggers. We refer interested readers to \citet{li2020backdoor} for a more comprehensive review.
\paragraph{Backdoor Poison Samples Detection.} This work focuses on the detection of backdoor poison samples, which aims to eliminate backdoor poison samples from the training set. This line of defenses is particularly attractive under the threat model of backdoor poisoning attacks that we consider in this work. On the one hand, if one can accurately identify poison samples and eliminate them from training sets, the threat of backdoor poisoning attacks can be prevented in the first place and those cleansed datasets can be reliably used for any downstream tasks. On the other hand, even if one can only isolate a certain number of poison samples, these samples can still be effectively used to unlearn the backdoor~(e.g. \cite{li2021anti,wang2019neural}) --- thus poison detection can also serve as an important building block for backdoor defense. However, many prior arts on this task passively rely on some properties of backdoored models that cannot be controlled by defenders, and consequently suffer from both instability and adaptive attacks~\cite{qi2022circumventing}. This work takes one step further by proposing the idea of \textbf{\textit{active defense}} and introducing \textbf{\textit{confusion training}} as a concrete instance, which significantly improves the performance of poison detection in practice. (We refer interested readers to Appendix~\ref{appendix:supp_review} for a more comprehensive review.)
\section{Introduction} \label{sec:intro}
The success of deep learning heavily relies on large-scale modern datasets~\cite{brown2020language,lin2014microsoft,russakovsky2015imagenet}. To build such large datasets, data collection procedures are typically automated and harvest information from the open world, while the data labeling processes are often outsourced to third-party labor. Such pipelines make it \textbf{intrinsically challenging to conduct strict supervision over the creation of datasets}, and consequently give rise to the threat of \textit{backdoor poisoning attacks}~\cite{Chen2017TargetedBA,gu2017badnets,li2020backdoor,nguyen2020input,turner2019label}, where adversaries can inject carefully crafted poison samples into the victim training set such that models trained on the contaminated set will be backdoored. Backdoor poisoning attacks are considerably risky, because those backdoored models behave almost the same as normal models under standard evaluation metrics but make catastrophic mistakes stealthily triggered by adversaries. Given the emerging threat of such attacks, a natural question to ask is --- \textit{if we have to use a potentially poisoned dataset anyway, is there a principled way for us to \textbf{distinguish between normal clean samples and potentially abnormal poison samples}?} If so, then we can effectively mitigate the risks of dataset contamination in the first place by simply removing suspicious samples from the raw datasets before they are used by downstream pipelines. In this work, we look into this question under the threat model of backdoor poisoning attacks. In particular, we study the problem of detecting backdoor poison samples in poisoned datasets.
\paragraph{Limitations of Prior Arts.} There are a number of prior arts~\cite{chen2018activationclustering,gao2019strip,hayase21a,tran2018spectral,tang2021demon} for detecting backdoor poison samples. A principled idea underlying these works is to \textit{utilize the backdoored models' distinguishable behaviors on poison and clean populations to distinguish between these two different populations themselves}. Typically, many prior arts~\cite{tran2018spectral,chen2018activationclustering,hayase21a,tang2021demon} consistently build their detectors upon a latent separability assumption, which states that backdoored models trained on the poisoned dataset will learn separable latent representations for backdoor and clean samples, and thus poison samples can be identified via cluster analysis in the latent representation space. Alternatively, \citet{gao2019strip} argue that backdoored models' predictions on poison samples have lower entropy under intentional perturbation than those on clean samples. Despite the effectiveness of these works in the respective settings they consider, we find that their performance can vary a lot across different poison strategies, datasets, and training configurations. Worse still, one recent work~\cite{qi2022circumventing} further shows that adaptive poisoning strategies can be developed to suppress the ``distinguishable behaviors'' relied upon by prior arts, rendering them much less effective~(or making them fail completely in most cases). We point out that these limitations are direct consequences of the \textbf{\textit{passive strategies}} adopted by these works --- they consistently rely on some properties of the backdoored models that are not controlled by defenders.
\textbf{Our Contributions.} In this work, we propose the idea of \textit{\textbf{active defense}} --- rather than \textit{passively expecting} that backdoored models will magically have certain distinguishable behaviors on poison and clean samples, we propose to \textbf{\underline{actively enforce}} the trained models to behave differently on these two populations. To this end, we introduce \textit{\textbf{confusion training}} as a concrete instance of active defense for detecting backdoor poison samples. Confusion training enforces models to exhibit distinguishable behaviors by introducing another poisoning attack on the already poisoned dataset, such that the benign correlations between semantic features and semantic labels are actively decoupled, while the backdoor correlations between backdoor triggers and target labels are left as the only learnable patterns --- consequently, only backdoor poison samples are fitted while clean samples are underfitted, and one can easily separate samples from these two populations via checking the fitting status. By extensive evaluations on both CIFAR10~\cite{krizhevsky2009learning} and GTSRB~\cite{stallkamp2012man}, we show the superiority of our active defense over passive defenses across a diverse set of backdoor poisoning attacks. Our code is publicly available~\footnote{\url{https://github.com/Unispac/Fight-Poison-With-Poison}}.
\section{Visualization of Latent Representation Space} \label{appendix:visualization_of_latent_space}
In this section, we present our full visualization results of the latent representation space on CIFAR10 for a set of backdoored models attacked by different poison strategies.
\paragraph{Procedure of the Visualization.} For each poison strategy, we first construct the resulting poisoned dataset following the configuration in Appendix~\ref{appendix_subsec:details_baseline_attacks}. Then, we train a backdoored model on this poisoned dataset with standard training~(see Appendix~\ref{appendix_subsec:details_training_backdoored_models}). Next, we take out all samples that are \textbf{labelled as the target class} from the poisoned set and use the trained backdoored model to project them into the latent representation space. Finally, for visualization purposes, we project these latent representations onto low-dimensional visual planes.
\paragraph{PCA~\cite{pearson1901liii} Projection.} In Figure~\ref{fig:vis_latent_space_compare}, which we present in the main text, we use PCA~\cite{pearson1901liii} to project these latent representations onto the top-2 principal directions. In Figure~\ref{fig:vis_latent_space_compare_repeat}, we present the full results produced by the PCA projection, where the experiments are repeated three times.
\paragraph{t-SNE~\cite{van2008visualizing} Projection.} Considering that PCA may not capture the local similarity structure well, we alternatively also use t-SNE~\cite{van2008visualizing}, which is designed specifically to preserve local structure and is arguably one of the best dimensionality reduction techniques. In Figure~\ref{fig:vis_latent_space_compare_repeat_tsne}, we present the visualization results with a two-dimensional t-SNE projection.
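For reference, a minimal sketch of this projection step is given below. Variable names are hypothetical: \texttt{feats} denotes the penultimate-layer features of all target-class training samples and \texttt{is\_poison} the oracle poison mask used only for coloring the scatter plot.
\begin{verbatim}
# Minimal sketch of the 2D projections used for visualization (illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def project_and_plot(feats, is_poison, method="pca"):
    """feats: (N, d) latent features; is_poison: boolean array of shape (N,)."""
    if method == "pca":
        emb = PCA(n_components=2).fit_transform(feats)
    else:
        emb = TSNE(n_components=2, init="pca").fit_transform(feats)
    plt.scatter(emb[~is_poison, 0], emb[~is_poison, 1], s=2, c="blue", label="clean")
    plt.scatter(emb[is_poison, 0], emb[is_poison, 1], s=2, c="red", label="poison")
    plt.legend(); plt.show()
\end{verbatim}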
\paragraph{Oracle Projection.} Note that both PCA and t-SNE are unsupervised methods for discovering structure. The low-dimensional projections generated by these two methods may still not faithfully reflect the real extent of separation between poison and clean samples --- these unsupervised dimensionality reduction methods may keep other, irrelevant structures and throw away the information about separability. Thus, following the same practice as \citet{qi2022circumventing}, we also incorporate oracle knowledge~(in the experimental simulation, we know which samples are poison and which are not) about poison and clean samples to improve the visualization. Specifically, we model the problem of poison samples detection as a binary classification problem, where poison samples form the positive class and clean samples form the negative class. Then, we use a Support Vector Machine~(SVM) to fit these samples. Intuitively, the hyperplane fitted by the SVM is approximately the optimal linear boundary between the two classes. Finally, we compute the (signed) distance between each point and the fitted hyperplane, and plot the distance histogram in Figure~\ref{fig:vis_latent_space_compare_repeat_oracle}. Conceptually, if the poison and clean populations are very well separated in the latent representation space, then the two groups should lie on opposite sides of the SVM hyperplane, and thus the distance histograms of the two groups will also be well separated. This is true for the naive BadNet~(Figure~\ref{fig:vis_badnet_repeat_oracle}). However, \textbf{for adaptive poison strategies, one can see~(Figure~\ref{fig:vis_adaptive_blend_repeat_oracle},\ref{fig:vis_adaptive_k_repeat_oracle}) that even with oracle knowledge, it is hard to separate the two groups with a linear boundary}. \input{sections/assets/visualization_latent_compare_repeat} \input{sections/assets/visualization_latent_compare_repeat_tsne} \input{sections/assets/visualization_latent_compare_repeat_oracle}
\section{Technical Details of Confusion Training} \label{appendix:confusion_training_protocol} \vspace{-2mm}
In Section~\ref{subsec:confusion_training}, we illustrate the key idea of confusion training. In this section, we first introduce several useful engineering techniques~(Appendix~\ref{appendix_subsec:practical_techniques}) that are used to implement our confusion training pipeline. Then, we describe the detailed algorithms~(Appendix~\ref{appendix_subsec:algorithms_of_confusion_training}) for confusion training that we use in our implementation. \vspace{-2mm}
\subsection{Practical Techniques for Confusion Training} \label{appendix_subsec:practical_techniques} \vspace{-1mm}
\paragraph{Mitigate Overfitting.} In practice, due to the strong expressivity of deep models, if we use a static confusion set, the model may still overfit both the confusion set and the poisoned training set as long as the training time is sufficiently long. This deviates from our expectation that the inference model should separate backdoor poison samples and clean samples by only fitting the poison samples. To avoid such a situation, in our implementation, we maintain a \textbf{dynamic confusion set} during confusion training. For every training batch, we reassign a new set of random labels to the samples from the confusion set. This leads to constantly changing correlations for clean semantics and effectively prevents overfitting --- because the confusion set cannot even be fitted! \vspace{-3mm}
\paragraph{Iterative Poison Distillation.} In our implementation, we iteratively run confusion training for multiple rounds.
By our design, after a round of confusion training, most clean samples in the poisoned set should have high loss values due to the disruption of the confusion set, while most poison samples should concentrate in the low-loss region. Thus, after each round of confusion training, we can abandon the portion of training samples with the highest loss values and reasonably expect that most poison samples are still kept in the remaining part, which is then used for the next round. This is analogous to an iterative distillation process. By iteratively removing the impurities~(clean samples) separated by distillation, the density of the desired substance~(backdoor poison samples) becomes increasingly higher. Intuitively, such a process keeps amplifying the backdoor correlations while weakening the benign correlations in the distilled poisoned training set. Consequently, the quality of the resulting inference model also keeps improving. \vspace{-3mm}
\paragraph{Conditional Random Labeling.} For every sample in the confusion set, we always rule out its semantic ground truth during random labeling. This can make the inference model even worse than a random classifier on clean samples, and helps to reduce the false positive rate when identifying backdoor poison samples. Moreover, in our implementation, we do not literally perform random labeling. For each training batch with sequential batch id $b$ and clean batch $(X_b,Y_b)$, we simply shift its labels to $(Y_b + b) \% C$. If $b\%C$ equals zero, we shift the labels to $(Y_b + b + 1) \% C$ instead. We view this as a simple equivalent way of implementing conditional random labeling. \vspace{-3mm}
\paragraph{Cluster Analysis for Identifying The Target Class.} Given the final inference model generated by confusion training, a naive way of eliminating backdoor poison samples is to directly remove all samples whose labels are consistent with the inference model's predictions~(i.e. all fitted training samples). False positives~(clean samples that are mistakenly identified as poison samples) can arise, because there is usually still a small amount of clean samples~(typically several percent) fitted by the inference model despite the disruption of the confusion set. Following the practice of previous work~\cite{chen2018activationclustering,tang2021demon}, a simple technique for reducing false positives is to only scan samples from suspicious classes that are likely to be target classes. Since confusion training actively separates the poison and clean populations, just like previous work, we can still identify suspicious classes by applying cluster analysis on the latent representation space of the inference model. Recall that, although the naive inference procedure mentioned above may have a higher false positive rate, it leaves out a set of samples that~(almost) only contains clean samples. This gives us rich information about the distribution of clean samples' latent representations generated by the inference model. Thus, we can simply identify suspicious classes via checking whether there are obvious outliers that deviate from the clean distribution. Specifically, we encode our knowledge about the clean distribution into a set of equivalence constraints~(i.e. that a given set of samples must be in the same cluster) and use the semi-supervised EM algorithm proposed by \citet{shental2003computing} to compute Gaussian Mixture Models~(GMMs). We identify a class as a potential target class if the two-cluster GMM explanation has a significantly higher likelihood than that of a single-mode Gaussian model directly estimated from the clean cluster.
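The following sketch conveys the flavor of this check with plain (unconstrained) scikit-learn mixtures instead of the constrained EM of \citet{shental2003computing}; it is a simplified illustration with hypothetical names~(\texttt{feats} are the inference model's latent features of all samples labeled as one class, \texttt{clean\_mask} marks the samples left out by the naive label-only inference, and the decision margin is a placeholder):
\begin{verbatim}
# Illustrative target-class check: does a 2-component mixture explain the class
# features much better than a single Gaussian fitted to the clean subset?
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import multivariate_normal

def looks_like_target_class(feats, clean_mask, margin=0.0):
    clean = feats[clean_mask]
    mu = clean.mean(axis=0)
    cov = np.cov(clean, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    single_ll = multivariate_normal(mu, cov).logpdf(feats).mean()

    gmm = GaussianMixture(n_components=2, covariance_type="full").fit(feats)
    two_ll = gmm.score(feats)           # mean log-likelihood of the 2-cluster model
    return two_ll - single_ll > margin  # much higher likelihood => suspicious class
\end{verbatim}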
\subsection{Algorithms of Confusion Training} \label{appendix_subsec:algorithms_of_confusion_training}
We formulate our algorithms for the confusion training pipeline. Overall, there are two main steps. With Algorithm~\ref{alg:inference_model_generation}, we run iterative confusion training to concentrate poison samples in a condensed poisoned set $D_S$ and use it to generate the final inference model $F(\cdot,\theta_S)$ with another run of confusion training~(iteration $S$). Then, with this inference model, Algorithm~\ref{alg:poison_samples_elimination} identifies potential poison samples with a label-only inference, scanning samples from suspicious classes. As discussed in Appendix~\ref{appendix_subsec:practical_techniques}, in the stage of poison identification, $\mathcal{S} = \{1,\dots,C\}$ corresponds to the naive label-only inference, while a better $\mathcal{S}$ from cluster analysis can further help to reduce the false positive rate. \input{sections/assets/Pseudocode}
\section{A Supplementary Review on Backdoor Poison Samples Detection} \label{appendix:supp_review}
\subsection{The Important Role of Backdoor Poison Samples Detection}
This work looks into the problem of backdoor poison samples detection. More precisely, we focus on offline detection that aims to identify backdoor poison samples in a fixed training set. \textbf{This line of defenses is particularly attractive under the threat model of backdoor poisoning attacks} that we consider in this work, for two reasons: \begin{itemize} \item \underline{From a utility perspective:} if one can accurately identify poison samples and eliminate them from training sets, the threat of backdoor poisoning attacks can be prevented in the first place and those cleansed datasets can be reliably used for any downstream tasks. Thus, the threat of backdoor poisoning can be resolved once and for all. On the other hand, even if poison samples cannot be fully eliminated --- as long as one can accurately isolate a certain number of poison samples, these samples can still be effectively used to unlearn the backdoor~(e.g. \cite{li2021anti,wang2019neural}) --- thus poison detection can also be used as an important building block for backdoor defense. \item \underline{From a methodological perspective:} we point out that \textbf{a ``good'' backdoor defense is always approximately equivalent to a ``good'' backdoor poison samples detector}. On the one hand, given a ``good'' detector that can eliminate (almost) all poison samples without sacrificing too many clean samples, models trained on the cleansed dataset will have negligible ASR~(attack success rate) but still keep high clean accuracy~(e.g. see our confusion training in Tables~\ref{tab:cifar_confusion_training_avg} and \ref{tab:gtsrb_confusion_training_avg}) --- \textbf{thus, it is a ``good'' defense}. On the other hand, if a defense~(not necessarily based on poison detection) is ``good'', the resulting model should have negligible ASR and high clean accuracy. So, the model's predictions on most poison samples will be inconsistent with their labels, while the predictions on most clean samples will be consistent.
Based on this result, \textbf{one can equivalently build a "good" poison detector with this model}, by simply removing all training samples whose labels are not consistent with the predictions --- most poison samples are removed while clean samples are kept. Thus, in some sense, \textbf{the seemingly larger problem of backdoor defense is equivalent to the sub-problem of backdoor poison samples detection}. \end{itemize} In short, the above discussion suggests the fundamental role of backdoor poison samples detection underlying backdoor defense, and it also justifies why we are particularly interested in this problem. \subsection{The Distinguishable Behaviors} \label{appendix_subsec:the_distinguishable_behaviors} There are a number of prior arts for detecting backdoor poison samples. A \textbf{principled idea} underlying most of these works is to \textit{utilize the backdoored models' \textbf{distinguishable behaviors} on poison and clean populations to distinguish between these two populations themselves}. Typically, many prior arts~\cite{tran2018spectral,chen2018activationclustering,hayase21a,tang2021demon} consistently build their detectors upon a \textbf{latent separability assumption}, which states that backdoored models trained on the poisoned dataset will learn separable latent representations for backdoor and clean samples, and thus poison samples can be identified via cluster analysis in the latent representation space. To the best of our knowledge, \citet{tran2018spectral} are the first to observe the latent separation phenomenon. Specifically, they train a backdoored model on the poisoned dataset and use it to map all training samples to the latent representation space. Then, for samples of each class, they project their representations onto their top PCA~\cite{pearson1901liii} directions. Interestingly, for all non-target classes that only contain clean samples, the latent representations regularly distribute around a single center; however, for the target class that contains both clean and poison samples, the latent representations clearly form a bimodal distribution --- poison and clean samples each form their own mode. Thus, poison samples can be eliminated by simply dropping the poison mode. Later, \citet{chen2018activationclustering} also independently observe this separation phenomenon and propose to use K-means to separate the clean and poison clusters in the latent space. Note that both \citet{tran2018spectral} and \citet{chen2018activationclustering} launch their cluster analysis in an unsupervised manner, which may not work well in more challenging cases. Thus, more recently, \citet{tang2021demon} and \citet{hayase21a} further propose to make use of knowledge of the clean distribution, which significantly improves the cluster analysis. Despite the effectiveness of these latent-separability-based techniques in the respective settings they consider, as we have discussed and illustrated in our main text~(Section~\ref{subsec:separability_and_adaptive_attack}, Section~\ref{subsec:exp_performance}), their performance can vary a lot across different poison strategies, datasets and training configurations. Worse still, adaptive attacks that intentionally suppress this separability characteristic can make them degrade catastrophically. Besides the latent separability, \textbf{other heuristic characteristics} are also utilized to detect backdoor poison samples.
For localized triggers, Sentinet~\cite{chou2020sentinet} uses backdoored models to compute the saliency map of each sample and locates suspicious samples by checking whether their saliency maps contain small connected regions of high salience. Strip~\cite{gao2019strip} observes that, when samples are intentionally superimposed with random image patterns, backdoored models' predictions on poison samples exhibit less randomness than those on clean samples. However, these characteristics are not as successful as the latent separability. Sentinet~\cite{chou2020sentinet} is designed only for localized triggers, while Strip~\cite{gao2019strip} is not effective when trigger patterns are more complicated or when backdoor correlations are less dominant~\cite{tang2021demon}. We point out that all the limitations mentioned above are direct consequences of the \textbf{\textit{passive strategies}} adopted by these works --- they consistently rely on some "distinguishable behaviors" of the backdoored models that are not controlled by defenders. This work takes one step further by proposing the idea of \textbf{\textit{active defense}}~(Section~\ref{subsec:idea_of_active_defense}) and introducing \textbf{\textit{confusion training}}~(Section~\ref{subsec:confusion_training}) as a concrete instance, which significantly improves the performance of poison detection in practice~(Section~\ref{subsec:exp_performance}). \section{A Theoretical Formulation} \subsection{A Theoretical Modeling of Backdoor Poisoning Attacks} Consider a benign linear model \begin{align} & Y_b = {\bold{x}_b}^T \beta_b + \epsilon,\label{eqn:benign_full} \end{align} where $\beta_b \in \mathbb{R}^p$ is the true parameter of the benign model, and $\epsilon \in \mathbb{R}$ is a homoscedastic noise term that does not depend on $\bold x_b$, with $\mathbb{E}(\epsilon) = 0$. In this regression model, only a part of the covariates in $\bold{x}_b$ contribute to $Y_b$. Formally, we denote by $S\subset \{1,2,...,p\}$ the contributing set and by $S^C = \{1,2,...,p\} \setminus S$ its complement, so that $\forall j \in S : (\beta_b)_j \ne 0$ and $\forall j \in S^C : (\beta_b)_j = 0$. Without loss of generality, we suppose $S = \{1,2,...,|S|\}$. We use $({\bold x}_b)_S$ to denote the contributing covariates and $(\beta_b)_S$ to denote the corresponding coefficients. Likewise, the non-contributing part can be represented by $({\bold x}_b)_{S^C}$ and $(\beta_b)_{S^C}$. Further, we assume $({\bold x}_b)_{S}$ and $({\bold x}_b)_{S^C}$ are independent and $\mathbb{E}(\bold x_b) = 0$.
So, the regression model can be further written as: \begin{align} & Y_b = ({\bold x}_b)^T_S (\beta_b)_S + \epsilon \label{eqn:benign_reduced} \\ & Var({\bold x}_b) = \begin{bmatrix} \Sigma_S & \\ & \Sigma_{S^C}\end{bmatrix},\ \mathbb{E}(\bold x_b) = 0 \label{eqn:benign_cov} \end{align} On the other hand, we construct a poison model where $S^C$ instead is the contributing set: \begin{align} & Y_a = {\bold{x}_a}^T \beta_a + \epsilon = ({\bold x}_a)^T_{S^C}(\beta_a)_{S^C} + \epsilon, \label{eqn:adv_full}\\ & Var({\bold x}_a) = \begin{bmatrix} k^{-1} \Sigma_S &\\ & k \Sigma_{S^C}\end{bmatrix}, \quad k > 1 \label{eqn:adv_cov}\\ & \mathbb{E}(\bold x_a) = 0 \label{eqn:adv_mean} \end{align} \underline{This mimics the last layer of a deep learning model under backdoor attacks}, where the learned representation ${\bold x}_a$ of a poison sample has an amplified backdoor signal $({\bold x}_a)_{S^C}$ while the semantic feature $({\bold x}_a)_{S}$ is suppressed (as commonly observed~\cite{tran2018spectral,chen2018activationclustering}), and the poison data model creates artificial backdoor correlations between the backdoor covariates $({\bold x}_a)_{S^C}$ and the adversarial label $Y_a$. To further model the intuition that the backdoor signal is negligible in benign correlations while dominating in backdoor correlations, we impose the following additional conditions: \begin{itemize} \item \textit{Benign Condition}: $\frac{\mathbb E (\|\bold x_b^T \beta_b\|^2)}{\mathbb E (\|\bold x_b^T \beta_a\|^2)} \ge \mathcal{K}_1$ for some large $\mathcal{K}_1 > 0$, which is equivalent to \begin{align} & \frac{(\beta_b)_S^T \Sigma_S (\beta_b)_S}{(\beta_a)_{S^C}^T \Sigma_{S^C} (\beta_a)_{S^C}} \ge \mathcal{K}_1 \label{eqn:benign_codition} \end{align} \item \textit{Backdoor Condition}: $\frac{\mathbb E (\|\bold x_a^T \beta_a\|^2)}{\mathbb E (\|\bold x_a^T \beta_b\|^2)} \ge \mathcal{K}_2$ for some large $\mathcal{K}_2 > 0$, which is equivalent to \begin{align} & \frac{k\cdot (\beta_a)_{S^C}^T \Sigma_{S^C} (\beta_a)_{S^C}}{k^{-1} \cdot (\beta_b)_{S}^T \Sigma_{S} (\beta_b)_{S}} \ge \mathcal{K}_2 \label{eqn:poison_codition} \end{align} \end{itemize} \begin{lemma}\label{lemma:magnitude_of_k} A poison model that satisfies both the benign condition and the backdoor condition must have $k \ge \sqrt{\mathcal{K}_1 \mathcal{K}_2}$. Proof. \begin{align} & k^2 \mathop{\ge}_{\text{by eqn~\ref{eqn:poison_codition}}} \mathcal{K}_2 \frac{(\beta_b)_S^T \Sigma_S (\beta_b)_S}{(\beta_a)_{S^C}^T \Sigma_{S^C} (\beta_a)_{S^C}} \mathop{\ge}_{\text{by eqn~\ref{eqn:benign_codition}}} \mathcal{K}_1\mathcal{K}_2 & \square \end{align} \end{lemma} Now, consider a poisoned dataset $\{(X_i,Y_i)\}_{i=1}^n$ with a poison rate of $t$. Without loss of generality, the first $(1-t)n$ samples are benign samples generated by the benign model while the last $tn$ samples are poison samples generated by the poison model. Denote the design matrix $X = [X_1,...,X_n]^T$, the response vector $Y = [Y_1,...,Y_n]^T$ and the independent and homoscedastic noise term $\epsilon = [\epsilon_1, ..., \epsilon_n]^T$. By the definition of the data model, \begin{align} & Y = \begin{bmatrix} X_{1:(1-t)n}\beta_b \\ X_{(1-t)n:n}\beta_a\end{bmatrix} + \epsilon, \label{eqn:generated_y} \end{align} where $X_{p:q}$ denotes the slice of the matrix $X$ from row $p$ to row $q$.
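To make this data model concrete, the following illustrative sketch~(not part of the paper's experiments; all toy values such as $p$, $|S|$, $n$, $t$ and $k$ are assumptions chosen for illustration, with identity $\Sigma_S$ and $\Sigma_{S^C}$) simulates a poisoned set from the benign and poison models, fits ordinary least squares on it, and compares the result with the closed-form limit derived in the theorem below.
\begin{verbatim}
# Illustrative simulation of the poisoned linear data model (toy values assumed).
import numpy as np

rng = np.random.default_rng(0)
p, s = 10, 5                      # total covariates and |S| contributing ones
n, t, k = 200000, 0.05, 50.0      # sample size, poison rate, amplification k
beta_b = np.concatenate([rng.normal(size=s), np.zeros(p - s)])  # supported on S
beta_a = np.concatenate([np.zeros(s), rng.normal(size=p - s)])  # supported on S^C

def sample(m, var_S, var_SC, beta):
    X = rng.normal(size=(m, p))
    X[:, :s] *= np.sqrt(var_S)    # scale the S block of the covariance
    X[:, s:] *= np.sqrt(var_SC)   # scale the S^C block of the covariance
    return X, X @ beta + 0.1 * rng.normal(size=m)

n_poison = int(t * n)
Xb, yb = sample(n - n_poison, 1.0, 1.0, beta_b)   # benign model, Var(x_b) = I
Xa, ya = sample(n_poison, 1.0 / k, k, beta_a)     # poison model, Var = diag(I/k, kI)
X, y = np.vstack([Xb, Xa]), np.concatenate([yb, ya])

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # least squares on the poisoned set
limit = np.concatenate([(1 - t) / (t / k + 1 - t) * beta_b[:s],
                        k * t / (t * k + 1 - t) * beta_a[s:]])
print(np.round(beta_hat, 3), np.round(limit, 3), sep="\n")
\end{verbatim}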
Conducting least squares regression on the poisoned dataset, we obtain the following estimator $\hat{\beta}$: \begin{align} & \hat{\beta} = (X^TX)^{-1} X^T Y, \label{eqn:normal_equation} \end{align} \begin{theorem}[Plausibility of the Modeling]\label{theorem:plausibility} When $n$~(the size of the training set) is sufficiently large, least squares regression on the poisoned set gives rise to: \begin{align} & \hat{\beta} \approx \begin{bmatrix} [tk^{-1} + (1-t)]^{-1} (1-t) \cdot (\beta_b)_S \\ [tk+(1-t)]^{-1} k t \cdot (\beta_a)_{S^C}\end{bmatrix}, \label{eqn:least_square_estimator} \end{align} moreover, the poisoned least squares estimator $\hat{\beta}$ satisfies: \begin{itemize} \item \textit{Bounded Clean Performance Drop} \begin{align} & \frac{\mathbb{E}(\| \bold{x}_b^T \hat{\beta} - \bold{x}_b^T \beta_b \|^2)}{Var(\bold x_b^T\beta_b)} \le \Bigg(\frac{t/k}{t/k+(1-t)}\Bigg)^2 + \Bigg(\frac{tk}{tk+(1-t)}\Bigg)^2 \cdot \mathcal{K}_1^{-1} \label{eqn:benign_error_bound} \end{align} \item \textit{Nontrivial Backdoor Property} \begin{align} & \frac{\mathbb{E}(\| \bold{x}_a^T \hat{\beta} - \bold{x}_a^T \beta_a \|^2)}{Var(\bold x_a^T \beta_a)} \le \Bigg(\frac{1-t}{tk+(1-t)}\Bigg)^2 + \Bigg(\frac{1-t}{t/k+(1-t)}\Bigg)^2 \cdot \mathcal{K}_2^{-1} \label{eqn:backdoor_error_bound} \end{align} \end{itemize} (See Appendix~\ref{appendix:proof_of_theorem_1} for the proof.) \end{theorem} Theorem~\ref{theorem:plausibility} shows the plausibility of our modeling of backdoor poisoning attacks above. The bound in inequality~\ref{eqn:benign_error_bound} indicates that the rate of clean performance drop is of magnitude $\mathcal{O}\big( t^2/k^2 + \mathcal{K}_1^{-1}\big)$. Since $\mathcal{K}_1$ is a large positive number by the benign condition~\ref{eqn:benign_codition}, $t$ is a small poison rate, and $k\ge \sqrt{\mathcal{K}_1\mathcal{K}_2}$ is large, $\frac{\mathbb{E}(\| \bold{x}_b^T \hat{\beta} - \bold{x}_b^T \beta_b \|^2)}{Var(\bold x_b^T\beta_b)}$ should be considerably small --- \textbf{thus the fitted estimator $\hat{\beta}$ behaves as well as the oracle clean estimator $\beta_b$ on the benign data}. On the other hand, we know $k\ge \sqrt{\mathcal{K}_1\mathcal{K}_2}$ is considerably large. For $k \ge c(1-t)/t$, with similar reasoning on property~\ref{eqn:backdoor_error_bound}, we find that $\frac{\mathbb{E}(\| \bold{x}_a^T \hat{\beta} - \bold{x}_a^T \beta_a \|^2)}{Var(\bold x_a^T \beta_a)}$ is bounded by a rate of magnitude $\frac{1}{(1+c)^2}$. This indicates that \textbf{the model fitted on the poisoned set will capture a nontrivial portion of backdoor correlations}. \subsection{Decouple Benign Correlations} Suppose we have another set of clean samples $\{(\tilde{X}_l, \tilde{Y}_l)\}_{l=1}^L$ i.i.d. sampled from the clean data model. We use it to generate a confusion set $\{(\tilde{X}_l, \tilde{Y}_r): \forall l,r \in \{1,...,L\}\}$. We use $\tilde{X}$, $\tilde{Y}$ to denote the design matrix and response vector of the confusion set.
It is easy to verify that the covariates in the confusion set still follow the clean distribution: \begin{align} & L^{-2} \tilde{X}^T\tilde{X} = L^{-2}\sum_{l=1}^L\sum_{r=1}^L \tilde{X}_l \tilde{X}_l^T \approx Var(\bold x_b) = \begin{bmatrix} \Sigma_S & \\ & \Sigma_{S^C}\end{bmatrix}, \end{align} however, the correlations between $\bold x$ and $Y$ are thoroughly decoupled: \begin{align} & L^{-2}\tilde{X}^T\tilde{Y} = L^{-2}\sum_{l=1}^L\sum_{r=1}^L \tilde{X}_l \cdot \tilde{Y}_r \approx \mathbb{E}(\bold x_b) \mathbb{E}{(Y_b)} = 0 \end{align} Perform weighted least squares regression jointly on the poisoned set $\{X_i,Y_i\}_{i=1}^n$ and the confusion set $\{(\tilde{X}_l, \tilde{Y}_r): \forall l,r \in \{1,...,L\}\}$: \begin{align} & \tilde{\beta} = \mathop{\arg\min}_{\beta}\ (1-w)\cdot n^{-1}\|Y - X\beta\|_2^2 + w\cdot L^{-2}\|\tilde{Y} - \tilde{X}\beta\|_2^2, \end{align} where a weight of $w \in (0,1)$ is assigned to the confusion set and a weight of $(1-w)$ is assigned to the poisoned set. Applying the normal equation, it is easy to derive the resulting estimator: \begin{align} & \tilde{\beta} = \Bigg( \frac{1-w}{n} X^TX + \frac{w}{L^2} \tilde{X}^T\tilde{X}\Bigg)^{-1} \Bigg(\frac{1-w}{n}X^TY + \frac{w}{L^2}\tilde{X}^T\tilde{Y}\Bigg) \\ & = \begin{bmatrix} k^{-1}[t(1-w)(1-k)+k] \Sigma_S & \\ & k[t(1-w)(1-k^{-1})+k^{-1}] \Sigma_{S^C}\end{bmatrix}^{-1} \\ & (1-w)\Big[ (1-t) Var(\bold x_b) \beta_b + t Var(\bold x_a) \beta_a \Big] \\ & = \begin{bmatrix} \frac{1}{t/k + [(1-w)(1-t)]^{-1}} (\beta_b)_S \\ \frac{1}{(1-k^{-1}) + [tk(1-w)]^{-1}}(\beta_a)_{S^C} \end{bmatrix} \end{align} Denote $\delta_1:= \frac{1}{t/k + [(1-w)(1-t)]^{-1}}$, $\delta_2 := \frac{1}{(1-k^{-1}) + [tk(1-w)]^{-1}}$ and $\mathcal{C} = {Var(\bold x_b^T \beta_b)}/{Var(\bold x_a^T \beta_a)}$, then \begin{align} & \frac{\mathbb{E}(\| \bold{x}_b^T \tilde{\beta} - \bold{x}_b^T \beta_b \|^2)}{\mathbb{E}(\| \bold{x}_a^T \tilde{\beta} - \bold{x}_a^T \beta_a \|^2)} = \frac{ (\tilde{\beta} - \beta_b)^T Var(\bold x_b) (\tilde{\beta} - \beta_b) }{ (\tilde{\beta} - \beta_a)^T Var(\bold x_a) (\tilde{\beta} - \beta_a) }\\ & = \frac{(\delta_1 -1)^2\mathcal{C} + \delta_2^2/k}{k^{-1}\delta_1^2 \mathcal{C} + (\delta_2-1)^2} \approx \frac{(\delta_1 -1)^2}{(\delta_2 -1)^2} \mathcal{C} \end{align} \section{Proof of Theorem~\ref{theorem:plausibility}} \label{appendix:proof_of_theorem_1} \begin{proof}[Proof of Theorem~\ref{theorem:plausibility}] When $n$ is sufficiently large, we have: \begin{align} & X^TX = \sum_{i=1}^{(1-t)n} X_i X_i^T + \sum_{s=(1-t)n+1}^{n} X_s X_s^T \approx (1-t)n \times Var(\bold x_b) + tn \times Var(\bold x_a) \\ & = n \begin{bmatrix} [tk^{-1} + (1-t)] \Sigma_S & \\ & [tk+(1-t)] \Sigma_{S^C}\end{bmatrix}, \quad k > 1 \label{eqn:full_covariance} \end{align} On the other hand, since $\epsilon$ is independent of $X$, $\mathbb{E}(X^T\epsilon) = 0$.
It follows that \begin{align} & n^{-1}X^T Y = n^{-1}(X_{1:(1-t)n})^T\Big(X_{1:(1-t)n}\beta_b \Big) + n^{-1}(X_{(1-t)n:n})^T\Big(X_{(1-t)n:n}\beta_a \Big) + n^{-1}X^T\epsilon\\ & \approx (1-t) Var(\bold x_b) \beta_b + t Var(\bold x_a) \beta_a \label{eqn:full_corelation} \end{align} By Equation~\ref{eqn:normal_equation},\ref{eqn:full_covariance},\ref{eqn:full_corelation}: \begin{align} & \hat{\beta} \approx \begin{bmatrix} [tk^{-1} + (1-t)]^{-1} (1-t) \cdot (\beta_b)_S \\ [tk+(1-t)]^{-1} k t \cdot (\beta_a)_{S^C}\end{bmatrix} \end{align} Consider the clean performance drop: \begin{align} & \frac{\mathbb{E}(\| \bold{x}_b^T \hat{\beta} - \bold{x}_b^T \beta_b \|^2)}{Var(\bold x_b^T\beta_b)} = \frac{\mathbb{E} \Big[(\hat{\beta} - \beta_b)^T\bold x_b \bold x_b^T (\hat{\beta} - \beta_b) \Big]}{Var(\bold x_b^T\beta_b)} = \frac{ (\hat{\beta} - \beta_b)^T Var(\bold x_b) (\hat{\beta} - \beta_b) }{Var(\bold x_b^T\beta_b)} \\ & = \begin{bmatrix} \frac{-t/k}{t/k + (1-t)} (\beta_b)_S \\ \frac{tk}{tk+(1-t)} (\beta_a)_{S^C} \end{bmatrix}^T \begin{bmatrix} \Sigma_S & \\ & \Sigma_{S^C}\end{bmatrix} \begin{bmatrix} \frac{-t/k}{t/k + (1-t)} (\beta_b)_S \\ \frac{tk}{tk+(1-t)} (\beta_a)_{S^C} \end{bmatrix} \Bigg / Var(\bold x_b^T\beta_b)\\ & = \Bigg(\frac{t/k}{t/k+(1-t)}\Bigg)^2 + \Bigg(\frac{tk}{tk+(1-t)}\Bigg)^2 \cdot \frac{(\beta_a)_{S^C}^T \Sigma_{S^C} (\beta_a)_{S^C}}{(\beta_b)_S^T \Sigma_S (\beta_b)_S}\\ & \mathop{\le}_{\text{by eqn~\ref{eqn:benign_codition}}} \Bigg(\frac{t/k}{t/k+(1-t)}\Bigg)^2 + \Bigg(\frac{tk}{tk+(1-t)}\Bigg)^2 \cdot \mathcal{K}_1^{-1} \end{align} For the poison distribution, similarly, we get: \begin{align} & \frac{\mathbb{E}(\| \bold{x}_a^T \hat{\beta} - \bold{x}_a^T \beta_a \|^2)}{Var(\bold x_a^T \beta_a)} = \frac{(\hat{\beta} - \beta_a)^T Var(\bold x_a) (\hat{\beta} - \beta_a)}{Var(\bold x_a^T \beta_a)}\\ & = \Bigg(\frac{1-t}{t/k+(1-t)}\Bigg)^2 \cdot \frac{k^{-1} \cdot (\beta_b)_{S}^T \Sigma_{S} (\beta_b)_{S}}{k\cdot (\beta_a)_{S^C}^T \Sigma_{S^C} (\beta_a)_{S^C}} + \Bigg(\frac{1-t}{tk+(1-t)}\Bigg)^2\\ & \mathop{\le}_{\text{by eqn~\ref{eqn:poison_codition}}} \Bigg(\frac{1-t}{tk+(1-t)}\Bigg)^2 + \Bigg(\frac{1-t}{t/k+(1-t)}\Bigg)^2 \cdot \mathcal{K}_2^{-1} & \square \end{align} \end{proof} \section{On the Latent Separability of Backdoor Poison Samples} \label{appendix:latent_separability_and_adaptive_backdoor_poisoning_attack} \subsection{Latent Separability} \label{appendix_subsec:latent_separability} One principled idea for detecting backdoor poison samples is to utilize the backdoored models' distinguishable behaviors on poison and clean populations to distinguish between these two populations themselves. \textbf{Arguably, the most popular and successful characteristic is the latent separability phenomenon} first observed by \citet{tran2018spectral}. The basic observation is that backdoored models trained on poisoned datasets tend to learn separable latent representations for poison and clean samples --- thus poison samples and clean samples can be separated via a cluster analysis in the latent representation space of the backdoored models. Commonly used backdoor samples detectors~(e.g. Spectral Signature~\cite{tran2018spectral}, Activation Clustering~\cite{chen2018activationclustering}) and some recently published state-of-the-art work~(e.g. SCAn~\cite{tang2021demon}, SPECTRE~\cite{hayase21a}) are consistently built on this characteristic. 
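To make this latent-separability-based detection pipeline concrete, the snippet below sketches a Spectral-Signature-style outlier score: samples of a suspected class are scored by their projection onto the top singular direction of the centered latent features, and the highest-scoring ones are dropped. This is an illustrative approximation rather than the exact implementation of \citet{tran2018spectral}; the removal ratio and function names are assumptions.
\begin{verbatim}
# Illustrative sketch of Spectral-Signature-style scoring (not the exact
# implementation of Tran et al.): samples of one class are scored by the
# magnitude of their projection onto the top singular direction of the
# centered latent features; high scores are treated as suspected poisons.
import numpy as np

def spectral_scores(latents):
    """latents: (n_samples, d) latent features of a single (suspected) class."""
    centered = latents - latents.mean(axis=0, keepdims=True)
    # Top right-singular vector of the centered feature matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_dir = vt[0]
    return (centered @ top_dir) ** 2      # outlier score per sample

def filter_class(latents, removal_ratio=0.05):
    """Return indices of retained samples after dropping the highest scores."""
    scores = spectral_scores(latents)
    keep = scores.argsort()[: int(len(scores) * (1 - removal_ratio))]
    return keep
\end{verbatim}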
As a sanity check of the latent separability characteristic, we train a group of backdoored models with a diverse set of poison strategies~\cite{gu2017badnets,Chen2017TargetedBA,nguyen2020input,barni2019new,turner2019label,tang2021demon} on CIFAR10, and visualize~(see Appendix~\ref{appendix:visualization_of_latent_space} for detailed configurations) their latent representation space (of the target class) in Figure~\ref{fig:vis_latent_space_compare}. Although some information is lost due to the dimension reduction, we can still see that \textcolor{red}{poison} and \textcolor{blue}{clean} samples consistently exhibit different distributions in all the considered cases from Figure~\ref{fig:vis_badnet} to \ref{fig:vis_TaCT}. This is consistent with the observations reported by prior arts~\cite{tran2018spectral,chen2018activationclustering,tang2021demon,hayase21a}, and it also accounts for the effectiveness of backdoor samples detectors built on latent space cluster analysis. On the other hand, however, we find that one aspect that has not been well studied by previous work is that \textbf{the extent of the latent separation can vary a lot}~(at least in the low dimensional PCA space) across different poison strategies and training configurations. For example, as can be observed in Figure~\ref{fig:vis_latent_space_compare}, poison samples of BadNet~(Figure~\ref{fig:vis_badnet}) exhibit stronger latent separation~(less fusion) than those of Blend~(Figure~\ref{fig:vis_blend}). Meanwhile, models trained with~(right-hand plot) and without~(left-hand plot) data augmentation also lead to different extents of separation --- models trained with data augmentation usually lead to stronger separation~(e.g. Figure~\ref{fig:vis_SIG},\ref{fig:vis_clean_label},\ref{fig:vis_TaCT}) than models trained without data augmentation; however, the trend can also be reversed~(e.g. Figure~\ref{fig:vis_blend}). Note that, for the visualization results, we are also aware of the randomness that arises from the stochastic nature of model training. To assure readers that the variations across different cases discussed above are not simply random noise, we repeat the same experiments three times with different random seeds~(see Figure~\ref{fig:vis_latent_space_compare_repeat} in Appendix~\ref{appendix:visualization_of_latent_space}), and it turns out that all three groups of results generated with different random seeds are qualitatively consistent. This indicates that the variations we discussed above are indeed a reflection of inherent properties of these different cases rather than random noise. As we can see in Section~\ref{subsec:exp_performance}, the varying extent of separation directly results in varying performance of many backdoor samples detectors built on the separation characteristic --- while they can work well against some poison strategies, \textbf{their performance can be less satisfactory against others}. \subsection{Adaptive Backdoor Poisoning Attacks} \label{appendix_subsec:adaptive_backdoor_poisoning_attack} Motivated by the varying extent of separation, perhaps a more fundamental question to ask is: \textit{\textbf{Is the latent separability between poison and clean populations really an unavoidable characteristic of backdoored models?}} The implication of this question is that motivated adversaries might attempt to design \textit{adaptive backdoor attacks} such that the backdoored models behave indistinguishably on both clean and poison samples.
Such \textit{potential} adaptive attacks can pose a fundamental challenge to existing backdoor samples detectors, because they completely overturn the principal assumption~(that distinguishable behaviors should exist) underlying their designs. Answers to this question depend on the specific threat models and defensive settings we consider. Under a strong threat model where adversaries can fully control the training process, a series of recent works~\cite{shokri2020bypassing,xia2021statistical,doan2021backdoor,ren2021simtrojan,cheng2020deep,zhong2022imperceptible} show that the latent representations of poison and clean samples can be made indistinguishable by explicitly encoding the indistinguishability objective into the training loss of the backdoored model. On the other hand, for the weaker data-poisoning-based threat model that we consider in this work, the problem appears to be harder, because there is still a huge gap in understanding why a deep model tends to learn separate latent representations for backdoor poison samples. One very recent work by \citet{qi2022circumventing} looks into this problem. Motivated by some heuristic insights, they propose two principled adaptive poisoning strategies that can empirically suppress the separation characteristic in the latent space of victim models. The basic idea underlying their adaptive strategies is to introduce a set of "cover" samples~(different from poison samples) which also contain backdoor triggers but are still correctly labeled with their semantic ground truth (other than the target class). Intuitively, these "cover" samples work as regularizers that penalize models for learning overwhelmingly strong backdoor signals in the latent representation space. In this work, we consider two adaptive strategies, namely adaptive-blend and adaptive-k, suggested in their work. Figure~\ref{fig:vis_adaptive_blend} and Figure~\ref{fig:vis_adaptive_k} give a straightforward sense of their "adaptiveness". In Section~\ref{subsec:exp_performance}, we also see that \textbf{the performance of existing detectors can degrade catastrophically against these adaptive backdoor poisoning attacks}. \section{Towards Active Defense} \label{sec:methodology} In this section, we first motivate the general idea of active defense~(Section~\ref{subsec:idea_of_active_defense}), and then we describe~(Section~\ref{subsec:confusion_training}) a concrete instance of active defense, namely confusion training, that we propose in this work. \subsection{From Passive Defense to Active Defense} \label{subsec:idea_of_active_defense} Distinguishable characteristics of backdoor samples enable defenses to identify them. However, as we have discussed in Section~\ref{subsec:separability_and_adaptive_attack}, such characteristics can vary and even be intentionally suppressed by adaptive attacks~\cite{qi2022circumventing}. Consequently, the performance of existing detectors that are built on such characteristics can also vary or catastrophically degrade. We point out that such a situation is a direct consequence of the \textbf{\textit{passive strategies}} adopted by these works. Take the latent separability characteristic as an example: although it plays a central role in backdoor samples detection, prior designs only \underline{passively assume} that such characteristics will naturally arise in backdoor learning. That is to say, the fundamental building block of the defensive construction is actually not under the control of the constructor.
In this work, we take one step further by proposing the idea of \textit{\textbf{active defense}}. Our basic philosophy is that --- rather than \textit{passively assuming} that backdoored models will naturally have distinguishable behaviors on poison and clean populations, one should \underline{actively enforce} the models trained on the poisoned set to behave differently on the two populations. To illustrate this methodology, we introduce \textit{\textbf{confusion training}} as a concrete instance in the next subsection. \subsection{Confusion Training} \label{subsec:confusion_training} \input{sections/assets/overview} In Figure~\ref{fig:overview}, we sketch an overview of the confusion training pipeline. For clarity, we also abstract the procedure of confusion training in Algorithm~\ref{alg:confusion_training_sketch}. As illustrated, our defender initially has a small \textbf{reserved clean set} at hand, which is collected with strict supervision. From this small clean set, the defender further constructs a \textbf{dynamic confusion set} by mislabeling all the reserved clean samples with random labels. By "dynamic", we mean the random labels of this set are dynamically reassigned whenever the set is used downstream in the pipeline. Conceptually, such a dynamic random labeling process completely \textbf{decouples the benign correlations} between \textit{semantic features} and \textit{semantic labels} in the confusion set. Moreover, its dynamic nature also \textbf{prevents models from naively memorizing} the random labels. \input{sections/assets/confusion_training_algorithm_sketch} Confusion training~(Algorithm~\ref{alg:confusion_training_sketch}) works by training an \textbf{inference model} jointly on the poisoned training set and the dynamic confusion set. Specifically, during training, the confusion set is assigned a \textbf{dominant weight} which is much larger than that assigned to the poisoned training set. As a result, the \textbf{confusion set serves as another strong poison} and dominates the training. On the one hand, this significantly disrupts the model's fitting on clean samples. On the other hand, since the confusion set does not contain any information about the backdoor trigger, it does not disrupt the fitting of the backdoor correlation. Thus, at the end of confusion training, the resulting inference model fails to fit most clean samples but still perfectly fits the poison samples. \textbf{This is exactly a typical example of how an active defender can intentionally induce the distinguishable behavior.} As illustrated in Figure~\ref{fig:overview}, one can expect that the inference model will output a random label given a clean input, while always correctly outputting the target label given a backdoor input. Thus, at the end of the day, the task of \textbf{identifying backdoor poison samples} is reduced to the simple task of finding samples whose labels are consistent with the predictions of the inference model, which is similar to the process of a naive label-only membership inference~\cite{choquette2021label}. Note that, in practice, our implementation involves multiple rounds of confusion training for better performance, which is more sophisticated than the sketch version we present in Algorithm~\ref{alg:confusion_training_sketch}. However, the key philosophy is faithfully reflected by the sketch. We refer interested readers to Appendix~\ref{appendix:confusion_training_protocol} for more technical details.
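For concreteness, the following PyTorch-style sketch illustrates one round of the joint training described above. It is a simplified illustration rather than our exact implementation: the confusion weight and helper names are hypothetical assumptions, and the deterministic batch-id label shift follows the conditional random labeling described in the appendix.
\begin{verbatim}
# Minimal sketch of one confusion-training round (illustrative, not the
# authors' exact code). The confusion batch is dynamically relabeled and
# given a dominant weight so that it disrupts clean fitting while leaving
# the backdoor correlation intact.
import torch
import torch.nn.functional as F

def confusion_training_round(model, optimizer, poisoned_loader,
                             clean_reserved_loader, num_classes,
                             confusion_weight=4.0):
    clean_iter = iter(clean_reserved_loader)
    for b, (x_p, y_p) in enumerate(poisoned_loader):
        try:
            x_c, y_c = next(clean_iter)
        except StopIteration:
            clean_iter = iter(clean_reserved_loader)
            x_c, y_c = next(clean_iter)

        # Conditional "random" labeling via a batch-id shift; shifting by b
        # (or b + 1 when b % C == 0) always rules out the true label.
        shift = b if b % num_classes != 0 else b + 1
        y_confused = (y_c + shift) % num_classes

        loss_poisoned = F.cross_entropy(model(x_p), y_p)
        loss_confusion = F.cross_entropy(model(x_c), y_confused)
        loss = loss_poisoned + confusion_weight * loss_confusion

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
\end{verbatim}
After such rounds, samples whose labels agree with the inference model's predictions are flagged as suspected poisons, as described above.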
\section{Empirical Evaluation} \label{sec:experiments} In this section, we present our experimental results. We first describe the setup of our experiments~(Section~\ref{subsec:exp_setup}), and then present the main evaluation results~(Section~\ref{subsec:exp_performance}) on the confusion training pipeline and other baseline defenses. \subsection{Setup} \label{subsec:exp_setup} \paragraph{Datasets.} We consider backdoor poisoning attacks on both the CIFAR10~\cite{krizhevsky2009learning} and the GTSRB~\cite{stallkamp2012man} datasets. For each dataset, we follow the default training/test set split of Torchvision~\cite{marcel2010torchvision}. Recall that our defender is assumed to have a small reserved clean set at hand~(as defined in Section~\ref{subsec:goal_and_capability_of_our_defense}). To implement this setting, for both datasets, we randomly pick \underline{2000} samples from the test split to simulate the reserved clean set, and leave the rest of the test split for evaluation. \paragraph{Models.} Similar to \citet{tran2018spectral}, we use ResNet20~\cite{he2016deep} as the base architecture for all experiments presented in this section. Specifically, for each poison strategy we consider, we use ResNet20 to train \underline{base backdoored models} on the corresponding poisoned training set. The training follows standard training scripts that are commonly used by prior work, and the trained base models are used as building blocks for the baseline detectors. As for our confusion training pipeline, ResNet20 is also consistently used for building inference models. See Appendix~\ref{appendix:detailed_configurations} for training details. \paragraph{Attacks.} We consider eight different backdoor poisoning attacks in our evaluation. These attacks correspond to a diverse set of poisoning strategies. BadNet~\cite{gu2017badnets}~(Figure~\ref{fig:vis_badnet}) and Blend~\cite{Chen2017TargetedBA}~(Figure~\ref{fig:vis_blend}) correspond to typical all-to-one dirty-label attacks with a patch-like trigger and a blending-based trigger, respectively. Dynamic~\cite{nguyen2020input}~(Figure~\ref{fig:vis_dynamic}) corresponds to the strategy that uses sample-specific triggers in place of a single universal trigger. SIG~\cite{barni2019new}~(Figure~\ref{fig:vis_SIG}) and CL~\cite{turner2019label}~(Figure~\ref{fig:vis_clean_label}) correspond to two typical clean-label poisoning strategies. TaCT~\cite{tang2021demon}~(Figure~\ref{fig:vis_TaCT}) focuses on source-specific attacks. Finally, Adaptive-Blend~(Figure~\ref{fig:vis_adaptive_blend}) and Adaptive-K~(Figure~\ref{fig:vis_adaptive_k}) are adaptive poisoning strategies suggested by \citet{qi2022circumventing} that can suppress the latent separability characteristic. When implementing these attacks, we follow the protocols and suggested default configurations of their original papers and open-source implementations. On CIFAR10, all eight attacks are implemented and evaluated. On GTSRB, we omit CL since the original paper only releases the poison set for CIFAR10. In Appendix~\ref{appendix:detailed_configurations}, we present the detailed configurations of all the considered attacks.
\paragraph{Defenses.} To illustrate the superiority of active defense, we compare confusion training with five prior arts that also work on the task of backdoor poison samples detection, including three typical techniques~(\cite{tran2018spectral,chen2018activationclustering,gao2019strip}) that are commonly considered in the literature and two very recent state-of-the-art methods~\cite{tang2021demon,hayase21a}. As mentioned at the beginning of this work, these prior arts all belong to the paradigm of passive defense --- all of them passively rely on certain "distinguishable behaviors" of backdoored models that they do not control. Specifically, Strip~\cite{gao2019strip} assumes that backdoored models' predictions on poison samples have less entropy under intentional perturbation, while the other four methods all rely on the latent separability characteristic that we introduce in Section~\ref{subsec:separability_and_adaptive_attack}. As for our confusion training pipeline, the technical details are discussed in Appendix~\ref{appendix:confusion_training_protocol}. All the configurations are also detailed in Appendix~\ref{appendix:detailed_configurations}. \paragraph{Evaluation Protocols.} We report four metrics in our evaluation. For each defense, we report its \underline{elimination rate}~(true positive rate) and \underline{sacrifice rate}~(false positive rate) against each poison strategy that we evaluate. The elimination rate denotes the ratio of poison samples that are correctly identified by the detector, while the sacrifice rate denotes the ratio of clean samples that are mistakenly diagnosed as poison. After applying a detector to scan a dataset, we cleanse the dataset by removing all the samples identified as potential poison. Then, we train a new model from scratch on the cleansed dataset, and report the \underline{clean accuracy} and the \underline{attack success rate}~(ASR) of this new model. Clean accuracy denotes the accuracy on clean test inputs, while ASR denotes the success rate of activating the backdoor with backdoor inputs. Considering the randomness that exists in both attacks and defenses, we \textbf{repeat all evaluations three times} with different random seeds, and report the average results across the three repeated evaluations. \paragraph{Data Augmentation.} During our evaluation of \underline{baseline defenses}, we also seriously consider the effects of data augmentation. Specifically, for each poison strategy, we train two base backdoored models, one with and one without data augmentation, and evaluate the performance of baseline poison detectors built on each of the two base models. Note that, for our confusion training, we do not use any data augmentation. \subsection{Performance of Defenses} \label{subsec:exp_performance} \input{sections/assets/cifar_table_final} \input{sections/assets/gtsrb_table_final} During our empirical study, we observe that data augmentation can improve the performance of the baseline poison detectors in many cases; however, it can also make the performance worse in other cases. Thus, \textbf{there is no generally consistent conclusion that data augmentation is always preferable (or undesirable)}. We further discuss this issue in Appendix~\ref{appendix_subsec:effects_of_data_augmentation}. For now, in this subsection, we summarize the performance of the evaluated defenses.
For conciseness, when we summarize our evaluation results for \underline{baseline defenses}, we always pick the best result out of the two cases~(w/ and w/o data augmentation). These numbers reflect upper bounds for the baseline defenses --- in practice, the defender may not know whether data augmentation would be better or not for a specific poisoned dataset, and thus can only choose one of the two options. On the other hand, since our confusion training does not involve data augmentation, the reported numbers for our defense come from a single consistent configuration. Still, as we will see, although we report the best possible results for the baselines, our defense achieves consistently superior performance. We present the summarized results in Table~\ref{tab:cifar_confusion_training_avg} and Table~\ref{tab:gtsrb_confusion_training_avg} for CIFAR10 and GTSRB respectively, and we defer the full results to Appendix~\ref{appendix:full_exp_results}. We consider a defense successful against an attack if the ASR is reduced below $20\%$; otherwise, we say the defense is unsuccessful. In the tables, for each (attack, defense) pair, we highlight the attack success rate~(ASR) with \green{green} color for \green{successful} cases and \red{red} color for \red{unsuccessful} cases. \paragraph{Confusion training is robust across different poison strategies.} As shown, consistent with the intuitions that we illustrate in Figure~\ref{fig:vis_latent_space_compare}, on both datasets, all baseline defenses (built on the passive defense paradigm) are \textit{susceptible to} at least one of the \textit{adaptive backdoor poisoning attacks}~\cite{qi2022circumventing}~(Adap-Blend, Adap-K), and their elimination rates catastrophically degrade. Moreover, even for non-adaptive attacks, the performance of these baseline defenses still varies a lot across different poison strategies. As we have discussed in prior sections, these are exactly the limitations of passive defense. \underline{In contrast}, \textit{on both datasets, our confusion training pipeline (built on the active defense paradigm) can always eliminate almost all poison samples against all poison strategies~(including adaptive ones) we consider}, and models trained on the cleansed dataset only have negligible ASR. \paragraph{Confusion training is robust across different datasets.} Another interesting observation is that, even for the same (attack, defense) pair, the elimination rate of baseline defenses can vary across datasets. For example, on CIFAR10, activation clustering can eliminate $96.4\%$ of the poison samples on average against SIG; however, the number drops to $28.6\%$ on GTSRB. A more extreme example is SPECTRE. On CIFAR10, it exhibits state-of-the-art results among baselines and can even defeat one of the adaptive attacks~(Adap-Blend). However, we find that it completely fails in most cases on GTSRB, because the dataset is highly unbalanced and consequently it often cannot correctly identify the target class. For this reason, we omit it~(see Appendix~\ref{appendix:full_exp_results} for the failure results) in Table~\ref{tab:gtsrb_confusion_training_avg}. In comparison, as shown, our confusion training pipeline consistently performs well on both datasets. \paragraph{Confusion training has a low sacrifice rate.} Moreover, we highlight that our confusion training pipeline has a low sacrifice rate~(false positive rate). In all evaluated cases, the sacrifice rates are consistently kept at a low level~(below $1\%$ in most cases).
Note that the advantage of a low sacrifice rate is not directly reflected in the clean accuracy, because both GTSRB and CIFAR10 have large sample sizes, and even if more clean data are sacrificed, the drop in clean accuracy is only a few percent. However, we point out that the low sacrifice rate has its own merit. For example, one can use all the detected suspicious samples to perform unlearning~(e.g. \cite{li2021anti,wang2019neural}). If the sacrifice~(false positive) rate is low, then the backdoor correlation can be unlearned without hurting clean accuracy too much. This can be a useful follow-up remedy when the elimination rate is low. \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \textcolor{blue}{[YES]}~See Section~\ref{sec:methodology}, Section~\ref{subsec:exp_performance} and Table~\ref{tab:cifar_confusion_training_avg},\ref{tab:gtsrb_confusion_training_avg}. \item Did you describe the limitations of your work? \textcolor{blue}{[YES]}~See Section~\ref{sec:discussion}. \item Did you discuss any potential negative societal impacts of your work? \textcolor{blue}{[YES]}~See Section~\ref{sec:discussion}. \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \textcolor{blue}{[YES]} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerNA~This is an empirical work. \item Did you include complete proofs of all theoretical results? \answerNA~This is an empirical work. \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \textcolor{blue}{[YES]}~We provide our code in the supplementary material. \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \textcolor{blue}{[YES]}~See Appendix~\ref{appendix:detailed_configurations}. \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \textcolor{blue}{[NO]}~\textbf{But all experiments are repeated three times with different random seeds, and we report the average results.} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \textcolor{blue}{[YES]}~See Appendix~\ref{appendix:detailed_configurations}. \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \textcolor{blue}{[YES]} \item Did you mention the license of the assets? \textcolor{blue}{[YES]}~See Appendix~\ref{appendix:detailed_configurations}. \item Did you include any new assets either in the supplemental material or as a URL? \answerNA \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA \end{enumerate} \end{enumerate} \section{Preliminaries} In this section, we first introduce our threat model~(Section~\ref{subsec:threat_model}) and the basic setting of our defenders~(Section~\ref{subsec:goal_and_capability_of_our_defense}). Then, in Section~\ref{subsec:separability_and_adaptive_attack}, we introduce the latent separability characteristic that is widely used by existing backdoor samples detectors, and discuss the problems faced by these prior works. \subsection{Threat Model} \label{subsec:threat_model} Consistent with \citet{tran2018spectral} and \citet{hayase21a}, we consider the standard threat model of backdoor poisoning attacks. Specifically, we assume the adversary can manipulate a limited portion of the training data, and we also allow the adversary to have knowledge of the victim's network architecture and training algorithm. Besides, the adversary controls neither the training process nor the environment where models are deployed. Thus, the victim can freely train their models on the training data, which, however, might be contaminated by the adversary. Moreover, the assumed adversary manipulates the training data with the goal of conducting a backdoor attack, and two constraints should be satisfied. First, with a vanilla training algorithm, models trained on the poisoned dataset should have accuracy on the standard data distribution comparable to that of models trained on the unpoisoned dataset. Second, models normally trained on the poisoned dataset should be backdoored --- when clean data points from non-target classes are corrupted by the attacker-specified trigger, the affected models should misclassify a nontrivial portion of these corrupted points to the target class. \subsection{Goals and Capability of Our Defense} \label{subsec:goal_and_capability_of_our_defense} In this work, our defender considers the task of poison samples detection, which aims to eliminate backdoor poison samples from poisoned datasets and prevent backdoor attacks in the first place. On the other hand, the defender also aims to keep the false positive rate of the defense at a low level --- that is, only a small portion of clean samples are sacrificed when eliminating potential poison samples. Besides, the defender requires the performance of defensive techniques to be stable across different datasets, poison strategies and other irrelevant random factors. Our defender has full control over the training process. Given a potentially poisoned dataset, the defender is allowed to freely use it to train her own models and inspect both the trained models and the dataset. Similar to \citet{tang2021demon}, we also assume the defender has access to a small reserved set of clean data~(not necessarily labeled). One can expect that this small clean set is collected with strict and expensive supervision, in contrast to the given training set, which is much larger and collected in a much cheaper way. We argue that this is a reasonable assumption, and it is also prevalent in many prior works that study backdoor defense~(e.g. \cite{liu2017trojaning,liu2018fine,liu2021removing,truong2020systematic,zhao2020bridging,li2021neural}).
\subsection{Latent Separability and Adaptive Backdoor Poisoning Attacks} \label{subsec:separability_and_adaptive_attack} \input{sections/assets/visualization_latent_compare_seed_666} One principled idea for detecting backdoor poison samples is to utilize the backdoored models' distinguishable behaviors on poison and clean populations to distinguish between these two populations themselves. \textbf{Arguably, the most popular and successful characteristic is the latent separability phenomenon} first observed by \citet{tran2018spectral}. The basic observation is that backdoored models trained on poisoned datasets \underline{tend} to learn separable latent representations for poison and clean samples --- thus poison samples and clean samples can be separated via a cluster analysis in the latent representation space of the backdoored models. Most state-of-the-art backdoor samples detectors~\cite{tran2018spectral,chen2018activationclustering,tang2021demon,hayase21a} are also explicitly built upon this characteristic. However, one aspect that has not been seriously considered by previous work is that, \textbf{across different settings, the extent of the latent separation may not always be as significant as defenders ideally expect}, and consequently the cluster analysis in the latent representation space may fail to identify the poison populations. For example, the latent separation characteristic might just be intrinsically weaker for some poison strategies than for others. In the most extreme cases, motivated adversaries may even develop \textbf{adaptive backdoor poisoning attacks} that intentionally suppress the latent separation characteristic. This is not just a conceptual concern. Very recently, \citet{qi2022circumventing} indeed came up with such adaptive backdoor poisoning attacks and successfully circumvented state-of-the-art backdoor samples detectors built on the latent separability. Besides, \textbf{other factors} like the training configurations~(e.g. whether to apply data augmentation) of the base backdoored models may also impact the separation characteristics. To illustrate this problem, we pick a diverse set of poison strategies~\cite{gu2017badnets,Chen2017TargetedBA,nguyen2020input,barni2019new,turner2019label,tang2021demon,qi2022circumventing}~(including adaptive ones), and in Figure~\ref{fig:vis_latent_space_compare} we visualize~(see Appendix~\ref{appendix:visualization_of_latent_space} for detailed configurations) the latent representation space of their corresponding backdoored models. As shown, although the latent separation is notable in many cases, the characteristic can also be hard to identify when adaptive poison strategies~(Figure~\ref{fig:vis_adaptive_blend},\ref{fig:vis_adaptive_k}) are used. Besides, in many cases~(Figure~\ref{fig:vis_SIG},\ref{fig:vis_TaCT},\ref{fig:vis_adaptive_blend}), models trained with~(right-hand plots) and without~(left-hand plots) data augmentation also exhibit very different extents of separation. We refer interested readers to Appendix~\ref{appendix:latent_separability_and_adaptive_backdoor_poisoning_attack} for a more thorough discussion.
\section{Introduction} Neural networks (NNs) have become a default solution for many problems because of their high performance. However, wider adoption of NNs requires not only high accuracy but also high computational efficiency. Researchers either compress, search, or jointly search and compress architectures, aiming for more computationally efficient solutions~\cite{review}. This problem is equally relevant for super-resolution (SR) architectures, which are often deployed on mobile devices. In this paper, we choose to combine neural architecture search (NAS) and quantization to search for efficient, quantization-friendly SR models. The NAS problem is hard because we must either define a differentiable NAS procedure or use discrete optimization in a high-dimensional space of architectures. The problem is even more challenging for quantization-aware NAS because quantization is a non-differentiable operation. Therefore, optimization of quantized models is more difficult than that of full-precision models. An additional technical challenge arises for SR, as Batch Norm (BN) in SR models damages final performance \cite{AdaDM}, but training models without BN is much slower. Finally, we should define an appropriate search space given numerous recent advances in quantized SR architectures and take into account that the size of the discrete search space grows exponentially with the introduction of new parts of the architecture, making the optimization problem harder. \paragraph{\textbf{Our contributions:}} \begin{itemize} \item We propose the first end-to-end approach to NAS for mixed-precision quantization of SR architectures. \item To search for robust, quantization-friendly architectures, we approximate the model degradation caused by quantization with Quantization Noise (QN) instead of directly quantizing model weights during the search phase. For sampling QN we follow the procedure proposed in \cite{DiffQ}. Such a reparametrization provides the differentiability crucial for differentiable NAS and is up to 30\% faster than quantizing weights directly. \item We design a specific search space for SR models. The proposed search space is simpler than the current SOTA, TrilevelNAS~\cite{Trilevel}, and leads to an approximately 5 times faster search procedure. We show that the search space design is equally important as the search method and argue that the community should pay more attention to search space design. \item Quantization-aware NAS with Search Against Noise (SAN) yields results with a better trade-off between quality, measured in PSNR, and efficiency, measured in BitOps, compared to uniform and mixed-precision quantization of fixed architectures. Thus, joint quantization-aware NAS is a better choice than separate quantization and NAS. \end{itemize} \section{Related works} \begin{figure}[ht] \begin{center} {\includegraphics[scale=0.3]{images/single_path.png}} \end{center} \caption{The search space for NAS is equivalent to an over-parametrized supernet represented as a graph. In this graph, multiple possible operations connect nodes that are outputs of each layer. $\alpha$ values represent the importance of edges. Joint training of the parameters of operations and their importances allows differentiable NAS. The final architecture is the result of selecting the edge with the highest importance for each consecutive pair of nodes.
Selected edges are shown with solid lines in the figure.} \label{fig:dag_supernet} \end{figure} \begin{figure}[h] \begin{center} {\includegraphics[scale=0.3]{images/Qnoise.png}} \end{center} \caption{The SAN approach. $QNoise(b)$ is a function that generates quantization noise; it does not depend on the weights and does not require gradient approximation. For quantization-aware search, each blue operation in Figure \ref{fig:dag_supernet} becomes a SAN operation with noisy weights. $WR$ denotes real-valued weights, $WQ$ denotes pseudo-quantized weights, and $\boldsymbol{\alpha}$ is a vector of trainable parameters. By adjusting $\boldsymbol{\alpha}$ we can search for an acceptable model degradation caused by the quantization procedure.} \label{fig:dag_san} \end{figure} \paragraph{Differentiable NAS (DNAS)} \cite{FBNet, Darts, AGD, Trilevel} is a differentiable method of selecting a directed acyclic sub-graph (DAG) from an over-parameterized supernet. An example of such a selection is shown in Figure \ref{fig:dag_supernet}. Each node represents a feature map, i.e. the inputs and outputs of intermediate layers. Edges are operations between those nodes. During the search procedure, we aim to assign importance weights to each edge and consequently select a sub-graph using the edges with the highest importance weights. The weight assignment can be done in several ways. The main idea of DNAS is to update the importance weights $\boldsymbol{\alpha}$ with respect to a loss function parameterized by the supernet weights $W$. Consequently, hardware constraints are easy to introduce as an extension of the initial loss function. DNAS has been proven efficient for searching for computationally optimized models. FBNet \cite{FBNet} focuses on optimizing FLOPs and latency; its authors mainly consider classification problems. AGD \cite{AGD} and TrilevelNAS~\cite{Trilevel} further extend resource-constrained NAS to the super-resolution (SR) problem for full-precision models. \paragraph{Quantization aware DNAS.} DNAS can be employed to search for architectures with desired properties. In OQAT \cite{OQAT}, the authors perform quantization-aware NAS with uniform quantization. They show that architectures found with a quantization-aware search perform better when quantized compared to architectures found without accounting for quantization. However, uniform quantization is less flexible and leads to suboptimal solutions compared to mixed-precision quantization (MPQ), where each operation and activation has its own quantization bit width. This idea was explored in EdMIPS \cite{EdMIPS}. The MPQ used in EdMIPS is a NAS procedure where all operations are fixed, and we search over different quantization levels. On the one hand, MPQ-NAS is a natural extension of EdMIPS or OQAT; on the other hand, the joint optimization of quantization bits and operations has a high computational cost. Additionally, the instability of optimization with quantized weights was highlighted in DiffQ \cite{DiffQ}, whose authors consider QN to perform MPQ on existing architectures. The two problems above make the joint optimization of an over-parameterized supernet with mixed-precision bits a challenging task. We propose a procedure that is simultaneously effective and relatively robust. Due to the usage of a supernet, we turn our problem into a continuous one, and by the use of quantization noise, we can make the solution of this problem fast and stable. \paragraph{Search space design} is crucial for achieving good results. The search space should be both flexible and contain known best-performing solutions.
Even a random search can be a reasonable method with a good search space design. AGD~\cite{AGD} applies NAS to the SR problem. The authors search for (1) a cell, a block which is repeated several times, and (2) the kernel size along with other hyperparameters like the number of input and output channels. TrilevelNAS~\cite{Trilevel} extends the previous work by adding (3) a network level that optimizes the position of the network's upsampling layer. Both articles expand the search space, making it more flexible but trickier to search in, possibly leading to local, sub-optimal solutions. In our work, we choose to focus on a simpler search space consisting of: (1) a cell block, where the search is performed over operations within the block, and (2) quantization levels that differ for each operation (for quantization-aware search). We show that this design leads to architectures with performance similar to TrilevelNAS~\cite{Trilevel}, while the search time is much shorter. \paragraph{\textbf{Sparsification for differentiable architecture search}} was discussed in several works \cite{BATS,entropy,GUMBEL,sparse,Trilevel,ISTA}. The problem arises because operations in a supernet co-adapt, so a subgraph selected from the supernet depends on all the operations left in the supernet. We can make the problem better suited for NAS if we enforce sparsification of the graph, with most of the importances for a particular connection between nodes being close to zero. The sparsification strategy depends on the graph structure of the final model. In our work, we use the Single-Path strategy (one possible edge between two nodes); see Appendix~\ref{sec:single_path} for details. For the Single-Path strategy, the sum of node outputs is a weighted sum of features, as can be seen in Figure \ref{fig:dag_supernet}. The co-adaptation problem becomes obvious: second-layer convolutions are trained on a weighted sum of features, but after discretization (selecting a subgraph), only one source of features remains. Therefore, sparsification of the vector $\boldsymbol{\alpha}$ is necessary. In BATS \cite{BATS}, sparsification is achieved via a scheduled temperature for softmax. Entropy regularization was proposed in Discretization-Aware search \cite{entropy}. In \cite{GUMBEL}, the authors propose an ensemble of Gumbels to sample sparse architectures for the Mixed-Path strategy, and in \cite{sparse}, Sparse Group Lasso (SGL) regularization is used. In ISTA-NAS \cite{ISTA}, the authors tackle sparsification as a sparse coding problem. TrilevelNAS \cite{Trilevel} proposed sorted Sparsestmax. In our work, we use entropy regularization \cite{entropy} to disentangle the final model from the supernet. \section{Methodology} We follow the basic definitions provided in the previous section with the description of our approach. The description has three parts. We start with (i) subsection~\ref{sec:design}, which describes the space of considered architectures. The procedure for searching for an architecture in this space is given in (ii) subsection~\ref{sec:train_search}; it includes the description of the loss function used. Finally, in (iii) subsection~\ref{sec:quant} we provide details on the reparametrization with quantization noise. \subsection{Search space design} \label{sec:design} \begin{figure*}[t] \center{\includegraphics[scale=0.35]{images/SearchDesign.png}} \caption{The search space design. We separate the whole architecture into $4$ parts: head, body, upsample, and tail. The head and the tail have $N = 2$ convolutional layers.
The identical body part is repeated 3 times, unless specified otherwise. The number of channels for all the blocks equals $36$, except for the head's first layer, the upsample block, and the tail's first layers. All the blocks with skip connections incorporate AdaDM with BN.} \label{fig:search_design} \end{figure*} We design our search space taking into account recent results in this area and, in particular, the SR quantization challenge~\cite{SRContest}. The challenge consisted in manually designing quantization-friendly architectures. We combine most of these ideas in the search design depicted in Figure~\ref{fig:search_design}. The deterministic part of our search space includes the upsampling layer in the tail block of the architecture, the number of channels in convolutions, and the SR-specific AdaDM~\cite{AdaDM} block. The AdaDM block is used only in the quantization-aware search. The variable part consists of the quantization bit values and the operations within the head, body, upsample, and tail blocks. We perform all experiments with 3 body blocks, unless specified otherwise. An additional parallel convolutional layer in the body block is used to increase the representation power of quantized activations. \paragraph{Batch Norm for SR and modification of AdaDM.} Variation in a signal is crucial for identifying small details. In particular, the residual feature's standard deviation shrinks after the layers' normalisation, and SR models with BN perform worse~\cite{AdaDM}. On the other hand, training an overparameterized supernet without BN can be challenging. The authors of~\cite{AdaDM} proposed to rescale the signal after BN based on its variation before the BN layers. Empirically, we found that AdaDM with the second BN removed improves the overall performance of quantized models. The original AdaDM block and our modification are depicted in Figure~\ref{fig:ADM}. All the residual blocks in our search design have the modified AdaDM part: the body block, all the repeated layers within the body block, and the tail block. \begin{figure}[t] \center{\includegraphics[scale=0.16]{images/ADM.png}}\caption{Our modification of AdaDM~\cite{AdaDM}. \emph{Some Block} represents any residual block with several layers within, $\sigma(X_{in})$ is the variance of the input signal, $\gamma$ and $\beta$ are learnable scalars. We remove the second BN after $X_{out}$.} \label{fig:ADM} \end{figure} \subsection{The search procedure} \label{sec:train_search} We consider the selection of blocks and quantization bits during NAS. We assign separate $\alpha$ values, optimized during NAS, to the elements of the Cartesian product of possible operations and numbers of quantization bits. Search and training procedures are performed as two independent steps. For the search, we alternately update the supernet's weights $W$ and the edge importances $\boldsymbol{\alpha}$. Two different subsets of the training data are used to calculate the loss function and derivatives for updating $W$ and $\boldsymbol{\alpha}$, similarly to~\cite{AGD}. Hardware constraints and entropy regularization are applied as additional terms in the loss function for updating $\boldsymbol{\alpha}$. To calculate the output of the $l$-th layer, $x_{l + 1}$, we weight the outputs of separate edges according to the importance values $\alpha_{ibl}$ of each of the $|O^l|$ operations and bit-widths: \begin{equation} \label{eq:super_net} x_{l+1} = \sum_{i=1}^{|O^l|} \sum_{b=1}^{|B|} \alpha_{ibl}o^l_{i}(x_l), \end{equation} where \(\sum_{i = 1}^{|O^l|} \sum_{b=1}^{|B|} \alpha_{ibl} = 1 \) and all \(\alpha_{ibl} \geq 0\). 
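For concreteness, the following PyTorch-style sketch (an illustration under our own simplifying assumptions, not the released implementation) shows a mixed layer computing the weighted sum of Eq.~\eqref{eq:super_net}; the candidate operations are plain convolutions with placeholder kernel sizes, the importances are normalized with a softmax so that the constraints above hold, and the bit-width enters only through the importance weights, since quantization noise is introduced later in subsection~\ref{sec:quant}.
\begin{verbatim}
# Minimal sketch of a mixed supernet layer implementing Eq. (eq:super_net).
# Candidate operations and bit-widths are placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedLayer(nn.Module):
    def __init__(self, channels=36, kernel_sizes=(1, 3, 5), bits=(4, 8)):
        super().__init__()
        # One candidate operation per kernel size; weights are shared across bits.
        self.ops = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.bits = bits
        # One trainable importance per (operation, bit-width) pair.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops), len(bits)))

    def forward(self, x):
        # Normalize so that the importances are non-negative and sum to one.
        a = F.softmax(self.alpha.flatten(), dim=0).view_as(self.alpha)
        out = 0.0
        for i, op in enumerate(self.ops):
            y = op(x)  # in Eq. (eq:super_net) the operation does not depend on b
            for b_idx in range(len(self.bits)):
                out = out + a[i, b_idx] * y
        return out
\end{verbatim}
In the quantization-aware variants of subsection~\ref{sec:quant}, the inner sum over bit-widths additionally perturbs the activations and weights with bit-dependent quantization noise.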
Then, the final architecture is derived by choosing, for each layer, the single operation with the maximal $\alpha$ value among the ones for this layer. Finally, we train the obtained architecture from scratch. \label{sec:alpha_loss} To optimize $\boldsymbol{\alpha}$, we compute the following loss: $$L(\boldsymbol{\alpha}) = L_1(\boldsymbol{\alpha}) + \eta L_{cq}(\boldsymbol{\alpha}) + \mu(t) L_{e}(\boldsymbol{\alpha}),$$ where $\eta$ and $\mu(t)$ are regularization coefficients, and $\mu(t)$ depends on the iteration $t$. $L_1(\boldsymbol{\alpha})$ is the $l_1$-distance between the high-resolution and restored images, $L_{cq}(\boldsymbol{\alpha})$ is the hardware constraint, and $L_{e}(\boldsymbol{\alpha})$ is the entropy loss that enforces sparsity of the vector $\boldsymbol{\alpha}$. The last two losses are defined in the two subsections below. \subsubsection{Hardware constraint regularization} The hardware constraint is proportional to the number of floating-point operations (FLOPs) for full-precision models and to the number of quantized operations (BitOps) for mixed low-precision models. $F_{fp}(o, x)$ is the function computing the FLOPs value based on the input image size $x$ and the properties of a convolutional layer $o$: kernel size, number of channels, stride, and number of groups. We use the same number of bits for weights and activations in our setup. Therefore, BitOps can be computed as $F_{q}(o, x) = b^2 F_{fp}(o, x)$, where $b$ is the number of bits. Then, the corresponding hardware part of the loss $L_{cq}$ is: \begin{multline} \label{eq:bitops} L_{cq}(\boldsymbol{\alpha}) = \sum_{j=1}^{|S|} \sum_{i=1}^{|O^j|} \sum_{b=1}^{|B|} \alpha_{jib} b^2 F_{fp}(o^j_{i}, x_j), \\ \quad \textrm{where} \quad \sum_{i=1}^{|O^j|} \sum_{b=1}^{|B|} \alpha_{jib}=1 \quad \textrm{and} \quad \forall \quad \alpha_{jib} \geq 0, \end{multline} where $S$ is a supernet block or layer consisting of several operations; the layer-wise structure is presented in Figure~\ref{fig:dag_supernet}. We normalize the $L_{cq}(\boldsymbol{\alpha})$ value by the value of this loss at initialization, with a uniform assignment of $\boldsymbol{\alpha}$, since the scale of the unnormalized hardware constraint reaches $10^{12}$. \subsubsection{Sparsity regularization} After the architecture search, the model keeps only one edge between two nodes. Let us denote by $\boldsymbol{\alpha}_l$ all the alphas that correspond to edges connecting a particular pair of nodes. They include different operations and different bits. At the end of the search, we want $\boldsymbol{\alpha}_l$ to be close to a one-hot vector, with one component close to $1$ and all the remaining components close to $0$. We found that the entropy loss works best with our settings. The sparsification loss $L_{e}(\boldsymbol{\alpha})$ for the $\boldsymbol{\alpha}$ update step has the following form: \begin{equation} L_{e}(\boldsymbol{\alpha}) = \sum_{l=1}^{|S|} H(\boldsymbol{\alpha}_{l}), \end{equation} where $H$ is the entropy function. The coefficient $\mu(t)$ in front of this loss depends on the training epoch $t$. The detailed procedure for regularization scheduling is given in Appendix~\ref{sec:schedule}. \subsection{Quantization} \label{sec:quant} Our aim is to find quantization-friendly architectures that perform well after quantization. (i) To obtain a trained and quantized model, we perform Quantization-Aware Training (QAT) \cite{QAT}: during training, (a)~we approximate quantization with QN; (b)~compute gradients for the quantized weights; and (c)~update the full-precision weights. (ii) Then, a model with the found architecture is trained from scratch. 
Below we provide details for these steps. \subsubsection{Quantization Aware Training} \label{sec:final_quantization} Let us consider the following one-layer neural network (NN), \begin{equation} \label{eq:nn_simple} y = f(a(x)) = W a(x), \end{equation} where $a$ is a non-linear activation function and $f$ is a function parametrized by a tensor $W$. In~\eqref{eq:nn_simple} $f$ is a linear function, but it can also be a convolutional operation. To decrease the computational complexity of the network, we replace expensive floating-point operations with low bit-width ones. Quantization occurs for both the weights $W$ and the activations $a$. The quantized output has the following form: \begin{equation} \label{eq:quant_simple} y_q = f_q(a_q(x)) = o(G(a(x), b), Q(W, b)). \end{equation} $Q(W, b)$ is a quantization function for the weights; we use Learned Step Size Quantization (LSQ)~\cite{LSQ} with a learnable step value. To quantize the activations we use $G(\cdot, b)$, a half-wave Gaussian quantization function~\cite{HWGQ}. Here, $b$ is the quantization level and $o$ is a convolutional layer. \subsubsection{Quantization Aware Search with Shared Weights (SW)} To account for further quantization during the search phase, we perform Quantization-Aware Search, similar to QAT \cite{QAT}. One way to do so is to quantize model weights and activations during the search phase in the same way as during training. To improve computational efficiency, we can quantize the weights of identical operations with different quantization bit-widths instead of using different weights for each bit-width; this idea was studied in \cite{EdMIPS}. Here, $x_{j+1}$ is the output of the $j$-th layer with input $x_j$ and parameters $W_j$. \begin{equation} \label{eq:super_net_quant_shared} x_{j+1} = \sum_{i=1}^{|O^j|} o_{ij} \left(\sum_{b \in B} \alpha_{jib} G(a(x_j), b), \sum_{b \in B} \alpha_{jib} Q(W_{ji}, b) \right). \end{equation} \[ \sum_{i=1}^{|O^j|}\sum_{b \in B}\alpha_{jib}=1, \quad \forall i, j, b \,\, \alpha_{jib} \geq 0. \] The effectiveness of SW can be seen from~\eqref{eq:super_net_quant_shared}: it requires fewer convolutional operations and less memory to store the weights. \subsubsection{Quantization Aware Search Against Noise} To further improve the computational efficiency and performance of the search phase, we introduce SAN. Model degradation caused by weight quantization is equivalent to adding the quantization noise $QNoise_b(W) = Q(W, b) - W$. Then, the quantized weights are $Q(W, b) = W + QNoise_b(W)$, and \eqref{eq:super_net_quant_shared} becomes: \begin{equation} \label{eq:super_net_quant_noise} \begin{split} x_{j+1} = \sum_{i=1}^{|O^j|} o_{ij}(&\sum_{b \in B} \alpha_{jib} QNoise_{b}(a(x_j)) + a(x_j), \\ &\sum_{b \in B} \alpha_{jib} QNoise_{b}(W_{ji}) + W_{ji}). \end{split} \end{equation} This procedure (i) does not require weight quantization and (ii) is differentiable, unlike quantization. $QNoise_b$ is a function of $W$ because it depends on the shape and magnitude of its values. Given the quantization noise, we can run forward and backward passes for our network more efficiently, similarly to the reparametrization trick. Adding quantization noise is similar to adding independent uniform variables from $[- \Delta/2, \Delta/2]$ with $\Delta = \frac{1}{2^b-1}$~\cite{PQN}. However, for the noise sampling, we use the following procedure~\cite{DiffQ}: \begin{equation} \label{eq:noise} QNoise(b) = \frac{\Delta}{2} z, \quad z \sim \mathcal{N}(0,1), \end{equation} as it performs slightly better than the uniform distribution~\cite{DiffQ}. 
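As a minimal sketch of the SAN reparametrization (again an illustration under simplifying assumptions rather than the actual implementation), the noisy weights of Eq.~\eqref{eq:super_net_quant_noise} with the Gaussian noise of Eq.~\eqref{eq:noise} can be formed as follows; for brevity, the noise scale here depends only on the bit-width and ignores the magnitude of the weights.
\begin{verbatim}
# Sketch of the SAN noisy weights, Eqs. (eq:super_net_quant_noise), (eq:noise).
# The noise scale ignores the weight magnitude for brevity (a simplification).
import torch

def qnoise(x, bits):
    # Gaussian pseudo-quantization noise with scale Delta/2, Delta = 1/(2^b - 1).
    delta = 1.0 / (2 ** bits - 1)
    return (delta / 2.0) * torch.randn_like(x)

def san_weights(w, alphas, bit_choices):
    # W + sum_b alpha_b * QNoise_b(W): no quantization of W is required, and
    # the expression is differentiable with respect to the importances alphas.
    noisy = w
    for a, b in zip(alphas, bit_choices):
        noisy = noisy + a * qnoise(w, b)
    return noisy

# Usage sketch with placeholder importances for 4- and 8-bit candidates.
w = torch.randn(36, 36, 3, 3, requires_grad=True)
alphas = torch.softmax(torch.zeros(2, requires_grad=True), dim=0)
w_noisy = san_weights(w, alphas, bit_choices=(4, 8))
\end{verbatim}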
\begin{figure}[ht] {\includegraphics[scale=0.40]{images/pareto_EdMIPS.pdf}} \caption{Our quantization-aware NAS approach and fixed quantized architectures. Ours - QuantNAS with AdaDM and SAN, with the following hardware penalties: 0, 1e-4, 1e-3, 5e-5, and different numbers of body blocks. Mixed-precision quantization by EdMIPS \cite{EdMIPS} for SRResNet \cite{SRRESNET} and ESPCN \cite{ESPCN}, with the following hardware penalties: 0, 1e-3, 1e-2, 1e-1. PSNR was computed on Set14 and BitOPs for image size 32x32. Our search procedure found a significantly more efficient architecture compared to the manually designed and then quantized ESPCN within the same BitOPs range.} \label{fig:pareto_baseline} \end{figure} \begin{figure}[ht] {\includegraphics[scale=0.40]{images/pareto_main.pdf}}% \caption{QuantNAS, models found with different hardware penalty values: 0, 1e-4, 1e-3, 5e-5. QuantNAS with AdaDM in blue, SAN with AdaDM in green, SAN without AdaDM and without BN in purple, and the shared-weights procedure without AdaDM and without BN in brown. Results are presented for Set14.} \label{fig:ablation} \end{figure} \section{Results} We provide the code for our experiments \href{https://anonymous.4open.science/r/QuanToaster/README.md}{here}. \subsection{Evaluation protocol} \label{sec:techdetails} For all experiments, we consider the following setup if not stated otherwise. The number of body blocks is set to 3. As the training dataset, we use the DIV2K dataset~\cite{DIV2K}. As the test datasets, we use Set14~\cite{Set14}, Set5~\cite{set5}, Urban100~\cite{Urban100}, and Manga109~\cite{fujimoto2016manga109}, with scale factor 4. In the main body of the paper, we present results on Set14. The results for the other datasets are presented in the Appendix. For training, we use RGB images. For PSNR score calculation, we use only the Y channel, similarly to \cite{AGD,Trilevel}. Evaluation of FLOPs and BitOPs is done for fixed image sizes $256 \times 256$ and $32 \times 32$, respectively. \paragraph{Search space.} For the full-precision search, we use ten different possible operations as candidates for a connection between two nodes. For the quantization-aware search, we limit the number of operations to $4$ to obtain a search space of a reasonable size. For quantization, we consider two options as possible quantization bits: 4 or 8 bits for activations and weights. A search space analysis and more technical details are provided in Appendix~\ref{sec:sp_analysis}. \subsection{Different number of body blocks} \label{sec:body_blocks} A straightforward way to improve model performance is to increase the number of layers. We study how our method scales by performing the search with different numbers of body blocks: 1, 3, and 6. The three resulting constellations are presented in Figure~\ref{fig:pareto_baseline}. We observe that increasing the number of blocks improves the final performance and increases the number of BitOPs for the architectures found with the highest hardware regularization - each constellation is slightly shifted to the right. \subsection{QuantNAS and quantization of fixed architectures} \label{sec:other methods} We compare QuantNAS with (1) uniform quantization and (2) mixed-precision quantization for two existing architectures, ESPCN~\cite{ESPCN} and SRResNet~\cite{SRRESNET}. \textbf{For uniform quantization}, we use LSQ~\cite{LSQ} and HWGQ~\cite{HWGQ}. \textbf{For mixed precision quantization}, we use EdMIPS~\cite{EdMIPS}. Our setup for EdMIPS matches the original one, and the search is performed over different quantization bits for weights and activations. 
In QuantNAS, by contrast, the quantization bits for activations and weights are the same. In Figure~\ref{fig:pareto_baseline}, we compare our procedure with two architectures quantized by EdMIPS. We can see that QuantNAS finds architectures with better PSNR/BitOps trade-offs within the range where the BitOps values overlap. The performance gain is especially notable between the quantized ESPCN and our approach. We note that, due to computational limits, our search is bounded in terms of the number of layers. Therefore, we cannot extend our results beyond SRResNet in terms of BitOps to provide a more detailed comparison. From Table~\ref{tab:quantized}, we can see that QuantNAS finds architectures with a better PSNR/BitOps trade-off than uniform quantization techniques. We compare within the same range of BitOPs values, 8 bits for ESPCN and 4 bits for SRResNet. \begin{table} \begin{tabular}{llll} \hline Model & GBitOPs & PSNR & Method \\ \hline SRResNet & 23.3 & 27.88 & LSQ(4-U) \\ SRResNet & 23.3 & 27.42 & HWGQ(4-U) \\ ESPCN & 2.3 & 27.26 & LSQ(8-U) \\ ESPCN & 2.3 & 27.00 & HWGQ(8-U) \\ SRResNet & 19.4 & 27.919 & EdMIPS(4,8) \\ $\mathbf{Our (body 3)}$ & 4.6 & 27.814 & QuantNAS(4,8) \\ $\mathbf{Our (body 6)}$ & 9.3 & 27.988 & QuantNAS(4,8) \\ \hline \end{tabular} \caption{Quantitative results for different quantization methods and models. For EdMIPS and QuantNAS, we present models found with hardware penalties of 1e-3 and 1e-4, respectively. "U" stands for uniform quantization - all bits are the same for all layers. BitOPs were computed for a 32x32 image size. Our (body 3) quantized architecture, with the body block repeated 3 times, is depicted in Figure~\ref{fig:arch_examples_q}.} \label{tab:quantized} \end{table} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{images/time_noise.pdf} \end{center} \caption{Time comparison of the quantization noise and weight-sharing strategies during the search phase of quantization-aware NAS. The Y-axis (on the left) shows the time spent on 60 training iterations (line plot). The secondary Y-axis (on the right) presents the time fraction with respect to the SW strategy (bar plot). SAN provides a speed-up.} \label{fig:timeeffectiveness} \end{figure} \subsection{Time efficiency of QuantNAS} We measure the average training time for the three considered quantization approaches: without sharing weights, with sharing (SW), and with quantization noise (ours, QuantNAS, or briefly SAN). We run the same experiment for different numbers of searched quantization bits. Figure~\ref{fig:timeeffectiveness} shows the advantage of our approach in training time. As the number of searched bits grows, so does the advantage. On average, we get up to a $30\%$ speedup. \subsection{Ablation studies} \subsubsection{Adaptive Deviation Modulation} \label{sec:adm} We start by comparing the effect of AdaDM \cite{AdaDM} and Batch Normalization on two architectures randomly sampled from our search space. In Table~\ref{tab:adadm}, we can see that both the original AdaDM and Batch Normalization hurt the final performance, while AdaDM with our modification improves the PSNR scores. In Figure~\ref{fig:ablation}, we observe that architectures found with AdaDM are better in terms of both PSNR and BitOPs. Interestingly, we did not notice any improvement with AdaDM for full-precision models. Our best full-precision model in Table~\ref{tab:fullprecision} was obtained without AdaDM. 
\begin{table} \begin{tabular}{lll} \hline Setting & Model M1 & Model M2 \\ \hline Without Batch Norm & 27.55 & 28 \\ With Batch Norm & 27 & 27.16 \\ Original AdaDM & 27.33 & 27.84 \\ Our AdaDM & $\mathbf{27.68}$ & $\mathbf{28.046}$ \\ \hline \end{tabular} \caption{PSNR of SR models with scaling factor 4 for the Set14 dataset. M1 and M2 are two mixed-precision models randomly sampled from our search space.} \label{tab:adadm} \end{table} \subsubsection{Entropy regularization} \label{sec:entropy} We consider three settings to compare QuantNAS with and without entropy regularization: (A) reduced search space, SGD optimizer; (B) full search space, Adam~\cite{Adam} optimizer; (C) reduced search space, Adam~\cite{Adam} optimizer. All the experiments were performed for the full-precision search. For the full and reduced search spaces, we refer the reader to Appendix \ref{sec: search_spaces}. We perform the search without a hardware penalty to analyze the effect of the entropy penalty. Quantitative results for entropy regularization are in Table \ref{tab:entropy}. Entropy regularization improves performance in terms of PSNR in all the experiments. Figure~\ref{fig:alphas} demonstrates the dynamics of the operation importances for joint NAS with quantization for 4 and 8 bits. 4-bit edges are depicted with dashed lines. Only two layers are depicted: the first layer of the head (HEAD) block and the skip (SKIP) layer of the body block. With entropy regularization, the most important block is evident from its $\alpha$ value. Without entropy regularization, we have no clear most important block. So, our search procedure has two properties: (a) the input to the following layer is mostly produced as the output of a single operation from the previous layer; (b) the architecture at the final search epochs is very close to the architecture obtained after selecting, in each layer, only the operation with the highest importance value. \begin{table} \begin{tabular}{lll} \hline Training settings & w/o Entropy & w/ Entropy \\ \hline A & 27.99 / 111 & $\mathbf{28.10}$ / 206 \\ B & 28.00 / 30 & $\mathbf{28.12}$ / 19 \\ C & 27.92 / 61 & $\mathbf{28.11}$ / 321\\ \hline \end{tabular} \caption{PSNR / GFLOPs values of the search procedure with and without entropy regularization. Models were searched in the different settings A, B, and C.} \label{tab:entropy} \end{table} \begin{figure*}[ht] \center{\includegraphics[scale=0.35]{images/alpha_noise_entropy.pdf}} \center{\includegraphics[scale=0.35]{images/alpha_noise_no_entropy.pdf}} \caption{Importance weights for different operations over epochs for the QuantNAS search, for 8 and 4 bits (solid and dashed lines, respectively). Top: supernet sparsification with entropy regularization (regularization value set to 1e-3) for two layers, HEAD-1 and a parallel conv layer in BODY (SKIP-1). Bottom: training the supernet without sparsification by entropy regularization.} \label{fig:alphas} \end{figure*} \begin{table}[b!] \begin{tabular}{llll} \hline Method & GFLOPs & PSNR & Search cost \\ \hline SRResNet & 166.0 & 28.49 & Manual \\ AGD$\phantom{}^{*}$ & 140.0 & 28.40 & 1.8 GPU days \\ Trilevel NAS$\phantom{}^{*}$ & 33.3 & 28.26 & 6 GPU days \\ Our FP best & $\mathbf{29.3}$ & 28.22 & 1.2 GPU days \\ \hline \end{tabular} \caption{Quantitative results of PSNR-oriented models with SR scaling factor 4 for the Set14 dataset. 
$*$ Results are taken from \cite{Trilevel}.} \label{tab:fullprecision} \end{table} \subsubsection{Comparison with existing full precision NAS for SR approaches} \label{sec:fpnas} Here we examine the quality of our procedure for full-precision NAS. We did not use the AdaDM block for the full-precision search. The results are in Table \ref{tab:fullprecision}. Our search procedure achieves results comparable with TrilevelNAS~\cite{Trilevel}, with a relatively simpler search design and a search cost about 5 times lower. The best-performing full-precision architecture was found with a hardware penalty of $1e-3$. This architecture is depicted in Appendix Figure~\ref{fig:arch_examples_fp}. Additionally, we compare results with a popular SR architecture, SRResNet \cite{SRRESNET}. Visual examples of the obtained super-resolution pictures are presented in Appendix Figure~\ref{fig:images} for Set14~\cite{Set14}, Set5~\cite{set5}, Urban100~\cite{Urban100}, and Manga109~\cite{fujimoto2016manga109} with scale factor 4. \section{Limitations} For our NAS procedure, the general NAS limitation applies: the computational demand for the joint optimization of many architectures is high. The search procedure takes about $24$ hours to finish on a single TITAN RTX GPU. Moreover, obtaining the full Pareto frontier requires running the same experiment multiple times. In Figure~\ref{fig:ablation}, all rightmost points (within one experiment/color) have a hardware penalty of $0$. This clearly shows that the limited search space creates an upper bound for the top model performance. Therefore, our results do not fall within the same BitOps range as SRResNet. We found that our search design is sensitive to hyperparameters. In particular, the optimal coefficients for the hardware penalty and the entropy regularization can vary across different search settings. Moreover, we expect that there is a connection between the optimal coefficients for the hardware penalty, the entropy regularization, and the search space size. Different strategies or search settings require different values of the hardware penalty. Applying the same set of values in different settings might not be the best option, but it is not straightforward to determine them beforehand. \section{Conclusion} To the best of our knowledge, we are the first to deeply explore NAS with mixed-precision search for Super-Resolution (SR) tasks. We proposed the QuantNAS method for obtaining computationally efficient and accurate architectures for SR using joint NAS and mixed-precision quantization. Our method performs better than others due to (1) a specifically tailored search space design; (2) the differentiable SAN procedure; (3) the adaptation of AdaDM; and (4) the entropy regularization that avoids co-adaptation in supernets during the differentiable search. Experiments on standard SR tasks demonstrate the high quality of our search. Our method leads to better solutions compared to mixed-precision quantization of popular SR architectures with \cite{EdMIPS}. Moreover, our search is up to $30\%$ faster than a shared-weights approach. \clearpage \bibliographystyle{ieee_fullname}
\section{Introduction}\label{sec1} Neutron stars are mysterious compact objects where strong gravitational and electromagnetic fields emerge. These objects usually manifest as pulsars, emitting energetic electromagnetic signals detected at very precise intervals \citep{Camenzind}. Even though they were discovered more than fifty years ago, many ingredients of these astrophysical systems are still poorly known: the composition of the compact object itself \citep{Weber2005}, the composition and structure of their magnetosphere \citep{Petri2016}, and the generation of their strong magnetic fields \citep{Duncan1992, Dieters1998}. Concerning the magnetosphere, three fundamental models can be considered: the simplest one consists of a naked star, without any kind of plasma in its neighborhood; the opposite one, in which the compact object is fully immersed in a plasma; and finally, an intermediate model that admits the existence of a magnetosphere partially filled with electrons and positrons \citep{Petri2016}. On the other hand, a magnetic field strength on the surface of neutron stars of the order of Schwinger’s critical field $B_c= 4.41 \times 10^{13}$~G (e.g. \citep{Ciolfi2014}), and beyond (e.g. \citep{Dieters1998}), requires studying the behavior of matter under extreme conditions. On the observational side, the information we can get from pulsars is based on the measurement of their pulse arrival times. During their propagation from the source to Earth, photons can experience a variety of time delays. In particular, the best-known time delay is the dispersion of photons when interacting with the electrons in the interstellar medium \citep{Pushkarev2010,Wang2004,Bosnjakl2012,Waxman1996}. This type of time delay mainly depends on the electron column density and the distance to the source, and causes pulses at lower frequencies to be delayed with respect to those emitted at higher frequencies (the delay is inversely proportional to the square of the photon frequency; see e.g.~\citep{2004hpa..book.....L}). Therefore, the knowledge of this type of time delay probes the properties of the interstellar medium rather than those of the source site. Here, we are interested in a photon time delay process occurring in the vicinity of the source and unrelated to the photon interaction with matter. The propagation of light in vacuum is modified by various external agents: electromagnetic fields, temperature, geometric boundary configurations, gravitational background, and non-trivial topologies. In particular, the problem of light propagation in the electron-positron vacuum in the presence of a magnetic field is similar to the dispersion of light in an anisotropic medium, where the external field axis sets the anisotropy direction. Therefore, the photon dispersion relation is corrected by adding the polarization tensor $\Pi(k_{\perp},k_{\parallel}, B,\omega)$. This takes into account the indirect interaction with the magnetic field through the virtual electron-positron pairs \citep{PerezRojasH.Shabad1978} and depends on the components of the wave vector, the photon frequency, and the external magnetic field. In this paper, we study the photon time delay that might occur in the pulsar magnetosphere. We use the simplest approximation to describe it, which consists in treating it as a magnetized vacuum. We start by solving the photon dispersion equation considering the radiative corrections given by the magnetized photon self-energy. Then, we compute the phase velocity and the photon time delay. 
The propagation of photons is considered perpendicular to the magnetic field ($k \perp B$) since, for parallel propagation, photons behave as in the absence of the magnetic field, namely with no deviation from the light-cone. From a physical point of view, we estimate the time delay of photons in the region of the pulsar magnetosphere, modeling it as an electromagnetic vacuum. From a mathematical point of view, our calculation is more robust (and elegant) than others \citep{PerezRojas2014}, since our analytic expressions for the solution of the dispersion equation are presented in terms of A-hypergeometric functions \citep{Sturmfels2000}. The paper is organized as follows. In section \ref{sec2}, we solve the dispersion equations considering the radiative corrections given by the photon self-energy in the presence of a magnetic field. We devote section \ref{sec3} to the discussion of the phase velocity and time delay of photons traveling in a magnetized vacuum. The dependence of the photon delay on the radial coordinate is studied assuming a dipole configuration for the magnetic field in the magnetosphere. Finally, we present in section~\ref{sec4} the conclusions of our work. \section{Propagation of photons in a magnetized vacuum} \label{sec2} In this section, we study the propagation of photons perpendicular to a constant and uniform external magnetic field in vacuum\footnote{We use natural units $\hbar=c=1$.}. It is well known that photons in vacuum obey the dispersion equation \begin{equation}\label{lightcone} k^2_{\perp}+k^2_{\parallel}-\omega^2=0, \end{equation} which implies that photons travel at the speed of light. The effect of the presence of the magnetic field on the dispersion relation, Eq.~(\ref{lightcone}), can be included through radiative corrections to the photon self-energy. The modified dispersion equations~\citep{Sha1984} are \begin{equation} k^2=\kappa^{(i)}(\omega,k_{\parallel},k_{\perp},b), \end{equation} where $b$ is the magnetic field normalized to the Schwinger field, $b=B/B_c$, and $\kappa^{(i)}$ are the eigenvalues of the photon self-energy given in the appendix. In what follows, we consider photon propagation perpendicular to the magnetic field ($\vec{k}\perp \vec{B}$). Three modes appear: one longitudinal mode, $i=1$, which is not physical, and two transverse ones, $i=2,3$. The thresholds of pair creation for the second and third modes are $\omega=2m_e$ and $\omega=m_e+\sqrt{m_e^2+2eB}$, respectively \citep{PerezRojas2014}. In our study, we consider only the second mode, which is more relevant in the region of transparency. The corrections to the dispersion relations become relevant close to the thresholds, and the second-mode threshold is independent of the magnetic field, being much lower than the threshold of the third mode for the considered values of the magnetic field. For a large range of frequencies, the solution of the dispersion equation corresponds to relatively small deviations from the light-cone, $k^2\ll e B$, except for values of $\omega^2-k^2_{\parallel}$ extremely close to the vacuum threshold for pair creation \citep{PerezRojas2014}. As shown in the appendix, in this case we can write the photon self-energy eigenvalues as power series in $k^2$: \begin{equation} \kappa_{i}=\sum_{l=0}^{\infty}\chi_{il}(k^2)^l. \end{equation} If we truncate the power series after the first four terms ($l=0,\dots,3$), we obtain a cubic equation in $k^2$. This equation has been solved in \citet{PerezRojas2014} using Cardano's formulas for third-degree polynomials. 
However, numerical calculations with quadratic and cubic roots are thorny, so in this work we solve it with the aid of hypergeometric functions \citep{Sturmfels2000}: \begin{equation} k^2=-\sum_{j_2,...,j_n=0}^{\infty}\dfrac{(-1)^{j_1}j_1!}{(j_0+1)!j_2!...j_n!}\dfrac{\chi_{i0}^{j_0+1} \chi_{i2}^{j_2} ...\chi_{in}^{j_n} }{(\chi_{i1}-1)^{j_1+1}}, \end{equation} where $ j_0=j_2+2j_3+...+(n-1)j_n $, $j_1=2j_2+3j_3+...+nj_n $, and integral expressions of $\chi_{i1}$ are written in the appendix. The solution of the dispersion equation is shown in Fig.~\ref{fig:lightcone} for selected values of the magnetic field strength. The figure shows that, when the magnetic field increases, the deviation from the light-cone becomes larger. Besides, for any value of the magnetic field, a threshold exists at $\omega=2m_{e}$, above which the photons have a high probability of decaying into electron-positron pairs \citep{PerezRojas2009}. \begin{figure}[t] \centering \includegraphics[width=.6\linewidth]{lightcone_mode2.eps} \caption{Dispersion relation for selected values of the magnetic field strength. The orange line corresponds to the propagation of light for $B=0$ (light-cone) and the gray line marks the first threshold of pair creation. We recall that $B_{c}=4.41 \times 10^{13}$~G and $m_{e}= 0.511$~MeV.} \label{fig:lightcone} \end{figure} For our purposes, we are only interested in the study of the region of transparency ($ 0<\omega<2m_e$), which is the region of momentum space where the photon self-energy and the photon frequency have real values. \section{Phase velocity and time delay in magnetized vacuum} \label{sec3} In this section, we calculate the phase velocity and the photon time delay taking advantage of the previous calculations. The photon phase velocity takes the form \begin{eqnarray} v_{ph}(\omega,B)&=&\dfrac{\omega}{k_{\perp}}\nonumber\\ &=&\left (1-\dfrac{1}{\omega^2}\sum_{j_2,j_3=0}^{\infty}\dfrac{(-1)^{j_1}j_1!}{(j_0+1)!j_2!j_3!}\dfrac{\chi_{i0}^{j_0+1} \chi_{i2}^{j_2} \chi_{i3}^{j_3}}{(\chi_{i1}-1)^{j_1+1}} \right )^{-1/2}. \end{eqnarray} Figure~\ref{figure2} shows the photon phase velocity as a function of the magnetic field for fixed values of the frequency. \begin{figure}[h!] \centering \includegraphics[width=.6\linewidth]{velocityphase_field_mode2.eps} \caption{Phase velocity as a function of the magnetic field for different frequencies.} \label{figure2} \end{figure} We can see that photons of higher energies have a lower phase velocity than the lower-energy ones; hence, the former suffer a longer time delay. Besides, we can see that, in the limit of low frequency (black solid line), the phase velocity decreases linearly with the external magnetic field strength. \subsection{Photon time delay in the magnetosphere} To calculate the photon time delay when crossing the magnetosphere (magnetized vacuum), for different energies, we consider a magnetic dipole configuration: \begin{equation}\label{dipolar} B(r)=B_{0} \left (\dfrac{r_{0}}{r}\right )^3, \end{equation} where $B_{0}$ and $r_{0}$ are, respectively, the surface magnetic field and the radius of the neutron star. We consider for $B_{0}$ values from $10^{12}$~G all the way up to $10^{15}$~G, covering the range of (theoretically) estimated fields from radio pulsars to soft gamma repeaters and anomalous X-ray pulsars (``magnetars'') \citep[e.g.][]{Duncan1992,Dieters1998}. 
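As a purely illustrative numerical sketch (not part of the original derivation, and assuming a stellar radius of $10$~km), the following Python snippet evaluates the dipole profile of Eq.~(\ref{dipolar}) in units of $B_c$ and the radius at which the field falls to the critical value; this suggests that most of the vacuum-induced delay is accumulated within a few stellar radii, consistent with the saturation of the delay with distance discussed below.
\begin{verbatim}
# Illustrative sketch of the dipole profile of Eq. (dipolar).
# Assumed values (not from the paper): stellar radius r0 = 10 km.
B_C = 4.41e13   # Schwinger critical field [G]
R0 = 1.0e6      # assumed neutron star radius [cm]

def dipole_field(r, b0):
    """B(r) = B0 * (r0 / r)**3."""
    return b0 * (R0 / r) ** 3

for b0 in (1e12, 1e13, 1e14, 1e15):
    # Radius at which B(r) drops to B_c; values below 1 mean the field
    # is already sub-critical at the stellar surface.
    r_crit = R0 * (b0 / B_C) ** (1.0 / 3.0)
    print(f"B0 = {b0:.0e} G: B0/Bc = {b0 / B_C:.3f}, "
          f"B(10 r0)/Bc = {dipole_field(10 * R0, b0) / B_C:.1e}, "
          f"B = Bc at r = {r_crit / R0:.2f} r0")
\end{verbatim}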
Using Eq.~(\ref{dipolar}), we can compute the time delay of the radiation crossing the pulsar magnetosphere, given by the expression \begin{equation} \tau=\int_{r_0}^{r} \frac{dr}{v_{ph}(\omega,B(r))}. \end{equation} Figure~\ref{Velocidaddefase} shows the phase velocity as a function of the distance traveled by the photons for different values of the frequency and two different values of $B_{0}$. It can be seen how, as expected, the phase velocity tends to the speed of light as the magnetic field decreases. \begin{figure}[h!] \centering \includegraphics[width=.6\linewidth]{velocityphase_distance_field_10_range_1_2.eps} \includegraphics[width=.6\linewidth]{velocityphase_distance_field_100_range_1_4.eps} \caption{Phase velocity as a function of the distance traveled by the photons for different frequencies: top panel, $B_{0}= 10 B_{c}$; bottom panel, $B_{0}= 100 B_{c}$.} \label{Velocidaddefase} \end{figure} Figure~\ref{timedelay} shows the time delay of photons as a function of the distance, for two fixed values of the surface magnetic field $B_{0}$. The time delay is of the order of nanoseconds for the different values of the frequency. In spite of its smallness, it already shows that the sole presence of the magnetic field is sufficient to cause a time delay that grows with the photon frequency. We would like to stress that more complex configurations of the magnetosphere, including for instance the magnetized electron-positron plasma, might have relevant effects. A study in this direction is currently in progress. \begin{figure}[h!] \centering \includegraphics[width=.6\linewidth]{timedelay_field_10_range_1_4.eps} \includegraphics[width=.6\linewidth]{timedelay_field_100_range_1_4.eps} \caption{Time delay of photons for fixed values of the frequency as a function of the distance traveled: top panel, $B_{0}= 10 B_{c}$; bottom panel, $B_{0}= 100 B_{c}$.} \label{timedelay} \end{figure} \section{Conclusions}\label{sec4} We solved the dispersion equation for photons propagating perpendicular to a constant and uniform magnetic field. Analytical, approximate expressions for the phase velocity, valid in a wide range of the characteristic parameters, have been obtained. The photon phase velocity depends on the magnetic field and the photon energy. In the limit of low frequencies, the phase velocity behaves linearly with the magnetic field. The photon time delay was calculated starting from a simple model of the magnetic field configuration in the neutron star magnetosphere. We found that the difference between the time delays of $\gamma$-ray photons with energies $\lesssim 1$~MeV is of the order of nanoseconds. This difference might be due to the fact that more energetic photons interact more strongly with the virtual electron-positron pairs, being closer to the threshold ($\omega=2 m_{e}$). In this work, we have made a first attempt to study the photon time delay considering a simple model of the magnetosphere. An improvement of our study should include, for instance, a more realistic model of the pulsar magnetosphere and the magnetized electron-positron plasma, as well as other possible geometric configurations of the magnetic field surrounding the neutron star. A relevant result of our work is that, contrary to the traditional time delay of photons in the interstellar medium, in the present quantum electrodynamical process the more energetic photons are delayed with respect to the less energetic ones. 
An essential pending task is to understand the physical reason at the core of this theoretical result.
\section{Introduction} Over the last decade, the pharmaceutical industry has undergone a paradigm shift \citep{ierapetritou_perspectives_2016} towards the adoption of efficient process, control, and material usage strategies in manufacturing, commonly referred to as Quality by Design (QbD) \citep{yu_understanding_2014,etzler_powder_2013}, which provide guidelines for optimal control over the quality of the final product by acquiring a detailed understanding of the effect of process parameters on raw material attributes through (semi) empirical models. These strategies are further complemented by Quality by Control (QbC) \citep{su_QbC_2018} techniques, which are more prevalent in continuous manufacturing practices as they provide increased flexibility to the design space and efficiency in characterization processes \citep{su_systematic_2017,SuBommireddy2018}. Understanding the connection between the mechanical behavior of a single particle and the tabletability of the powder, i.e., the ability of the powder to gain strength under confined pressure, is essential for realizing QbD and QbC objectives \citep{simek_comparison_2017,Yi2018}. An alternative approach utilizing empirical models has been extensively attempted, albeit with limited success, to capture the stress-strain relationship of a powder bed under load. The Heckel and Kawakita \citep{denny_compaction_2002,simek_comparison_2017} models are amongst the ones commonly used to relate effective compaction parameters to deformation mechanisms and material properties of single particles. In general, these empirical relationships cannot be interpreted unambiguously, unless a plethora of empirical compaction equations is selectively used \citep{nordstrom2012protocol}. Similar empirical efforts have been made, with rather limited success, to relate effective strength parameters of the compact to particle material properties, e.g., by means of the Leuenberger equation \citep{kuentz_new_2000}. The challenges in achieving a systematic experimental characterization of micro-sized particles under confined conditions and large deformations are associated with the small size and irregular morphology of the particles, typical of pharmaceutical powders. Diametrical compression of single micro-particles using indentation equipment or Micro-Compression Testers (MCT) is one of the most common confinement techniques used to characterize the mechanical properties of various materials, especially polymers \citep{Egholm-2006,He-2009,Tanaka2007,Liu-1998}. \citet{Tanaka2007a} used micro-compression equipment to study the compaction behavior of individual 50-micron alumina granules with PVA (Poly Vinyl Alcohol) or PAA (Poly Acrylic Acid) as the binder matrix. In the context of pharmaceutical materials, \citet{Yap2008} studied the diametrical compression of single particles of various pharmaceutical excipients to relate the mechanical properties characterized at the particle scale to the bulk compression properties of the materials. The technique has also been used to characterize visco-elastic properties of agarose \citep{Yan2009} and alginate \citep{Nguyen2009,Wang2005} microspheres. A major recent development in confinement methods for single particles has been made by \citet{Jonsson-2015}, who have developed a novel apparatus for triaxial testing of single particles, providing a more realistic insight into the behavior of individual particles under confinement. 
The apparatus has been used to investigate and characterize the yield and fracture behavior of micro-crystalline cellulose (MCC) granules \citep{Jonsson-2016}, highlighting the critical differences in their fracture behavior during triaxial and uniaxial compression, and correlating the triaxial response of single particles with the bulk powder compression response. In most of the mentioned works, the shape of the studied particles is either spherical or assumed to be spherical, primarily to allow the employment of existing analytical contact formulations that were developed specifically for spherical particles due to their obvious geometrical simplicity. The elastic mechanical properties, typically the Young's modulus and Poisson's ratio, are commonly characterized using the Hertz contact law \citep{Hertz-1881}, while the inelastic properties, typically the hardness or representative strength of the material, are characterized using various contact models developed for describing spherical indentation \citep{Tabor-1951, Johnson1985,Biwa-1995} and the contact of inelastic solids of revolution \citep{Storakers-1997}. It is important to note that these contact models are applicable only in the range of small deformations due to various limiting assumptions, such as independent contacts and simplified contact surface profile curvatures \citep{Gonzalez2012,AGARWAL201826}. In this work, we restrict attention to micro-crystalline cellulose (Avicel PH-200) particles, with average diameters ranging between $50$ and $300~\mu$m, under large diametrical compression. Micro-crystalline cellulose is one of the most widely used binding materials in pharmaceutical powder blends for die-compaction of solid tablets \citep{jivraj_overview_2000}. MCC particles exhibit significant plastic deformations under diametrical compression and elastic relaxation during unloading, which makes them relevant for this study \citep{mashadi_characterization_1987,mohammed_study_2006}. We address the first set of challenges described above by using a Shimadzu MCT-510 micro-compression tester equipped with a flat punch tip of $500~\mu$m in diameter, a top microscope, and a side camera capable of recording the deformation process. The micro-compression tester is capable of applying loading-unloading cycles under load control mode within a wide loading range of $9.8$ to $4803$~mN and a displacement range of $0$ to $100~\mu$m, at a minimum increment of $0.001~\mu$m. We then characterize particle size and shape using the top microscope and the side camera, and record accurate force-displacement curves over a range of large deformations. The characterization of these irregular particles under lateral confinement clearly increases the experimental complexity and, even though it would provide complementary information for building a semi-empirical mechanistic model, it is beyond the scope of this work. Despite the challenges described above, it is expected, as is the case for a wide range of physical responses, that a special structure exists in the response of micro-sized particles under large diametrical compression, which can be discovered and exploited to create a semi-empirical mechanistic model. 
Specifically, we assume that the force-displacement response exhibits a special structure known as an active subspace (see \citet{tripathy2016gaussian} and references therein), i.e., a manifold of the stochastic space of inputs (such as particle size, shape, surface roughness, internal porosity, material properties, loading conditions and confinement) characterized by maximal variation of the multivariate response function (i.e., the contact force). We identify this low-dimensional manifold based on mechanistic understanding of the problem, project onto it the high-dimensional space of inputs, and link the projection to the measured output contact force to build a semi-empirical mechanistic contact law. It is worth noting that the high-dimensional space of inputs is typically not amenable to a full experimental characterization and thus some components of the active subspace need to be estimated as part of the process of building the response function. Specifically, we estimate three plastic material properties, one elastic material property, and one geometric parameter associated with the loading condition through the shape factor function. Finally, we account for model inaccuracies and/or imperfections by endowing the three plastic material properties with a probability distribution, namely a log-normal distribution. It bears emphasis that a semi-empirical mechanistic contact law for elasto-plastic particles is also relevant to three-dimensional particle mechanics calculations \citep{Gonzalez2012,gonzalez2016microstructure,yohannes2016evolution,yohannes2017discrete,gonzalez2018statistical}. The particle mechanics approach enabled the prediction of microstructure evolution during the three most important steps of powder die-compaction (namely compaction, unloading, and ejection) using generalized loading-unloading contact laws for elasto-plastic spheres with bonding strength \citep{Gonzalez2018generalized}. Moreover, these detailed calculations enabled the development of microstructure-mediated process-structure-property-performance interrelationships for QbD and QbC product development and process control, which depend on a small number of parameters with well-defined physical meanings. Therefore, the work presented in this paper, in combination with particle mechanics calculations, can contribute towards establishing the relationship between particle-level material properties and tablet performance. The paper is organized as follows. Contact mechanics formulations for elastic ellipsoidal particles are reviewed in Section~\ref{Section-EllipsoidalParticles}, followed by the introduction of the shape factor and master contact law concepts, which are subsequently generalized to plastic irregular particles. The experimental analysis of MCC particles under diametrical compression is discussed in Section~\ref{Section-DiamtricalCompression}. The plastic shape factor for MCC particles is proposed in Section~\ref{Section-ShapeFactor}, followed by the description of an optimization procedure used to determine the geometric parameter associated with the loading condition. In Section~\ref{Section-MasterLaw}, a master contact law for micro-crystalline cellulose particles is proposed, and log-normal distributions for the three plastic material properties are estimated from experimental data. Section~\ref{Section-Loading-Unloading} extends the work to loading-unloading contact laws, and Section~\ref{Section-Results} shows the comparison between predictions of the calibrated contact law and the experimental values. 
Finally, a summary is presented in Section~\ref{Section-Summary}. \section{Contact mechanics formulations for ellipsoidal particles} \label{Section-EllipsoidalParticles} The contact mechanics of ellipsoidal elastic particles, characterized by three diameters $D_1$, $D_2$ and $D_3$ (Fig.~\ref{Fig-ParticleDimensions}), indicates that the contact force $F$ applied on a particle by two rigid plates in the direction of $D_3$ is given by \citep{zheng_contact_2013,Johnson1985} \begin{figure}[t] \centering \includegraphics[scale=0.7]{Fig-ParticleDimensions} \caption{Front and side view of an ellipsoidal particle under diametrical compression in the direction of $D_3$.} \label{Fig-ParticleDimensions} \end{figure} \begin{equation} F = \frac{4 E \sqrt{R_e}}{3(1-\nu^2)} \left[\eta_e(D_1,D_2)~\gamma \right]^{3/2} \label{Eqn-EllipsoidalHertz} \end{equation} where $\gamma$ is the relative displacement of the plates, $E$ and $\nu$ are the Young's modulus and the Poisson's ratio, respectively, $R_e=D_1 D_2/2D_3$ is the effective radius of the ellipsoidal particle, and $\eta_e$ is a shape factor dependent on $D_1$ and $D_2$, given by \begin{equation} \frac{1}{\eta_e(D_1,D_2)} = 1-\left[\left(\frac{D_1}{D_2}\right)^{0.1368}-1\right]^{1.531} \end{equation} By defining effective stress and strain as \begin{equation} \sigma = \frac{F}{\pi R_e^2} \label{Eqn-Stress} \end{equation} \begin{equation} \epsilon = \frac{\gamma}{2 R_e} \label{Eqn-Strain} \end{equation} the following stress-strain relationship $\sigma(\eta_e \epsilon)$ is recovered \begin{equation} \sigma := \sigma(\eta_e \epsilon) = \frac{8\sqrt{2} E}{3\pi (1-\nu^2)} \left[\eta_e(D_1,D_2)~\epsilon \right]^{3/2} \label{Eqn-EllipsoidalHertz2} \end{equation} which depends on material properties ($E$ and $\nu$) and particle geometry ($D_1$ and $D_2$). It is worth noting that for spherical particles, the effective radius $R_e$ is equal to the radius of the sphere, the elastic shape factor $\eta_e$ simplifies to 1, and Hertz theory is recovered. Another noteworthy consideration from the above equation is that $\eta_e$ is independent of the particle height $D_3$. This is an artefact of the \emph{local character} of the formulation, which assumes that the contacts evolve locally and independently, without any interactions with other contacts on the same particle. Consequently, the evolution of the contact is dependent only on the in-plane dimensions, i.e., $D_1$ and $D_2$. In the context of confined granular systems, this assumption has been found to hold only during the initial stages of compression \citep{Mesarovic-2000}. For linear-elastic spherical particles, the nonlocal contact formulation by \citet{Gonzalez2012}, and its later extension by \citet{AGARWAL201826} to correct particle contact areas, is a significant contribution towards the relaxation of this limiting assumption. The formulation invokes the principle of superposition to describe the normal contact deformation and the evolution of the inter-particle contact area as a sum of local (i.e., Hertzian) deformation and nonlocal deformations generated by the other contact forces acting on the same particle. For elasto-plastic materials, the development of a closed-form analytical formulation for general contact configurations that can describe contact interactions at large deformations \citep{Tsigginos2015} still remains an open problem. 
Initial progress in this regard has been made by \citet{Frenning2013}, who proposed a truncated sphere model, applicable to small to moderate deformations, which utilizes the plastic incompressibility assumption to relate the average pressure in the particle due to elastic volumetric strain to the mean pressure generated at the particle contacts. In this work, we interpret the function $\eta_e$ as the elastic shape factor that depends solely on particle geometry and loading condition, and the function $\sigma(\cdot)$ as the master contact behavior of the particle that depends solely on the particle deformation behavior (i.e., on its elastic behavior). Furthermore, we generalize this interpretation to plastic ellipsoidal particles as follows \begin{equation} \sigma := \sigma(\eta_p \epsilon) \label{Eqn-MasterPlastic-1} \end{equation} where the plastic master behavior depends on the deformation mechanisms of plastic particles and the shape factor is \begin{equation} \eta_p := \eta_p(D_1/D_3, D_2/D_3) \mbox{~~such that~~}\eta_p(1,1)=1 \label{Eqn-ShapeFactor-1} \end{equation} It is worth noting that the plastic shape factor depends on $D_3$ in addition to $D_1$ and $D_2$, which emphasizes its applicability to large deformations. In order to demonstrate this generalization, we develop an experimental and numerical procedure to identify (i) the plastic master contact behavior $\sigma(\cdot)$ for micro-crystalline cellulose (Avicel PH-200) under large plastic deformations, and (ii) the shape factor $\eta_p$ for irregular particles that can be approximated by ellipsoids (Fig.~\ref{Fig-SideViewExamples}). \begin{figure}[b!] \centering \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\linewidth,trim=60 70 60 119, clip]{Fig-Particle-Ellipse-1.jpg} \end{subfigure} \hspace{30pt} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\linewidth,trim=10 0 10 130, clip]{Fig-Particle-Ellipse-2.jpg} \end{subfigure} \caption{Side view of two Avicel PH-200 irregular particles, cf. Fig.~\ref{Fig-ParticleDimensions}. Left: $D_1=155~\mu$m, $D_2=217~\mu$m, $D_3=136~\mu$m. Right: $D_1=213~\mu$m, $D_2=196~\mu$m, $D_3=186~\mu$m.} \label{Fig-SideViewExamples} \end{figure} It is important to understand the characteristics of micro-crystalline cellulose particles and their dependency on environmental and process variables (see, e.g., \citet{sun_mechanism_2008,inghelbrecht_roller_1998,westermarck_microcrystalline_1999}). These properties gain further importance in implementing QbD and QbC strategies, which require the understanding of raw material attributes for robust in-line monitoring and control of the manufacturing process \citep{thoorens_microcrystalline_2014}. Avicel has excellent compactibility, high dilution potential and superior disintegration properties, making it a prime filler material for the challenging direct-compression tabletting of a wide range of active pharmaceutical ingredients (APIs) \citep{jivraj_overview_2000,celik_compaction_1996,doelker_comparative_1993,reier_microcrystalline_1966}. Different grades of micro-crystalline cellulose exist in the pharmaceutical industry, extending the flexibility of its usage beyond direct compression to dry granulation and wet granulation processes \citep{inghelbrecht_roller_1998,westermarck_microcrystalline_1999}. Some grades differ not only in physical attributes but also in functionality, for instance to counter the weakening effect of lubricants on tablet strength \citep{van_veen_compaction_2005}. 
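As a minimal numerical illustration of Eqns.~\eqref{Eqn-EllipsoidalHertz}-\eqref{Eqn-EllipsoidalHertz2} (and not of the calibrated model developed later in this paper), the following Python sketch evaluates the elastic shape factor and the elastic contact force for an ellipsoidal particle; the Young's modulus and Poisson's ratio are placeholder values, and the elastic law is expected to over-predict the force at large deformations, where the plastic behavior characterized in the next sections dominates.
\begin{verbatim}
# Sketch of the elastic ellipsoidal contact law,
# Eqs. (Eqn-EllipsoidalHertz)-(Eqn-EllipsoidalHertz2).
# Placeholder material properties; diameters from particle 1 of Table 1.
import numpy as np

def elastic_shape_factor(d1, d2):
    # eta_e(D1, D2); the diameter ratio is taken >= 1 so that the bracketed
    # term stays real (an assumption of this sketch), and eta_e = 1 for d1 = d2.
    ratio = max(d1, d2) / min(d1, d2)
    return 1.0 / (1.0 - (ratio ** 0.1368 - 1.0) ** 1.531)

def elastic_contact_force(gamma, d1, d2, d3, E, nu):
    # F(gamma) for an elastic ellipsoid compressed by rigid plates along D3.
    r_e = d1 * d2 / (2.0 * d3)                 # effective radius R_e
    eta = elastic_shape_factor(d1, d2)
    return 4.0 * E * np.sqrt(r_e) / (3.0 * (1.0 - nu ** 2)) * (eta * gamma) ** 1.5

d1, d2, d3 = 213.54e-6, 196.07e-6, 186.33e-6   # particle 1 dimensions [m]
gamma = np.linspace(0.0, 2.0e-6, 5)            # plate displacement [m]
force = elastic_contact_force(gamma, d1, d2, d3, E=5.0e9, nu=0.3)  # force [N]
print(elastic_shape_factor(d1, d2), force)
\end{verbatim}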
\section{Diametrical compression of micro-crystalline cellulose particles} \label{Section-DiamtricalCompression} \begin{figure}[t] \centering \includegraphics[width=0.27\textwidth]{Fig-Shimzadu.png} \caption{Shimadzu MCT-510 micro-compression tester.} \label{Fig-MCT} \end{figure} \begin{figure}[b!] \centering \includegraphics[width=0.55\textwidth]{Fig-ParticlesStatistics.pdf} \caption{Dimensions of the tested Avicel PH-200 particles. The ratio between the average diameter, i.e., $(D_1+D_2+D_3)/3$, and the effective diameter, i.e., $D_1 D_2/D_3$, appears to be constant and about 1.5.} \label{Fig-DimensionStatistics} \end{figure} Micro-crystalline cellulose (Avicel PH-200) particles were tested under diametrical compression using a Shimadzu MCT-510 micro-compression tester (see Fig.~\ref{Fig-MCT}) equipped with a flat punch tip of $500~\mu$m in diameter. Load-unload tests were performed on 20 particles at a constant loading rate of $2$~mN/s, with maximum applied stress in the range of $7-16$~MPa. The characteristic diameters $D_1$ and $D_2$ were measured from the top view using the in-built microscope, while $D_3$ was calculated from a side view obtained using the side camera. The obtained dimensions for all particles are listed in Table \ref{Table-MCC-Dimensions}, while the morphology of the tested particles is illustrated in Fig.~\ref{Fig-DimensionStatistics}, showing the relationship between their average diameter, i.e., $(D_1+D_2+D_3)/3$, and their effective diameter, i.e., $D_1 D_2/D_3$. It is interesting to note that this relationship has a linear trend, and that the ratio of the two values is about 1.5 for Avicel PH-200. \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|} \hline Particle No. & $D_1 (\mu\mathrm{m})$ & $D_2 (\mu\mathrm{m})$ & $D_3 (\mu\mathrm{m})$ \\ \hline $1$ & $213.54$ & $196.07$ & $186.33$ \\ $2$ & $155.32$ & $217.51$ & $136.87$ \\ $3$ & $168.74$ & $171.64$ & $125.15$ \\ $4$ & $97.56$ & $100.98$ & $69.09$ \\ $5$ & $238.47$ & $249.24$ & $237.81$ \\ $6$ & $93.8$ & $76.46$ & $52.85$ \\ $7$ & $160.44$ & $167.7$ & $151.3$ \\ $8$ & $114.1$ & $114.54$ & $80.98$ \\ $9$ & $168.31$ & $144.54$ & $131.27$ \\ $10$ & $133.8$ & $117.12$ & $103.38$ \\ $11$ & $122.69$ & $131.73$ & $109.91$ \\ $12$ & $121.04$ & $205.93$ & $133.08$ \\ $13$ & $185.1$ & $206.98$ & $146.91$ \\ $14$ & $180.35$ & $173.88$ & $139.81$ \\ $15$ & $173.26$ & $157.3$ & $147.79$ \\ $16$ & $296.34$ & $267.76$ & $219.78$ \\ $17$ & $219.5$ & $281.94$ & $225.55$ \\ $18$ & $194.97$ & $167.95$ & $151.23$ \\ $19$ & $105.57$ & $109.25$ & $86.46$ \\ $20$ & $154.12$ & $182.82$ & $166.13$ \\ \hline \end{tabular} \caption{Obtained characteristic dimensions of the studied Avicel PH-200 particles.} \label{Table-MCC-Dimensions} \end{table} \begin{figure}[b!] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\linewidth,trim=100 65 100 135, clip]{part_3_1.png} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\linewidth,trim=100 50 100 150, clip]{part_3_2.png} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\linewidth,trim=100 50 100 150, clip]{part_3_3.png} \end{subfigure} \caption{Avicel PH-200 particle at three increasingly higher levels of diametrical compression (from left to right).} \label{Fig-DeformationSideVIew} \end{figure} As an illustrative example, Fig.~\ref{Fig-DeformationSideVIew} shows an Avicel PH-200 particle at three increasingly higher levels of diametrical compression. 
It is evident from the figure that the particle undergoes large deformations without apparent brittle failure, but rather permanent plastic deformation, as confirmed by the elastic unloading curves in Fig.~\ref{Fig-ExampleCurves}. It is also evident from the figures that surface irregularities are eliminated at low levels of diametrical compression. However, Fig.~\ref{Fig-ExampleCurves} illustrates that the strain required to eliminate these irregularities, which is measured from the first contact between the particle surface and the flat punch tip, is different for each particle and, therefore, is not predictable by a deterministic contact mechanics formulation such as equations~\eqref{Eqn-MasterPlastic-1}-\eqref{Eqn-ShapeFactor-1}. Consequently, in this study, we adopted a threshold stress of $0.2$~MPa to separate stochastic from deterministic deformation behaviors (i.e., to separate the initial deformation behavior dominated by uncharacterized surface irregularities from the subsequent behavior dominated by plastic deformations of an equivalent ellipsoidal particle). Specifically, we assumed a linear stress-strain response for the initial deformation behavior, with the strain at $0.2$~MPa equal to the average value obtained from the 20 diametrical compression tests. The stress-strain response of each tested particle was then adjusted by offsetting the strain such that all deformation curves had the same average strain value at $0.2$~MPa. Fig.~\ref{Fig-ExampleCurves} shows raw and adjusted curves for three different particles. It is worth noting that the stress of $0.2$~MPa corresponds to 1-3$\%$ of the maximum stress applied during the tests. The adjusted diametrical compression curves were subsequently used to determine the shape factor $\eta_p(D_1/D_3,D_2/D_3)$ and the master contact law $\sigma(\cdot)$ for micro-crystalline cellulose particles. \begin{figure*}[t] \centering \includegraphics[width=0.55\textwidth]{Fig-AdjustedCurves.pdf} \caption{Diametrical loading-unloading curves for particles 1, 2 and 3. Solid lines correspond to the raw measurements of force $F$ and displacement $\gamma$ normalized by the corresponding effective diameter $D_e$ to obtain stress and strain, respectively. The dashed lines correspond to the adjusted loading-unloading curves, using a stress threshold of $0.2$~MPa.} \label{Fig-ExampleCurves} \end{figure*} \section{Plastic shape factor for micro-crystalline cellulose particles} \label{Section-ShapeFactor} We proposed above that the plastic shape factor for ellipsoidal plastic particles $\eta_p$ must depend solely on $D_1/D_3$ and $D_2/D_3$ and that it must simplify to 1 for spherical particles (Eqn.~\eqref{Eqn-ShapeFactor-1}). 
In addition, using Eqn.~\eqref{Eqn-MasterPlastic-1}, we propose that the stress-strain relationship $\sigma_i(\epsilon)$ of an ellipsoidal particle $i$ is related to the master contact law $\sigma(\epsilon)$ by \begin{equation} \sigma_i(\epsilon) = \sigma( \eta_{p,i} \epsilon) \implies \sigma(\epsilon) = \sigma_i(\epsilon/\eta_{p,i}) \label{Eqn-Master-StressStrain} \end{equation} Equivalently, the strain-stress relationship $\epsilon(\sigma)$ of the master contact law and that of an ellipsoidal particle $i$, $\epsilon_i(\sigma)$, are related by \begin{equation} \epsilon(\sigma) = \eta_{p,i} \epsilon_i(\sigma) \label{Eqn-Master-StrainStress} \end{equation} Even though Eqns.~\eqref{Eqn-Master-StressStrain} and \eqref{Eqn-Master-StrainStress} are equivalent, we shall use the experimentally characterized curves $\epsilon_i(\sigma)$ to determine the master contact law $\epsilon(\sigma)$ and the plastic shape factor $\eta_p$. This choice is motivated by the observation that micro-crystalline cellulose particles exhibit an apparent strain-hardening at distinctly different strain values (see Fig.~\ref{Fig-ExampleCurves}). Hence, the domain of the function $\epsilon_i(\sigma)$, namely $[0, \sigma^\mathrm{m}_i]$, is quite similar for all the characterized particles---with $\sigma^\mathrm{m}_i$ being the maximum applied stress during diametrical compression of particle $i$. Moreover, Avicel PH-200 particles are inhomogeneous and not perfectly ellipsoidal; thus, a unique function \eqref{Eqn-Master-StrainStress} that holds true for all tested particles does not exist. Therefore, we propose that the master contact law is given by a normal distribution \begin{equation} \epsilon(\sigma) \sim \mathcal{N}(\bar{\epsilon}(\sigma),s^2_\epsilon(\sigma)) \end{equation} with expectation given by the sample mean $\bar{\epsilon}$ and variance given by the square of the sample standard deviation $s^2_\epsilon$, i.e. \begin{eqnarray} \bar{\epsilon}(\sigma) &:=& \frac{1}{\#\mathcal{S}_{\sigma}}\sum_{i\in\mathcal{S}_{\sigma}} \eta_{p,i} \epsilon_i(\sigma) \label{Eqn-MasterMean} \\ s_\epsilon^2(\sigma) &:=& \frac{1}{\#\mathcal{S}_{\sigma}-1}\sum_{i\in\mathcal{S}_{\sigma}} \left[ \eta_{p,i} \epsilon_i(\sigma) - \bar{\epsilon}(\sigma) \right]^2 \label{Eqn-MasterStd} \end{eqnarray} for all $\sigma$ such that $\#\mathcal{S}_{\sigma} > N/5$. In the above equations $N$ is the total number of tested particles (i.e., $20$ in this study), $\mathcal{S}_{\sigma}$ is the set of experiments for which a stress of $\sigma$ has been measured or reached before unloading, and $\#\mathcal{S}_{\sigma}$ is the cardinality or the number of elements of $\mathcal{S}_{\sigma}$. In order to achieve statistical significance, a sufficient number of particles needs to be tested; for the purpose of demonstrating the proposed method, we adopt this number to be $N/5$. Next, we propose the following shape factor for micro-crystalline cellulose particles \begin{equation} \eta_p(D_1/D_3,D_2/D_3) := 1 + a \left( 1 - \frac{D_3^2}{D_1 D_2} \right) \label{Eqn-ShapeFactor} \end{equation} where $a$ is a geometric parameter associated with the loading condition, to be determined from the experimental strain-stress curves $\epsilon_i(\sigma)$. 
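As a computational aside, the following minimal Python sketch (our illustration, not part of the original analysis; array names and input data are hypothetical) shows how, for a given value of $a$, the shape factor of Eqn.~\eqref{Eqn-ShapeFactor} and the pooled statistics of Eqns.~\eqref{Eqn-MasterMean}--\eqref{Eqn-MasterStd} can be evaluated from a set of measured strain-stress curves.
\begin{verbatim}
import numpy as np

def shape_factor(D1, D2, D3, a):
    # Plastic shape factor, eta_p = 1 + a*(1 - D3^2/(D1*D2)); equals 1 for a sphere.
    return 1.0 + a * (1.0 - D3**2 / (D1 * D2))

def master_law_statistics(curves, etas, sigma_grid, min_fraction=0.2):
    # Sample mean and standard deviation of eta_{p,i}*eps_i(sigma) over the set
    # S_sigma of tests that reached the stress level sigma, evaluated only when
    # #S_sigma > N/5.  `curves` is a list of (stress, strain) arrays per particle,
    # and `etas` holds the corresponding shape factors.
    N = len(curves)
    mean, std = [], []
    for s in sigma_grid:
        vals = [eta * np.interp(s, sig, eps)
                for (sig, eps), eta in zip(curves, etas) if sig.max() >= s]
        if len(vals) > min_fraction * N:
            mean.append(np.mean(vals))
            std.append(np.std(vals, ddof=1))   # sample standard deviation
        else:
            mean.append(np.nan)
            std.append(np.nan)
    return np.array(mean), np.array(std)
\end{verbatim}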
Specifically, the parameter $a$ is such that the relative standard deviation of $\epsilon$ is minimized over a range of stresses $[0,\sigma^\mathrm{m}_{\sigma,i}]$, i.e., $a$ minimizes \begin{equation} {\mathlarger{\sum}}_{i=1}^{N} \frac{{\mathop{\mathlarger{\mathlarger{\int}}}_{0}^{\sigma^\mathrm{m}_{\sigma,i}} \left[\frac{1}{\#\mathcal{S}_{\sigma}}\sum_{j\in\mathcal{S}_{\sigma}} \eta_{p,j} \epsilon_j(\sigma) - {\eta_{p,i}} \epsilon_i(\sigma) \right]^2\mathrm{d}\sigma}} {\mathop{\mathlarger{\int}}_{0}^{\sigma^\mathrm{m}_{\sigma,i}} \left[\frac{1}{\#\mathcal{S}_{\sigma}}\sum_{j\in\mathcal{S}_{\sigma}} \eta_{p,j} \epsilon_j(\sigma)\right]^2\mathrm{d}\sigma} \label{Fig-DetermineCoeff-1} \end{equation} with $\sigma^\mathrm{m}_{\sigma,i}=\min\{\sigma^\mathrm{m}_i,\sigma^\mathrm{m}\}$, $\sigma^\mathrm{m}$ being the maximum stress value in the master contact law $\bar{\epsilon}(\sigma)$, i.e., in Eqn.~\eqref{Eqn-MasterMean}. In order to simplify the optimization process, we first identify a particle with $D_3^2/D_1 D_2$ close to 1 which is diametrically compressed at a high maximum stress and label it as particle $N$, and then rewrite Eqn.~\eqref{Fig-DetermineCoeff-1} using $$ \frac{1}{\#\mathcal{S}_{\sigma}}\sum_{i\in\mathcal{S}_{\sigma}} \eta_{p,i} \epsilon_i(\sigma) \approx \eta_{p,N} \epsilon_N(\sigma) $$ which leads to the following approximate minimization problem \begin{equation} a := \arg \min_{a} {\mathlarger{\sum}}_{i=1}^{N-1} \frac{{\mathop{\mathlarger{\mathlarger{\int}}}_{0}^{\sigma^\mathrm{m}_{\sigma,i}} \left[\epsilon_N(\sigma) - \dfrac{\eta_{p,i}}{\eta_{p,N}} \epsilon_i(\sigma) \right]^2\mathrm{d}\sigma}} {\mathop{\mathlarger{\int}}_{0}^{\sigma^\mathrm{m}_{\sigma,i}} \left[{\epsilon_N(\sigma)}\right]^2\mathrm{d}\sigma} \label{Eqn-Minimization} \end{equation} with $\sigma^\mathrm{m}_{\sigma,i}=\min\{\sigma^\mathrm{m}_i,\sigma_N^\mathrm{m}\}$. It is worth noting that the experimental curves are linearly interpolated and numerically integrated for the purpose of solving Eqn.~\eqref{Eqn-Minimization}. The proposed optimization process results in a shape factor, Eqn.~\eqref{Eqn-ShapeFactor}, for the tested Avicel PH-200 particles with an optimal coefficient $a=1.43$. The master contact behavior for these particles is determined in the next section. \section{Master contact law for micro-crystalline cellulose particles} \label{Section-MasterLaw} \begin{figure}[t] \centering \includegraphics[width=0.55\textwidth]{Fig-MasterPlot.pdf} \caption{Master contact law $\epsilon \sim \mathcal{N}(\bar{\epsilon},s^2_\epsilon)$, for micro-crystalline cellulose particles. Symbols correspond to the mean. Shaded area corresponds to one standard deviation from the mean. The shape factor $\eta_p$ is given by Eqn.~\eqref{Eqn-ShapeFactor} with $a=1.43$.} \label{Fig-MasterPlot-1} \end{figure} We proposed in the previous section that the master contact law is given by a normal distribution $\epsilon(\sigma) \sim \mathcal{N}(\bar{\epsilon}(\sigma),s^2_\epsilon(\sigma))$, where the sample mean $\bar{\epsilon}(\sigma)$ and the sample standard deviation squared $s_\epsilon^2(\sigma)$ are given by Eqns.~\eqref{Eqn-MasterMean}-\eqref{Eqn-MasterStd} and the shape factor $\eta_p$ is given by Eqn.~\eqref{Eqn-ShapeFactor} with $a=1.43$. Therefore, the master contact law is readily available from the experimental curves and is shown in Fig.~\ref{Fig-MasterPlot-1}. \begin{table}[b!] 
\centering \begin{tabular}{|c|c|c|} \hline & $\mu_{C_i}$ & $\sigma_{C_i}$ \\ \hline $\log(C_1)$ & $1.9027$ & $0.2985$ \\ $\log(C_3)$ & $1.1015$ & $4.8914 \times 10^{-9}$ \\ $\log(C_5)$ & $0.9784$ & $0.0797$ \\ \hline \end{tabular} \caption{Plastic coefficients $\log(C_i) \sim \mathcal{N}(\mu_{C_i},\sigma^2_{C_i})$ of the stress-strain relationship of the master contact law for micro-crystalline cellulose, Eqn.~\eqref{Eqn-Master-StressStrain2}.} \label{Table-StressStrain-Param} \end{table} The stress-strain relationship $\sigma(\epsilon)$ of the master contact law for micro-crystalline cellulose particles is given by the inverse of $\epsilon(\sigma)$. We propose to approximate $\sigma(\epsilon)$ by \begin{equation} \sigma(\epsilon) / \mathrm{MPa} = C_1~\eta_p \epsilon - \left( C_3~\eta_p \epsilon \right)^3 + \left( C_5~\eta_p \epsilon \right)^5 \label{Eqn-Master-StressStrain2} \end{equation} with $\log(C_i) \sim \mathcal{N}(\mu_{C_i},\sigma^2_{C_i})$. Therefore, the expected value of $\sigma(\epsilon)$ is given by \begin{equation} \mathrm{E}\langle\sigma(\epsilon)\rangle / \mathrm{MPa} = \mathrm{e}^{\mu_{C_1}+\sigma^2_{C_1}/2} \eta_p \epsilon - \mathrm{e}^{3\mu_{C_3}+9\sigma^2_{C_3}/2} \left(\eta_p \epsilon \right)^3 + \mathrm{e}^{5\mu_{C_5}+25\sigma^2_{C_5}/2} \left(\eta_p \epsilon \right)^5 \label{Eqn-Master-StressStrain3} \end{equation} The approximate stress-strain relationship is readily obtained by calibration and is shown in Fig.~\ref{Fig-MasterPlot-2} with coefficients $C_i$ given in Table~\ref{Table-StressStrain-Param}. The expected value is thus given by \begin{equation} \mathrm{E}\langle\sigma(\epsilon)\rangle / \mathrm{MPa} = 7.0094~\eta_p \epsilon - 27.235~\left(\eta_p \epsilon \right)^3 + 144.23~\left(\eta_p \epsilon \right)^5 \label{Eqn-Master-StressStrain4} \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.55\textwidth]{Fig-MasterContactLaw.pdf} \caption{Master contact law $\sigma(\epsilon)$ for micro-crystalline cellulose particles given by Eqn.~\eqref{Eqn-Master-StressStrain2} with coefficients in Table~\ref{Table-StressStrain-Param}. The solid line corresponds to the mean of $\sigma(\epsilon)$ and the dashed lines correspond to one standard deviation from the mean. The dashed-dotted line corresponds to the mean of $\sigma(\epsilon)$ calculated by the first term of the master law. Symbols correspond to the sample mean $\bar{\epsilon}$ and the shaded area corresponds to one sample standard deviation $s_\epsilon$ from the mean. The shape factor $\eta_p$ is given by Eqn.~\eqref{Eqn-ShapeFactor} with $a=1.43$.} \label{Fig-MasterPlot-2} \end{figure} The stress-strain relationship given by Eqn.~\eqref{Eqn-Master-StressStrain2} is in the spirit of the curvature-corrected nonlocal contact formulation for elastic spherical particles \citep{AGARWAL201826}. The first term in the relationship corresponds to the initial linear deformation behavior (represented by a dashed-dotted curve in Fig.~\ref{Fig-MasterPlot-2}), which has been previously described with contact models developed for spherical indentation \citep{Tabor-1951,Johnson1985,Biwa-1995} and contact of inelastic solids of revolution \citep{Storakers-1997} in the context of small deformations. 
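As a quick numerical check (a hypothetical script, not part of the original workflow), the expected-value coefficients reported in Eqn.~\eqref{Eqn-Master-StressStrain4} follow directly from the log-normal parameters of Table~\ref{Table-StressStrain-Param} through Eqn.~\eqref{Eqn-Master-StressStrain3}:
\begin{verbatim}
import numpy as np

# log(C_i) ~ N(mu_i, s_i^2), values taken from the table of plastic coefficients
mu1, s1 = 1.9027, 0.2985
mu3, s3 = 1.1015, 4.8914e-9
mu5, s5 = 0.9784, 0.0797

c1 = np.exp(mu1 + s1**2 / 2)           # E[C_1]    ~  7.0094
c3 = np.exp(3 * mu3 + 9 * s3**2 / 2)   # E[C_3^3]  ~ 27.235
c5 = np.exp(5 * mu5 + 25 * s5**2 / 2)  # E[C_5^5]  ~ 144.23

def expected_stress(eps, eta_p=1.0):
    # Expected master stress-strain response in MPa
    x = eta_p * eps
    return c1 * x - c3 * x**3 + c5 * x**5

print(round(c1, 4), round(c3, 3), round(c5, 2))  # ~ 7.0094 27.235 144.23
\end{verbatim}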
Of particular interest is the similarity solution by \citet{Biwa-1995}, which describes the stress-strain relationship at the contact of rigid-plastic power-law hardening solids. For spherical particles under diametrical compression, the relationship is given by \begin{equation} \sigma(\epsilon) = 4(3^{1-1/m})(c^2_{max})^{1+1/2m}\kappa\epsilon^{1+1/2m} \end{equation} where $m$ is the power-law hardening exponent, $\kappa$ is a representative strength given by \citep{Mesarovic-1999,Mesarovic-2000} \begin{equation} \kappa = \sigma^{1-1/m}_yE^{1/m} \end{equation} where $\sigma_y$ is the uniaxial yield stress and $E$ is Young's modulus, and $c^2_{max}$ is a calibrated material parameter, well approximated by the following relationship \citep{Storakers-1994} \begin{equation} c^2_{max} = 1.43\exp\left(\frac{-0.97}{m}\right) \end{equation} Assuming perfectly plastic material behavior (i.e., $m\to\infty$), the stress-strain relationship reduces to \begin{equation} \sigma(\epsilon) = 17.16\sigma_y\epsilon \label{Eqn-IdealPlasticStorakers} \end{equation} and, by correlating the calibrated value of the plastic coefficient $C_1$ with the above equation, the uniaxial yield stress is estimated as $\sigma_y = 0.4085\,\mathrm{MPa}$. The other terms in the master stress-strain relationship describe the moderate-to-large deformation behavior of the particle. Following the initial linear response, a softening or reduction in stiffness is observed as a dip in the slope of the stress-strain curve. This regime of deformations has been termed the contact interaction regime \citep{Tsigginos2015}, which starts with the coalescence of two or more plastically deforming zones surrounding the contacts \citep{Frenning2013,Tsigginos2015}. With the onset of this regime, contacts can no longer be assumed independent. The second term of the master stress-strain relationship describes the contact response in this regime. The third term describes the large deformation strain-hardening response, referred to as geometric hardening \citep{Sundstrom-1973} or the low compressibility regime \citep{Tsigginos2015}, during which the average contact pressure rises unboundedly due to increasing contact interactions. \section{Loading-unloading contact law for micro-crystalline cellulose particles} \label{Section-Loading-Unloading} Unloading contact laws for elasto-plastic spheres with bonding strength, or adhesion, have been developed \citep{Mesarovic-1999,Mesarovic-2000,Mesarovic-2000b}, assuming elastic perfectly-plastic behavior and using a rigid punch decomposition \citep{Hill-1990}. \citet{Olsson-2013} have extended these laws to elasto-plastic spheres that exhibit power-law plastic hardening behavior, and have verified their validity with detailed finite element simulations. This formulation uses elastic behavior, approximated by Hooke's law, to describe the elastic recovery of the deformed spheres, and Irwin's fracture mechanics to describe the breakage of solid bridges. \citet{Gonzalez2018generalized} developed generalized loading-unloading contact laws for elasto-plastic spheres with bonding strength, which are continuous at the onset of unloading by means of a regularization term, in the spirit of a cohesive zone model. These contact laws are updated incrementally to account for strain path dependency and have been shown to be numerically robust, efficient, and mechanistically sound in three-dimensional particle mechanics static calculations. Here we endow the master contact law for micro-crystalline cellulose particles, proposed in Section~\ref{Section-MasterLaw}, with elastic relaxation during unloading. 
Specifically, we follow \citet{Gonzalez2018generalized} and modify Eqn.~\eqref{Eqn-Master-StressStrain2} accordingly, i.e. \begin{equation} \sigma(\epsilon) = \left\{ \begin{array}{l} \left[ C_1~\eta_p \epsilon - \left( C_3~\eta_p \epsilon \right)^3 + \left( C_5~\eta_p \epsilon \right)^5 \right]\mbox{MPa} \hspace{0.90in} \mbox{plastic loading} \\ \frac{2}{\pi} \sigma_{\mbox{\tiny P}} \left[ \arcsin\left( \phi(\epsilon, \epsilon_{\mbox{\tiny P}} ) \right) - \phi(\epsilon, \epsilon_{\mbox{\tiny P}}) \sqrt{1-\phi(\epsilon, \epsilon_{\mbox{\tiny P}})^2} \right] \hspace{0.35in} \mbox{elastic (un)loading} \end{array} \right. \label{Eqn-MasterLoadingUnloading} \end{equation} where the internal variables are updated, i.e., $\{\epsilon_{\mbox{\tiny P}},\sigma_{\mbox{\tiny P}} \} \leftarrow \{\epsilon,\sigma(\epsilon)\} $, during plastic loading (i.e., when $\epsilon \ge \epsilon_{\mbox{\tiny P}}$). In the above equation, the function $\phi(\epsilon, \epsilon_{\mbox{\tiny P}})$ is given by \begin{equation} \phi(\epsilon, \epsilon_{\mbox{\tiny P}}) = \left[1 - 4.12 \bar{E} \frac{(\epsilon_{\mbox{\tiny P}} - \epsilon)^2 \epsilon_{\mbox{\tiny P}}}{\sigma_{\mbox{\tiny P}}} \right]_+^{1/2} \end{equation} for an elastic, perfectly plastic particle (i.e., for $m\rightarrow\infty$) with effective elastic modulus equal to $\bar{E}=E/(1-\nu^2)=16$~GPa. The assumption of perfectly plastic material is in agreement with the observation made in Section~\ref{Section-MasterLaw} and the value of the effective elastic modulus is the result of a simple calibration. \section{Results and discussion} \label{Section-Results} We have proposed that the experimentally characterized micro-crystalline cellulose particles deform under diametrical compression following a master contact law $\sigma(\epsilon)$, given by Eqn.~\eqref{Eqn-MasterLoadingUnloading} with plastic coefficients $C_i$ in Table~\ref{Table-StressStrain-Param}, effective elastic modulus $\bar{E}=16$~GPa, and a shape factor $\eta_p$, given by Eqn.~\eqref{Eqn-ShapeFactor} with $a=1.43$. Figs.~\ref{Fig-ExpVSModel-1}, \ref{Fig-ExpVSModel-2} and \ref{Fig-ExpVSModel-3} exhibit a very good agreement between predictions of the calibrated contact law and the experimental values. It is evident from the figures that the apparent plastic strain-hardening occurs at distinctly different strain values and that the proposed contact law captures its dependency on particle dimensions---i.e., on $D_1$, $D_2$ and $D_3$ (see Fig.~\ref{Fig-DimensionStatistics}). It is also evident that the elastic relaxation during unloading is accurately predicted with low uncertainty. \section{Summary} \label{Section-Summary} We have proposed a semi-empirical mechanistic contact law for micro-crystalline cellulose (Avicel PH-200) particles. This loading-unloading contact law has been characterized experimentally using diametrical compression force-displacement curves, obtained with a Shimadzu MCT-510 micro-compression tester. The irregular MCC particles have been approximated by an ellipsoid and the lengths of their principal axes have been measured using an in-built microscope and a side camera. To generalize the contact mechanics of an elastic ellipsoidal particle to an elasto-plastic irregular particle approximated by an ellipsoid, we have introduced the concepts of a shape factor and a master contact law. It is worth noting that the force-displacement curves exhibit an apparent strain-hardening at distinctly different strain values. 
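As an implementation note, the loading-unloading law of Eqn.~\eqref{Eqn-MasterLoadingUnloading} summarized above can be sketched in a few lines of Python (a minimal illustration using the mean plastic coefficients of Eqn.~\eqref{Eqn-Master-StressStrain4} and $\bar{E}=16$~GPa; this is not the code used in this study, and the variable names are hypothetical):
\begin{verbatim}
import numpy as np

C1, C3, C5 = 7.0094, 27.235, 144.23  # mean coefficients of the stress in MPa
E_BAR = 16.0e3                       # effective elastic modulus E/(1-nu^2) in MPa

class ContactLaw:
    def __init__(self, eta_p=1.0):
        self.eta_p = eta_p
        self.eps_P = 0.0             # internal strain variable
        self.sig_P = 0.0             # internal stress variable (MPa)

    def stress(self, eps):
        if eps >= self.eps_P:        # plastic loading: update internal variables
            x = self.eta_p * eps
            sig = C1 * x - C3 * x**3 + C5 * x**5
            self.eps_P, self.sig_P = eps, sig
            return sig
        # elastic (un)loading branch
        phi = np.sqrt(max(1.0 - 4.12 * E_BAR * (self.eps_P - eps)**2
                          * self.eps_P / self.sig_P, 0.0))
        return (2.0 / np.pi) * self.sig_P * (np.arcsin(phi)
                                             - phi * np.sqrt(1.0 - phi**2))
\end{verbatim}
Calling \texttt{stress} along a monotonically increasing strain history traces the plastic loading branch, while subsequent calls with decreasing strain reproduce the elastic recovery from the last plastic state $(\epsilon_{\mbox{\tiny P}},\sigma_{\mbox{\tiny P}})$.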
We have postulated that this strain-hardening mechanism depends on particle dimensions and loading configuration, which is captured by the shape factor function. The proposed loading-unloading contact law is, therefore, a function of (i) three characteristic diameters (lengths of the principal axes of an approximated ellipsoid), (ii) a geometric parameter associated with the loading condition through the shape factor function, (iii) three plastic material properties, and (iv) one effective elastic material property. The three plastic material properties are log-normally distributed and estimated from the experimental loading curves, while the effective elastic property is estimated from the experimental unloading curves. It bears emphasis that the proposed loading contact law is in the spirit of the curvature-corrected nonlocal contact formulation for elastic spherical particles \citep{AGARWAL201826}, with the first term corresponding to the stress-strain relationship described by small-strain contact models developed for spherical indentation \citep{Tabor-1951,Johnson1985,Biwa-1995} and contact of inelastic solids of revolution \citep{Storakers-1997}. Similarly, the proposed unloading contact law follows from generalized loading-unloading contact laws for elasto-plastic spheres with bonding strength \citep{Gonzalez2018generalized}. The study shows a very good agreement between predictions of the calibrated loading-unloading contact law and the experimental values. However, it is recommended that a larger number of experiments be performed to estimate the five parameters of the contact law, in order to reduce the uncertainty in the model predictions. We close by pointing out that the proposed semi-empirical mechanistic contact law is relevant to three-dimensional particle mechanics calculations \citep{Gonzalez2012,gonzalez2016microstructure,yohannes2016evolution,yohannes2017discrete,gonzalez2018statistical,Gonzalez2018generalized}. Therefore, the work presented in this paper, in combination with these detailed calculations, can contribute to developing microstructure-mediated process-structure-property-performance interrelationships and, thus, to establishing the relationship between particle-level material properties and tablet performance. Ultimately, these relationships are needed to assist QbD and QbC product development and process control \citep{Yi2018}. \begin{figure}[p!] 
\centering \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-1.pdf} \caption{Particle 1} \label{fig:Fig-1} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-2.pdf} \caption{Particle 2} \label{fig:Fig-2} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-3.pdf} \caption{Particle 3} \label{fig:Fig-3} \end{subfigure} \par\bigskip \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-4.pdf} \caption{Particle 4} \label{fig:Fig-4} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-5.pdf} \caption{Particle 5} \label{fig:Fig-5} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-6.pdf} \caption{Particle 6} \label{fig:Fig-6} \end{subfigure} \par\bigskip \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-7.pdf} \caption{Particle 7} \label{fig:Fig-7} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-8.pdf} \caption{Particle 8} \label{fig:Fig-8} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-9.pdf} \caption{Particle 9} \label{fig:Fig-9} \end{subfigure} \caption{Experimental values and predictions of loading-unloading contact curves for micro-crystalline cellulose (Avicel PH-200) particles under diametrical compression. The master contact law $\sigma(\epsilon)$ is given by Eqn.~\eqref{Eqn-MasterLoadingUnloading} with plastic coefficients $C_i$ in Table~\ref{Table-StressStrain-Param}, effective elastic modulus $\bar{E}=16$~GPa, and the shape factor $\eta_p$ is given by Eqn.~\eqref{Eqn-ShapeFactor} with $a=1.43$.} \label{Fig-ExpVSModel-1} \end{figure} \begin{figure}[p!] 
\centering \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-10.pdf} \caption{Particle 10} \label{fig:Fig-10} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-11.pdf} \caption{Particle 11} \label{fig:Fig-11} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-12.pdf} \caption{Particle 12} \label{fig:Fig-12} \end{subfigure} \par\bigskip \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-13.pdf} \caption{Particle 13} \label{fig:Fig-13} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-14.pdf} \caption{Particle 14} \label{fig:Fig-14} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-15.pdf} \caption{Particle 15} \label{fig:Fig-15} \end{subfigure} \par\bigskip \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-16.pdf} \caption{Particle 16} \label{fig:Fig-16} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-17.pdf} \caption{Particle 17} \label{fig:Fig-17} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-18.pdf} \caption{Particle 18} \label{fig:Fig-18} \end{subfigure} \caption{Experimental values and predictions of loading-unloading contact curves for micro-crystalline cellulose (Avicel PH-200) particles under diametrical compression. The master contact law $\sigma(\epsilon)$ is given by Eqn.~\eqref{Eqn-MasterLoadingUnloading} with plastic coefficients $C_i$ in Table~\ref{Table-StressStrain-Param}, effective elastic modulus $\bar{E}=16$~GPa, and the shape factor $\eta_p$ is given by Eqn.~\eqref{Eqn-ShapeFactor} with $a=1.43$.} \label{Fig-ExpVSModel-2} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-19.pdf} \caption{Particle 19} \label{fig:Fig-19} \end{subfigure} \hspace{50pt} \begin{subfigure}[b]{0.32\textwidth} \captionsetup{justification=centering} \includegraphics[width=\linewidth]{Fig-20.pdf} \caption{Particle 20} \label{fig:Fig-20} \end{subfigure} \caption{Experimental values and predictions of loading-unloading contact curves for micro-crystalline cellulose (Avicel PH-200) particles under diametrical compression. The master contact law $\sigma(\epsilon)$ is given by Eqn.~\eqref{Eqn-MasterLoadingUnloading} with plastic coefficients $C_i$ in Table~\ref{Table-StressStrain-Param}, effective elastic modulus $\bar{E}=16$~GPa, and the shape factor $\eta_p$ is given by Eqn.~\eqref{Eqn-ShapeFactor} with $a=1.43$.} \label{Fig-ExpVSModel-3} \end{figure} \section*{Acknowledgements} The author gratefully acknowledges the support received from the United States National Science Foundation grant number CMMI-1538861 and from the United States Food and Drug Administration grant number DHHS-FDA U01FD005535. 
The views expressed by the authors do not necessarily reflect the official policies of the Department of Health and Human Services; nor does any mention of trade names, commercial practices, or organizations imply endorsement by the United States Government. \section*{Nomenclature} \noindent \begin{longtable}{ll} $a$ & geometric factor for plastic ellipsoidal particles (-)\\ $c^2_{max}$ & material parameter for similarity contact law (-)\\ $C_1$ & plastic material property 1 (-) \\ $C_3$ & plastic material property 2 (-) \\ $C_5$ & plastic material property 3 (-) \\ $D_1$ & major diameter of the ellipsoidal particle as viewed through the MCT's microscope ($\mathrm{mm}$)\\ $D_2$ & minor diameter of the ellipsoidal particle as viewed through the MCT's microscope ($\mathrm{mm}$)\\ $D_3$ & vertical diameter of the ellipsoidal particle captured by the MCT's side-view camera ($\mathrm{mm}$)\\ $E$ & Young's modulus ($\mathrm{MPa}$)\\ $\mathrm{E}[\cdot]$ & expected value (-)\\ $F$ & force acting on the particle ($\mathrm{N}$)\\ $R_e$ & effective radius of the ellipsoidal particle ($\mathrm{mm}$)\\ $m$ & power-law hardening exponent (-)\\ $N$ & total number of tested particles (-)\\ $s_\epsilon$ & standard deviation of a set of strain values (-)\\ $\#\mathcal{S}_\sigma$ & number of experiments for which a given stress $\sigma$ is reached (-)\\ $\gamma$ & particle deformation at the contact ($\mathrm{mm}$)\\ $\epsilon$ & effective particle strain (-)\\ $\bar{\epsilon}$ & mean of the strain values for a set of particles (-)\\ $\epsilon_p$ & internal strain variable for the unloading contact law (-)\\ $\eta_e$ & elastic shape factor (-)\\ $\eta_p$ & plastic shape factor (-)\\ $\kappa$ & particle reference strength ($\mathrm{MPa}$) \\ $\nu$ & Poisson's ratio (-)\\ $\sigma$ & effective particle stress ($\mathrm{MPa}$)\\ $\sigma^m$ & maximum stress value in the master contact law ($\mathrm{MPa}$)\\ $\sigma_y$ & uniaxial yield stress ($\mathrm{MPa}$)\\ $\sigma_p$ & internal stress variable for the unloading contact law ($\mathrm{MPa}$) \\ \end{longtable} \bibliographystyle{kona}
\section{INTRODUCTION} \label{Sec:Intro} In this paper, we describe topological techniques for the analysis of geometric data. In particular, we apply these methods to study a specific protein, the maltose-binding protein (MBP), whose geometric shape can be represented by 370 points in $\mathbb{R}^3$, or equivalently, as one point in $\mathbb{R}^{3 \times 370}$. However, this structure is not static; it is dynamic. It ``jiggles'' under thermal fluctuations, and changes among various conformations as it performs its biological functions. The space of all such conformations is a subspace of $\mathbb{R}^{3\times 370}$. We use topology to construct summary statistics of these conformations and see what they can tell us about our data. Instead of working with the static spatial coordinates, we use a richer dynamic model of the protein, from which we calculate, via correlations, a $370\times 370$ matrix of dynamical distances for each conformation. This matrix is the input for our topological and statistical methods. { Since our computational protocol only requires knowledge of the dynamic cross-correlation, it can also be used in reducing the massive amount of data generated by long-term molecular dynamics (MD) simulations of large molecular systems of arbitrary complexity. For example, with the help of effective pair potentials obtained from the DRISM-KH molecular theory of solvation \citep{Kobryn:2014}, further speed-up of MD simulations is possible at various levels of coarse-grained mapping. For a given force-field parametrization and under physiological conditions, the time-ensemble average of atomistic details of communication among separate regions of macromolecules often exhibits multi-modal behavior. The modal decomposition of the dynamic cross-correlation and its convergence analysis for the present study are detailed in \cite{MBP:DynamicalModel}. Briefly, in constructing the matrices of dynamical distances, the reference structures we use are the experimental crystal structures reported in Table~\ref{Tab:MBPstructures}. In a conventional uni-modal MD simulation the reference structure can be iteratively updated from the averaged coordinates of trajectory segments; however, the explicit inclusion of multi-modal motions of atoms is also possible \citep{Kasahara:2014}. Spatially perturbed reference structures detected along the simulated trajectory are expected to yield the biologically important conformational changes only if they alter the dynamic cross-correlation matrix in such a way as to result in a significant rearrangement of the residues that define the topologically most persistent loop. } Algebraic topology classifies topological spaces using topological invariants such as homotopy, homology, and the Euler characteristic. Homology detects topological features such as connected components, holes and voids. Computational topologists have introduced a variation of homology called \emph{persistent homology}, which records the history of appearances and disappearances of topological features as a parameter changes \citep{Edels2002,ZomCar2005}. Here is a basic example of persistence. Given a set of points sampled from some (unknown) object in a Euclidean space, consider a set of balls centered at these points with a common radius. For certain radii, this union of balls will approximate the unknown object in various ways. Persistent homology summarizes the appearance and disappearance of topological features of the union of balls as the radius increases. 
Topological attributes that persist over a large range of radii are likely to be a signal, while short-lived ones are likely to be noise. Three common summaries of persistent homology are the \emph{barcode} \citep{CZCG2004a}, the \emph{persistence diagram} \citep{Edels2002}, and the \emph{persistence landscape} \citep{Bubenik2015}. The space of barcodes or persistence diagrams is a space in which it is difficult to do statistics, though theoretical advances have been made by \citet{mileyko2011} and \citet{Turner2013}, as well as by \citet{HGK2012}, who applied statistics to landmarks of the maxillary area in an orthodontic study. However, the space of persistence landscapes has the advantage of being a vector space (in fact, a separable Banach space). \citet{ChazalEtAl2013} and \citet{ChazalEtAl2014} have studied weak convergence, convergence of the bootstrap, and confidence bands for the average persistence landscape. Furthermore, as observed by \citet{Reininghaus2014}, the persistence landscape may be viewed as the feature map of a positive definite kernel, allowing one to apply all of the usual kernel methods of machine learning. \citet{Reininghaus2014} also developed an alternative kernel-based persistence approach. In this article we will use the persistence landscape for our statistical data analysis, { a more detailed discussion of which can be found in an earlier study \citep{MScThesis} of twelve samples of the HIV-1 protease. } We also show how the barcode may be useful in exploratory data analysis. For the degree-1 homology group, the longest interval in the barcode corresponds to the most persistent loop in the data. { In the present study} we show that biologically pertinent units, the active sites and the allosteric pathway residues, are located close to this loop. { This is important information that may guide subsequent docking studies of protein affinities to numerous drug fragments under experimentally relevant conditions, as previously done for the prion protein and thiamine \citep{Nikolic:2012}.} { For recent applications of persistent homology to studies of protein structure, folding, and flexibility we refer the reader to the articles \citep{Xia:2014, Xia:2015a, Xia:2015b}. } { Apart from the use of the \textsc{javaPlex} library \citep{javaPlex} to visualize the associated barcodes, there is no further similarity between the present or past \citep{MScThesis} topological analyses and the recent method of molecular topological fingerprints by \cite{Xia:2014}. They used Betti numbers to study protein structure, folding, and flexibility. An interesting aspect of the article by \cite{Xia:2015a} is that Betti numbers are calculated at various scales of other parameters, namely the configuration index, cut-off distance, scale of resolution, and number of iterations. } A brief outline of the article is as follows. In Section~\ref{Sec:MBP} we explain our biological motivation, introduce basic facts concerning the MBP, and describe the data. Section~\ref{Sec:ComputationalProtocolandData} lists the main steps of our topological data analysis. We define the persistence landscape \citep{Bubenik2015} from which we derive a suitable random variable; this allows us to formulate statistical hypotheses of interest and construct a test statistic. In Section~\ref{Sec:ResultsAndDiscussion} we compare open and closed conformations of MBP and extract additional information from visualized complexes. Our conclusion and signposts for future work appear in Section~\ref{Sec:Summary}. 
\section{THE PROTEIN DATA} \label{Sec:MBP} The maltose-binding protein is a bacterial protein found in \emph{Escherichia coli}, where its primary function is to bind and transport sugar molecules across cell membranes, providing energy to the bacterium \citep{BoosShuman:1998}. Though some strains cause serious illness, most strains of \emph{E. coli} are nonpathogenic and in fact beneficial. These bacteria colonize the gastrointestinal tract of humans and animals and protect the gut from harmful bacteria \citep{Hudault:2001}. Furthermore, \emph{E. coli} is one of the best-studied living organisms and is used to study various cellular processes \citep{VanHoudt:2005}. In our paper we use topological methods to study the MBP. While performing its biological functions, the MBP changes its structure. Our objects of study are fourteen three-dimensional structures of the MBP, obtained by X-ray crystallography. The structures are available from the Protein Data Bank \citep{PDB}. Each structure is a large biomolecule with about 3000 heavy atoms grouped into 370 relatively small clusters representing amino acid residues. The term \emph{residue} refers to the fact that during the formation of the protein the constituent amino acids lose a water molecule upon forming peptide bonds \citep{IUPAC}. Instead of using an all-atom model, we use a computationally more affordable \emph{coarse-grained model} in which every residue is represented by a single unit \citep{Cavasotto:2005}. { This is compatible with the topological approach as it reduces local information but preserves global topological information. } \begin{figure}[!htbp] \begin{center} \resizebox{\columnwidth}{!}{ \includegraphics[angle=0.,width=0.475\textwidth]{Figure01a.pdf}\label{Fig:1mpd_bio_r_500} \includegraphics[angle=0.,width=0.475\textwidth]{Figure01b.pdf}\label{Fig:1omp_bio_r_500} } \end{center} \caption{The biological assembly for the closed-holo 1MPD conformational structure (left, \citet{Shilton:1996}) and the open-apo 1OMP conformational structure (right, \citet{Sharff:1992}). { Secondary structures and solvent accessible surfaces of both proteins are shown as blue flat ribbons and gray transparent surfaces, respectively. Active sites in the ribbon representations are colored yellow and interact with the ligand maltose, shown here as a ball-and-stick model embedded in the 1MPD structure.}} \label{Fig:MBP_BiologicalAssembly} \end{figure} A major conformational change in the protein occurs when a smaller molecule called a \emph{ligand} attaches to the protein molecule, see Figure~\ref{Fig:MBP_BiologicalAssembly}. \citet{Szmelcman:1976} determined that the MBP interacts with various sugar molecules (ligands), ranging from the small maltose molecule to the larger maltodextrin. Ligand-induced conformational changes are important because the biological function of the protein occurs through a transition from a ligand-free (\emph{apo}) to a ligand-bound (\emph{holo}) structure \citep{Seeliger:2010}. Simulations and, to some extent, experiments show that 95\% of the time the two domains of the MBP are separated and twisted, which is called an \emph{open} conformation, and 5\% of the time they are close to each other, which is called a \emph{closed} conformation. The closed conformation is always due to a captured ligand, see Figure~\ref{Fig:MBP_BiologicalAssembly}. Open structures can have an attached ligand or not, as verified by experiments (see Table~\ref{Tab:MBPstructures}). 
From a practical viewpoint, closed conformations are more significant because they are more stable, so that detachment of the bound ligand is less likely to happen. We consider seven closed and seven open conformations. We differentiate between them using deformation energies; the energetically more favorable closed conformation binds a sugar molecule while its open counterpart requires greater deformation energies and may or may not engage in the binding process. The list of the fourteen MBP structures we investigate is shown in Table~\ref{Tab:MBPstructures}. \begin{table}[!htbp]\small \centering \caption{Fourteen MBP structures with the names of the ligands for \emph{holo}-forms. Each structure is labeled by a four-letter Protein Data Bank \citep{PDB} code. \label{Tab:MBPstructures}} \begin{center} \resizebox{0.75\columnwidth}{!}{ \begin{tabular}{ccllcc} No.& PDB code & Ligand name && Protein structure & Reference \\ \hline 1 & 1ANF & maltose && closed-{\em holo} & \citet{Quiocho:1997} \\ 2 & 1FQC & maltotriotol && closed-{\em holo} & \citet{Duan:2001} \\ 3 & 1FQD & maltotetraitol&& closed-{\em holo} & \citet{Duan:2001} \\ 4 & 1MPD & maltose && closed-{\em holo} & \citet{Shilton:1996} \\ 5 & 3HPI & sucrose && closed-{\em holo} & \citet{Gould:2010} \\ 6 & 3MBP & maltotriose && closed-{\em holo} & \citet{Quiocho:1997} \\ 7 & 4MBP & maltotetraose && closed-{\em holo} & \citet{Quiocho:1997} \\ \hline 8 & 1EZ9 & maltotetraitol&& open-{\em holo} & \citet{Duan:2002} \\ 9 & 1FQA & maltotetraitol&& open-{\em holo} & \citet{Duan:2001} \\ 10& 1FQB & maltotetraitol&& open-{\em holo} & \citet{Duan:2001} \\ 11& 1JW4 & - && open-{\em apo} & \citet{Duan:2002} \\ 12& 1JW5 & maltose && open-{\em holo} & \citet{Duan:2002} \\ 13& 1LLS & - && open-{\em apo} & \citet{Rubin:2002} \\ 14& 1OMP & - && open-{\em apo} & \citet{Sharff:1992} \\ \hline \end{tabular} } \end{center} \end{table} Using professional docking software like Discovery Studio by Accelrys, one can only visually determine whether or not a conformational structure can be classified as open or closed. An alternative approach, based on modeling of deformation energies in protein structures \citep{MBP:DynamicalModel}, can also differentiate between the two groups in a more systematic way. In this paper we will show yet another approach that relies on topological/statistical methods. Note that out of 370 residues, fewer than twenty are actively involved in sugar binding. These residues, which are crucial for the function of the protein, are referred to as \emph{active sites}. However, identifying active sites inside a protein structure is difficult. Active sites can be identified using experimental methods that engineer different parts of the protein to be preferential in binding ligands, or using theoretical methods that model the various physical interactions between atoms of the protein and the ligand \citep{Amitai:2004}. We also consider \emph{allosteric pathway residues}, which behave as bridges between the ligand binding site and the exterior of the protein \citep{Lockless:1999}. Due to thermal vibrations, a closed-holo structure may easily transform into an open-holo structure, in which the risk of detachment of the bound sugar molecule is higher. The stability of the closed form can be increased by influencing the active sites. In a closed conformation they are deeply buried in the interior of the protein and inaccessible to direct influences; however, indirect access is feasible through the effect of \emph{allostery} \citep{Rizk:2011}. 
\subsection{Dynamical distances} \label{Subsec:OurData} One {may} attempt to use topological methods to describe the function of the MBP using the spatial coordinates of the residues. This is not a novel idea; \citet{EHbook2010} studied computational ways of predicting protein interactions based solely on their shape. Furthermore, \citet{Gameiro:2012} defined a topological quantity based on persistence diagrams of several proteins and established a correspondence with the experimental compressibility of the majority of the investigated proteins. However, this intuitive approach proves to be ineffective in distinguishing between the closed and the open MBP conformation. The three-dimensional coordinates obtained by x-ray crystallography give a snapshot of the conformational structure. However, this structure is in fact time-dependent and wobbly. { The analysis by x-ray diffraction requires samples to be in the solid state, which is a quite different physico-chemical phase from the physiological solution found \emph{in vivo}. Namely, to facilitate and optimize protein crystallization, various organic solvents, polymers, and salts are typically used as precipitants -- none of which are present \emph{in vivo}. Moreover, the atom positions in x-ray crystallography are deduced from localized electron density maps, while in complex physiological solutions inter-atomic distances in proteins are influenced by constraints imposed by their interactions with highly dynamical solvent molecules. } So a dynamic descriptor is more appropriate. Therefore, we do not analyze the geometry of the MBP structure directly. Instead we use the static crystallographic data to construct a dynamic model of the protein structure from which we calculate \emph{dynamical distances} between the residues. Our subsequent analysis will use these dynamical distances and not the geometric distances. We model the dynamics of the protein structure using an \emph{elastic network model} \citep{Atilgan:2001,Tobi:2005}. Though all constituents of the protein constantly exhibit small oscillations due to thermal motion, movements on a larger scale occur because neighboring units strongly affect each other. Hence the motion and function of the biomolecule are the result of the coordinated action of mutually interacting residues, i.e., the protein is modeled as a dynamical system of beads joined by elastic springs { with a cut-off distance of 15{\AA} (see Fig.~SM-4 of \cite{MBP:DynamicalModel}). We note that the main results of our topological/statistical analysis for all investigated structures remain robust against cut-off distances larger than 4{\AA}, become well converged at about 12{\AA}, and are numerically insensitive above 20{\AA}. } An energy state of such a molecule is a superposition of normal modes of oscillations, leading to different spatial conformations depending on the deformation energy. There exist many such energy modes; however, as a compromise between numerical accuracy and computational efficiency, our model takes into account the lowest twenty nontrivial energy modes \citep{MBP:DynamicalModel}. { This value is obtained as the lowest normal mode for which the averaged difference in deformation energies between open and closed structures remains constant (see Fig.~SM-5 of \cite{MBP:DynamicalModel}). } Taking into account the first twenty nontrivial modes of oscillation, we calculate fourteen cross-correlation matrices ($\mathbf{C}$) of size $370 \times 370$ using the Anisotropic Network Model web server \citep{ANM:2006}. 
Following \citet{Bradley:2008}, for each correlation matrix we calculate the associated \emph{dynamical distance} matrix ($\mathbf{D}$) using a simple linear map, \begin{equation}\label{Eq:DistanceMatrix} D_{ij} = 1 - \left| C_{ij} \right|, \end{equation} though other choices are also possible. This defines a metric space in which highly (anti)correlated residues lie close to each other. An illustration of the matrix $\mathbf{D}$ for the 1MPD structure is given in Figure~\ref{Fig:1MPD_CorrelationAndDistanceMatrix}. \begin{figure}[!htbp] \begin{center} \resizebox{\columnwidth}{!}{ \includegraphics[angle=0.,width=1.0\textwidth]{Figure02.pdf}} \end{center} \caption{Visualization of the cross-correlation matrix and the dynamical distance matrix of the 1MPD structure. Axes correspond to residue indices running from 1 to 370. (Left) Cross-correlation matrix for the 1MPD structure. Dark regions correspond to high pairwise correlations, with $-0.76$ as the most negative value. Green horizontal and vertical lines correspond to the most flexible residues, which correlate poorly with the rest of the protein structure; thus their total correlations are approximately zero. (Right) Dynamical distance matrix for the 1MPD structure, calculated from the correlation matrix using Equation~\eqref{Eq:DistanceMatrix}; the linear relationship causes a similar visual layout of the two matrices. } \label{Fig:1MPD_CorrelationAndDistanceMatrix} \end{figure} To visualize this metric space we apply the nonlinear dimension reduction method \textsc{Isomap} \citep{TenenbaumISOMAP}. We prefer this method to multidimensional scaling (MDS) because for our data the errors on the distances are smaller. As indicated by the scree-plots in Figure~\ref{Fig:MBP_ScreePlots}, projecting to three dimensions is appropriate. \begin{figure}[!htbp] \begin{center} \resizebox{0.95\columnwidth}{!}{ \includegraphics[angle=0.,width=0.95\textwidth]{Figure03.pdf}} \end{center} \caption{Dimension reduction via \textsc{Isomap} for the 1MPD (left) and the 1OMP (right) structure: the ``elbows'' in scree plots suggest that a three dimensional embedding is appropriate. } \label{Fig:MBP_ScreePlots} \end{figure} \section{TOPOLOGICAL METHODS} \label{Sec:ComputationalProtocolandData} In this section we outline how topological methods can be applied to geometric data and how these tools can be combined with statistical analysis. An $n \times n$ distance matrix $\mathbf{D}$ defines a discrete metric space on $n$ points $x_1,\ldots,x_n$, where $d(x_i,x_j) = D_{ij}$. From this we construct a parametrized family of simplicial complexes. Given $d \geq 0$, let $\mathcal{R}_d$ denote the simplicial complex on $n$ vertices $x_1,\ldots,x_n$, where an edge between the vertices $x_i$ and $x_j$ with $i\neq j$ is included if and only if $d(x_i,x_j) \leq d$; more generally, we include the $k$-simplex with vertices $x_{i_0},\ldots,x_{i_k}$ if and only if all of the pairwise distances are at most $d$. This simplicial complex is called a \emph{Vietoris-Rips complex}. Since $\mathcal{R}_d \subseteq \mathcal{R}_{d'}$ for $d \leq d'$, this family is a filtered simplicial complex. Notice that there are only finitely many values of $d$ for which we obtain a distinct simplicial complex. For computations we restrict to this finite filtration. Of interest is the topology of this simplicial complex and how it changes as the parameter changes. In particular we are interested in $H_k(\mathcal{R}_d)$, the homology of the Vietoris-Rips complex with coefficients in the field $\mathbb{Z}/2\mathbb{Z}$, for small values of $k$. 
For coefficients in a field, homology is a vector space. For $k=0$ this vector space has as a basis the connected components of the simplicial complex. For $k=1$ its basis consists of linearly independent cycles that are not boundaries. Similar statements hold in higher degrees. More details can be found in \citet{Hatcher}, for example. For $d \leq d'$, the inclusion $\mathcal{R}_d \subseteq \mathcal{R}_{d'}$ induces a linear map $H_k(\mathcal{R}_d) \to H_k(\mathcal{R}_{d'})$. The set of vector spaces $\{H_k(\mathcal{R}_d)\}$ together with the corresponding linear maps is referred to as a \emph{persistence module}. This persistence module can be completely described by a collection of intervals referred to as a \emph{barcode} \citep{Edels2002,ZomCar2005}. By representing each interval by its endpoints, one obtains a collection of points in $\mathbb{R}^2$, called a \emph{persistence diagram}. { The persistence module cannot be recovered from the Betti numbers $\{\dim H_k(\mathcal{R}_d) \}_d$ since these provide no information about the linear maps. } Various software packages compute barcodes; we use the \textsc{javaPlex} library \citep{javaPlex}. The distance matrix may be obtained from the Euclidean distances between a collection of points in $\real^{\rm{d}}$ (point cloud data), the diffusion metric \citep{Bendich2011}, or similarity, correlation and covariance matrices. Filtered simplicial complexes can also be obtained from Morse functions and kernel estimators \citep{Bubenik2010}. \begin{figure}[!hbtp]\small \begin{center} \resizebox{0.95\columnwidth}{!}{ \includegraphics[angle=0,width=0.60\textwidth]{Figure04.pdf}} \end{center} \caption{Steps of our topological statistical analysis of points sampled from a disk and an annulus. (a) Three snapshots of the two filtered Vietoris-Rips complexes. Initially we sample point cloud data from a disk and an annulus; as the filtration parameter increases points get connected, and different loops are born and die. (b) The birth and death times of loops are recorded in a barcode for the first homology $H_1$. The long bar in the case of the annulus detects the hole, whereas the shorter (``noisy'') bars for the disk and annulus detect transient phenomena. (c) The persistence landscape (PL) corresponding to each barcode. (d) The mean persistence landscapes for the 10 disks and the 10 annuli. A permutation $t$-test ($p$-value $0.0028$) differentiates the disk from the annulus in terms of the one-dimensional cycle, that is, the loop. } \label{Fig:FlowDiskAnnulus} \end{figure} Consider the following simple example, illustrated in Figure~\ref{Fig:FlowDiskAnnulus}. We sample 150 points uniformly and randomly from a disk and an annulus. From such point cloud data (PCD) we construct the corresponding filtered Vietoris-Rips complex and calculate the associated barcode. When $d = 0$ we have just the points; when $d = 0.4$ several small loops appear; later, when $d = 0.8$, most loops have disappeared and only a few remain. The time of appearance and disappearance of loops is recorded in a barcode for the first homology group $H_1$, see Figure~\ref{Fig:FlowDiskAnnulus}(b). A barcode consists of intervals that indicate the times of birth (starting points of lines), death (end points of lines), and the duration of survival (lengths of lines), of a topological feature (a loop, in this example). In the next section we will apply this construction to fourteen conformations of the maltose-binding protein. 
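To make the disk and annulus example concrete, the following minimal Python sketch (our illustration only; the computations in this paper use the \textsc{javaPlex} library, and the \texttt{ripser} package is shown here merely as a readily available alternative) computes the degree-0 and degree-1 barcodes of a Vietoris-Rips filtration built from a distance matrix.
\begin{verbatim}
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)

def sample_annulus(n, r_in=0.5, r_out=1.0):
    # n points drawn uniformly at random from an annulus (r_in = 0 gives a disk)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = np.sqrt(rng.uniform(r_in**2, r_out**2, n))   # area-uniform radius
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

points = sample_annulus(150)                         # use r_in=0.0 for the disk
dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

# Vietoris-Rips persistence with Z/2Z coefficients; dgms[0] and dgms[1] hold
# the (birth, death) pairs of the H_0 and H_1 barcodes, respectively.
dgms = ripser(dist, distance_matrix=True, maxdim=1)['dgms']
h1 = dgms[1]
print("most persistent loop lives for", (h1[:, 1] - h1[:, 0]).max())

# For the protein data, one would instead pass the 370 x 370 dynamical
# distance matrix D = 1 - np.abs(C) built from the cross-correlation matrix C.
\end{verbatim}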
Recall that in our model every residue is represented by a single point; using the spatial structure we would get 370 points in $\mathbb{R}^3$. Using the dynamical distances we have a $370 \times 370$ distance matrix. In either case we can calculate the corresponding barcodes. One of our objectives is to carry out a hypothesis test and make statistical inferences. For that purpose the barcodes are submitted to a few steps of transformation until we arrive at a test statistic. We would like to statistically compare two or more different groups of point cloud data. For example, we would like to carry out a hypothesis test that distinguishes the disk from the annulus. The usual procedure for such results involves calculating means and variances. Unfortunately it is not at all clear how to do this for barcodes. For example, two barcodes need not have a unique Fr\'echet mean. One solution is to transform the barcode into the \emph{persistence landscape}, a functional summary of the persistence module introduced by \citet{Bubenik2015}. The persistence landscape consists of a sequence of functions $\lambda_k:\mathbb{R} \to \mathbb{R}$, where $k=1,2,3,\ldots$. Here we define these functions via an auxiliary function. Given $(a,b)$, where $a\leq b$, let $f_{(a,b)}: {\mathbb R} \rightarrow {\mathbb R}$ be the function given by $f_{(a,b)}(t)=\min(t-a, b-t)_+$, where $x_+ =\max (x,0)$. Let ${B}$ be a barcode consisting of $m$ intervals with endpoints $\{(a_i, b_i)\}_{i=1}^m$, where $a_i < b_i$. \begin{definition} \label{Def:PersLand} The persistence landscape corresponding to a barcode ${B}$ is the set of functions $\{{\lambda}_k(t): {\mathbb R} \rightarrow {\mathbb R} \}_{k \in {\mathbb N}}$, where ${\lambda}_k(t)$ is the $k^{th}$ largest value of $\{f_{(a_i,b_i)}(t)\}_{i=1}^m$, and ${\lambda}_k(t)=0$, whenever $k>m$. \end{definition} \noindent These functions may be assembled to give a function $\lambda(k, t)$ defined on ${\mathbb N}\times {\mathbb R}$, which in turn can be extended to ${\mathbb R}^2$ by setting $\lambda(x,t)= \lambda(\lceil x \rceil,t)$ if $x>0$ and $\lambda(x,t)= 0$ otherwise, where $\lceil x\rceil$ denotes the smallest integer obtained when rounding up a real number $x$, and ${\mathbb N} = \left\{ {1,2,3, \ldots } \right\}$ is the set of natural numbers, see Figure~\ref{Fig:PersistenceLandscape}. \begin{figure}[!htbp] \begin{center} \resizebox{\columnwidth}{!}{ \includegraphics[angle=0,width=0.9\textwidth]{Figure05.pdf}} \end{center} \caption{Construction of a persistence landscape: (left) from an interval to the auxiliary function; (middle) from a barcode to a persistence landscape; (right) 3D visualization of the persistence landscape.} \label{Fig:PersistenceLandscape} \end{figure} \noindent Persistence landscapes for the disk and annulus example are shown in Figure~\ref{Fig:FlowDiskAnnulus}. Furthermore, we can measure the distance between persistence landscapes as the $p$-norm of their difference. \begin{definition} \label{Def:PLdistance} Let $\lambda \left( {k,t} \right)$ and $\lambda '\left( {k,t} \right)$ be two persistence landscapes. The $p$-Landscape distance between $\lambda$ and $\lambda '$ is defined by ${\Lambda _p}(\lambda,\lambda') = {\left\| {\lambda - \lambda '} \right\|_p}$. 
That is, \begin{equation} \label{Eq:LandscapeDistance} \Lambda_p\left( \lambda ,\lambda' \right) = \left[ \sum\limits_k \int\limits_{\mathbb R} \left| \lambda_k(t) - \lambda'_k(t) \right|^p \, dt \right]^{1/p}. \end{equation} \end{definition} Let $\mathcal{B}$ denote the set of all barcodes, or equivalently, the set of all persistence diagrams. For $p=2$, we can view the persistence landscape as a feature map $\lambda: \mathcal{B} \to L^2(\mathbb{N}\times\mathbb{R})$ to the Hilbert space $L^2(\mathbb{N}\times\mathbb{R})$. From this we obtain a (positive definite) kernel $k:\mathcal{B}\times\mathcal{B}\to\mathbb{R}$ defined by $k(B,B') = \langle \lambda, \lambda' \rangle_{L^2(\mathbb{N}\times\mathbb{R})}$, where $\lambda$ and $\lambda'$ are the persistence landscapes of $B$ and $B'$. This kernel induces a pseudometric on $\mathcal{B}$ given by $d_k(B,B') = [k(B,B)+k(B',B')-2k(B,B')]^{\frac{1}{2}} = \left\| \lambda-\lambda' \right\|_2 = \Lambda_2(\lambda,\lambda')$. Now we can establish the main tools needed for our statistical analysis. Assuming that our persistence landscapes are $p$-integrable, we work in the separable Banach space ${L^p}({\mathbb N}\times {\mathbb R})$. Equipped with a probability measure on the Borel $\sigma$-algebra, this becomes a probability space (see, e.g., \citet{Ledoux2002}). In this space, for any continuous linear functional $f$, the random variable ${f}(\lambda(k,t))$ satisfies the Strong Law of Large Numbers (SLLN) and the Central Limit Theorem (CLT). In cases where $\lambda$ has finite support we can let $f$ be given by integration of $\lambda$ against the indicator function of this support. Hence we can define a new variable, \begin{equation} X = f\left( \lambda(k,t) \right) = \sum\limits_k \int\limits_{\mathbb R} \lambda(k,t) \, dt, \label{Eq:RandomVariable} \end{equation} whose value corresponds to the total area encompassed by all contours $\lambda(k,t)$, $k \in {\mathbb N}$, of a persistence landscape. Since both the SLLN and the CLT hold, for a sufficiently large sample the sample mean of the random variable $X$ is approximately normally distributed. This result allows applications of classical statistical methods to point cloud data whose underlying space might be high dimensional or nonlinear. We conclude the section by setting up a hypothesis test and a corresponding $p$-value based on a permutation test. A nonparametric test is used due to the small number of samples available; otherwise a Student's $t$-test would be the preferable choice. Suppose we wish to compare two groups of data, obtained by taking $n_1$ and $n_2$ samples from two geometrical objects. Let $\lambda_{11} (k,t), \ldots, \lambda_{1{n_1}}(k,t)$ and $\lambda_{21} (k,t), \ldots, \lambda_{2n_2}(k,t)$ denote the associated persistence landscapes for homology in some fixed degree. Let ${x_{11}}, \ldots ,{x_{1{n_1}}}$ and ${x_{21}}, \ldots ,{x_{2{n_2}}}$ be the associated sample values of the random variables $X_1 = f(\lambda_1 (k,t))$ and $X_2 = f(\lambda_2 (k,t))$, respectively, where $f$ is the functional from Equation~\eqref{Eq:RandomVariable}. If $\mu_1 $ and $\mu_2 $ are the population means of the random variable $X = f(\lambda (k,t))$ for the two objects, the statistical hypotheses of interest are: \begin{eqnarray} \label{Eq:TestHypothesis} {H_o}: \mu_1 = \mu_2 \hskip3mm \mbox{vs.} \hskip3mm {H_a}: \mu_1 \neq \mu_2.
\end{eqnarray} To test the null-hypothesis we use a two-sample permutation test with statistic, \begin{equation} \label{Eq:TestStatistic} t = \frac{{\left| {{{\overline X }_1} - {{\overline X }_2}} \right|}}{{\sqrt {\frac{{Var\left( {{X_1}} \right)}}{{{n_1}}} + \frac{{Var\left( {{X_2}} \right)}}{{{n_2}}}} }} \hskip1mm. \end{equation} Using the above formula we generate the null distribution as the set of all possible values $t_1, \ldots, t_m$ of the test statistic, calculated for permutations $i=1,\ldots, m$. Let the observed value of the test statistic be denoted by $t_{obs}$. Then the $p$-value is obtained as the averaged number of times when the test statistic is at least as extreme as its observed counterpart, $t_q\geq t_{obs}$, where $q \in \left\{ {1, \ldots ,m} \right\}$. Returning to our disk and annulus example (Figure~\ref{Fig:FlowDiskAnnulus}), in degree 1, a two-sample exact permutation test on this example produces a $p$-value of $0.0028$. This is as expected, since the disk and the annulus differ in their degree-1 homology because the annulus contains a cycle that is not the boundary of a disk contained in the annulus. In addition, the p-value in degree 0 is 0.0265. This is somewhat surprising since the disk and the annulus have the same degree-0 homology. We see here that persistent homology is sensitive to geometric differences; the rate at which the points connect differs in the two corresponding filtered simplicial complexes. In the next section, for our protein data the $p$-value for each degree of homology is obtained from a null distribution of size 1716 (given the nonnegativity of the test statistic). We apply persistence landscapes to compare closed and open conformations of the maltose-binding protein. For more on the theory of persistent homology, refer to \citet{Edels2002}, \citet{ZomCar2005}, or \citet{BubenikScott2012}. For researchers in applied fields, supplementary material in \citet{HGK2012} provides a quick review of persistent homology with hands-on calculations. \section{RESULTS AND DISCUSSION} \label{Sec:ResultsAndDiscussion} To investigate if a statistically significant difference between the closed and the open conformation can be determined using topological methods, we construct a filtered Vietoris-Rips complex whose persistent homology is calculated. From persistence intervals we generate persistence landscapes that are further transformed to yield a random variable suitable for hypothesis testing. The topological approach reinstates the separation between the two conformations, in agreement with initial physical modeling \citep{MBP:DynamicalModel}. We also demonstrate other ways of distinguishing between open and closed MBP conformations. \subsection{Snapshots from Evolution} \begin{figure}[!htbp]\small \begin{center} \resizebox{!}{0.85\textheight}{ \includegraphics[angle=0.]{Figure06.pdf } \end{center} \caption{Five snapshots capture the evolution of the filtered Vietoris-Rips complex on the closed-holo 1MPD (left) and the open-apo 1OMP (right) structure of the maltose-binding protein. The complex is constructed on 370 vertices (green circles). The number of vertices that enter the complex (yellow circles) rapidly increases with filtration values. 
$N$ counts the total number of simplices.} \label{Fig:MBP_Evolution} \end{figure} Figure~\ref{Fig:MBP_Evolution} portrays a few snapshots from the evolution of the filtered Vietoris-Rips complex constructed on the \textsc{Isomap}~\citep{TenenbaumISOMAP} embedded 3D dynamical coordinates of the closed-holo 1MPD and the open-apo 1OMP structures from Table~\ref{Tab:MBPstructures}. Observe the rapid increase in the number of simplices at higher filtration values. At filtration $t = 0.3$ the total count of simplices in each structure is about 1.5 million (the corresponding image is not shown due to its excessive memory requirements). \subsection{Visual Comparisons} First, we visually compare barcode plots of the closed and the open MBP conformation. Typical barcodes are shown in Figure~\ref{Fig:MBP_HomologyPlots_fromDistances}. \begin{figure}[!htbp] \begin{center} \resizebox{0.95\columnwidth}{!}{ \includegraphics[angle=0.,width=0.95\textwidth]{Figure07.pdf}} \end{center} \caption{Barcode plots for the 1MPD closed-holo and the 1OMP open-apo structure corresponding to filtered Vietoris-Rips complexes constructed from the dynamical distance matrices.} \label{Fig:MBP_HomologyPlots_fromDistances} \end{figure} Overall, there is little difference. In all structures, the degree 1 barcode features one very long and pronounced bar, which is born around time 0.2 and dies shortly before time 0.6. This bar is represented by a cycle in the Vietoris-Rips complex. The importance of this most persistent loop will be discussed later. Unlike the barcodes, we can average the corresponding persistence landscapes and compare the mean persistence landscapes of the closed and open conformations in Figure~\ref{Fig:MBP_AveragePersistenceLandscapes_fromDistances}. \begin{figure}[!htbp] \begin{center} \resizebox{\columnwidth}{!}{ \includegraphics[angle=0.,width=0.8\textwidth]{Figure08.pdf}} \end{center} \caption{Mean persistence landscapes of the closed (left) and open (right) MBP structures for degree 0 and degree 1 homology groups.} \label{Fig:MBP_AveragePersistenceLandscapes_fromDistances} \end{figure} For dimension 0, the mean shows a greater number of high peaks, implying a greater number of long-persisting components in the dynamical space. These correspond to the outliers in Figure~\ref{Fig:MBP_Evolution}. For dimension 1, the two means have a similar layout, both featuring one separate high peak and a cluster of lower peaks; the high distinct peak corresponds to the longest persisting loop. A small difference appears in the cluster of peaks, as their tops seem less pointed in the case of the closed conformation. The mean persistence landscapes in Figure~\ref{Fig:MBP_AveragePersistenceLandscapes_fromDistances} suggest the possibility of systematic differences between the persistent homology of closed and open MBP conformations. We will next try to see whether persistence landscapes can capture such a difference among the 14 conformations. We apply support vector machine~(SVM) techniques to persistence landscapes in different ways which we now describe. In degrees 0, 1 and 2, the persistence landscape consists of functions $\lambda_{1}(t)$, $\lambda_{2}(t)$, $\ldots$, $\lambda_{k}(t)$, with $k=370$, $73$, and $78$, respectively. { In practice, we trace a continuous contour $\lambda_i(t)$ ($i = 1,2, \ldots ,k$) through 50 discrete values $\lambda_i(t_j)$, where $t_{\min} \leq t_j\leq t_{\max}$ are equally spaced.
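As a concrete illustration of Definition~\ref{Def:PersLand} and of this discretisation step, the following minimal Python sketch (illustrative only; the computations in this paper used the landscape codes provided by \citet{Bubenik2015}) evaluates a landscape on a grid of 50 values and approximates the summary statistic $X$ of Equation~\eqref{Eq:RandomVariable} by a Riemann sum. The barcode and grid bounds are placeholders.
\begin{verbatim}
import numpy as np

def landscape_matrix(barcode, t_grid):
    # barcode: array of shape (m, 2) with rows (a_i, b_i), a_i < b_i
    # returns L with L[k-1, j] = lambda_k(t_j), the k-th largest of the
    # auxiliary functions f_{(a_i, b_i)} evaluated at t_j
    a = barcode[:, [0]]
    b = barcode[:, [1]]
    f = np.maximum(np.minimum(t_grid - a, b - t_grid), 0.0)
    return -np.sort(-f, axis=0)

def total_area(L, t_grid):
    # the random variable X: total area under all contours (Riemann sum)
    dt = t_grid[1] - t_grid[0]
    return L.sum() * dt

t_grid = np.linspace(0.0, 0.6, 50)               # 50 equally spaced values
barcode = np.array([[0.2, 0.55], [0.25, 0.3], [0.3, 0.34]])
L = landscape_matrix(barcode, t_grid)            # here a 3 x 50 matrix
X = total_area(L, t_grid)
\end{verbatim}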
Hence every MBP conformation is associated with a matrix of size ${370 \times 50}$ (for degree 0), ${73 \times 50}$ (for degree 1), and ${78 \times 50}$ (for degree 2). } { First, the contours $\lambda_1(t)$, $\lambda_2(t)$, $\ldots$, $\lambda_k(t)$ of a persistence landscape are concatenated to form one long vector in ${\mathbb R}^{1\times50\cdot 370} = {\mathbb R}^{1\times18500}$, ${\mathbb R}^{1\times50\cdot 73} = {\mathbb R}^{1\times3650},$ and ${\mathbb R}^{1\times50\cdot 78} = {\mathbb R}^{1\times3900}$. Given the 14 samples, we have three feature matrices of sizes $14\times 18500$ (for degree 0), $14\times 3650$ (for degree 1), and $14\times3900$ (for degree 2). Since the number of variables is high (18500 (degree 0), 3650 (degree 1), and 3900 (degree 2)), relative to the sample size (14), we have over-fitting. To reduce the dimension we apply the Principal Components Analysis (PCA) to standardized data from each feature matrix (this step is carried out using packages FactoMineR and ade4 of \cite{Rlanguage}). For each feature matrix the principal components are the eigenvectors of the variance-covariance matrix of the standardized data. As such, the principal components represent certain linear combinations of the concatenated contours. These linear combinations correspond to directions with maximum variability and provide a simpler and more parsimonious description of the covariance structure. The first principal component is the linear combination with maximum variance, the second principal component is the linear combination with second largest variance and so on. SVM with a linear kernel \citep{SVM2013} is then performed with the first three principal components, which account for about $80.42\%$, $52.50\%$, and $58.68\%$ of the variation with respect to degrees 0, 1, and 2. Cross-validation is not performed due to small sample size. The whole data are the training set with the purpose of finding the separating hyperplane between the two groups. The hyperplane shown in Figure~\ref{Fig:SVM3D_PCA_ISOMAP} illustrates that the two groups are separable. } \begin{figure}[!htbp] \begin{center} \resizebox{\columnwidth}{!}{ \includegraphics[angle=0.,width=0.2\textwidth]{Figure09.pdf} } \end{center} \caption{ Results of SVM with linear kernel applied to coordinates obtained from the persistence landscapes of the 14 MBP conformations. Due to small sample size all data are employed as the training set to yield the hyperplane which demonstrates that the two groups are separable. Outcome of SVM implemented on the first three principal components of concatenated contours of sample persistence landscapes in homology degrees of 0 (left), 1 (center), and 2 (right). The x, y, z coordinates correspond to the first three principal components.} \label{Fig:SVM3D_PCA_ISOMAP} \end{figure} { Another way of implementing persistence landscapes in statistical analysis uses a $14 \times 14$ matrix of pairwise landscape distances, calculated from the Eq.~(\ref{Eq:LandscapeDistance}) with $p = 2$ ($L_2$-norm). In such matrix, the $(i, j)$-th entry represents the \emph{$p$-Landscape distance} between the $i$-th and $j$-th conformation ($i,j = 1, 2, \ldots,14$). This distance matrix serves as input for the Isomap software \citep{TenenbaumISOMAP} which in return provides approximate 3D coordinates of the 14 conformations relative to each other. The Isomap coordinates are embedded in the metric space induced by the $L_2$ distance. 
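For completeness, a minimal Python sketch of this second pipeline is given below: it computes the pairwise $p$-Landscape distance (with $p=2$) between discretised landscapes, embeds the resulting distance matrix in three dimensions by classical multidimensional scaling (a stand-in for the \textsc{Isomap} software used here), and implements the exact two-sample permutation test described earlier. The function names and the zero-padding convention are ours and purely illustrative.
\begin{verbatim}
import numpy as np
from itertools import combinations

def landscape_l2_distance(L1, L2, t_grid):
    # p = 2 landscape distance between two discretised landscapes;
    # pad with zero contours if they have different numbers of rows
    k = max(L1.shape[0], L2.shape[0])
    A = np.zeros((k, t_grid.size)); A[:L1.shape[0]] = L1
    B = np.zeros((k, t_grid.size)); B[:L2.shape[0]] = L2
    dt = t_grid[1] - t_grid[0]
    return np.sqrt(((A - B) ** 2).sum() * dt)

def classical_mds(D, dim=3):
    # embed a distance matrix in R^dim (stand-in for the Isomap step)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J            # Gram matrix via double centring
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def exact_permutation_pvalue(x1, x2):
    # two-sample permutation test with the |t|-statistic defined in the text
    def stat(a, b):
        return abs(a.mean() - b.mean()) / np.sqrt(
            a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    x = np.concatenate([x1, x2]); n1 = len(x1)
    t_obs = stat(x1, x2)
    ts = []
    for idx in combinations(range(len(x)), n1):
        mask = np.zeros(len(x), bool); mask[list(idx)] = True
        ts.append(stat(x[mask], x[~mask]))
    return np.mean(np.array(ts) >= t_obs)
\end{verbatim}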
To assess the error of the Isomap embedding we find the maximum absolute difference between the landscape $L_2$ distance and the Euclidean distance calculated via Isomap. The maximum error amounts to 0.043 (deg 0), 0.016 (deg 1), and 0.009 (deg 2). We also calculate the mean square error; these values are 0.017 (deg 0), 0.007 (deg 1), and 0.004 (deg 2), which are relatively small. Hence we may proceed with SVM using the 3D Isomap embedded coordinates. Applying SVM with a linear kernel to the entire dataset we find that the classification boundary accurately separates the two protein groups. The hyperplanes are similar to those based on SVM with principal components and thus the figures are not shown. } \subsection{Statistical Inference} To measure the statistical significance of visually observed differences between the closed and the open conformation we use a permutation test. For each degree, we calculate fourteen sample values of the random variable $X$ using Eq.~(\ref{Eq:RandomVariable}). The permutation test carried out at the significance level of $0.05$ yields a $p$-value of $5.83 \times 10^{-4}$ for homology in both degree 0 and degree 1. We obtain the same $p$-value since in both cases the observed statistic was the most extreme among all 1716 possible permutations. Hence, at the significance level $\alpha = 0.05$ we have compelling evidence that in the space of dynamical distances the closed and the open MBP conformation differ significantly both in the number of connected components and in the number of one-dimensional loops. Concerning the second homology group $H_2$, the test $p$-value of 0.0396 indicates moderate evidence of a difference between closed and open proteins at the level $\alpha = 0.05$. What can we infer from these results? In the space of dynamical distances the interpretation of results is not as straightforward as when actual protein coordinates are used. One may wonder about the meaning of the `number of connected components' in this space; since dynamical distances cause residues with (anti)correlated motion to cluster together, it seems reasonable that results in dimension 0 refer to the number of correlated pieces; similarly, a one-dimensional `loop' could correspond to `a channel of interaction.' If so, then we have observed a statistically significant difference between the two conformations in terms of the number of mutually correlated pieces and the number of interaction channels between residues. In light of the findings from this and the previous section, we note that topological data analysis not only provides different ways to visually compare closed and open MBP conformations, but also gives rise to a hypothesis test for measuring the statistical significance of visually observed differences. Note that our topological results correspond to those obtained from the initial physical modeling of MBP, see \citet{MBP:DynamicalModel}; this affirms that the first twenty nontrivial normal modes we considered in the correlation matrices are sufficient to establish a functional difference between the two protein conformations. \subsection{Exploring Locations of Residues} The last part of our research explores the locations of residues pertinent to protein function, in particular, active sites and allosteric pathway residues. We also touch upon flexible residues. As we already know, active sites are essential in sugar binding. They are fairly constrained in their motion inside the protein and well correlated with other residues.
It is thus expected that the dynamical distances from Equation~\eqref{Eq:DistanceMatrix} are rather small for these residues. We show that \emph{in the dynamical space active sites lie in the vicinity of most persistent loops} in the Vietoris-Rips complex. This is illustrated for example in the 1MPD structure, see Figure~\ref{Fig:1MPD_ActiveSites}. Out of thirteen active sites in this structure, ten are positioned near the longest lived loop and the other three dwell in the vicinity of the second most persistent loop. \begin{figure}[!htbp]\small \begin{center} \resizebox{0.65\columnwidth}{!}{ \includegraphics[angle=0.,width=0.65\textwidth]{Figure10.pdf } \end{center} \caption{Active sites (red circles) in the Rips complex of the 1MPD structure, at filtration $t = 0.150$, when the largest loop is still in formation. The majority of active sites lie close to this loop and just a few are positioned around the second largest loop.} \label{Fig:1MPD_ActiveSites} \end{figure} A similar result holds for all holo-conformations from Table~\ref{Tab:MBPstructures} -- the bulk of active site residues cluster around the largest loop in the Rips complex and a few are found near other prominent loops. Therefore the most persistent loops seem to be of special importance. Next we investigate how the most persistent loop relates to shortest allosteric pathway residues. As mentioned in Section~\ref{Sec:MBP}, allostery provides indirect access to active sites. The interaction is channeled via a pathway connecting the active site and the allosteric site at the exterior of the protein. Of interest are the shortest allosteric paths as they are most likely to efficiently transmit stimuli. Such paths can be computed via the \textsc{AlloPathFinder} software \citep{Tang:2007} which uses Dijkstra's algorithm \citep{Dijkstra:1959}. Among twenty three residues from the allosteric site in the 1MPD structure \citep{Rizk:2011}, four are best candidates to interact with an allosteric effector, given the size of their solvent accessible surface area \citep{ASA-View}. Combined with thirteen active sites we get 52 endpoint assignments, or (since a pair of endpoints can yield multiple solutions) 316 unique shortest allosteric pathways of lengths ranging from five through ten. After excluding long paths as less potent in conducting impulses from the allosteric effector, we focus on paths of lengths five and six, comprised by 19 and 51 residues, respectively. The first set is a subset of the other, so there are 51 residues of interest, depicted in Figure~\ref{Fig:1MPD_AlloPath_and_FlexibleResidues}. Nearly all are located near the most persistent loop in the Rips complex. \begin{figure}[!htbp] \begin{center} \resizebox{0.75\columnwidth}{!}{ \includegraphics[angle=0.,width=0.75\textwidth]{Figure11.pdf } \end{center} \caption{Layout of shortest allosteric pathway residues (pink dots) of the 1MPD structure inside the Rips complex shown at filtration $t = 150$. There are 51 residues of interest and all but one are scattered around the largest loop. These residues are well correlated with the rest of the protein structure and connect to the Rips complex at early stages of its formations. 
In contrast, flexible residues (green dots) are among the last ones to connect; they dwell in peripheral regions of the protein where they oscillate with large amplitudes and consequently are the least correlated with other residues.} \label{Fig:1MPD_AlloPath_and_FlexibleResidues} \end{figure} Let us now mention flexible residues (see Figure~\ref{Fig:1MPD_AlloPath_and_FlexibleResidues}). While the majority of vertices connect early on during the evolution of the Rips complex, vertices corresponding to flexible residues are among the last ones to connect. They strongly oscillate around their equilibrium positions and are poorly correlated with the rest of the protein structure, thus unlikely to take a role in sugar binding. For more details see \citet{MBP:DynamicalModel}. \begin{figure}[!htbp] \begin{center} \resizebox{0.75\columnwidth}{!}{ \includegraphics[angle=0.,width=0.75\textwidth]{Figure12.pdf } \end{center} \caption{The short cycle representing the most persistent loop in the 1MPD structure, obtained via the \emph{Short Loop} algorithm of \citet{ShortLoop}. Computations are performed on the Vietoris-Rips complex built on \textsc{Isomap} embedded dynamical coordinates at the filtration parameter which corresponds to the midpoint of the lifetime of the most persistent loop ($t = 0.2760$).} \label{Fig:1MPD_ShortLoop_HalfTime} \end{figure} Results presented so far in this section are qualitative, obtained from a visual representation of the Rips complex. Now we take a more quantitative approach to the observed most persistent loop. We use the \emph{Short Loop} software \citep{ShortLoop}. This algorithm computes the shortest cycle that represents a given homology class of degree one. To calculate a cycle representing the most persistent loop, for each investigated structure we consider the filtered Rips complex at filtration value which is the midpoint of the longest interval in the degree-one barcode. For the resulting cycle for the 1MPD structure, see Figure~\ref{Fig:1MPD_ShortLoop_HalfTime}. Three out of the six vertices in the shortest cycle belong to the set of 51 interesting allosteric pathway residues; if paths of length seven are included, then four vertices from the short loop belong to the set of allosteric pathway residues. Moreover, if all twenty three residues from the allosteric site are considered, then five out of six vertices from the short loop belong to the set of allosteric pathway residues, taking into account paths of length up to seven. Last but not least, we observe that all open conformations feature several short loops, while in all closed conformations the algorithm finds a single short loop (except in the case of the 1FQC structure where an additional smaller cycle appears). In summary, the most persistent loop in the filtered Rips complex of the maltose-binding protein seems to hold a special biological importance; in all holo-structures the majority of active sites as well as residues that comprise shortest allosteric paths are identified around the most persistent loop in the complex. Hence the topological approach provides a valuable input in identification of active sites and allosteric pathway residues. Such information can be useful in future research to single out the best candidates for ligand binding, e.g. in the design of glucose biosensors \citep{Marvin:1997}. 
Instead of looking at a large number of possible residues, we can focus our attention on those that are in the vicinity of the largest loops, saving time and resources while investigating new protein structures. \section{SUMMARY AND FURTHER GOALS} \label{Sec:Summary} We have studied a new functional summary for the `shape' of data, the persistence landscape, which was introduced by \citet{Bubenik2015}. Unlike other topological summaries, e.g. the barcode and the persistence diagram, one can obtain the Fr\'{e}chet mean and Fr\'{e}chet standard deviation of persistence landscapes. Consequently, persistence landscapes are advantageous for statistical inference. Following a successful application of this theory to synthetic data from geometrical objects (disk and annulus), we analyzed data sets of biological importance, namely, fourteen structures of the maltose-binding protein found in the \emph{Escherichia coli} bacterium. For that purpose we used dynamical distances obtained from pairwise correlations among amino acid residues. The correlation matrices originated from the elastic network model developed by taking into account the first twenty nontrivial modes of oscillation. After performing the topological data analysis we confirmed a statistically significant difference between closed and open conformations of the maltose-binding protein, i.e. we were able to discriminate among structural changes pertinent to protein functioning. SVM with a linear kernel showed a good distinction between closed and open protein conformations. In addition, snapshots of the filtered Vietoris-Rips complex revealed that the most persistent loops host amino acid residues that are actively involved in sugar uptake. Moreover, we observed that residues which comprise the shortest allosteric interaction pathways also cluster along the largest loop in the complex. Therefore, the presented topological approach can provide a preliminary screening method for the identification of residues susceptible to ligand binding and allosteric manipulation, which could have a potential use in biosensors. Our confidence is reinforced by the fact that the topological results correspond to the results attained via a physical approach. For point cloud data of smaller sizes it is natural to apply the shape analysis developed by several researchers, \citet{Dryden1998, BP03, BBP09}, and \citet{Bha08}, for example. However, for the maltose-binding protein it is important to analyze the dynamical distance matrices, which contain information on the mutual interaction between residues, and shape analysis is not applicable to these correlation matrices. We conclude with possible future research goals. \citet{mileyko2011} and \citet{Turner2013} have laid out a theoretical foundation for distributions of persistence diagrams. They also provided an algorithm for computing the sample Fr\'{e}chet mean of persistence diagrams. It would be interesting to see how their approach works with our maltose-binding protein data. \citet{Bendich2011} observed that computing homology by growing Euclidean balls is sensitive to outliers and recommended the use of a metric derived from a random walk function. \citet{FasyEtAl2014} investigated the analysis of PCD with kernel density estimators and showed that persistent homology based on kernel density estimation is less sensitive to outliers. Motivated by the work of \citet{Bendich2011} and \citet{FasyEtAl2014} we are currently investigating extensions of this work.
In this article we did not explore the maltose-binding protein using persistence landscapes for homology in degree two. Analyzing persistence diagrams for degrees 1 and 2, \citet{Gameiro:2012} were able to successfully predict the compressibility of various protein structures (\citet{Gekko:1986}). It would be interesting to study the compressibility of protein structures based on higher degree persistence landscapes. \section*{ACKNOWLEDGMENTS} {\footnotesize V.K.N. would like to thank to Walter Roberson from the Matlab Team for a clarification on SVM. \newline \noindent P.~B. gratefully acknowledges support from AFOSR grant \# FA9550-13-1-0115. \newline \noindent G.~H. acknowledges funding support provided by the University of Alberta Orthodontic Division's McIntyre Memorial Fund and NSERC Discovery Grant 293180. \newline \noindent Computations in this research were largely enabled by resources provided by WestGrid and Compute/Calcul Canada. } \section*{GRAPHICS SOFTWARE} The following software was used to generate and process images. {\footnotesize \begin{description} \item[\textbf{Figure~\ref{Fig:FlowDiskAnnulus}:}] \noindent Evolution plots (a) were generated using the \textsc{PLEX} library \citep{PLEX} called in \citet{MATLAB2005} and further edited in \citet{MATLAB2011}, which was also the the source of other plots. Barcode plots (b) were made via \textsc{javaPlex} library \citep{javaPlex}. Persistence landscapes (c) generated using codes provided by \citet{Bubenik2015}. All images were formatted in \citet{Inkscape}. \item[\textbf{Figure~\ref{Fig:1MPD_CorrelationAndDistanceMatrix}}:] \noindent The correlation matrix was retrieved from the ANM web server \citep{ANM:2006} and based on this the distance matrix was calculated using \citet{MATLAB2011}, which was also used for visualizing both matrices. Images were formatted in \citet{Inkscape}. \item[\textbf{Figures~\ref{Fig:MBP_ScreePlots}}:] \noindent Plots were created via the \textsc{Isomap} library \citep{TenenbaumISOMAP} called in \citet{MATLAB2011} and formatted using \citet{Inkscape}. \item[\textbf{Figure~\ref{Fig:PersistenceLandscape}},~\textbf{\ref{Fig:MBP_AveragePersistenceLandscapes_fromDistances}}:] \noindent Images were created in \citet{MATLAB2011} and formatted using \citet{Inkscape}. Persistence landscape codes were provided by \citet{Bubenik2015}. \item[\textbf{Figures~\ref{Fig:MBP_Evolution}}, \textbf{\ref{Fig:1MPD_ActiveSites}--\ref{Fig:1MPD_ShortLoop_HalfTime}}:] \noindent Plots generated via the \textsc{PLEX} library \citep{PLEX} called in \citet{MATLAB2005}, further edited in \citet{MATLAB2011}, and formatted using \citet{Inkscape}. \item[\textbf{Figure~\ref{Fig:MBP_HomologyPlots_fromDistances}}:] \noindent With the aid of computing resources provided by WestGrid and Compute/Calcul Canada, plots were created via the \textsc{javaPlex} library \citep{javaPlex}, called in \citet{MATLAB2011}. Plots were formatted using \citet{Inkscape}. \item[\textbf{Figure~\ref{Fig:SVM3D_PCA_ISOMAP}}] \noindent This image was generated in \citet{MATLAB2011} and formatted using \citet{Inkscape}. Code for making a 3D plot was adopted from \cite{SVM2013} and accordingly modified. \end{description} } \bibliographystyle{apalike2}
\section{INTRODUCTION} The heartbeat is controlled by a particular pattern of an electrical wave. When the heart is damaged by a myocardial infarction (MI), this pattern is disturbed, leading to arrhythmias and heart failure. Thus, it is important to understand how this pattern is formed and how MI scars affect it. Here we begin to explore these effects by mathematical modelling and simulation of action potential propagation in a slab of cardiac tissue, based on and compared to experiments performed on post MI rabbit hearts. \section{MATHEMATICAL MODEL} \subsection{Tissue model} We consider the monodomain model given by the set of equations \begin{subequations} \label{eq::monodomain} \begin{gather} \label{eq::tissue_monodomain} \chi C_m\frac{\partial V}{\partial t} - \nabla \cdot (\vec{\sigma} \nabla V) = -\chi \, I_\text{ion} -\chi \, I_\text{stim} (\vec{x},t), \\ \label{eq::Iion} I_\text{ion} = I_\text{ion}\big(V(\vec{x},t),\vec{y}(\vec{x},t)\big),\\ \label{eq::gates} \frac{\partial \vec{y}}{\partial t} = \vec{R}\big(V,\vec{y}\big),\\ \text{for } \quad \vec{x} \in \Omega, \quad t \in [0,\infty), \end{gather} with boundary conditions \begin{gather} \label{eq::bc_monodomain} \frac{\partial V}{\partial \vec{n}} = 0 \quad \text{on } \quad \vec{x} \in \partial\Omega, \end{gather} \end{subequations} in a spacial domain $\Omega\in \mathbb{R}^3$ representing a piece of cardiac tissue with $\vec{n}$ being the outer normal unit vector to its boundary $\partial \Omega$. Here $V$ is the cardiac transmembrane electric voltage potential measured in $\text{mV}$, $I_\text{ion}$ is electric current density across the membrane of cardiomyocyte cells measured in $\mu\text{A mm}^{-2}$, $I_\text{stim}$ is the density of an externally applied stimulus current also measured in $\mu\text{A mm}^{-2}$, $\chi$ is the surface-to-volume ratio of cardiomyocytes measured in $\text{mm}^{-1}$, $\vec{\sigma}$ is the effective conductivity of the cardiac tissue measured in $\text{mS mm}^{-1}$ and $C_m$ is the specific cell membrane capacitance measured in $\mu\text{F mm}^{-2}$. The transmembrane current $I_\text{ion}$ is modelled as a function of a vector of state variables $\vec{y}$ representing ionic concentrations and ionic channel gating variables determined by a system of nonlinear ordinary differential equations with rates given by $\vec{R}$. The monodomain model provides a biophysical continuum representation of cardiac electrophysiology in both space and time, linking tissue-scale electrical propagation with cellular electrical excitation. The monodomain equations are derived from the laws of conservation of charge and the assumption that infinitesimal pieces of the cardiomyocyte membrane may be modelled as an circuit of a conductor and capacitor connected in parallel. Specific values for $\sigma$, $C_m$ and $\chi$ as well as for the geometry of the tissue used are given further below. \subsection{Single-cell electrophysiology models} A large number of single-cell ionic current models given by equations \eqref{eq::Iion} and \eqref{eq::gates} of the monodomain system \eqref{eq::monodomain} exist to represent the conducting properties of cardiac myocyte membranes. These models can be classified into conceptual and detailed with the detailed ionic models further divided into models for various type cells (atrial, ventricular, sino-atrial, Purkinje), various species (human, porcine, canine, leporine, murine) and various state of remodelling (healthy normal, in heart failure etc.) 
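Before discussing specific models, it may help to see the structure of equations \eqref{eq::Iion} and \eqref{eq::gates} in the simplest possible setting. The Python sketch below integrates a conceptual two-variable membrane model (the FitzHugh--Nagumo caricature) with forward Euler; the parameter values and units are illustrative only and are not those of the detailed models used later in this study.
\begin{verbatim}
import numpy as np

# conceptual two-variable cell model: I_ion = I_ion(V, y), dy/dt = R(V, y)
def I_ion(V, w):
    return -(V - V**3 / 3.0 - w)

def R(V, w, eps=0.08, beta=0.7, gamma=0.8):
    return eps * (V + beta - gamma * w)

def single_cell(T=200.0, dt=0.01, C_m=1.0):
    n = int(T / dt)
    V, w = -1.2, -0.6                           # resting state
    Vs = np.empty(n)
    for i in range(n):
        I_stim = -0.5 if i * dt < 5.0 else 0.0  # inward (negative) stimulus
        dV = -(I_ion(V, w) + I_stim) / C_m      # dV/dt = -(I_ion + I_stim)/C_m
        dw = R(V, w)
        V, w = V + dt * dV, w + dt * dw         # forward Euler step
        Vs[i] = V
    return Vs
\end{verbatim}
The detailed ventricular models used below replace these two equations by systems with tens of state variables and ionic currents.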
These models are subject to continuous re-evaluation and refinement as new experimental data become available. The contemporary models include tens of ordinary differential equations, and online model repositories such as CellML\footnote{\url{http://models.cellml.org}} have been set up for ease of their dissemination and use. Details of the specific single-cell ionic current models we use are provided further below. \section{NUMERICAL METHODS OF SOLUTION} \subsection{Operator splitting} The monodomain model \eqref{eq::monodomain} is characterised by a large range of significant scales, e.g. cardiac action potentials have extremely fast and narrow upstrokes (depolarization) and very slow and broad recovery (repolarization) phases. An effective numerical scheme based on an operator splitting approach (Godunov and Strang splitting \citep{Strang1968}, also known as the fractional timestep method \citep{Press92a}) was proposed by \citet{ZhilinQu1999} and is adopted in our study in the following form. The nonlinear monodomain model \eqref{eq::monodomain} is split into a set of nonlinear ordinary differential equations \begin{subequations} \begin{align} \frac{\partial V}{\partial t} &= -\frac{1}{C_m}\big(I_\text{ion}(V,\vec{y}) + I_\text{stim}\big), \nonumber\\ \frac{\partial \vec{y}}{\partial t} &= \vec{R}\big(V,\vec{y}\big), \label{eq::monodomain:cell} \end{align} and a linear diffusion partial differential equation \begin{align} \frac{\partial V}{\partial t} &= \frac{1}{\chi C_m}\nabla \cdot (\vec{\sigma} \nabla V). \label{eq::monodomain:diff} \end{align} \end{subequations} To integrate the complete monodomain model \eqref{eq::monodomain} in the interval $[t_n,t_n+\Delta t]$ we take the following three fractional steps of the splitting algorithm. \begin{enumerate} \item Solve the nonlinear ODE system for $V_\theta^{n}$ at $t_n < t \le t_n + \theta \Delta t$ with known $V^n$ \begin{align*} \frac{\partial V}{\partial t} = -\frac{1}{C_m}I_\text{ion}(V,\vec{y}), \qquad \frac{\partial \vec{y}}{\partial t} = \vec{R}(V, \vec{y}), \qquad V(t_n) = V^n. \end{align*} \item Solve the linear PDE for $V_\theta^{n+1}$ at $t_n < t \le t_n + \Delta t$ \begin{equation*} \frac{\partial V}{\partial t} = \frac{1}{\chi C_m}\nabla \cdot (\vec{\sigma} \nabla V), \quad \quad V(t_n) = V_\theta^{n}. \end{equation*} \item Solve the ODE system again for $V^{n+1}$ at $t_n + \theta \Delta t < t \le t_n + \Delta t$ \begin{align*} \frac{\partial V}{\partial t} = -\frac{1}{C_m}I_\text{ion}(V,\vec{y}), \qquad \frac{\partial \vec{y}}{\partial t} = \vec{R}(V, \vec{y}), \qquad V(t_n+ \theta \Delta t) = V_\theta^{n+1}. \end{align*} \end{enumerate} Further details on the operator splitting method applied to the monodomain problem can be found in \citep{Sundnes2006}. \subsection{Reaction part} In this form the normally stiff initial value problem \eqref{eq::monodomain:cell} can be integrated separately using one of the many known methods for the solution of initial value problems, including adaptive time stepping. Depending on the specific ionic model, a forward Euler method may be used for the temporal stepping in less stiff cases, or a fourth-order Runge--Kutta method in stiffer cases. \subsection{Diffusion part} The diffusion part of the monodomain model \eqref{eq::monodomain} is solved using a finite-element method as detailed below.
For the spatial discretisation of the equation \eqref{eq::monodomain:diff} the numerical approximation $V^h(\vec{x},t)$ of the transmembrane voltage potential $V(\vec{x},t)$ is assumed to take the form of a finite expansion in a set of continuous piecewise polynomial nodal basis functions $\big\{\phi^h_i(\vec{x}), i=1..p\big\}$ with time-dependent coefficients $\big\{V_i(t), i=1..p\big\}$, each representing a nodal value at time $t$, \begin{align} V(\vec{x},t) \approx V^h(\vec{x},t) = \sum_{i=1}^p \phi_i(\vec{x}) V_i(t), \label{eq::femexpansion} \end{align} where $p = \dim\{\phi^h\}$, and $h$ denotes a parameter measuring the size of the domain partition. Substituting expansion \eqref{eq::femexpansion} in equation \eqref{eq::tissue_monodomain}, taking the Galerkin projection and using the boundary condition \eqref{eq::bc_monodomain}, the following weak variational form of the monodomain equation is obtained \begin{align} \label{eq::variational_form_mon} \chi \, C_m \Big(\frac{\partial V^h}{\partial t}, \phi^h_i\Big)_\Omega + \Big(\vec{\sigma} \nabla V^h, \nabla \phi^h_i\Big)_\Omega =0 , \\ \quad i=1,\dots,p, \nonumber \end{align} representing a weighted-residual condition for the minimization of the residual error, where the round brackets $(v,w)_\Omega = \int_{\Omega} vw \,d\Omega$ denote the $L^2$ inner product on $\Omega$. The Galerkin approximation \eqref{eq::variational_form_mon} represents a set of $p$ ordinary differential equations in time for the $p$ coefficient functions $V_i(t)$ in the expansion \eqref{eq::femexpansion}. For brevity, in the following we will drop the superscript $h$. For the temporal discretisation of the Galerkin projection equations \eqref{eq::variational_form_mon} the time derivative is approximated by a first-order accurate finite difference formula and the following implicit (backward Euler) numerical scheme is used \begin{equation} \chi \, C_m \, \vec{M}\frac{\vec{V}^{n+1}-\vec{V}^n}{\Delta t} + \vec{K}\,\vec{V}^{n+1} = 0, \label{eq::algberiac_fe_mono} \end{equation} where $\vec{V}^n = [V_1^n,V_2^n,\dots, V_p^n]^T$ now denotes the $p$-dimensional vector of voltage values at time level $t_n=n\Delta t$ with time step $\Delta t$, and where $$ [M_{ij}] = (\phi_i, \phi_j)_\Omega, $$ denotes the mass matrix and $$ [K_{ij}] = (\nabla \phi_i, \vec{\sigma}\nabla \phi_j)_\Omega, $$ denotes the stiffness matrix. Finally, the vector of unknown voltage values at time level $t_{n+1}$ is determined by solving the linear system of equations \begin{equation} (\vec{M} + \frac{\Delta t}{\chi \, C_m}\, \vec{K}) \, \vec{V}^{n+1} = \vec{M}\,\vec{V}^n. \label{eq::monodomain_FE} \end{equation} \subsection{Practical implementation} Equation \eqref{eq::monodomain_FE} is solved using the \texttt{libMesh} open source parallel C++ finite element library\footnote{\url{libmesh.github.io}} \citep{Kirk2006}, and the solution of the linear systems and the time stepping rely on the solvers provided by the \texttt{PETSc} library\footnote{\url{www.mcs.anl.gov/petsc}}. Simulations are run both on our local Linux workstations with two Intel(R) Xeon(R) E5-2699 2.30 GHz CPUs (up to 72 threads) and 128 GB of memory at the School of Mathematics and Statistics, University of Glasgow, as well as on the RCUK flagship High-Performance parallel computer ARCHER\footnote{\url{www.archer.ac.uk}}. VisIt\footnote{\url{https://visit.llnl.gov}} is used for post-processing the two- and three-dimensional simulations.
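The structure of the overall algorithm is easiest to see in one spatial dimension. The following Python sketch is not the production code (which is the 3D \texttt{libMesh}/\texttt{PETSc} finite-element implementation described above); it carries out the three fractional steps with $\theta = 1/2$ (Strang splitting) on a finite-difference cable with FitzHugh--Nagumo kinetics, and all parameter values are illustrative.
\begin{verbatim}
import numpy as np

L, N, dt, T = 20.0, 201, 0.05, 50.0
dx = L / (N - 1)
D = 0.1                                    # lumped sigma / (chi * C_m)
V = -1.2 * np.ones(N); w = -0.6 * np.ones(N)
V[:10] = 1.0                               # initial stimulus at one end

I_ion = lambda V, w: -(V - V**3 / 3.0 - w)
R = lambda V, w: 0.08 * (V + 0.7 - 0.8 * w)

# backward-Euler matrix for the diffusion step, zero-flux boundaries
lam = D * dt / dx**2
A = np.diag((1 + 2 * lam) * np.ones(N)) \
    + np.diag(-lam * np.ones(N - 1), 1) + np.diag(-lam * np.ones(N - 1), -1)
A[0, 1] = -2 * lam; A[-1, -2] = -2 * lam   # Neumann ends (ghost-node reflection)

def react(V, w, tau):                      # forward Euler on the ODE system
    return V + tau * (-I_ion(V, w)), w + tau * R(V, w)

for n in range(int(T / dt)):
    V, w = react(V, w, dt / 2)             # step 1: reaction over theta*dt
    V = np.linalg.solve(A, V)              # step 2: implicit diffusion over dt
    V, w = react(V, w, dt / 2)             # step 3: reaction over (1-theta)*dt
\end{verbatim}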
\begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{BenchCross} \caption{Computational domain A contour plot of activation times for the benchmark problem defined in section \ref{sect::benchmark}. The computational domain is clearly visible. The stimulus site is in the lower central vertex while the most distant point is the upper central vertex. } \label{fig::bench:geometry} \end{figure} \begin{table}[t] \centering \begin{tabular}{c l |c c c c|} \cline{3-6} ~ & ~ & \multicolumn{4}{c|}{$\Delta x$}\\ \cline{3-6} ~ & ~ & 0.5mm & 0.333mm & 0.2mm & 0.1mm \\ [0.5ex] \hline \multicolumn{1}{ |c | }{\multirow{4}{*}{$\Delta t$}}&0.05ms & 81.75 & 60.95 & 52.15 & 47.20 \\ \multicolumn{1}{ |c | }{} &0.025ms & 80.70 & 59.85 & 49.94 & 45.40\\ \multicolumn{1}{ |c | }{} &0.01ms & 80.06 & 59.20 & 49.94 & 44.26\\ \multicolumn{1}{ |c | }{} &0.005ms & 79.82 & 58.96 & 49.65 & 43.85\\ \hline \end{tabular} \caption{Values for the activation time [ms] at different spatial and temporal discretisation steps measured in our numerical code for the benchmark problem described in section \ref{sect::benchmark}.} \label{table:TPBench} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.99\linewidth]{TPBenchLog} \caption{Convergence of the value of the activation time as time and space steps are decreased. Values are measured in our numerical code for the benchmark problem described in section \ref{sect::benchmark}.} \label{fig:tpbenchlog} \end{figure} \section{BENCHMARKING AND VALIDATION} \label{sect::benchmark} \subsection{Benchmark description} Our mathematical model and its numerical implementation was validated by comparison with a standard cardiac tissue electrophysiology simulation benchmark case developed by the research community \citep{Niederer2011}. The benchmark involved 11 independently developed numerical codes providing numerical simulations of a well-defined problem with unique solution for a number of different resolutions. The benchmark seeks to compare solutions of the monodomain equations \eqref{eq::monodomain} on a cuboid domain of dimensions $20 \times 7 \times 3$ mm using the \citet{tenTusscher2006} model of human epicardial myocytes as a model of the transmembrane ion current density $I_\text{ion}$. The initial stimulus current $I_\text{stim}$ has a current density amplitude of $50000 \mu\text{A cm}^{-3}$ and is applied to a cube with size $1.5 \times 1.5 \times 1.5$ mm positioned at the corner of the full cuboid domain and a stimulus duration of 2 ms. The value of the cell surface to volume ratio $\chi$ is 140 mm$^{-1}$, and it is assumed that the cardiac fibres are aligned with the long, 20 mm, axis of the cuboid domain so the conductivity tensor $\vec{\sigma}$ is diagonal with values $[0.1334, 0.0176, 0.0176]$ S m$^{-1}$ along its main diagonal. The so called ``activation time'' defined as the time it takes for a cardiac action potential to travel from the stimulation site to the most distant point in the computational domain (i.e. the point opposite the stimulation site) is requested as a diagnostic output quantity from the numerical simulation. Figure \ref{fig::bench:geometry} shows the geometry of the benchmark case. \subsection{Validation and benchmarking} We have verified that our numerical code is in excellent agreement with the community benchmark results. Since the computational domain of the benchmark problem is rectangular domain we have used a regular square finite-element mesh with space step $\Delta x$. 
For time stepping we have used the simple forward Euler method with time step $\Delta t$. Table \ref{table:TPBench} shows the values of the activation time we have obtained at various spatial and temporal resolutions. At the highest resolution of $\Delta x=0.1$ mm and $\Delta t = 10^{-4}$ ms the activation time obtained using our code is $43.85$ ms which is within 2\% error bar from the 42.82 ms high-accuracy value agreed upon in the benchmark paper \citep{Niederer2011}. Figure \ref{fig:tpbenchlog} shows a convergence test we have performed with decreasing space step and time step and it is clear that our solution is converging to values closer to the community benchmark value just quoted. We remark that for our code the increase of the spatial resolution leads to more significant increases of accuracy than the increase in temporal resolution. \begin{table}[t] \centering \begin{tabular}{l c c} \hline Parameters & M-cells (Healthy) & M-cells (MI) \\ \hline $K_0$, External potassium concentration (mM) &4.5865 &4.3087 \\ $Ca_0$, External calcium concentration (mM)&2.0467&2.5678 \\ $Na_0$, External sodium concentration (mM) &146.30 &165.71 \\ $gna$, Peak INa conductance & 14 & 12 \\ $gca$, Strength of Ca current flux (mmol/(cm C)) &259.86 & 193.23 \\ $pca$, Constant in ICal (cm/s) &0.0002 & 0.0008 \\ $r1$, Opening rate in ICal & 0.4804 &0.3546 \\ $r2$, Closing rate in ICal & 2.2825 &2.8694 \\ $gkix$, Peak IK1 conductance (mS/$\mu$F) & 0.4400 & 0.2604 \\ $gtof$, Peak Ito conductance (mS/$\mu$F) &0.0221 & 0.0421 \\ \hline (Global) root mean square error & 0.0133 & 0.1070 \\ [1ex] \hline \end{tabular} \caption{Parameter values of the model of \citet{Mahajan2008} re-fitted to the experimental data on M-cells at 3 Hz pacing rate form \citep{McIntosh2000}.} \label{table3} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{Weiss_exp_Mcell_333ms_healthy_HF} \caption{Action potential computed using the model of \citet{Mahajan2008} with parameter values given in table \ref{table3} in comparison with experimental data from \citep{McIntosh2000}.} \label{AP_Mcell} \end{figure} \section{PARAMETER RE-FITTING OF A RABBIT VENTRICULAR SINGLE CELL IONIC MODEL} \label{sect::refitting} In order to achieve an accurate comparison with experimental measurements in rabbit ventricular tissue samples e.g. \citep{Allan2016,Myles2010,Myles2009phd} an appropriate single cell ionic action potential model must be selected and refitted. To this end we have selected to use \citet{Mahajan2008} detailed action potential model, one of the modern rabbit ventricular AP models designed to accurately reproduce the dynamics of the cardiac action potential and intracellular calcium (Cai) cycling at rapid heart rates as relevant to ventricular tachycardia and fibrillation. Cardiac electrophysiology models are based on experimental data from a variety of sources, including measurements in different species and under different experimental conditions \citep{Cooper2016}. Refitting of model parameters is therefore necessary whenever new or more appropriate data sets are available. In our case, the model of \citet{Mahajan2008} was refitted to match the single cell experimental data of \citet{McIntosh2000} since these were measured by the same research group using identical experimental protocols. 
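A schematic version of this refitting loop is sketched below. It is meant only to convey the structure of the procedure: the actual fitting used a Matlab Nelder--Mead routine and the full \citet{Mahajan2008} model, whereas here a cheap algebraic surrogate stands in for the cell model, and the target values and parameter names are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# placeholder targets, e.g. APD90 [ms], APD50 [ms], CaT amplitude [a.u.]
TARGETS = np.array([180.0, 140.0, 0.45])

def surrogate_cell_model(p):
    # stand-in for "run the cell model and extract AP/CaT characteristics"
    gk, gca, scale = p
    return np.array([200.0 - 40.0 * gk,
                     110.0 + 20.0 * gca,
                     0.3 * scale])

def err(p):                                # mean-squared mismatch
    return np.mean((surrogate_cell_model(p) - TARGETS) ** 2)

fit = minimize(err, x0=np.array([1.0, 1.0, 1.0]), method="Nelder-Mead")
print(fit.x, fit.fun)                      # fitted parameters and residual
\end{verbatim}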
In \citep{McIntosh2000} action potential and intracellular Ca$^{2+}$ transient characteristics $x_i^\text{target}$ were measured in single cardiac myocytes from mid-myocardial regions of the left ventricle of rabbits with and without heart failure. These were fitted to the outputs of the model of \citet{Mahajan2008}, $x_i^\text{sim}$, by minimising the error function \begin{equation} \label{eq:err} \text{Err}_{\text{AP}} = \frac{1}{M} \sum_{i=1}^M (x_i^\text{sim} - x_i^\text{target})^2 \end{equation} with respect to selected parameter values, a procedure known as parameter estimation. For the parameter estimation we used a standard Matlab routine for derivative-free multivariable minimisation based on the Nelder--Mead simplex method \citep{Lagarias}. The results are shown in Table \ref{table3} and Figure \ref{AP_Mcell} below. \begin{table}[t] \centering \begin{tabular}{c l |c c c c |} \cline{3-6} ~ & ~ & \multicolumn{4}{c|}{$\Delta x$}\\ \cline{3-6} ~&~ & 0.5mm & 0.333mm & 0.2mm & 0.1mm \\ [0.5ex] \hline \multicolumn{1}{ |c | }{\multirow{5}{*}{$\Delta t$}}&0.05ms & X & X & X & X \\ \multicolumn{1}{ |c | }{}& 0.01ms & X & X & X & 54.19 \\ \multicolumn{1}{ |c | }{}& 0.005ms & X & X & 63.94 & 53.90 \\ \multicolumn{1}{ |c | }{}& 0.0025ms & X &82.30 & 63.81 & 53.76 \\ \multicolumn{1}{ |c | }{}& 0.0001ms & X &82.25 & 63.75 & 53.68 \\ \hline \end{tabular} \caption{Values of the activation time [ms] at different spatial and temporal discretisation steps measured in our numerical code for the benchmark geometry described in section \ref{sect::benchmark}, but with the model of \citet{Mahajan2008} refitted to healthy values rather than the \citet{tenTusscher2006} model.} \label{table:WeissBench} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.99\linewidth]{WeissBenchLog} \caption{Convergence of the value of the activation time as the time and space steps are decreased. Values are measured in our numerical code for the benchmark problem described in section \ref{sect::benchmark} with the model of \citet{Mahajan2008} refitted to healthy values.} \label{fig:weissbenchlog} \end{figure} The benchmark convergence test of section \ref{sect::benchmark} was repeated using the model of \citet{Mahajan2008} newly re-fitted to healthy values in order to establish a suitable resolution. Based on the results of Table \ref{table:WeissBench} and Figure \ref{fig:weissbenchlog} we determine that $\Delta x= 0.1$ mm and $\Delta t = 5\times10^{-3}$ ms provide a good trade-off between resolution and model accuracy, and we use these values for the simulations detailed in the next section. \section{MODELLING OF PROPAGATION IN SCARRED TRANSMURAL VENTRICULAR SLABS} \begin{figure}[t] \centering \begin{tabular}{ll} a) & b) \\ \includegraphics[width=0.22\linewidth]{MylesImage} & \includegraphics[width=0.42\linewidth]{MylesEndoStim}\\ c) & d) \\ \includegraphics[width=0.42\linewidth]{MylesEndo} & \includegraphics[width=0.42\linewidth]{MylesEpi} \\ \end{tabular} \caption{Transmural conduction into an infarct zone. Plots taken from figure 5.8 of \citep{Myles2009phd}.} \label{fig::expdata} \end{figure} \begin{figure}[t] \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[width=0.4\linewidth]{Sig}& \includegraphics[width=0.49\linewidth]{Symm0000}\\ \end{tabular} \caption{(a) Isotropic conductivity as a function of $x$ as given by equation \eqref{eq:cond:sigm}.
(b) Simulated activation times [ms] for the conductivity profile in part (a).} \label{fig:sig} \centering \begin{tabular}{l l l} \multirow{2}{*}{\includegraphics[width=0.12\linewidth]{VLegend}} & \includegraphics[width=0.42\linewidth]{Symm0018} & \includegraphics[width=0.42\linewidth]{Symm0024} \\ ~& \includegraphics[width=0.42\linewidth]{Symm0020} & \includegraphics[width=0.42\linewidth]{Symm0026} \\ ~& \includegraphics[width=0.42\linewidth]{Symm0022} & \includegraphics[width=0.42\linewidth]{Symm0028} \end{tabular} \caption{Propagation of an action potential in the case of sigmoidal conductivity \eqref{eq:cond:sigm}. Density maps of the transmembrane voltage potential are plotted at equidistant times $t_i= 5 + i \Delta t$, $i=1,\dots, 6$ and $\Delta t= 10$ ms.} \label{fig:voltages:sigm} \end{figure} \begin{figure}[t] \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[width=0.4\linewidth]{SigSine} & \includegraphics[width=0.49\linewidth]{Sine0000}\\ \end{tabular} \caption{(a) Isotropic conductivity with a fingering effect as a function of $x$ and $y$ as given by equation \eqref{eq:cond:fing}. (b) Simulated activation times [ms] for the conductivity profile in part (a).} \label{fig:sigsine} \centering \begin{tabular}{l l l} \multirow{2}{*}{\includegraphics[width=0.12\linewidth]{VLegend}} & \includegraphics[width=0.42\linewidth]{Sine0004} & \includegraphics[width=0.42\linewidth]{Sine0011} \\ ~& \includegraphics[width=0.42\linewidth]{Sine0007} & \includegraphics[width=0.42\linewidth]{Sine0012} \\ ~& \includegraphics[width=0.42\linewidth]{Sine0008} & \includegraphics[width=0.42\linewidth]{Sine0015} \end{tabular} \caption{Propagation of an action potential in the case of finger-like conductivity \eqref{eq:cond:fing}. Density maps of the transmembrane voltage potential are plotted at equidistant times $t_i= 5 + i \Delta t$, $i=1,\dots, 6$ and $\Delta t= 10$ ms.} \label{fig:voltages:fing} \end{figure} \subsection{Physiology of the infarcted zone} \subsection{Description of experiments} An example of the experimental measurements of transmural conduction into an infarct zone available from our collaborators \citep{Myles2009phd} is shown in Figure \ref{fig::expdata}. The upper left panel shows a plain image of the transmural surface of a wedge preparation from a ligated heart, with the endocardium uppermost and the epicardium at the bottom of the picture. The black squares indicate the position from which example APs are available for comparison: a) remote zone b) border zone and c) infarct zone. In the upper right panel is a schematic diagram of the preparation, indicating the position and the shape of the infarct border zone. The lower two panels show isochronal maps of activation time during endocardial and epicardial stimulation at left and right panels, respectively. The experiment has a number of notable features, including, \begin{description} \item[a)] The infarct zone has lower density of electrically excitable cells. \item[b)] The infarct border zone has significant undulations that protrude the healthy zone. \item[c)] The infarct zone has a reduced volume compared to the healthy zone resulting in a wedge-like trapezoidal shape of the transmural slab rather than a rectangular shape. \end{description} We will take the approach of modelling these features separately, in order to investigate their effects one at a time before we attempt to address the phenomena in full complexity and detail. 
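The conductivity fields used for this purpose in the next subsection can be generated directly from their defining formulas. A short sketch is given below; the constants repeat those quoted in the text, the evaluation grid is an arbitrary illustration, and the logistic form used here is algebraically identical to the sigmoid written out in the equations that follow.
\begin{verbatim}
import numpy as np

sigma_a, sigma_b, alpha = 1.3342, 0.3, 10.0   # healthy / infarct / steepness

def sigma_sigmoid(x, x0=1.0):
    # sigmoidal transition from sigma_a (healthy) to sigma_b (infarct)
    s = 1.0 / (1.0 + np.exp(-alpha * (x - x0)))
    return sigma_a + (sigma_b - sigma_a) * s

def sigma_fingered(x, y):
    # sinusoidally modulated ("fingered") border location
    return sigma_sigmoid(x, x0=1.0 + 0.1 * np.sin(44.86 * y))

# example evaluation on a small grid around the border (units as in the text)
x = np.linspace(0.0, 2.0, 101)
y = np.linspace(0.0, 0.7, 51)
X, Y = np.meshgrid(x, y)
field = sigma_fingered(X, Y)
\end{verbatim}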
To this end we perform direct numerical simulations of the monodomain problem \eqref{eq::monodomain} as specified in section \ref{sect::benchmark}, except that the ionic model $I_\text{ion}$ is replaced by the model of \citet{Mahajan2008} refitted to healthy values as described in section \ref{sect::refitting}, and with conductivity values as specified further below. \subsection{Modelling} The simplest way to model the infarct zone and feature (a) is to assume that the lower density of the myocytes in the infarct can be described by a reduced effective value of the conductivity in the infarct zone. To further focus on the effect of a well-defined border zone we will also assume that the conductivity is isotropic, so all components of the conductivity tensor are equal to the same scalar value $\sigma$. We take this value to depend sigmoidally on the intra-longitudinal coordinate $x$, \begin{equation} \sigma(x) = \sigma_a +(\sigma_b - \sigma_a)\frac{\exp(\alpha (x-x_0))}{1+\exp(\alpha(x-x_0))}, \qquad x_0=1, \label{eq:cond:sigm} \end{equation} where $\sigma_a=1.3342$ and $\sigma_b=0.3$ are the values of the conductivity deep in the healthy and the infarct zone, respectively, $x_0$ is the location of the border zone, and $\alpha=10$ is the steepness of the sigmoidal function. This conductivity profile is shown in Figure \ref{fig:sig}(a). The corresponding activation times are shown in Figure \ref{fig:sig}(b), while snapshots of the transmembrane voltage potential at a set of equidistant moments are shown in Figure \ref{fig:voltages:sigm}. The simulations show that the travelling front propagates faster when the conductivity is large and slows down when the conductivity is small. This effect is not observed in the experimental measurements, as seen in Figure \ref{fig::expdata}(a,b). Figure \ref{fig::expdata}(b) shows in fact that the propagating wave slows down within the infarct border zone but subsequently speeds up in the infarct zone and travels at a speed similar to the speed in the healthy zone. To investigate if this is an effect of the finger-like undulations in the infarct border zone we consider a conductivity profile given by the expression \begin{equation} \sigma(x,y) = \sigma_a +(\sigma_b - \sigma_a)\frac{\exp(\alpha (x-x_0(y)))}{1+\exp(\alpha(x-x_0(y)))}, \qquad x_0(y) = 1+0.1\sin(44.86y), \label{eq:cond:fing} \end{equation} where the border location $x_0$ is now modulated as a function of the intra-transversal $y$-direction. The modulating sine function mimics a fingering effect as shown in Figure \ref{fig:sigsine}(a). The corresponding activation times are shown in Figure \ref{fig:sigsine}(b), while snapshots of the transmembrane voltage potential at a set of equidistant moments are shown in Figure \ref{fig:voltages:fing}. The simulation results in this case are rather similar to the case of an unmodulated infarct boundary, apart from a weak modulation of the action potential front when it passes through the border. No speed-up is observed within the infarct zone. \section{CONCLUSION} We have constructed a mathematical model and implemented a numerical code for the solution of the monodomain problem \eqref{eq::monodomain} describing the propagation of electrical excitation in cardiac tissue. We have validated the code against a community-developed benchmark. We have selected an appropriate single-cell ionic current model and re-fitted its parameters to experimental data that conform to the protocols and procedures used in the lab of our collaborators.
With this, we have performed several direct numerical simulations in which an infarct zone is modelled simply as a region with reduced values of the conductivity. This alone has not been sufficient to provide a good qualitative comparison with observations, even if the undulation of the infarct border zone is taken into account. Our work can be extended and refined in a number of ways. Firstly, the conductivity values in the healthy and the infarct zones can be better estimated by further parameter fitting, this time applied to the spatially extended problem. Secondly, the parameters of the conductivity profiles should be systematically investigated. The wedge-like shape of the experimental tissue sample should be taken into account. The model of \citet{Mahajan2008} re-fitted to heart-failure values should be used within the infarct zone. These and further features will be considered systematically in our future work. \section*{ACKNOWLEDGEMENTS} This work was supported by the EPSRC grant EP/N014642/1 ``SofTMech centre for Multiscale Soft Tissue Mechanics with applications to heart and cancer''.
\section{Introduction} In complex analysis the Szeg\"{o} kernel and the Bergman kernel are well known; they have also been generalized to Clifford analysis (which includes quaternionic analysis as a special case, see \cite{BDS}). In octonionic analysis, however, the existence of such kernels is still unknown, let alone their explicit expressions. The difficulty arises mainly because the octonion algebra (Cayley algebra) is non-associative. Our motivation for considering this kind of problem is to unify the formulation of analytic function theory in the largest normed division algebra over $\mathbb{R}$, namely the octonions $\mathbb{O}$ (which include the complex numbers and the quaternions as special cases). Recall that in complex analysis the Bergman space on the unit disc is defined to be the collection of functions that are holomorphic and square integrable on the unit disc. This definition can be naturally generalized to octonionic analytic functions. Since the Cayley algebra is non-commutative, there exist two different but symmetric octonionic analytic function theories. In this paper we focus on left octonionic analytic functions, and we denote by $\mathcal{B}^2(B)$ the corresponding octonionic Bergman space, where $B$ is the unit ball in $\mathbb{R}^8$ centered at the origin. A natural question arises: does the octonionic Bergman kernel exist, and what is it? Of course this problem is closely related to the definition of the associated inner product. Usually the inner product of two Bergman functions $f$ and $g$ is defined to be the integral of $f\overline{g}$ on $B$. Since the octonions are non-associative, the usual definition no longer guarantees the existence of the kernel. We thus need to give a new definition. \begin{defi}[inner product on $\mathcal{B}^2(B)$] Let $f, g\in\mathcal{B}^2(B)$. We define $$(f,g)_B:=\frac{1}{\omega_8}\int_B\left(\overline{g(x)}\frac{\overline{x}}{|x|}\right) \left(\frac{x}{|x|}f(x)\right)dV,$$ where $\omega_8$ is the surface area of the unit sphere in $\mathbb{R}^8$ and $dV$ is the volume element on $B$. \end{defi} Note that this modified inner product is real-linear and conjugate symmetric. The induced norm $$\|f\|_B^2:=(f,f)_B=\frac{1}{\omega_8}\int_B|f|^2dV$$ coincides with the norm induced by the usual inner product. We can now state the main theorem of this paper. \begin{theo}\label{main} Let $$B(x,a)=\frac{\left(6(1-|a|^2|x|^2)+2(1-\overline{x}a)\right)(1-\overline{x}a)} {|1-\overline{x}a|^{10}}.$$ Then $B(\cdot,a)$ is the desired octonionic Bergman kernel, i.e., $B(\cdot,a)\in\mathcal{B}^2(B)$, and for any $f\in\mathcal{B}^2(B)$ and any $a\in B$ there holds the following reproducing formula: $$f(a)=(f,B(\cdot,a))_B.$$ \end{theo} The rest of the paper is organized as follows. In Section 2 we give a brief review of the octonion algebra and octonionic analysis. In Section 3 we exploit our new idea of defining the structure of the inner product to investigate the octonionic Szeg\"{o} kernel for the unit ball in $\mathbb{R}^8$. Section 4 is then devoted to the proof of our main result, Theorem \ref{main}. In the last section we point out that the Bergman kernel can be unified in one form in both complex analysis and hyper-complex analysis. \section{The octonions and the octonionic analysis} \subsection{The octonions} If an algebra $\mathbb{A}$ is also a normed vector space, and its norm ``$\|\cdot\|$'' satisfies $\|ab\|=\|a\|\|b\|$, then we call $\mathbb{A}$ a normed algebra. 
If $ab=0$ ($a, b\in \mathbb{A}$) implies $a=0$ or $b=0$, then we call $\mathbb{A}$ a division algebra. As early as 1898, Hurwitz proved that the real numbers $\mathbb{R},$ the complex numbers $\mathbb{C},$ the quaternions $\mathbb{H}$ and the octonions $\mathbb{O}$ are the only normed division algebras over $\mathbb{R}$ (\cite{Hur}), with the embedding relation $\mathbb{R}\subseteq \mathbb{C}\subseteq \mathbb{H}\subseteq \mathbb{O}$. As the largest normed division algebra, the octonions, which are also called Cayley numbers or the Cayley algebra, were discovered by John T. Graves in 1843, and then independently by Arthur Cayley in 1845. The octonions form an 8-dimensional algebra over $\mathbb{R}$ with the basis $e_0,e_1,\ldots,e_7$ satisfying $$e_0^2=e_0,~e_ie_0=e_0e_i=e_i,~e_i^2=-1,~\text{for}~i=1,2,\ldots,7.$$ So $e_0$ is the unit element and can be identified with $1$. Denote $$W=\{(1,2,3),(1,4,5),(1,7,6),(2,4,6),(2,5,7),(3,4,7),(3,6,5)\}.$$ For any triple $(\alpha,\beta,\gamma)\in W$, we set $$e_\alpha e_\beta=e_\gamma=-e_\beta e_\alpha,\quad e_\beta e_\gamma=e_\alpha=-e_\gamma e_\beta,\quad e_\gamma e_\alpha=e_\beta=- e_\alpha e_\gamma.$$ Then by distributivity, for any $x=\sum_0^7 x_ie_i$, $y=\sum_0^7 y_je_j \in \mathbb{O}$, the multiplication $xy$ is defined to be $$xy:=\sum_{i=0}^7\sum_{j=0}^7x_iy_je_ie_j.$$ For any $x=\sum_0^7 x_ie_i \in \mathbb{O}$, $\mbox{Re}\,x:=x_{0}$ is called the scalar (or real) part of $x$ and $\overrightarrow{x}:=x-\mbox{Re}\,x$ is called its vector part. $\overline{x}:=\sum_0^7x_i\overline{e_i}=x_0-\overrightarrow{x}$ and $|x|:=(\sum_0^7x_i^2)^\frac{1}{2}$ are respectively the conjugate and the norm (or modulus) of $x$; they satisfy $|xy|=|x||y|,$ $x\overline{x}=\overline{x}x=|x|^2,$ $\overline{xy}=\overline{y}\,\overline{x}$ $(x,y\in \mathbb{O}).$ So if $x\neq0,$ then $x^{-1}=\overline{x}/{|x|^2}$ gives the inverse of $x.$ Octonionic multiplication is neither commutative nor associative, but the subalgebra generated by any two elements is associative; that is, the octonions are alternative. $[x, y, z]:=(xy)z-x(yz)$ is called the associator of $x, y, z\in \mathbb{O}$; it satisfies (\cite{B1, Jaco}) $$ [x,y,z]=[y,z,x]=-[y,x,z], \quad [x,x,y]=[\overline{x},x,y]=0. $$ \subsection{The octonionic analysis} As a generalization of complex analysis and quaternionic analysis to higher dimensions, the study of octonionic analysis was originated by Dentoni and Sce in 1973 (\cite{DS}), and it was not until 1995 that it began to be systematically investigated by Li et al.\ (\cite{Li}). Octonionic analysis is a function theory on octonionic analytic (abbr. $\mathbb{O}$-analytic) functions. Suppose $\Omega$ is an open subset of $\mathbb{R}^8$ and $f=\sum_0^7f_je_j\in C^1(\Omega,\mathbb{O})$ is an octonion-valued function. If $$Df=\sum_{i=0}^{7}e_{i}\frac{\partial f}{\partial x_{i}} =\sum_{i=0}^{7}\sum_{j=0}^{7}\frac{\partial f_j}{\partial x_{i}}e_ie_j=0$$ $$\left(fD=\sum_{i=0}^{7} \frac{\partial f}{\partial x_{i}}e_{i}= \sum_{i=0}^{7}\sum_{j=0}^{7}\frac{\partial f_j}{\partial x_{i}}e_je_i=0\right),$$ then $f$ is said to be left (right) $\mathbb{O}$-analytic in $\Omega$, where the generalized Cauchy--Riemann operator $D$ and its conjugate $\overline{D}$ are defined by $$D:=\sum_{i=0}^7e_i\frac{\partial}{\partial x_i},~~\overline{D}:= \sum_{i=0}^7\overline{e_i}\frac{\partial}{\partial x_i}$$ respectively. A function $f$ is called $\mathbb{O}$-analytic if it is both left $\mathbb{O}$-analytic and right $\mathbb{O}$-analytic. 
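As a purely illustrative numerical aside (not part of the analytical development, and with variable names of our own choosing), the basis products determined by the triple list $W$ in Section 2.1 can be hard-coded and checked against the stated identities: norm multiplicativity $|xy|=|x||y|$, the generic non-vanishing of the associator $[x,y,z]$, and the alternative law $[x,x,y]=0$. A minimal Python sketch:
\begin{verbatim}
import numpy as np

# Triples from the set W in Section 2.1; each (a, b, c) encodes
# e_a e_b = e_c, e_b e_c = e_a, e_c e_a = e_b (anti-commutative when swapped).
W = [(1, 2, 3), (1, 4, 5), (1, 7, 6), (2, 4, 6),
     (2, 5, 7), (3, 4, 7), (3, 6, 5)]

# Basis multiplication table: table[i][j] = (sign, k) means e_i e_j = sign * e_k.
table = [[None] * 8 for _ in range(8)]
for i in range(8):
    table[0][i] = (1, i)        # e_0 is the unit element
    table[i][0] = (1, i)
    if i > 0:
        table[i][i] = (-1, 0)   # e_i^2 = -1 for i = 1, ..., 7
for a, b, c in W:
    for p, q, r in [(a, b, c), (b, c, a), (c, a, b)]:
        table[p][q] = (1, r)
        table[q][p] = (-1, r)

def omul(x, y):
    """Octonion product of two coefficient vectors of length 8."""
    z = np.zeros(8)
    for i in range(8):
        for j in range(8):
            s, k = table[i][j]
            z[k] += s * x[i] * y[j]
    return z

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 8))

# |xy| = |x||y| (norm multiplicativity)
print(np.isclose(np.linalg.norm(omul(x, y)),
                 np.linalg.norm(x) * np.linalg.norm(y)))
# The associator [x,y,z] = (xy)z - x(yz) is generically nonzero ...
print(np.linalg.norm(omul(omul(x, y), z) - omul(x, omul(y, z))) > 1e-10)
# ... while the alternative law gives [x,x,y] = 0.
print(np.isclose(np.linalg.norm(omul(omul(x, x), y) - omul(x, omul(x, y))), 0.0))
\end{verbatim}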
From $$\overline{D}(Df)=(\overline{D}D)f=\triangle f=f(D\overline{D})=(fD)\overline{D},$$ we know that any left (right) $\mathbb{O}$-analytic function is always harmonic. In the sequel, unless otherwise specified, we consider only the left $\mathbb{O}$-analytic case, as the right $\mathbb{O}$-analytic case is essentially the same. A Cauchy-type integral formula and a Laurent-type expansion for this setting are as follows: \begin{lemm}[Cauchy's integral formula, see \cite{DS,LP2}] Let $\mathcal{M}\subset\Omega$ be an 8-dimensional, compact differentiable and oriented manifold with boundary. If $f$ is left $\mathbb{O}$-analytic in $\Omega$, then $$f(x)=\frac{1}{\omega_8}\int_{y\in\partial\mathcal{M}}E(y-x)(d\sigma_yf(y)),\quad x\in \mathcal{M}^o,$$ where $E(x)=\frac{\overline{x}}{|x|^8}$ is the octonionic Cauchy kernel, $d\sigma_y =n(y)dS$, $n(y)$ and $dS$ are respectively the outward-pointing unit normal vector and the surface area element on $\partial\mathcal{M}$, and $\mathcal{M}^o$ is the interior of $\mathcal{M}$. \end{lemm} \begin{lemm}[Laurent expansion, see \cite{LLW,LZP2}] Let $\mathcal{D}$ be an annular domain in $\mathbb{R}^8$. If $f$ is left $\mathbb{O}$-analytic in $\mathcal{D}$, then $$f(x)=\sum_{k=0}^\infty P_kf(x)+\sum_{k=0}^\infty Q_kf(x), \quad x\in\mathcal{D},$$ where $P_kf$ and $Q_kf$ are respectively the inner and outer spherical octonionic-analytics of order $k$ associated to $f$. \end{lemm} Octonionic analytic functions have a close relationship with the Stein--Weiss conjugate harmonic systems. If the components of $F$ constitute a Stein--Weiss conjugate harmonic system on $\Omega\subset\mathbb{R}^8$, then $\overline{F}$ is $\mathbb{O}$-analytic on $\Omega$. The converse, however, is not true (\cite{LP1}). For more information and recent progress on octonionic analysis, we refer the reader to \cite{Li, LPQ1, LPQ2, LW, LZP1, LLW, WLL}. \section{The octonionic Szeg\"{o} kernel} To see how our new definition works, let us examine the octonionic Szeg\"{o} kernel for the unit ball in $\mathbb{R}^8$. Recall that on the unit ball the octonionic Hardy space $\mathcal{H}^2(B)$ consists of the left octonionic analytic functions whose mean square value on the sphere of radius $r$ is bounded for $r\in[0,1)$. For any $f\in\mathcal{H}^2(B)$, according to Cauchy's integral formula, for all $a\in B$ there holds \begin{align*} f(a)&=\frac{1}{\omega_8}\int_{x\in S^7}\frac{\overline{x}-\overline{a}}{|x-a|^8}(xf(x))dS \\&=\frac{1}{\omega_8}\int_{x\in S^7}\left( \frac{\overline{1-\overline{x}a}}{|1-\overline{x}a|^8}\overline{x} \right)(xf(x))dS, \end{align*} where $S^7=\partial B$ is the unit sphere and $dS$ is the area element on $S^7$. If we define the inner product for $\mathcal{H}^2(B)$ to be \begin{align}\label{inner1} (f, g)_{S^7}:=\frac{1}{\omega_8}\int_{S^7}(\overline{\eta g(\eta)})(\eta f(\eta))dS= \frac{1}{\omega_8}\int_{S^7}(\overline{g(\eta)}\overline{\eta})(\eta f(\eta))dS, \end{align} and let $$S(x,a)=\frac{1-\overline{x}a}{|1-\overline{x}a|^8},$$ then $S(\cdot,a)\in\mathcal{H}^2(B)$, and Cauchy's integral formula can be rewritten as $$f(a)=(f,S(\cdot,a))_{S^7}.$$ We call $S(\cdot,a)$ the octonionic Szeg\"{o} kernel. Denote by $L^2(S^7)$ the space of square integrable (octonion-valued) functions on the unit sphere, whose inner product we define to be the same as that in (\ref{inner1}). 
We have \begin{prop}\label{propo} Let $f, g\in L^2(S^7)$ be associated with the spherical octonionic-analytics expansions: $$f(\omega)=\sum_{k=0}^\infty(P_kf(\omega)+Q_kf(\omega)),\quad g(\omega)=\sum_{k=0}^\infty(P_kg(\omega)+Q_kg(\omega)),\quad\omega\in S^7.$$ Then \begin{align*} (f, g)_{S^7}=&\sum_{k=0}^\infty\left((P_kf, P_kg)_{S^7}+(Q_kf, Q_kg)_{S^7}\right)\\&+\sum_{k=0}^\infty\left((P_kf, Q_{k+1}g)_{S^7}+(Q_{k+1}f, P_kg)_{S^7}\right). \end{align*} \end{prop} \begin{proof} From $$\triangle(xP_kf(x))=x\triangle(P_kf(x))+2D(P_kf(x))=0,$$ we can easily see that the restriction of $xP_kf(x)$ to $S^7$ is a spherical harmonic of order $k+1$. Similarly, the restriction of $xQ_kf(x)$ to $S^7$ is a spherical harmonic of order $k$. The proposition immediately follows from the fact that spherical harmonics of different orders are mutually orthogonal. \end{proof} Thus we get \begin{coro} Let $f\in L^2(S^7)$ be associated with the spherical octonionic-analytics expansion $$f(\omega)=\sum_{k=0}^\infty(P_kf(\omega)+Q_kf(\omega)),\quad \omega\in S^7.$$ Then $$ \|f\|^2_{S^7}=\sum_{k=0}^\infty\left(\|P_kf\|^2_{S^7}+\|Q_kf\|^2_{S^7}\right) +\sum_{k=0}^\infty2{\rm Re}\left((P_kf, Q_{k+1}f)_{S^7}\right). $$ \end{coro} \noindent\textbf{Remark:} Proposition \ref{propo} is similar to Parseval's theorem. It is worthwhile to note that this version is a bit different from that in Clifford analysis, where the second part of the summation vanishes (\cite{BDS}); here $(P_kf, Q_{k+1}g)_{S^7}$ may be nonzero. Below we give a counterexample. Let $$f(x)=x_1-x_0e_1,$$ $$g(x)=\frac{\overline{x}}{|x|^{12}}(x_1x_2e_4+x_0x_2e_5+x_0x_1e_6).$$ Then $P_1f=f$, $Q_2g=g$, but $$(P_1f, Q_2g)_{S^7}=\frac{-2e_6}{\omega_8}\int_{S^7}x_0^2x_1^2dS\neq 0.$$ \section{Derivation of the octonionic Bergman kernel} In this section we prove Theorem \ref{main}. For the main idea used in the proof, one may also refer to \cite{BDS}. \begin{proof}[Proof of Theorem \ref{main}] By definition, it is straightforward that $$(f,g)_B=\int_0^1r^7(f_r,g_r)_{S^7}dr,$$ where $f_r(\eta)=f(r\eta)$, $\eta\in S^7$. Together with Proposition \ref{propo}, we get \begin{align*} (f,g)_B&=\sum_{k=0}^\infty(P_kf,P_kg)_B\\ &=\sum_{k=0}^\infty\int_0^1r^{2k+7}(P_kf,P_kg)_{S^7}dr\\ &=\sum_{k=0}^\infty(2k+8)^{-1}(P_kf,P_kg)_{S^7}. \end{align*} Therefore, $f\in\mathcal{B}^2(B)$ if and only if $f$ is left octonionic analytic in $B$ and $$\|f\|_B^2=\sum_{k=0}^\infty(2k+8)^{-1}\|P_kf\|_{S^7}^2<\infty.$$ From this viewpoint, if $f\in\mathcal{H}^2(B)$, then $$\sqrt{T}f:=\sum_{k=0}^\infty\sqrt{2k+8}P_kf\in\mathcal{B}^2(B).$$ Similarly, if $g$ is left octonionic analytic in $B_R$ (the ball centered at the origin of radius $R$, with $R>1$), then $\sqrt{T}g\in\mathcal{B}^2(B_{R'})$, with $1\leq R'<R$. 
Consequently, $$Tg:=\sqrt{T}^2g=\sum_{k=0}^\infty(2k+8)P_kg \in\mathcal{B}^2(B_{R'}),~1\leq R'<R.$$ Now, assume $f\in\mathcal{B}^2(B)$. When $|a|<r$ we have \begin{align} f(a)&=\frac{1}{\omega_8}\int_{\partial B_r} \frac{\overline{x}-\overline{a}}{|x-a|^8}d\mu_xf(x)\nonumber\\ &=\frac{r^7}{\omega_8}\int_{S^7}\frac{r\overline{\eta}-\overline{a}} {|r\eta-a|^8}(\eta f(r\eta))dS\nonumber\\ &=\lim_{r\rightarrow 1^-}\frac{r^7}{\omega_8}\int_{S^7} \frac{r\overline{\eta}-\overline{a}} {|r\eta-a|^8}(\eta f(r\eta))dS\nonumber\\ &=\lim_{r\rightarrow 1^-}r^7(f_r, S^r(\cdot,a))_{S^7}\label{proof1}, \end{align} where $$S^r(x,a)=\frac{r-\overline{x}a}{|r-\overline{x}a|^8}.$$ Since $S^r(x,a)$ is left octonionic analytic in $B_{r/|a|}$ ($r/|a|>1$) with respect to $x$, we have $$TS^r(\cdot,a)=\sum_{k=0}^\infty(2k+8)P_kS^r(\cdot,a)\in\mathcal{B}^2(B).$$ So, \begin{align} (f_r,TS^r(\cdot,a))_B&=\sum_{k=0}^\infty(2k+8)^{-1} (P_kf_r,(2k+8)P_kS^r(\cdot,a))_{S^7}\nonumber\\&=\sum_{k=0}^\infty (P_kf_r,P_kS^r(\cdot,a))_{S^7}\nonumber\\&=(f_r,S^r(\cdot,a))_{S^7}\label{proof2}. \end{align} By (\ref{proof1}) and (\ref{proof2}) we get $$f(a)=\lim_{r\rightarrow 1^-}r^7(f_r,TS^r(\cdot,a))_B =(f,TS(\cdot,a))_B,$$ where $S(\cdot,a)$ is the octonionic Szeg\"{o} kernel. We can now see that the octonionic Bergman kernel $B(x,a)$ is $$B(x,a)=TS(x,a)=\sum_{k=0}^\infty(2k+8)P_kS(x,a).$$ It remains to evaluate the above summation. To this end, first note that $$S(x,a)=\mathcal{K}(E(x,\overline{a})),$$ where $E(x,a)=\frac{\overline{x}-\overline{a}}{|x-a|^8}$ ($|x|>1$), and $\mathcal{K}f:=E(x,0)f(x^{-1})$ is the Kelvin inversion. So, $$P_kS(x,a)=\mathcal{K}(Q_kE(x,\overline{a}))=\overline{x} \overline{Q_kE(x,a)}|x|^{2k+6}.$$ Define the adjoint operator $A$ as follows: $$(Af)(x):=\overline{D}(|x|^{-6}\overline{f}(x/|x|^2)),$$ then it is easy to show that $$A(Q_kE(x,a))=(2k+8)\overline{x} \overline{Q_kE(x,a)}|x|^{2k+6}.$$ Hence, \begin{align*} B(x,a)&=\sum_{k=0}^\infty A(Q_kE(x,a))\\ &=A\left(\sum_{k=0}^\infty Q_kE(x,a)\right)\\ &=A(E(x,a)) \\&=\overline{D}_x\left(\frac{x-a|x|^2}{|1-\overline{x}a|^8}\right) \\&=\frac{\left(6(1-|a|^2|x|^2)+2(1-\overline{x}a)\right)(1-\overline{x}a)} {|1-\overline{x}a|^{10}}. \end{align*} The proof of Theorem \ref{main} is complete. \end{proof} \section{Final remarks} By direct computation one can show that $$\overline{B(x,a)}\overline{x}= \overline{D}_a\left(\frac{1-|a|^2|x|^2}{|1-a\overline{x}|^8}\right).$$ In fact, similar formulas also hold in both complex analysis and Clifford analysis. We can therefore unify the reproducing formulas in the complex and hyper-complex contexts. Let $\mathscr{A}$ denote the complex algebra or a hyper-complex algebra, i.e., $\mathscr{A}$ may refer to the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$, the octonions $\mathbb{O}$, or a Clifford algebra $\mathscr{C}$. Assume that the dimension of $\mathscr{A}$ is $m$. 
Then for any function $f$ which belongs to the Bergman space $\mathcal{B}^2(B_m)$ and any point $a\in B_m$ ($B_m$ is the unit ball in $\mathbb{R}^m$ centered at the origin), there holds \begin{align*} f(a)&=(f,B(\cdot,a))_{B_m} \\ &=\frac{1}{\omega_m}\int_{B_m}\left(\overline{B(x,a)}\frac{\overline{x}}{|x|}\right) \left(\frac{x}{|x|}f(x)\right)dV \\&=\frac{1}{\omega_m}\int_{B_m} \overline{D}_a\frac{1-|a|^2|x|^2}{|1-a\overline{x}|^m} \left(\frac{x}{|x|^2}f(x)\right)dV \\&=\frac{1}{\omega_m}\int_{B_m} \frac{\left((m-2)(1-|a|^2|x|^2)+2(1-\overline{a}x)\right)(\overline{x}-|x|^2\overline{a})} {|1-\overline{x}a|^{m+2}}\left(\frac{x}{|x|^2}f(x)\right)dV, \end{align*} where $\omega_m$ is the surface area of the unit sphere in $\mathbb{R}^m$, $dV$ is the volume element on $B_m$, and $D$ is the generalized Cauchy--Riemann operator in the respective context. \vskip 0.8cm \noindent{\Large\textbf{Acknowledgements}} \vskip 0.3cm \noindent This work was supported by the Scientific Research Grant of Guangdong University of Foreign Studies for Introduction of Talents (No. 299--X5122145), the Research Grant of Guangdong University of Foreign Studies for Young Scholars (No. 299--X5122199), and the Foundation for Young Innovative Talents in Higher Education of Guangdong, China (No. 2015KQNCX037).
\section{Introduction} Dialogue systems have received a considerable amount of attention from academic researchers and have achieved remarkable success in a myriad of industry scenarios, such as in chit-chat machines \cite{shum2018eliza}, information seeking and searching \cite{aliannejadi2019asking,Hashemi2020GuidedTL}, and intelligent assistants \cite{li2017alime}. From the perspective of the domains involved, existing studies can be categorized into two groups, i.e., domain-specific and open-domain. Domain-specific models generally aim to solve and complete one specific task (e.g., restaurant reservation \cite{lei2018sequicity}, train routing \cite{ferguson1996trains}), which usually involves domain knowledge and engineering. Unlike domain-specific studies, open-domain dialogues between human and machine involve unlimited topics within a conversation \cite{ritter2011data}; as a result, building an open-domain dialogue system is more challenging owing to the lack of sufficient knowledge engineering. Benefiting from the explosion of available dialogue datasets, constructing open-domain dialogue systems has attracted a growing number of researchers. Among dialogue systems in the open domain, generation-based \cite{shangL2015neural,sordoni2015neural,vinyals2015neural} and retrieval-based \cite{wang2013dataset,wu2018learning,wu2017sequential} methods are the mainstream in both academia and industry, where generation methods learn to create a feasible response for a user-issued query while retrieval-based methods extract a proper response from a candidate pool. In contrast to the ``common response''\footnote{Sequence-to-sequence neural networks along with the log-likelihood objective function tend to create short, high-frequency, and commonplace responses (e.g., ``I don't know'', ``I'm OK''), which are also referred to as common responses in previous studies \cite{sordoni2015neural,vinyals2015neural,serban2016building}} created by generation models \cite{li2015diversity}, retrieval-based methods can extract fluent and informative responses from human conversations \cite{tao2019multi}. Early retrieval-based methods mainly address the issue of single-turn response selection, where the dialogue context only contains one utterance \cite{wang2015syntax}. Recent studies focus on modeling multi-turn response selection \cite{lowe2015ubuntu}. For multi-turn response selection, a dialogue system is required to properly calibrate the matching degree between a multi-turn dialogue context and a given response candidate. The response selection task can thus be naturally cast as learning the matching degree between the semantic representations of the context and the response candidates, together with their dependency relationships. SMN (sequential matching network) attempts to learn fine-grained (e.g., word-level, sentence-level) semantic matching information between each utterance in the context and the response candidate and aggregates the matching information to calculate the final matching result. MIX (multi-channel information crossing) \cite{chen2018mix} models the matching degrees between context and response from the perspective of interaction representations \cite{tao2019multi} to extract multi-channel matching patterns and information. The deep attention matching network (DAM) \cite{zhou2018multi} captures sophisticated dependency information within and across utterances, i.e., it uses a self-attention mechanism and a cross-attention strategy to learn the representations of the context and the response candidate. 
Although these methods have achieved promising results, there is still room for improving their capability of context utterance modeling, such as combining dependency relationships and multi-channel interaction representations \cite{chen2018mix,tao2019multi}. Besides, existing models are confined to learning matching signals from the context and response, leaving the introduction of extra information unexplored. Table \ref{tab:intro_case} illustrates a sampled case in our experiments. We can observe that the word ``\textit{xfce4}'' is the crucial clue for selecting the target response. However, other words are likely to overwhelm the word ``\textit{xfce4}'', leading to unsatisfactory performance as it appears only once in the context. If we exploit the information in history-response matching, i.e., the high-frequency word ``\textit{xfce4}'', the performance of response selection will be enhanced. A recent study \cite{yang2018response} proposes to use pseudo-relevant feedback documents as an extra information source to enrich the response representation. Such a strategy is useful but still risky, since the pseudo-relevant feedback might introduce much unrelated information. Thus, it is imperative to use more accurate extra information, i.e., user-specific dialogue history, for improving the matching performance. \input{table-1} To address these obstacles, we propose a novel personalized hybrid matching network (PHMN) for multi-turn response selection, in which the customized dialogue history is introduced as additional information, and multiple types of representations of context and response are merged into a hybrid representation. Explicitly, we incorporate two types of representations in the hybrid representation learning, i.e., attention-based representations for learning dependency information and interaction-based representations for extracting multiple matching patterns and features. Such a strategy has been proved effective in recent studies for improving context-response matching \cite{tao2019multi,tao2019one}. In exploiting information in the dialogue history, we introduce two different ways of enhancing multi-turn context-response matching. For one thing, we extract the wording behavior of a specific user in the corresponding dialogue history as long-term information to supplement the short-term context-response matching. For another, we compute personalized attention weights for candidate responses to extract critical information for response selection. More concretely, we perform the personalized attention upon the learned hybrid representation and then utilize a gate mechanism to fuse the wording behavior matching information and the weighted hybrid context-response matching information. Hence, our model is effective for both context modeling and dialogue history exploitation. We conduct experiments on two challenging datasets for personalized multi-turn response retrieval, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo), both of which contain user IDs, to evaluate the effectiveness of our proposed PHMN model. Experimental results confirm that our model achieves state-of-the-art performance on the newly created corpora. Through introducing personalized wording behaviors and personalized attention, our model yields a significant improvement over several strong baseline models, which suggests that introducing user-specific dialogue history and learning hybrid representations are appealing for multi-turn response retrieval. 
\section{Preliminaries} \subsection{Deep Matching Network} Generally, recent effective deep matching networks for multi-turn response retrieval or short text matching consist of four elements: representation learning, dependency modeling, matching, and aggregation and fusion. \textbf{\textit{Representation Learning.}} Most studies on multi-turn response selection first transform context utterances and response candidates into either vector representations \cite{huang2013learning} or interaction matrices \cite{hu2014convolutional,pang2016text,guo2016deep} for convenient matching calculation. For vector representation learning, various deep neural networks have been designed for learning multi-level and multi-dimension semantic information from conversation utterances, including CNN-based \cite{kalchbrenner2014convolutional,kim2014convolutional}, RNN-based \cite{li2015tree,liu2016recurrent}, and tree-RNN-based methods \cite{irsoy2014deep,socher2011parsing}. As to interaction-based representation learning methods, they first generate an interaction matrix for each utterance pair between context utterances and response candidates. Then, direct matching features such as the degree and structure of matching are captured by a deep neural network \cite{xiong2017end,dai2018convolutional}. \textbf{\textit{Dependency Modeling.}} Besides the semantic representations and matching structures in the interaction-based method, there exist sophisticated dependency information and reference relations within and across utterances. Benefiting from the great success of the Transformer in neural machine translation, various attention-based methods have been proposed to capture dependency structure and information at different levels. DAM \cite{zhou2018multi} leverages a novel attention-based multi-turn response selection framework for learning various dependency information and achieves very competitive performance. DAM borrows the self-attention strategy from the Transformer for capturing word-level intra-utterance dependency and sentence-level representations and uses a cross-attention mechanism to capture dependency (e.g., reference relations) between latently matched segment pairs. \textbf{\textit{Matching.}} Once utterance representations at each level of granularity are obtained, the matching relations between two segments are calculated. According to the information in the utterance representations, semantic matching methods and structure matching approaches are designed to calibrate the matching degree between two representations. To date, various matching degree calculation methods have been investigated, e.g., using the Euclidean distance between two vectors as the matching degree, performing cosine similarity calculation, and computing the element-wise dot product. Based on the information in the vector representations (semantic or structural) and different matching degree calibration strategies, an effective matching method can be designed for comprehensively computing the matching degree between two utterances. \textbf{\textit{Aggregation and Fusion.}} After calculating the matching degree between context and response at each level of granularity, a typical deep matching network contains an aggregation or fusion module for learning the final matching score. SMN \cite{wu2017sequential} proposes to use an RNN to sequentially accumulate the matching degree of each utterance-response pair and further compute the matching score between the context and the response. 
As the relationships among utterances within a dialogue context affect the calculation of the final matching score, DUA \cite{zhang2018modeling} refines the utterance representations with gated self-attention and further aggregates this information into a matching score. DAM \cite{zhou2018multi} aggregates all the matching degrees of segments across each utterance and response into a 3D tensor and then leverages two-layered 3D convolutions with max-pooling operations to fuse the matching degree information and compute the final matching score. \subsection{Problem Formulation} We follow the conventional settings in previous multi-turn response retrieval works \cite{tao2019one,tao2019multi} and introduce the following necessary notations to formulate the personalized multi-turn response retrieval task. A dataset with user dialogue history content \begin{math} \mathcal{D}=\{(c_i, r_i, m_i, y_i)\}_{i=1}^N \end{math} is first given, where $c_i$, $r_i$, $m_i$, $y_i$ represent the dialogue context, the response candidate, the dialogue history and the corresponding binary label of the response candidate, respectively. Note that we treat user dialogue history utterances as the extra information for building a personalized multi-turn dialogue response retrieval model. For the sake of clarity, we omit the subscript $i$, which denotes the case index in $\mathcal{D}$, when elaborating the details of our model. Herein, a dialogue context $c$ is represented as \begin{math} c=(u_1, u_2,\ldots, u_j,\ldots, u_{n_c}) \end{math} where $u_{j}$ represents an utterance with length $n_{u_{j}}$ in the $j$-th turn of the dialogue context and there are $n_c$ utterances in the dialogue context. Similarly, there are $n_m$ history utterances of the current user who is supposed to raise a response for the given dialogue context, which are denoted as \begin{math} m=(u_{m,1}, u_{m,2},\ldots, u_{m,k},\ldots, u_{m,n_m}) \end{math}, where $u_{m,k}$ represents an utterance with length $n_{u_{m,k}}$. $n_r$ denotes the number of words in a candidate response $r$. $y=1$ means the given response candidate is proper for the context and the corresponding user dialogue history; otherwise $y=0$. Then, our task is defined as learning a matching function $f(\cdot)$ from the given dataset that can yield a matching score between the dialogue context and the given response candidate with the help of the user dialogue history. \section{Model} Inspired by the advanced deep multi-turn dialogue response selection frameworks mentioned above, we design our model from two directions, i.e., obtaining more comprehensive information from the context and response, and introducing auxiliary information beyond the context and response. We propose a personalized hybrid matching network (PHMN) for multi-turn response selection, which incorporates hybrid representations of context and response (i.e., semantic matching, interaction-based features, and dependency relation information) and personalized dialogue content (i.e., user-specific dialogue history). As shown in Figure \ref{fig:framework}, our proposed PHMN comprises three main sub-modules, i.e., a hybrid representation learning module, personalized dialogue content modeling, and aggregation and fusion. \subsection{Hybrid Representations Learning} We consider obtaining semantic representations of context and response at two different levels, i.e., word level and phrase level. 
Concretely, we adopt word embeddings as the word-level representations and the combination of uni-gram, bi-gram, and tri-gram semantic information as phrase representations. We also borrow the self-attention strategy from the Transformer \cite{vaswani2017attention} and DAM \cite{zhou2018multi} to learn abundant dependency relationships in conversations. To capture the matching structure and patterns, we transform the semantic representations of context and response into interaction matrices. Details of learning \textit{word representation}, \textit{phrase representation}, \textit{dependency representation} and constructing \textit{interaction matrices} are elaborated as follows: \textbf{Word Representations.} We use word embeddings as word-level representations since they contain rich semantic and co-occurrence information. In learning, we initialize word embeddings with Word2Vec pre-trained on each benchmark dataset, i.e., the P-Ubuntu dialogue corpus in English and the P-Weibo dataset in Chinese. On both datasets, the dimension of the word embeddings is $d_w$. Note that any suitable pre-trained word representations, such as BERT \cite{devlin2018bert}, are also applicable. Thus, the word-level representation of an utterance $u_j$ is \begin{math} {\bm{U_j}}=[\bm{e_{u_j, 1}}, \bm{e_{u_j, 2}}, \dots, \bm{e_{u_j, k}}, \dots, \bm{e_{u_j, n_{u_j}}}] \in \mathbb{R}^{n_{u_j}\times d_w} \end{math}; and similarly a response candidate $r$ can be written as \begin{math} {\bm R}=[\bm{e_{r,1}}, \bm{e_{r,2}}, \dots, \bm{e_{r,k}}, \dots, \bm{e_{r,n_r}}] \in \mathbb{R}^{n_{r}\times d_w} \end{math}. The dimensions of $\bm{e_{u_j, k}}$ and $\bm{e_{r,k}}$ are both $d_w$. \input{figure-1} \textbf{Phrase Representations.} In practice, obtaining semantic representations solely from word representations is risky, as the patterns by which words assemble into meaning differ from each other. For instance, ``all in'' and ``in all'' have totally different semantic information, while ``work hard'' and ``hard work'' deliver the same semantic content. We model these assembly patterns with a convolutional neural network. In both English and Chinese, the minimal semantic unit typically includes 1 to 3 words \cite{chen2018mix}. As a result, we conduct convolution operations upon the word embedding representations with different window sizes to capture uni-gram, bi-gram, and tri-gram information. Concretely, we conduct 1-D convolution on the word embeddings of a given utterance \begin{math} {\bm{U_j}}=[\bm{e_{u_j, 1}}, \bm{e_{u_j, 2}}, \dots, \bm{e_{u_j, k}}, \dots, \bm{e_{u_j, n_{u_j}}}] \end{math} with window size $l$ from 1 to 3, where there are $d_f$ filters for each window size and the stride length is 1. The $l$-gram phrase representation at the $k$-th location is calculated as: \begin{equation} \bm{{o_k}^l} = ReLU(\bm{{Z_k}^l}\bm{W_l} + \bm{b_l}) \end{equation} where $\bm{W_l}$ and $\bm{b_l}$ are trainable parameters of the convolutional filter with window size $l$, and $\bm{{Z_k}^l} \in \mathbb{R}^{l \times d_w}$ stands for the input unigram embeddings in the current sliding window, which is formulated as: \begin{equation} \bm{{Z_k}^l} = [\bm{{e_{k-\lfloor \frac{1}{2}(l-1) \rfloor}}}, \ldots, \bm{{e_{k}}}, \ldots, \bm{{e_{k+\lfloor \frac{1}{2}l \rfloor}}}] \end{equation} where $\bm{{e_{k}}}$ is the word embedding representation of a word in either the dialogue context or the response (i.e., it can be either $\bm{e_{u_j, k}}$ or $\bm{e_{r, k}}$). 
Here we set $d_f$ the same as $d_w$. The output sequence of vectors of the convolution has the same length as the input sequence of vectors owing to the zero-padding strategy. Thus, a given utterance $u_j$ is transformed into three matrices, i.e., ${\bm {U_j}^1}=[\bm{o^1_1}, \bm{o^1_2},\dots, \bm{o^1_{n_{u_j}}}]$, ${\bm {U_j}^2} =[\bm{o^2_1}, \bm{o^2_2},\dots, \bm{o^2_{n_{u_j}}}]$, ${\bm {U_j}^3}=[\bm{o^3_1}, \bm{o^3_2},\dots, \bm{o^3_{n_{u_j}}}]$. $\bm {U_j}^1$, $\bm {U_j}^2$ and $\bm {U_j}^3$ correspond to \{1,2,3\}-gram semantic information, respectively. Similarly, we also conduct 1-D convolution on a given response \begin{math} {\bm R}=[\bm{e_{r,1}}, \bm{e_{r,2}}, \dots, \bm{e_{r,k}}, \dots, \bm{e_{r,n_r}}] \end{math} using the same convolutional filters, which outputs three matrices $\bm{R^1}$, $\bm{R^2}$, $\bm{R^3}$. \textbf{Dependency Representations.} To obtain the sophisticated dependency representations in conversations, we utilize an attentive module that is similar to the attention module in the Transformer \cite{vaswani2017attention} and DAM. The attentive module takes three sentences as input, namely the query sentence, the key sentence, and the value sentence, which are denoted as ${\bf \mathcal{Q}}=[\bm{e_i}]_{i=0}^{n_\mathcal{Q}-1}$, ${\bf \mathcal{K}}=[\bm{e_i}]_{i=0}^{n_\mathcal{K}-1}$, ${\bf \mathcal{V}}=[\bm{e_i}]_{i=0}^{n_\mathcal{V}-1}$ respectively, where $n_\mathcal{Q}$, $n_\mathcal{K}$, $n_\mathcal{V}$ represent the number of words in each sentence and $n_\mathcal{K}=n_\mathcal{V}$, and $\bm{e_i}$ is the $d_w$-dimension word embedding representation of a word. The attentive module first uses each word in the query sentence to attend to each word in the key sentence through the scaled dot-product attention mechanism. Then, the obtained attention scores are applied to the value sentence $\bf \mathcal{V}$ to form a new representation of $\bf \mathcal{Q}$, which is formulated as follows: \begin{equation} Att({\bf \mathcal{Q,K,V}})= softmax(\frac{{\bf \mathcal{QK}}^T}{\sqrt{d_w}})\bf \mathcal{V} \end{equation} In practice, the key sentence and the value sentence are identical, i.e., $\mathcal{K}=\mathcal{V}$. Thus, each word in the query sentence $\mathcal{Q}$ is represented by the joint meaning of its similar words in $\mathcal{V}$. We apply $h$ heads to $\mathcal{Q}$, $\mathcal{K}$, $\mathcal{V}$ to capture dependency information from multiple aspects via the scaled dot-product multi-head attention. The output of head $i$ is then written as \begin{equation} {\bf \mathcal{O}_i}=Att({\mathcal{Q} \bm{W_i}^{\mathcal{Q}},{\mathcal{K}} \bm{W_i}^{\mathcal{K}},{\mathcal{V}} \bm{W_i}^{\mathcal{V}}}) \end{equation} where $\bm{W_i}^\mathcal{Q}$, $\bm{W_i}^\mathcal{K}$, $\bm{W_i}^\mathcal{V}$ $\in \mathbb{R}^{d_w\times(d_w/h)}$ are trainable parameters for linear transformations. The outputs of all heads are concatenated to obtain the attention representation, formulated as: \begin{equation} \mathcal{O}=(\mathcal{O}_1\oplus \mathcal{O}_2\oplus \dots \oplus \mathcal{O}_h) \bm{W}_\mathcal{O} \end{equation} where $\oplus$ represents the column-wise concatenation operation and $\bm{W_\mathcal{O}} \in \mathbb{R}^{d_w\times d_w}$ is trainable. We then apply a layer normalization operation to prevent vanishing or exploding gradients. We also use a residual connection to add the output $\mathcal{O}$ to the query sentence $\mathcal{Q}$. From here, we denote the whole attentive module as $Attention({\bf \mathcal{Q,K,V}})$. 
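For concreteness, the attentive module described above can be sketched in a few lines of NumPy. This is only an illustrative sketch of the scaled dot-product multi-head attention with the residual connection and a simplified, parameter-free layer normalization; the shapes and variable names are assumptions rather than details of our implementation.
\begin{verbatim}
import numpy as np

def attention_module(Q, K, V, W_q, W_k, W_v, W_o, h):
    """Sketch of Attention(Q, K, V).

    Q: (n_q, d_w); K, V: (n_k, d_w) word-level representations.
    W_q, W_k, W_v: lists of h projection matrices of shape (d_w, d_w // h).
    W_o: output projection of shape (d_w, d_w).
    """
    d_w = Q.shape[1]
    heads = []
    for i in range(h):
        q, k, v = Q @ W_q[i], K @ W_k[i], V @ W_v[i]
        scores = q @ k.T / np.sqrt(d_w)                  # scaled dot products
        scores -= scores.max(axis=1, keepdims=True)      # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
        heads.append(weights @ v)                        # output of head i
    O = np.concatenate(heads, axis=1) @ W_o              # concatenation + W_O
    O = O + Q                                            # residual connection
    mu = O.mean(axis=1, keepdims=True)                   # simplified layer norm
    sd = O.std(axis=1, keepdims=True)
    return (O - mu) / (sd + 1e-6)
\end{verbatim}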
Note that the output of the attentive module has the same dimension as the query sentence $\mathcal{Q}$. In experiments, $\mathcal{Q,K,V}$ are set to be the same, i.e., $\mathcal{Q=K=V}$. For a given context utterance $u_j$, its attention-based representation $\bm {U_j}^a$ is calculated as the output of $Attention(\bm{U_j},\bm{U_j},\bm{U_j})$. In this way, an utterance attends to itself so that each word is represented together with the other related words within the utterance. As a result, dependency relation information within the utterance can be captured. Similarly, the dependency representation of a given response is ${\bm{R^a}}=Attention(\bm{R},\bm{R},\bm{R})$. \textbf{Interaction Matrices.} Given an utterance $u_j$ in a context and a response $r$, we have five-channel representations for $u_j$ and $r$ respectively, i.e., ${\bm{U_j}, \bm{{U_j}^1}, \bm{{U_j}^2}, \bm{{U_j}^3}, \bm{{U_j}^a}}$ and ${\bm{R}, \bm{R^1}, \bm{R^2}, \bm{R^3}, \bm{R^a}}$, where each representation channel of $u_j$ has a dimension of $\mathbb{R}^{n_{u_j}\times d_w}$ and each representation channel of $r$ has a dimension of $\mathbb{R}^{n_r\times d_w}$. We then construct five interaction matrices for each utterance-response pair, which correspond to the interactions of $\bm{U_j}$-$\bm R$, $\bm {U_j}^1$-$\bm{R^1}$, $\bm {U_j}^2$-$\bm {R^2}$, $\bm {U_j}^3$-$\bm {R^3}$, $\bm {U_j}^a$-$\bm {R^a}$. Take the interaction of $\bm U_j$-$\bm R$ as an example: the $(p,q)$-th element of the interaction matrix $\bm{M_j}$ is calculated by the dot product of the $p$-th element of $\bm R$ and the $q$-th element of $\bm {U_j}$. In practice, we directly use matrix multiplications to calculate each of the five interaction matrices. The calculation of $\bm {M_j}$ is as follows: \begin{equation} \bm{M_j}=\bm{R} \cdot \bm{U_j}^T \end{equation} Following the same calculation procedure, we can obtain the other four interaction matrices $\bm {M_j}^1, \bm{M_j}^2, \bm{M_j}^3, \bm{M_j}^a$, where each matrix has a dimension of $\mathbb{R}^{n_r \times n_{u_j}}$. \subsection{Personalized Dialogue Content Modeling} In addition to the hybrid representation of context and response, we propose to use the user-specific dialogue content from two perspectives. For one thing, given a dialogue context, different users pay attention to different parts of the context when matching a response candidate. In other words, some words or phrases are more important than others for response selection, and this vital content varies across users. For another, we assume that a user's wording behavior in the dialogue history provides effective supplementary information for response selection. Thus, personalized dialogue content modeling amounts to learning personalized attention that allocates a weight to each word or phrase when matching context utterances with a response candidate, and extracting wording behavior matching information between the dialogue history and the response candidate. \textbf{Personalized Attention.} \input{figure-2} Intuitively, the words and phrases in context utterances and the response candidate are not equally important for response selection. Moreover, different users may pay distinctive attention to the dialogue content. To model the relative importance of the words and phrases while taking the user's persona into consideration, we propose a simple but effective method for calculating the personalized attention scores from history utterances, which takes phrase distributions at multiple granularities into account. 
We first construct the personalized TF-IDF corpus by treating the dialogue history of each user as a document. Then we can compute the $\{1,2,3\}$-gram TF-IDF scores for each given utterance. In doing so, each $\{1,2,3\}$-gram phrase in the response candidate is allocated a weight. We then apply these weights to the interaction matrices of each context-utterance-response pair. Recall that we have $\bm{M_j}, \bm{{M_j}^1}, \bm{{M_j}^2}, \bm{{M_j}^3}, \bm{{M_j}^a}$ for each context-utterance-response pair, representing interactions at the word embedding level, the uni-gram level, the bi-gram level, the tri-gram level, and the self-attention dependency level. Specifically, for the given response $r$, we calculate its $\{1,2,3\}$-gram personalized weights as $\bm{{a}^1}$, $\bm{{a}^2}$ and $\bm{{a}^3}$, whose dimensions are all $\mathbb{R}^{n_{r}\times 1}$. We then copy these score vectors $n_{u_j}$ times in the column direction to form the personalized mask matrices $\bm{{A}^1}$, $\bm{{A}^2}$ and $\bm{{A}^3}$. All three personalized mask matrices have the same dimension of $\mathbb{R}^{n_{r} \times n_{u_j}}$, and the values in the same row within a matrix are the same. As the rows of the interaction matrices represent the response, we directly multiply the $\{1,2,3\}$-gram personalized mask matrices with the corresponding $\{1,2,3\}$-gram interaction matrices. Concretely, we multiply $\bm {A}^1$ with $\bm {M_j}, \bm{{M_j}^1}, \bm{{M_j}^a}$, multiply $\bm {A}^2$ with $\bm {M_j}^2$, and multiply $\bm {A}^3$ with $\bm {M_j}^3$. As shown in Figure \ref{fig:mask}, we use these weights as personalized masks to extract vital matching signals in the interaction matrices, resulting in five new interaction matrices $\bm{{M_j}^{'}}, \bm{{M_j}^{1'}}, \bm{{M_j}^{2'}}, \bm{{M_j}^{3'}}, \bm{{M_j}^{a'}}$ for each context-utterance-response pair. \textbf{Wording Behavior Matching.} In analogy to the phrase representations of context utterances and response, we treat the $\{1,2,3,4\}$-gram matching information and patterns as wording behavior matching information. In detail, we conduct 1-D convolution on a response candidate ${\bm R}=[\bm{e_{r,1}}, \bm{e_{r,2}}, \ldots, \bm{e_{r,n_r}}]$, and a history utterance ${\bm{U_{m,k}}}=[\bm{e_{{m,k}, 1}}, \bm{e_{{m,k}, 2}},$ \ldots, $\bm{e_{m, n_{u_{m,k}}}}]$, where the convolution window size is from 1 to 4. There are $\frac{1}{4}d_f$ convolution filters for each window size, and the stride length is 1. Zero-padding is used so that the input sequence and the output sequence of the convolution operation have the same length. Thus, a history utterance $u_{m,k}$ has four corresponding matrices $\bm {U_{m,k}}^1, \bm{{U_{m,k}}^2}, \bm{{U_{m,k}}^3}, \bm{{U_{m,k}}^4}$ with the same dimension $\mathbb{R}^{n_{u_{m,k}}\times \frac{1}{4} d_f}$. We perform a concatenation operation on the four matrices to obtain the final representation of wording behavior, written as: \begin{equation} \bm {U_{m,k}}^c = ({\bm{U_{m,k}}^1}\oplus \bm{{U_{m,k}}^2}\oplus \bm{{U_{m,k}}^3}\oplus \bm{{U_{m,k}}^4}) \end{equation} where ${\bm {U_{m,k}}^c} \in \mathbb{R}^{n_{u_{m,k}}\times d_f}$. Accordingly, the wording behavior representation of a response is ${\bm {R_m}^c} \in \mathbb{R}^{n_r\times d_f}$. We further calculate the interaction matrix of ${\bm {U_{m,k}}^c}$ and ${\bm {R_m}^c}$ for capturing matching structure and patterns at the wording behavior level. 
Similar to the calculation of the interaction matrices elaborated in the last subsection, the $(p,q)$-th element of $\bm{M_{m,k}}$ is calculated by the dot product of the $p$-th element of ${\bm {R_m}^c}$ and the $q$-th element of ${\bm {{U_{m,k}}^c}}$. In practice, we use matrix multiplications to calculate $\bm{M_{m,k}}$ as follows: \begin{equation} \bm{M_{m,k}}= {\bm {{R_m}^c}} \cdot {\bm {{U_{m,k}}^c}}^T \end{equation} \subsection{Aggregation and Fusion} To aggregate the matching degree information between a context utterance and a response, we alternately stack two layers of 2-D convolution and max-pooling operations on the interaction matrices $\bm{{M_j}^{'}}, \bm{{M_j}^{1'}}, \bm{{M_j}^{2'}}, \bm{{M_j}^{3'}}, \bm{{M_j}^{a'}}$, where each interaction matrix is treated as an input channel and the activation function is ReLU. After this operation, a concatenation operation and an MLP with one hidden layer are used to flatten the output of the stacked CNN and generate a low-dimensional vector for each context-utterance-response pair, denoted as $\bm{v_j}$. As to the matching information aggregation between a history utterance and a response, we perform the same two-layer 2-D CNN on the interaction matrix $\bm{M_{m,k}}$. After the concatenation and flatten layer, we obtain a vector $\bm{v_{m,k}}$ as the aggregation of $\bm{M_{m,k}}$. The dimensions of $\bm{v_j}$ and $\bm{v_{m,k}}$ are both $d_h$. For multi-turn context-response matching, PHMN computes the aggregated matching vector between each utterance in the context \begin{math} c=(u_1, u_2, \dots, u_j, \dots, u_{n_c}) \end{math} and the corresponding response candidate $r$, resulting in a sequence of matching vectors $\bm{v_1}, \bm{v_2}, \dots, \bm{v_j}, \dots, \bm{v_{n_c}}$. In matching between the dialogue history and the response, PHMN outputs a bag of matching vectors $\bm{v_{m,1}}, \bm{v_{m,2}}, \dots, \bm{v_{m,k}}, \dots, \bm{v_{m,n_m}}$ between each utterance in the history \begin{math} m=(u_{m,1}, u_{m,2}, \dots, u_{m,k}, \dots, u_{m,n_m}) \end{math} and the response candidate $r$. Noticing that utterances in a context have a temporal relationship, we leverage an RNN with GRU cells to process the aggregated matching vectors $\bm{v_1}, \bm{v_2}, \dots, \bm{v_j}, \dots, \bm{v_{n_c}}$ and use the last state of the RNN as the aggregated matching degree, namely $\bm{m^{rnn}}\in \mathbb{R}^{d_h \times 1}$. On the other hand, utterances in the dialogue history are parallel, and thus we use an attention mechanism \cite{bahdanau2014neural} to fuse the matching vectors $\bm{v_{m,1}}, \bm{v_{m,2}}, \dots, \bm{v_{m,k}}, \dots, \bm{v_{m,n_m}}$, i.e., computing their weighted sum as the aggregated matching degree, denoted as $\bm{m^{att}}\in \mathbb{R}^{d_h \times 1}$. To facilitate the combination of the context-response matching information and the history-response matching degree, we leverage a dynamic gate mechanism \cite{tu2018learning}, which is formulated as follows: \begin{equation} \bm{\lambda}=\sigma({{\bm {U^{{rnn}}}}\bm{m^{rnn}}+{\bm {V^{{att}}}}\bm{m^{att}}}) \end{equation} where $\bm{m^{rnn}}$ is the fused context-response matching degree, $\bm{m^{att}}$ corresponds to history-response matching, and $\sigma$ represents the $sigmoid$ activation function. The final combination of $\bm{m^{rnn}}$ and $\bm{m^{att}}$ is computed by \begin{equation} \bm{m^t}=(1-\bm{\lambda})\otimes \bm{m^{att}}+ \bm{\lambda}\otimes \bm{m^{rnn}} \end{equation} where $\otimes$ denotes element-wise multiplication. 
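The dynamic gate fusion just described is small enough to sketch directly. The NumPy fragment below mirrors the two equations above with random placeholder parameters (in PHMN, $\bm{U^{rnn}}$ and $\bm{V^{att}}$ are learned jointly with the rest of the network); it is illustrative only.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(m_rnn, m_att, U_rnn, V_att):
    """Fuse context-response matching (m_rnn) and history-response
    matching (m_att) with the dynamic gate lambda."""
    lam = sigmoid(U_rnn @ m_rnn + V_att @ m_att)   # gate vector lambda
    return (1.0 - lam) * m_att + lam * m_rnn       # element-wise combination m^t

d_h = 200                                          # dimension of v_j and v_{m,k}
rng = np.random.default_rng(0)
m_rnn, m_att = rng.standard_normal((2, d_h))
U_rnn, V_att = 0.01 * rng.standard_normal((2, d_h, d_h))
m_t = gated_fusion(m_rnn, m_att, U_rnn, V_att)     # fused matching vector
\end{verbatim}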
$\bm{m^t}$ is then processed by a fully connected layer followed by a softmax function to obtain a binary output. \subsection{Training} In learning the matching function $f(\cdot)$, the objective is to minimize the cross-entropy loss on the dataset $\mathcal{D}$, which can be formulated as: \begin{equation} \mathcal{L}=-\sum_{i=1}^Ny_ilog(f(c_i,m_i,r_i))+ (1-y_i)log(1-f(c_i,m_i,r_i)) \end{equation} We also construct two auxiliary loss functions to enhance the training process. The first loss function refers to learning the binary classification output based only on the context-response matching information (the upper section of Figure 1), written as: \begin{equation} \mathcal{L}_1=-\sum_{i=1}^Ny_ilog(g_1(c_i,r_i))+ (1-y_i)log(1-g_1(c_i,r_i)) \end{equation} while the other loss function corresponds to outputting the binary result based on the history-response matching information (the bottom part of Figure 1), formulated as: \begin{equation} \mathcal{L}_2=-\sum_{i=1}^Ny_ilog(g_2(m_i,r_i))+ (1-y_i)log(1-g_2(m_i,r_i)) \end{equation} where $g_1(\cdot)$ and $g_2(\cdot)$ refer to the matching functions of context-response and history-response matching, respectively. \section{Experiments} \input{table-2} \subsection{Datasets} To evaluate the effectiveness of our proposed model, we conduct experiments on two large open datasets with user ID information, i.e., the P-Ubuntu dialogue corpus in English and the P-Weibo dataset in Chinese. In detail, the P-Ubuntu dialogue corpus contains multi-turn technical support conversations with corresponding open user IDs, collected from the Ubuntu forum \footnote{\url{https://ubuntuforums.org/}}. We utilized Ubuntu v1.0 \cite{lowe2015ubuntu} as the raw dataset and followed the previous pre-processing strategy to replace numbers, paths, and URLs with placeholders \cite{xu2016incorporating}. The P-Weibo corpus is crawled from an open Chinese online chatting forum \footnote{\url{https://www.weibo.com}} which contains massive multi-turn conversation sessions and user identification information. However, the traditional pre-processed version only contains context-response pairs, neglecting the user IDs and dialogue histories utilized in our proposed personalized ranking-based chatbots. To mitigate this issue, we further process the raw dataset into a personalized version as follows. We first filter out users who spoke fewer than 30 utterances in P-Ubuntu and 10 utterances in P-Weibo. The remaining users are considered valid users, and we collect their utterances from the corresponding corpora as their dialogue histories. Users' dialogue histories are truncated to a maximum length of 100 utterances for P-Ubuntu and 50 for P-Weibo. We then collect, from the raw corpora, dialogue sessions whose two speakers are both valid users. Next, we create dialogue cases from dialogue sessions by splitting them into several fragments, each of which comprises several consecutive dialogue utterances. The last utterance in a fragment is considered the gold response, and the remaining utterances form the context. To achieve this, we use a sliding window to split out dialogue cases from sessions. We set the maximum context turn to 10 for both corpora and the minimum context turn to 5 for P-Ubuntu and 3 for P-Weibo, given their statistics. Furthermore, we pair each dialogue case with its users' information to facilitate personalized response selection; this information contains the speaker's ID, the speaker's dialogue history, the responder's ID, and the responder's dialogue history. 
Note that for each pre-processed case, we make sure that the provided speaker's or responder's dialogue history has no overlap with the dialogue session from which the current dialogue case comes, to avoid information leakage. Finally, after the aforementioned pre-processing steps, we obtain 600000 such six-tuples (context, response, speaker's ID, speaker's dialogue history, responder's ID, responder's dialogue history) as positive cases for both corpora. We randomly split them into 500000/50000/50000 for training/validation/testing. For training, we randomly sample a negative response from the other responses in the full dataset, so the ratio of positive to negative samples is 1:1. For validation and testing, the number of randomly selected negative responses from the full dataset is 9, so the ratio is 1:9. More statistical details of the two corpora are given in Table \ref{tab:data}. \subsection{Baselines} In our experiments, we compare our model with the following related and strong baselines. Note that since we utilize two newly created datasets, P-Ubuntu and P-Weibo, we run all these models ourselves. \textbf{TF-IDF}~\cite{lowe2015ubuntu}, a simple but effective matching method, computes the TF-IDF score of each word in both context utterances and the response. Both context utterances and responses are represented by the corresponding weighted sums of word embeddings. The matching score between context and response is then calculated by cosine similarity. \textbf{LSTM}~\cite{lowe2015ubuntu} concatenates all utterances in the context into a long sentence and employs a shared LSTM network to convert both the context and the response into vector representations. Their matching degree is then calculated through a bi-linear function with sigmoid activation. \textbf{Multi-View}~\cite{zhou2016multi} performs context-response matching from multiple views, i.e., integrating information from both the word sequence view and the utterance sequence view to model two different levels of dependency. \textbf{SMN}~\cite{wu2017sequential} refers to the sequential matching network. This framework separately processes each utterance in a given context to learn a matching vector between each utterance and the response with a CNN. Then, the learned matching vectors are aggregated by an RNN to calculate the final matching score between the context and the response candidate. \textbf{DAM}~\cite{zhou2018multi}, the deep attention matching network, is a strong baseline for multi-turn response retrieval. This model builds a matching calculation pipeline similar to SMN, while the dependencies between utterances in the context and response candidates are captured by stacked self-attention and cross-attention mechanisms. \textbf{MRFN}~\cite{tao2019multi} represents the multi-representation fusion network. The model performs context-response matching based on multiple types of sentence representations and fuses matching information from different channels effectively. \textbf{IOI}~\cite{tao2019one} refers to the interaction-over-interaction network. This model performs deep-level matching by stacking multiple interaction blocks, i.e., extracting and aggregating the matching information within an utterance-response pair in an iterative fashion. \textbf{MSN}~\cite{yuan2019multi} refers to the multi-hop selector network. This model first adopts a multi-hop selector to select the relevant utterances as context, to avoid the side effect of using too many context utterances. 
Then, the model matches the candidate response with the filtered context to obtain a matching score. \textbf{BERT$_{ft}$}~\cite{devlin2018bert} refers to the fine-tuned BERT-base model. This model is initialized with BERT-base-uncased and BERT-base-Chinese for P-Ubuntu and P-Weibo respectively. It takes the concatenation of the context and the candidate response as input and utilizes stacked self-attention layers to extract fine-grained representations. The matching score is calculated with an MLP built upon the top layer. \subsection{Experimental Settings} We introduce the experimental settings in this subsection. Unless otherwise stated, the pre-processing methods and the hyperparameters are the same for both corpora. We construct a shared vocabulary for context utterances, history utterances and responses, which contains the 30000 most frequent words in the training sets. We then run Word2Vec\footnote{\url{https://code.google.com/archive/p/word2vec/}} on the training sets of the two corpora with a word embedding dimension of 200. Following previous work, we limit the length of the context to 10 turns and truncate all context utterances to a maximum length of 50. For user dialogue histories, we provide up to 100 utterances for the P-Ubuntu dataset and up to 50 for the P-Weibo dataset. If the number of turns in a context or the number of utterances in a user dialogue history has not reached the given upper limit, we append blank sentences whose words are all padding tokens. In the hybrid representation learning of the context-response matching module, we set the number of filters $d_w$ to 200 for the \{1, 2, 3\}-gram CNNs, and the number of heads to 8 for multi-head self-attention. In the personalized dialogue content modeling part, we use 50 filters for all \{1, 2, 3, 4\}-gram CNNs. In the aggregation stage, the window sizes of the 2-D convolution and pooling are (3, 3) for both context-response and history-response interactions. The dimension of the hidden state of the turn-level aggregation GRU is 200. For training, we set the mini-batch size to 60 and adopt the Adam optimizer \cite{kingma2014adam} with the initial learning rate set to 3e-4. We exponentially decay the learning rate with a decay rate of 0.95 every 2000 training steps. We utilize early stopping as a regularization strategy. The model that achieves the best performance on the validation set is used for testing. For baseline models, we adopt their released code where possible or implement them ourselves, and experiment on our proposed datasets. We ensure that all of our implemented baseline models achieve results similar to those reported in the original papers on the standard Ubuntu v1.0 corpus. These models utilize the same vocabulary and initial word embeddings as our model. Specifically, for BERT$_{ft}$, we use BERT-base-uncased \footnote{\url{https://huggingface.co/bert-base-uncased}} for P-Ubuntu and BERT-base-Chinese \footnote{\url{https://huggingface.co/bert-base-chinese}} for P-Weibo respectively. We first truncate the response to a maximum length of 50 and then iteratively insert the context utterances in reverse order before the response until we exhaust the context or the total sequence exceeds the maximum sequence length of BERT (i.e., 512). We fine-tune the model using the Adam optimizer \cite{kingma2014adam} with a learning rate of 3e-5 and a batch size of 32.
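The reverse-order truncation used to build the BERT$_{ft}$ input can be summarized by the following minimal sketch. It is a simplified illustration of the procedure described above, not the exact implementation: the helper name, the separator layout, and the reservation of three special-token slots are our own assumptions.
\begin{verbatim}
def build_bert_input(context_utterances, response, tokenizer, max_len=512):
    resp = tokenizer.tokenize(response)[:50]       # response capped at 50 tokens
    ctx = []
    for utt in reversed(context_utterances):       # most recent turn first
        utt_tok = tokenizer.tokenize(utt)
        if len(utt_tok) + len(ctx) + len(resp) + 3 > max_len:
            break                                  # stop once 512 would be exceeded
        ctx = utt_tok + ctx                        # prepend to keep original order
    tokens = ["[CLS]"] + ctx + ["[SEP]"] + resp + ["[SEP]"]
    return tokenizer.convert_tokens_to_ids(tokens)
\end{verbatim}
Here \texttt{tokenizer} is assumed to be a BERT tokenizer loaded from one of the checkpoints cited above.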
\subsection{Evaluation Metrics} Given the candidate responses for each context of the test set, we evaluate the performance of different models with $R_{n}@k$, which indicates whether the top-$k$ retrieved responses among $n$ candidates contain the positive response. Besides, we also use the ranking list produced for each test context to calculate the mean reciprocal rank (MRR) score, which is computed as follows: \begin{equation} \text{MRR} = \frac{1}{\arrowvert \mathcal{T} \arrowvert} \sum_{ \langle c,m\rangle \in \mathcal{T}} \frac{1}{rank(\langle c,m\rangle)} \end{equation} where $\mathcal{T}$ indicates the context set for testing, and $rank(\langle c,m\rangle)$ is the position of the true response with respect to the input $\langle c,m\rangle$ in the candidate ranking list. \section{Results} \input{table-3} Table \ref{tab:overall} reports the results of the baselines and our proposed methods on the P-Ubuntu and P-Weibo datasets. Table \ref{tab:ablation} supplements the evaluation results of model ablation on the two datasets. We analyze these results from the following aspects. \subsection{Main Performance} Overall, our proposed PHMN model significantly outperforms all other models in all metrics and achieves new state-of-the-art results on the P-Ubuntu dialogue corpus and the P-Weibo dataset. In particular, for $R_{10}@1$, PHMN achieves a significant improvement over the strongest model that does not use BERT or its variants, i.e., MSN, on both datasets (78.2 vs. 70.9 on the P-Ubuntu corpus and 74.5 vs. 70.3 on the P-Weibo dataset). Surprisingly, when compared with the BERT$_{ft}$ baseline, our proposed PHMN (without the support of BERT or its variants) still obtains significantly better results on the P-Ubuntu corpus (78.2 vs. 75.7) and the P-Weibo dataset (74.5 vs. 74.0). Among the baseline models, TF-IDF, LSTM, and Multi-View only achieve basic performance on each dataset and metric. Benefiting from deep neural matching feature extraction and a sequential modeling strategy, SMN performs much better than the previous three baseline models on both datasets. With the enhancement of a powerful attention mechanism and deep stacked layers, DAM not surprisingly yields substantial improvements over SMN, which confirms that the attention mechanism is powerful for learning dependency representations of conversations. By fusing multiple types of sentence representations, MRFN yields a substantial improvement over DAM on both the P-Ubuntu corpus and the P-Weibo dataset. Furthermore, IOI and MSN perform slightly better than MRFN; these models are the strongest baselines to date that do not use BERT or its variants. BERT$_{ft}$ improves the scores of different metrics over the other baselines by a large margin, but at the cost of model complexity and time efficiency, the details of which are shown in Table \ref{tab:model_complexity}. Our proposed HMN achieves results comparable to DAM by taking advantage of attention-based representations and interaction-based matching. Considering that HMN contains only three convolution layers while DAM stacks multiple attention layers, the hybrid representations are thus time-efficient and effective. Moreover, we notice that the simplified versions of PHMN, i.e., HMN$_{W}$ and HMN$_{Att}$, outperform MRFN and IOI on both corpora by a large margin.
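Before turning to the ablation analysis, we note that the evaluation metrics reported above are straightforward to compute. The following minimal sketch (with illustrative names; averaging over all test contexts is left to the caller) shows the per-case computation of $R_n@k$ and the reciprocal rank used for MRR.
\begin{verbatim}
def recall_at_k(ranked_labels, k):
    # ranked_labels: 0/1 labels of the n candidates sorted by model score
    # (descending), with exactly one positive; returns R_n@k for one case.
    return 1.0 if 1 in ranked_labels[:k] else 0.0

def reciprocal_rank(ranked_labels):
    # 1 / (1-based position of the ground-truth response)
    return 1.0 / (ranked_labels.index(1) + 1)

# Averaging both quantities over all test contexts gives R_n@k and MRR.
\end{verbatim}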
\input{table-4} \subsection{The Effect of Wording Behavior} As mentioned previously, wording behavior is introduced to model long-term personal information beyond the current dialogue context so as to enhance the performance of response candidate selection. We conduct two groups of experiments, i.e., HMN$_W$ vs. HMN and PHMN vs. HMN$_{Att}$, as ablation studies to investigate how the wording behavior extracted from user dialogue history affects the response selection results. HMN is the simplified version of PHMN without the wording behavior modeling and personalized attention modules. HMN$_W$ boosts HMN with wording behavior modeling as extra hints for selecting response candidates, whereas HMN$_{Att}$ enhances HMN with the personalized attention module to extract important information from context utterances. PMN only takes the dialogue history utterances and the response as input to extract wording behavior matching patterns and degrees. {\textit{Using wording behavior information alone yields a relatively inferior matching performance.}} As demonstrated in Table \ref{tab:overall}, PMN achieves only basic performance in terms of the various matching accuracy metrics. PMN yields better results than TF-IDF and only slightly worse performance than LSTM and Multi-View on the P-Ubuntu dialogue corpus. We also observe that PMN is marginally better than LSTM and Multi-View on the P-Weibo dataset. However, there is a significant gap between PMN and the state-of-the-art models. These results support the intuition that context utterances contain most of the patterns and information relevant to selecting a proper response, while wording behavior models general, long-term matching information. {\textit{Wording behavior significantly enhances the context-response matching network by introducing supplementary matching information.}} Note again that wording behavior in dialogue history serves as long-term information and can be utilized to supplement the short-term information in context utterances. Not surprisingly, HMN$_W$ achieves a significant improvement over the HMN model, and even over the MRFN, IOI, and MSN models, which are the strongest models that do not incorporate BERT or its variants. With the enhancement of wording behavior information, our proposed PHMN yields an observable improvement over HMN$_{Att}$ and obtains the new state of the art on the two large datasets. These results confirm that wording behavior matching between the user-specific dialogue history and the response candidate is effective for multi-turn response selection. \subsection{The Influence of Personalized Attention} As previously stated, introducing the personalized attention module is expected to have a positive effect on extracting important information in context-response matching. We investigate the influence of personalized attention with two groups of comparisons, i.e., HMN$_{Att}$ vs. HMN and PHMN vs. HMN$_{W}$. The following observations are made in this investigation, which confirm that personalized attention is an effective add-on to the existing context-response matching model.
\input{table-5} {\textit{The personalized attention module effectively improves the accuracy of context-response matching by extracting important information from the context-response interaction.}} When personalized attention is introduced, the HMN$_{Att}$ and PHMN models are capable of extracting meaningful matching information from the interaction matrices of context utterances and response while allocating less weight to unrelated matching signals. As illustrated by the evaluation results in Table \ref{tab:overall}, personalized attention can substantially improve the performance of HMN and HMN$_{W}$. {\textit{The performance improvement achieved by using personalized attention is smaller than that achieved by modeling wording behavior in dialogue history.}} Recall that we propose to employ user-specific dialogue history content from two different perspectives, i.e., wording behavior and personalized attention. It is natural to compare the effectiveness of personalized attention and wording behavior. As illustrated in Table \ref{tab:overall}, personalized attention results in a substantial improvement over the base models on the two corpora, while wording behavior achieves an even larger improvement, which indicates that wording behavior modeling is more important than personalized attention. \subsection{The Effect of Fusion Gate and Auxiliary Loss} Table \ref{tab:ablation} summarizes the evaluation results of eight model variations so as to investigate the effect of the auxiliary loss and the gate mechanism. We observe that, for both PHMN and HMN$_W$, the auxiliary loss is helpful for training on the two corpora. For PHMN and HMN$_W$, adding the gate mechanism yields an observable performance improvement, which is not surprising. We believe the improvement arises partly because the wording behavior information in dialogue history is not at the same level as the hybrid representations, while the gate mechanism can effectively balance the distinctions between different levels of representations. \subsection{The Effect of Dialogue History Size on Model Performance} In our proposed PHMN model, the dialogue histories are used to calculate the personalized attention mask and to perform wording behavior matching with the candidate response. On the one hand, in some scenarios we often do not have enough history utterances from the same user. On the other hand, there is a trade-off between speed and model performance. Therefore, we study how the size of the dialogue history influences the model performance in this subsection and leave the comparison of inference speed with baselines to the next subsection. As illustrated in Table \ref{tab:history_length}, we set the number of utterances in the dialogue history to \{10, 20, 30, 40, 50\} for the P-Weibo dataset and to \{10, 30, 50, 70, 100\} for the P-Ubuntu dataset to study the influence of dialogue history size on model performance. It can be observed that even when the available dialogue history is small (i.e., 10 or 30 utterances), all three models still yield a considerable improvement over the HMN baseline. As the dialogue history size increases, the performance of all models continues to improve and does not saturate under the current limit. We can reasonably expect that, with more dialogue history available, PHMN would yield further gains.
\begin{table}[ht] \begin{center} \caption{\label{tab:model_complexity} Comparison of model size and inference speed.} \resizebox{\columnwidth}{!}{ \begin{tabular}{l|cccccccc } \hline & LSTM& Multi-View & SMN & DAM & MRFN & IOI & MSN & BERT$_{ft}$ \\ \hline \# Params(M) & 6.3 & 6.7 & 6.4 & 8.1 & 9.6 & 15.3 & 7.6 &110 \\ \hline \ Latency(ms) & 0.882 & 0.894 & 0.364 & 2.526 & 1.818 & 4.416 & 0.822 &17.2 \\ \hline & HMN & HMN$_{Att}$ & HMN$_{W,100}$ & PHMN$_{10}$ & PHMN$_{30}$ & PHMN$_{50}$ & PHMN$_{70}$ & PHMN$_{100}$ \\ \hline \# Params(M) & 6.7 &6.7 & 7.6 & 7.6 & 7.6 & 7.6 & 7.6 & 7.6 \\ \hline \ Latency(ms) & 0.642 &0.648 &1.824 & 0.796 & 1.030 & 1.230 & 1.452 & 1.834 \\ \hline \end{tabular} } \end{center} \end{table} \subsection{Comparison of Model Complexity} Moreover, we also study the time and memory cost of our models by comparing them with the baselines in terms of the number of model parameters and inference latency, which is measured as the average time cost of evaluating a single case on the same RTX 2080Ti GPU. For our proposed models, we study HMN, HMN$_{Att}$, HMN$_{W}$, and PHMN. Recall that some of our models share the same architecture (i.e., HMN and HMN$_{Att}$, HMN$_{W}$ and PHMN), and that the inference latency of some of our models (i.e., HMN$_{W}$ and PHMN) depends on the number of user dialogue history utterances. To be more specific, HMN$_{Att}$ shares the same model architecture as HMN, and the computation cost of the introduced personalized attention mechanism is agnostic to the number of user dialogue history utterances. In contrast, HMN$_{W}$ has the same model architecture as PHMN, and both models' inference latency increases with the number of user dialogue history utterances. Thus, we also give the inference latency of PHMN models with different user dialogue history sizes (denoted by the subscript number). The comparison results are illustrated in Table \ref{tab:model_complexity}. Comparing our proposed models with the baselines, we can easily conclude that PHMN is both time- and memory-efficient while performing remarkably well. In terms of parameter size, it can be observed that our proposed PHMN model has a similar number of parameters to the state-of-the-art non-BERT baseline MSN and is smaller than MRFN and IOI, not to mention BERT$_{ft}$ (which is 14.5 times larger than PHMN). This indicates that the significant performance improvement of our proposed model comes from the strategies introduced in this paper rather than from a larger model size. When it comes to inference latency, we find that PHMN is similar to MRFN and is 2.4 times faster than IOI. BERT$_{ft}$ again lags far behind the group (it is 9.4 times slower than PHMN). We then compare our proposed models with each other. Comparing HMN with HMN$_{Att}$, or HMN$_{W,100}$ with PHMN$_{100}$, we find that the personalized attention mechanism is quite time-efficient, as it adds almost no additional time cost. As for the influence of dialogue histories on inference speed, it can be seen that the latency increases linearly with the number of used dialogue history utterances, which poses a trade-off between speed and performance that can be tuned to suit the application scenario. \subsection{Case Study} \input{table-6} In addition to evaluating PHMN with quantitative results, we also conduct case studies to illustrate its superiority over the baselines. Table \ref{tab:case_wording_behavior} and Table \ref{tab:case_personalized_attention} illustrate two examples of context-response matching with the enhancement of user-specific dialogue history modeling.
Further, the tables also give the predictions of our proposed PHMN and two strong baselines (i.e., MSN and BERT$_{ft}$), with which we can better understand the superiority of PHMN. For the example in Table \ref{tab:case_wording_behavior}, it is clearly shown that wording behavior is helpful for retrieving the correct response, i.e., ``\textbf{\textit{I've told you...}}'' in the dialogue history serves as supplementary information beyond context-response matching. From the models' prediction scores we can observe that all the models assign a high matching score to the first negative case, which not only has a large word overlap with the context (i.e., ``\textbf{\textit{man apt-get}}'') but also seems to have a plausible tone for responding to the last context utterance ``\textbf{\textit{What IS THE COMMAND}}'', although it ignores the remaining context information. Both MSN and BERT$_{ft}$ rank this negative response as the most appropriate one among all 10 candidate responses, including the ground-truth response. In contrast, our proposed PHMN successfully ranks the ground-truth response on top, thanks to the wording behavior modeling mechanism that effectively captures supplementary information. The example in Table \ref{tab:case_personalized_attention} reveals the effectiveness of the personalized attention mechanism in extracting accurate information from the interaction matrices of context utterances and response. By allocating a large weight to the key clue word ``\textbf{\textit{xfce4}}'' in the response, the matching accuracy is enhanced. Again, it can be seen from the models' prediction scores that although all three models rank the ground-truth response on top, the prediction scores given by MSN and BERT$_{ft}$ to the first negative candidate response are not low. Meanwhile, PHMN assigns a high matching score to the ground-truth response and a relatively low matching score to the first negative candidate response. The gap between the top-ranked score and the second-ranked score of PHMN is much larger than that of BERT$_{ft}$ (0.80 vs. 0.36) and MSN (0.80 vs. 0.18), which indicates that our proposed PHMN is much more confident in selecting the ground-truth response. This superiority is due to the personalized attention mechanism that highlights the key clue word ``\textbf{\textit{xfce4}}''. We also observe that there are inferior context-response matching cases in the experiments. A notable pattern is that the extracted literal wording behavior information might overwhelm other informative words and structured knowledge in the dialogue history. One potential solution to this issue is to enhance PHMN with fine-grained personalized information modeling and structured knowledge extraction. We also notice that there exist a few extremely bad cases where both wording behavior and personalized attention introduce noisy signals for context-response matching. We believe this is due to the limited size of the dialogue history. These phenomena and analyses point to directions for potential future work. \input{table-7} \subsection{Study of Speaker's Persona Information} We also consider incorporating the speaker's persona information into our proposed personalized response selection model to see whether it can help the model learn better and make the conversation more engaging.
Specifically, we assign persona embeddings that capture high-level persona features (e.g., topics, talking preferences, and so on) to speakers whose numbers of occurrences in the processed datasets are larger than a lower threshold (named the \textbf{User Occurrence Threshold}). We fuse the speaker's persona embedding into the context-response matching process to provide a speaker-aware matching vector that better captures the speaker's preference. We borrow the iterative mutual gating mechanism from the Mogrifier LSTM \cite{melis2019mogrifier}, whose effectiveness has been verified, to allow the context-response matching vector and the speaker's persona embedding vector to mutually refine the useful information they carry (a minimal sketch is given below). We name PHMN enhanced with the speaker's embedding PHMN$_e$. Under this motivation, there could be many factors that are crucial to the performance; here we mainly study four: (1) the gate position, (2) the number of mutual gating iterations, (3) the dimension of the persona embedding, and (4) the number of users who have a persona embedding (which is closely related to the \textbf{User Occurrence Threshold}). For (1), we can perform mutual gating between the context-response matching vector and the speaker's persona embedding vector before or after the turn-level aggregation GRU. If the gate is placed before the turn-level GRU, the speaker's persona embedding can provide utterance-level guidance for the original matching vectors $v_j$. We abbreviate this gate position as \textbf{Before}. If we inject the speaker's persona embedding after the turn-level GRU, it can guide the aggregated matching vector $m^{RNN}$ from a global perspective. We abbreviate this gate position as \textbf{After}. For (2), the number of gating iterations is set to \{1, 2, 3\} to study whether deeper mutual interactions can boost the performance. For (3), the dimension of the persona embedding is set to \{50, 100, 200\}. And we use the \textbf{User Occurrence Threshold} mentioned above as the indicator for (4). Concretely, we set the \textbf{User Occurrence Threshold} to \{3, 4, 5\}, which means we only provide speakers whose numbers of occurrences in the processed datasets are larger than \{3, 4, 5\} with a specific user embedding, while leaving the other speakers with a shared UNK embedding. Under these settings, the numbers of remaining users for Ubuntu and Weibo are \{33303, 26228, 21059\} and \{54683, 24950, 12162\} respectively. We conduct extensive experiments on the two corpora to determine whether incorporating the speaker's persona information is helpful. The experimental results are shown in Table \ref{tab:speaker_ablation}. Unfortunately, we do not observe an improvement when taking the speaker's persona information into account, despite the additional computation and memory cost. Specifically, on the Ubuntu corpus, all attempts fail to obtain better performance, while on Weibo, some settings of PHMN$_e$ achieve performance comparable to PHMN. Another interesting observation is that fewer gating iterations, a smaller persona embedding dimension, and a larger User Occurrence Threshold lead to better performance (and a smaller performance drop on the Ubuntu dataset) for PHMN$_e$. The above observations indicate that incorporating the speaker's persona information brings little benefit but additional computation and memory cost. Thus we do not include the speaker's persona information in our model.
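For clarity, the following sketch illustrates the kind of Mogrifier-style mutual gating explored in PHMN$_e$. It is an illustrative approximation under our reading of \cite{melis2019mogrifier}: the module name, dimensions, and the default of two gating rounds are assumptions, not the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class MutualGate(nn.Module):
    # Alternately re-scales the matching vector m and the persona
    # embedding p with gates computed from each other (Mogrifier-style).
    def __init__(self, dim_m, dim_p, rounds=2):
        super().__init__()
        self.rounds = rounds
        self.gate_m = nn.Linear(dim_p, dim_m)   # gate for m, computed from p
        self.gate_p = nn.Linear(dim_m, dim_p)   # gate for p, computed from m

    def forward(self, m, p):
        for i in range(self.rounds):
            if i % 2 == 0:
                m = 2 * torch.sigmoid(self.gate_m(p)) * m
            else:
                p = 2 * torch.sigmoid(self.gate_p(m)) * p
        return m, p
\end{verbatim}
Placed \textbf{Before}, such a gate would act on each turn-level matching vector $v_j$; placed \textbf{After}, it would act once on the aggregated vector $m^{RNN}$.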
Nevertheless, the speaker's preference may still be beneficial to response selection in some scenarios. We leave the study of incorporating the speaker's persona information as future work. \input{table-8} \section{Related Work} In the past decades, human-machine conversation systems have been widely investigated and developed. Early studies mainly focus on building rules and templates for computers to yield human-like responses. Such a strategy has evolved and been successfully used in various domains, such as museum guiding \cite{ferguson1996trains} and restaurant booking \cite{lei2018sequicity}. Later on, with the explosive growth of data, the application of open-domain conversation models became promising. However, conventional methods designed for domain-specific settings are difficult to scale to the open domain. Given this, various data-driven approaches have been proposed for modeling open-domain conversation, falling into two main groups: generation-based approaches \cite{serban2015building,chan2019modeling,qiu2019training,li2019insufficient,li2018overview} and retrieval-based methods \cite{yan2016learning}. Early work in the first group builds systems upon statistical machine translation models \cite{ritter2011data}. Recently, on top of the sequence-to-sequence architecture \cite{shangL2015neural,vinyals2015neural}, various extensions have been proposed to address the ``common response'' issue \cite{li2015diversity}; to leverage external knowledge \cite{mou2016sequence,serban2016multiresolution,xing2017topic}; to model the hierarchical structure of conversation contexts \cite{serban2016building,serban2017hierarchical,xing2018hierarchical}; to generate personalized responses \cite{li2016persona,zhou2018emotional}; and to pursue effective optimization strategies \cite{li2016deep,li2017adversarial}. Early work on retrieval-based dialogue systems studies single-turn response selection \cite{hu2014convolutional,ji2014information,lu2013deep,wang2013dataset}. Later on, various multi-turn response selection methods have been proposed, including the dual LSTM model \cite{lowe2015ubuntu}, the multi-view matching method \cite{zhou2016multi}, the sequential matching network \cite{wu2017sequential}, and the deep attention matching network \cite{zhou2018multi}. Recently, various effective methods have been proposed to investigate the fusion of multiple types of sentence representations \cite{tao2019multi}, deep interaction in matching feature extraction \cite{tao2019one}, model ensembles \cite{zhang2019ensemblegan,yang2019hybrid,song2018ensemble}, external knowledge combination \cite{yang2018response}, the influence of stickers in multi-modal response selection \cite{gao2020learning}, and emotion control in context-response matching \cite{lisong2020emotion}. With the rapid development of pre-trained language models, researchers have also made considerable efforts to combine pre-trained language models with response selection. One typical method is to combine a pre-trained language model (BERT) with a post-training method for the task of response selection \cite{whang2020effective}. Gu et al. \cite{gu2020speaker} further investigate the problem of employing pre-trained language models for speaker-aware multi-turn response selection. Lu et al. \cite{lu2020improving} propose two strategies to improve pre-trained contextual language models for response retrieval in multi-turn conversation, namely speaker segmentation and dialogue augmentation.
A deep context modeling architecture (DCM) with BERT as the context encoder has also been proposed for multi-turn response selection \cite{li58deep}. To address the issue of ignoring the sequential nature of multi-turn dialogue when utilizing pre-trained language models, utterance manipulation strategies (UMS) have been proposed \cite{whang2020response}. Wang et al. \cite{wang2020response} propose an essential pre-training step that embeds topic information into BERT with self-supervised learning for multi-party, multi-turn response selection. More details of the progress and challenges in building intelligent open-domain dialogue systems can be found in recent surveys \cite{huang2020challenges,boussaha2019deep}. In this work, we propose a personalized hybrid matching network (PHMN) for multi-turn response selection. We combine deep attention-based representations and interaction information as hybrid representations to achieve comprehensive modeling of multi-turn context utterances. Besides, we introduce personalized dialogue history as additional information to enhance the accuracy of context-response matching. By extracting wording behavior and personalized attention weights from the dialogue history, our proposed PHMN achieves state-of-the-art performance on two datasets. \section{Conclusion} In this study, we propose a novel personalized hybrid matching network (PHMN) for multi-turn response selection that leverages user-specific dialogue history as extra information. Building upon an advanced multi-dimensional hybrid representation learning strategy, we incorporate the information in dialogue history at various granularities, i.e., wording behavior matching and user-level attention for extracting vital matching information in context-response matching. Experimental results on two large datasets in different languages, the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo), confirm that our proposed method significantly outperforms state-of-the-art models (without using BERT). We also conduct a thorough ablation study to investigate the effect of wording behavior modeling and the influence of personalized attention, which confirms that both wording behavior and personalized attention are effective for enhancing context-response matching. Besides, we further explored the influence of the speaker's persona in conversation, inasmuch as individuals behave differently when they chat with different people. In the near future, we plan to learn a structured knowledge representation of users and to encapsulate this structured information into response selection. \begin{acks} We would like to thank the anonymous reviewers for their efforts in improving this paper. This work was supported by the National Key Research and Development Program of China (No.2020AAA0106600). \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Nonconvex Variational Problem and Motivation} A large class of finite deformation problems in nonlinear elasticity can be formulated on the basis of a variational principle $({\cal{P}})$ in which it is required to minimize a certain nonconvex potential energy. Typically, this takes the form \begin{equation} ({\cal{P}}): \;\; \min_{{\mbox{\boldmath$\chi$}} \in {\cal X}_a} \left\{ \Pi({\mbox{\boldmath$\chi$}}) = \int_{\Omega} W(\nabla {\mbox{\boldmath$\chi$}}) {\rm d} \Omega + \int_{\Omega} \phi({\mbox{\boldmath$\chi$}}) \rho {\rm d} \Omega - \int_{\Gamma_t} {\mbox{\boldmath$\chi$}} \cdot {\bf t} {\rm d} \Gamma \right\} , \label{pprobm} \end{equation} where ${\mbox{\boldmath$\chi$}}$ represents the deformation field (a bijection), $W({\bf F})$ is the strain energy per unit reference volume, which is a nonlinear differentiable function of the deformation gradient ${\bf F}=\nabla {\mbox{\boldmath$\chi$}}$, and $\nabla$ is the gradient operator in a simply-connected domain (the reference configuration of the body) $\Omega \subset {\mathbb R}^3 $ with boundary $\partial \Omega = \Gamma = \Gamma_t \cup \Gamma_\chi$ such that $\Gamma_t\cap\Gamma_\chi=\emptyset$. Each material point in $\Omega$ is labeled by its position vector $\mathbf{X}$ and the corresponding point in the deformed configuration is denoted by $\mathbf{x}\,(={\mbox{\boldmath$\chi$}}(\mathbf{X}))$. The body force ${\bf f}$ (per unit mass) is taken to be conservative with potential $\phi(\mathbf{x})$ so that ${\bf f}=-\grad \phi$, and $\rho$ is the reference mass density. On the part $\Gamma_t$ of the boundary the surface traction ${\bf t}$ is prescribed to be of dead-load type, while on $\Gamma_\chi$ the deformation ${\mbox{\boldmath$\chi$}}$ is given. The notation ${\cal X}_a$ identifies a \emph{kinematically admissible space} of deformations ${\mbox{\boldmath$\chi$}}$, defined by \begin{equation} {\cal X}_a = \{ {\mbox{\boldmath$\chi$}} \in {\cal W}^{1,p} (\Omega; {\mathbb R}^3)\; \big| \;\ \nabla {\mbox{\boldmath$\chi$}} \in {\cal F}_a, \;\; {\mbox{\boldmath$\chi$}} ={\mbox{\boldmath$\chi$}}_0 \;\; \mbox{ on } \Gamma_\chi \}, \end{equation} where ${\cal W}^{1,p}$ is the Sobolev space, i.e. a function space in which both ${\mbox{\boldmath$\chi$}}$ and its weak derivative $\nabla {\mbox{\boldmath$\chi$}}$ have a finite $L^p(\Omega)$ norm. ${\cal F}_a = \{ {\bf F} \in {\cal L}^p(\Omega; {\mathbb R}^{3 \times 3} )| \; \det {\bf F} > 0 \} $ denotes the admissible deformation gradient space with $p > 1$. Clearly, solutions $ {\mbox{\boldmath$\chi$}} \in {\cal X}_a$ of the problem $({\cal{P}})$ are not necessarily smooth. The criticality condition $\delta \Pi({\mbox{\boldmath$\chi$}}) = 0$ leads to a mixed boundary-value problem $(BV\!P)$, namely \begin{equation} (BVP): \;\; \left\{ \begin{array}{l} \nabla \cdot [\nabla_{\bf F} W(\nabla {\mbox{\boldmath$\chi$}}) ] + \rho{\bf f} = \mathbf{0} \quad \mbox{in } \Omega,\\[0.2cm] {\bf n} \cdot [\nabla_{\bf F} W(\nabla {\mbox{\boldmath$\chi$}}) ] = {\bf t} \quad\mbox{on } \Gamma_t, \end{array} \right.
\label{eq-bvp1} \end{equation} where $\nabla_{\bf F} W(\nabla {\mbox{\boldmath$\chi$}}) = \partial W({\bf F})/\partial {\bf F}$ (in components $ \partial W/\partial F_{i\alpha}$), ${\bf n}$ is the unit outward normal to $\Gamma_t$ and, in component form, we adopt the conventions $\nabla \cdot {\mbox{\boldmath$\tau$}}= \{\partial \tau_{i\alpha}/\partial X_\alpha \}$ and ${\mbox{\boldmath$\tau$}}\cdot {\bf n}= \{\tau_{i\alpha}n_\alpha \}$. Note that $\nabla \cdot {\mbox{\boldmath$\tau$}}$ is defined in the weak sense where $\nabla{\mbox{\boldmath$\chi$}}$ is discontinuous. In general, it is rarely possible to solve this nonlinear boundary-value problem by direct methods. Indeed, the strain energy $W({\bf F})$ is a nonconvex function of ${\bf F}$, the problems $({\cal{P}})$ and $(BVP)$ are not equivalent, and $(BVP)$ may possess multiple solutions. Identification of the global minimizer of the variational problem $({\cal{P}})$ is a fundamentally difficult task in nonconvex analysis. From the point of view of numerical analysis, any numerical discretization of the problem $({\cal{P}})$ leads to a nonconvex minimization problem, and it is well known in global optimization theory that most nonconvex minimization problems are NP-hard \cite{gao-jogo00,gao-amma03,gao-optm03}. Duality principles play fundamental roles in science and engineering, especially in continuum mechanics and variational analysis. For linear elasticity, since the stored strain energy $W$ is a convex function of the (infinitesimal) strain tensor, it is well known that each potential variational (primal) problem is linked to a unique equivalent (dual) complementary variational problem via the conventional Legendre transformation. This one-to-one duality relation is also known as the complementary variational principle, which has been well studied with extensive applications in both mathematical physics and engineering mechanics (see Arthurs, Noble-Sewell, Oden-Reddy, Tabarrok-Rimrott, etc.). In finite deformation theory, if the stored energy density $W({\bf F})$ is a strictly convex function of the deformation gradient tensor ${\bf F}$ over the field $\Omega$, then the first Piola-Kirchhoff stress tensor can be uniquely determined by ${\mbox{\boldmath$\tau$}} = \nabla W({\bf F})$ and the complementary energy density $W^*$ can be obtained explicitly by the Legendre transformation: \begin{equation} W^*({\mbox{\boldmath$\tau$}}) = \left\{ {\bf F}\! :\! {\mbox{\boldmath$\tau$}} - W({\bf F})\, \big | \;\; {\mbox{\boldmath$\tau$}} = \nabla W({\bf F}) \right\}, \end{equation} where ${\bf F} : {\mbox{\boldmath$\tau$}}$ is defined as ${\mbox{tr}}({\bf F}\cdot{\mbox{\boldmath$\tau$}}^{\rm T})$ and $^{\rm T}$ signifies the transpose. In this case, the complementary variational problem can be defined as \begin{equation} \min_{{\mbox{\boldmath$\tau$}} \in {\cal T}_a} \left\{ \Pi^c({\mbox{\boldmath$\tau$}}) = \int_\Omega W^*({\mbox{\boldmath$\tau$}}) {\rm d} \Omega - \int_{\Gamma_\chi} {\mbox{\boldmath$\chi$}}_0 \cdot {\mbox{\boldmath$\tau$}} \cdot {\bf n} {\rm d} \Gamma \right\} , \label{comprom} \end{equation} where ${\cal T}_a$ is the \emph{statically admissible space} defined by \begin{equation} {\cal T}_a = \left\{ {\mbox{\boldmath$\tau$}} \in {\cal L}^q(\Omega)\; \big | \ \nabla \cdot {\mbox{\boldmath$\tau$}} + \rho {\bf f} = \mathbf{0}\ \mbox{ in } \Omega, \ {\mbox{\boldmath$\tau$}} \cdot{\bf n} = {\bf t}\ \mbox{ on } \Gamma_t \right\}, \end{equation} where $q$ is the conjugate number of $p$, i.e. it is given by $1/p + 1/q = 1$.
This complementary variational problem was first studied by Levinson \cite{levi-65}. The well-known Levinson principle states that if $\bar{\mbox{\boldmath$\tau$}}$ is a solution of the complementary variational problem (\ref{comprom}), then the deformation field $\bar{\mbox{\boldmath$\chi$}}$ defined through the inverse constitutive law ${\bf F}(\bar{\mbox{\boldmath$\chi$}}) = \nabla W^*(\bar{\mbox{\boldmath$\tau$}})$ is a solution of the potential variational problem (\ref{pprobm}) and the complementarity condition \[ \Pi(\bar{\mbox{\boldmath$\chi$}}) + \Pi^c(\bar{\mbox{\boldmath$\tau$}}) = 0 \] holds. This principle can be proved easily by using the traditional Lagrangian duality theory (see Gao, 2000). The Levinson principle is simply the counterpart in finite deformation theory of the complementary variational principle in linear elasticity. In finite deformation theory, the stored strain energy $W({\bf F})$ is in general nonconvex, so that the stress-deformation relation ${\mbox{\boldmath$\tau$}} = \nabla W({\bf F}) $ is not uniquely invertible \cite{ogden75,ogden77} and the complementary energy function $W^*$ cannot be defined explicitly via the Legendre transformation. Although, by the Fenchel transformation $$ W^\sharp ({\mbox{\boldmath$\tau$}}) = \max_{{\bf F}} \{ {\bf F}\! :\! {\mbox{\boldmath$\tau$}} - W({\bf F} ) \}, $$ the Fenchel-Moreau type dual problem can be formulated in the form of \begin{equation} \max_{{\mbox{\boldmath$\tau$}} \in {\cal T}_a} \left\{ \Pi^\sharp (\tau) = \int_{\Gamma_\chi} {\mbox{\boldmath$\chi$}}_0 \cdot {\mbox{\boldmath$\tau$}} \cdot {\bf n} {\rm d} \Gamma -\int_\Omega W^\sharp({\mbox{\boldmath$\tau$}}) {\rm d} \Omega \right\}, \label{eq-Pic} \end{equation} the nonconvexity of $W$ leads only to the so-called \emph{weak duality theorem} $$ \min_{{\mbox{\boldmath$\chi$}} \in {\cal X}_a} \Pi({\mbox{\boldmath$\chi$}}) \ge \max_{{\mbox{\boldmath$\tau$}} \in {\cal T}_a} \Pi^\sharp({\mbox{\boldmath$\tau$}}) $$ due to the Fenchel-Young inequality $W({\bf F}) \ge {\bf F}\! :\! {\mbox{\boldmath$\tau$}} - W^\sharp({\mbox{\boldmath$\tau$}})$. In nonconvex analysis, the nonzero quantity $\theta = \min_{{\mbox{\boldmath$\chi$}} \in {\cal X}_a} \Pi({\mbox{\boldmath$\chi$}}) - \max_{{\mbox{\boldmath$\tau$}} \in {\cal T}_a} \Pi^\sharp({\mbox{\boldmath$\tau$}}) > 0 $ is called the \emph{duality gap}. This duality gap shows that the well-developed Fenchel-Moreau duality theory can be used mainly to solve convex problems. In finite deformation theory, the well-known Hellinger-Reissner principle \cite{hell-14, reiss53} and the Fraeijs de Veubeke principle \cite{veub72} hold for both convex and nonconvex problems. However, these principles are not considered as \emph{pure complementary variational principles} since the Hellinger-Reissner principle involves both the displacement field and the second Piola-Kirchhoff stress tensor, while the Fraeijs de Veubeke principle has both the rotation tensor and the first Piola-Kirchhoff stress as its variational arguments. The existence of a pure complementary variational principle in general finite deformation theory has been discussed by many researchers over several decades (see, for example, \cite{koiter76, lee-shie80, lee-shie80b, li-cupta,oden-redd83,ogden75,ogden77}).
Moreover, since the extremality condition in nonconvex variational analysis and global optimization is fundamentally difficult to resolve, none of the classical complementary-dual variational principles in finite deformation theory can be used for reliable numerical computations. Canonical duality theory provides a potentially useful methodology for solving a large class of nonconvex problems in complex systems. This theory consists mainly of (1) a \emph{canonical dual transformation}, which can be used to formulate perfect dual problems in nonconvex systems; (2) a {\em complementary-dual variational principle,} which allows a unified analytical solution form in terms of the canonical dual solutions; and (3) a \emph{triality theory}, which provides sufficient criteria for identifying both global and local extrema. The original idea of the canonical dual transformation was introduced by Gao and Strang \cite{gao-strang89a} in finite deformation systems. In order to recover the duality gap in nonconvex variational problems, they discovered a so-called {\em complementary gap function}, which leads to a complementary-dual variational principle in finite deformation mechanics. They proved that if this gap function is positive on a dual feasible space, the generalized Hellinger-Reissner energy is a saddle functional. It turns out that this gap function provides a sufficient condition for a globally optimal solution in nonconvex variational problems. Seven years later, it was realized that the negative gap function can be used to identify local extrema. Therefore, a triality theory was first proposed in post-buckling problems of a large deformation beam model \cite{gao-amr97}, and a pure complementary energy principle was eventually obtained in \cite{gao-mrc99}. This principle can be used to obtain a general analytical solution for 3-D large deformation elasto-plasticity \cite{gao-mecc99}. It was shown by Gao and Ogden (see \cite{gao-ogden-qjmam08,gao-ogden-zamp}) that for one-dimensional nonlinear elasticity problems, both global and local minimal solutions are usually nonsmooth and cannot be obtained by Newton-type numerical methods. For finite dimensional systems, the canonical duality theory has been successfully applied to solving a large class of challenging problems in computational mechanics \cite{cai-gao-qin,gao-yu,hugo-gao} and global optimization, with extensive applications in computational biology \cite{zgy}, chaotic dynamical systems \cite{li-zhou-gao,ruan-gao-ima}, and discrete and network optimization \cite{gao-cace09,gao-ruan-jogo08,gao-ruan-pardalos,ruan-gao-jiao-coap08}. The purpose of this paper is to illustrate the application of the pure complementary variational principle, in combination with the triality theory, by solving a general nonconvex variational problem governed by a St Venant-Kirchhoff material. The paper is organized as follows. Section \ref{primal} presents a brief review of the canonical duality theory in nonlinear elasticity. Some fundamental issues in nonlinear elasticity are addressed, including the reasons why the Legendre-Hadamard condition provides only a necessary condition for local minima, and how the Gao-Strang gap function and the triality theory can be used to identify both global and local extremal solutions. In Section \ref{stvenant} we show that for St Venant-Kirchhoff materials the pure complementary variational problem can be solved in principle to obtain all possible solutions. Some concluding remarks are contained in Section \ref{finish}.
\section{Canonical Duality Theory and Complementary Variational Principle}\label{primal} It is known that the stored-energy function $W:{\cal F}_a \rightarrow {\mathbb R}$ must obey certain physical laws and requirements in continuum mechanics, such as the principle of material frame-indifference \cite{Truesdell-Noll}, which lays a mathematical foundation for the canonical duality theory. Let \eb \mbox{SO}(3) = \{ {\bf Q} \in {\mathbb R}^{3\times 3} | \; {\bf Q}^T = {\bf Q}^{-1} , \;\; \det {\bf Q} = 1 \} \ee be the special orthogonal group. \begin{definition}[Objectivity and Isotropy \cite{gao-dual00}] $\;$\newline \vspace{-.5cm} \begin{verse} {\em (D1) {\em Objective Set and Objective Function}: A subset ${\cal F}_a \subset {\mathbb R}^{3\times 3}$ is said to be {\em objective} if for every ${\bf F} \in {\cal F}_a$ and every ${\bf Q} \in \mbox{SO}(3)$, ${\bf Q} {\bf F} \in {\cal F}_a$. A scalar-valued function $W:{\cal F}_a \rightarrow {\mathbb R}$ is said to be {\em objective } if its domain is objective and \eb W({\bf Q} {\bf F}) = W({\bf F}) \;\; \forall {\bf F} \in {\cal F}_a, \; \forall {\bf Q} \in \mbox{SO}(3). \ee (D2) {\em Isotropic Set and Isotropic Function}: A subset ${\cal F}_a \subset {\mathbb R}^{3\times 3}$ is said to be {\em isotropic} if for every ${\bf F} \in {\cal F}_a$ and every ${\bf Q} \in \mbox{SO}(3)$, $ {\bf F}{\bf Q} \in {\cal F}_a$. A scalar-valued function $W:{\cal F}_a \rightarrow {\mathbb R}$ is said to be {\em isotropic } if its domain is isotropic and \eb W( {\bf F}{\bf Q}) = W({\bf F}) \;\; \forall {\bf F} \in {\cal F}_a, \; \forall {\bf Q} \in \mbox{SO}(3). \ee } \end{verse} \end{definition} Objectivity implies that the constitutive law of a material is independent of the observer (coordinate free), while isotropy means that the material possesses a certain symmetry. Generally speaking, the deformation gradient ${\bf F}$ is a two-point tensor, which is not considered as a strain measure. The {\em right Cauchy-Green tensor } ${\bf C} = {\bf F}^T {\bf F}$ is a (Lagrange type) strain measure which is objective (rotation free), i.e., \[ {\bf C} ({\bf Q} {\bf F})= ({\bf Q} {\bf F})^T ({\bf Q} {\bf F}) = {\bf F}^T {\bf Q}^T {\bf Q} {\bf F} = {\bf C}({\bf F}) \;\; \forall {\bf Q} \in \mbox{SO}(3). \] Dually, the {\em left Cauchy-Green tensor} ${\bf B} = {\bf F} {\bf F}^T$ is an isotropic function of ${\bf F}$. In continuum mechanics, objectivity is also known as {\em the principle of frame-indifference}. According to P.G. Ciarlet, the stored energy function of a hyper-elastic material is objective if and only if there exists a function $U({\bf C})$ such that $W({\bf F}) = U({\bf C}({\bf F}))$ (see Theorem 4.2-1 in \cite{ciarlet}). This principle lays a foundation for the canonical duality theory. Indeed, the canonical dual transformation was developed from the concept of objectivity.
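As a simple illustration of these definitions (a standard example, recalled here for later reference), consider the St Venant-Kirchhoff stored energy that is the focus of Section \ref{stvenant},
\eb
W({\bf F}) = \frac{\lambda}{2} \left({\mbox{tr}}\, {\bf E}\right)^2 + \mu \, {\mbox{tr}} ({\bf E}^2), \qquad {\bf E} = \frac{1}{2}\left({\bf F}^T {\bf F} - {\bf I}\right),
\ee
where $\lambda$ and $\mu$ are the Lam\'e constants and ${\bf I}$ is the identity tensor. Since ${\bf E}({\bf Q}{\bf F}) = {\bf E}({\bf F})$ for every ${\bf Q} \in \mbox{SO}(3)$, this $W$ is objective; it is also isotropic, because ${\bf E}({\bf F}{\bf Q}) = {\bf Q}^T {\bf E}({\bf F}) {\bf Q}$ and both invariants ${\mbox{tr}}\,{\bf E}$ and ${\mbox{tr}}({\bf E}^2)$ are unchanged under this conjugation. This is precisely the structure that the canonical dual transformation exploits.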
The key step of this transformation is the introduction of a geometrically admissible strain measure $\mbox{\boldmath$\xi$} = {\bf \Lambda} ({\mbox{\boldmath$\chi$}}):{\cal X}_a \rightarrow {\cal E}_a \subset {\mathbb R}^{3\times 3}$ and a {\em canonical function} $U(\mbox{\boldmath$\xi$}) : {\cal E}_a \rightarrow {\mathbb R} $ such that the nonconvex stored energy $W({\bf F})$ can be written in the canonical form $W(\nabla {\mbox{\boldmath$\chi$}} ) = U({\bf \Lambda}({\mbox{\boldmath$\chi$}}))$. According to \cite{gao-dual00}, a convex differentiable real-valued function $U({\mbox{\boldmath$\xi$}}) $ is said to be canonical on its domain ${\cal E}_a$ if the duality relation ${\mbox{\boldmath$\xi$}}^* = \nabla U({\mbox{\boldmath$\xi$}}) : {\cal E}_a \rightarrow {\cal E}_a^* $ is invertible, so that the conjugate function $U^*({\mbox{\boldmath$\xi$}}^*)$ of $U(\mbox{\boldmath$\xi$})$ can be defined uniquely by the Legendre transformation \eb U^*({\mbox{\boldmath$\xi$}}^*) = \{ {\mbox{\boldmath$\xi$}} : {\mbox{\boldmath$\xi$}}^* - U({\mbox{\boldmath$\xi$}}) | \; {\mbox{\boldmath$\xi$}}^* = \nabla U({\mbox{\boldmath$\xi$}}) \;\; \forall {\mbox{\boldmath$\xi$}} \in {\cal E}_a \}. \ee By the theory of convex analysis, it is easy to prove that the following canonical duality relations hold on ${\cal E}_a \times {\cal E}^*_a$ \eb {\mbox{\boldmath$\xi$}}^* = \nabla U({\mbox{\boldmath$\xi$}}) \; \Leftrightarrow \;\; {\mbox{\boldmath$\xi$}} = \nabla U^*({\mbox{\boldmath$\xi$}}^*) \; \Leftrightarrow \; U({\mbox{\boldmath$\xi$}}) + U^*({\mbox{\boldmath$\xi$}}^*) = {\mbox{\boldmath$\xi$}} : {\mbox{\boldmath$\xi$}}^* \ee and the pair $({\mbox{\boldmath$\xi$}}, {\mbox{\boldmath$\xi$}}^*)$ is called the {\em canonical dual pair } on ${\cal E}_a \times {\cal E}_a^*$. Thus, replacing $W(\nabla {\mbox{\boldmath$\chi$}})$ in the total potential energy $\Pi({\mbox{\boldmath$\chi$}})$ by its canonical form $W(\nabla {\mbox{\boldmath$\chi$}}) = U({\bf \Lambda}({\mbox{\boldmath$\chi$}}))$, and taking the body force to be constant, so that $\phi({\mbox{\boldmath$\chi$}})=-\mathbf{f}\cdot{\mbox{\boldmath$\chi$}}$, the minimum potential energy variational problem (\ref{pprobm}) can be written in the following canonical form \eb ({\cal{P}}): \;\; \min_{{\mbox{\boldmath$\chi$}} \in {\cal X}_a} \left \{ \Pi({\mbox{\boldmath$\chi$}}) = \int_\Omega [U({\bf \Lambda}({\mbox{\boldmath$\chi$}})) - \rho {\mbox{\boldmath$\chi$}} \cdot \mathbf{f}] \,\mbox{d}\Oo - \int_{\Gamma_t} {\mbox{\boldmath$\chi$}} \cdot {\bf t} \,\mbox{d} \Gamma \right\}. \ee Furthermore, in terms of $\mbox{\boldmath$\varsigma$} = {\mbox{\boldmath$\xi$}}^*$ and by the Fenchel-Young equality \[ U({\bf \Lambda}({\mbox{\boldmath$\chi$}})) = {\bf \Lambda}({\mbox{\boldmath$\chi$}})\! :\! \mbox{\boldmath$\varsigma$} - U^*(\mbox{\boldmath$\varsigma$}), \] the so-called \emph{total complementary energy functional} \cite{gao-strang89a} $\Xi: {\cal X}_a \times {\cal E}^*_a \rightarrow {\mathbb R}$ can be written, in the present context, as \begin{equation} \Xi({\mbox{\boldmath$\chi$}}, \mbox{\boldmath$\varsigma$}) = \int_{\Omega} \left[{\bf \Lambda}({\mbox{\boldmath$\chi$}})\! :\! \mbox{\boldmath$\varsigma$} - U^*(\mbox{\boldmath$\varsigma$}) - \rho {\mbox{\boldmath$\chi$}} \cdot \mathbf{f} \right] {\rm d} \Omega - \int_{\Gamma_t} {\mbox{\boldmath$\chi$}} \cdot {\bf t} {\rm d} \Gamma.
\end{equation} For a given statically admissible field ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$, this total complementary functional can be written in the following form \eb \Xi_{\mbox{\boldmath$\tau$}} ({\mbox{\boldmath$\chi$}}, \mbox{\boldmath$\varsigma$}) = \int_{\Gamma_\chi} {\mbox{\boldmath$\chi$}}_0 \cdot {\mbox{\boldmath$\tau$}} \cdot {\bf n} {\rm d} \Gamma + \int_{\Omega} \left[{\bf \Lambda}({\mbox{\boldmath$\chi$}})\! :\! \mbox{\boldmath$\varsigma$} - U^*(\mbox{\boldmath$\varsigma$}) - (\nabla {\mbox{\boldmath$\chi$}}): {\mbox{\boldmath$\tau$}} \right] \,\mbox{d}\Oo . \ee For a given $\mbox{\boldmath$\varsigma$} \in {\cal E}^*_a$, the \emph{canonical dual functional} $\Pi^d(\mbox{\boldmath$\varsigma$})$ is then defined by \begin{equation} \Pi^d(\mbox{\boldmath$\varsigma$}) = \left\{ \Xi({\mbox{\boldmath$\chi$}}, \mbox{\boldmath$\varsigma$}) \;\big | \; \delta_{{\mbox{\boldmath$\chi$}}} \Xi({\mbox{\boldmath$\chi$}}, \mbox{\boldmath$\varsigma$}) =0\right\} = F^{{\bf \Lambda}}(\mbox{\boldmath$\varsigma$}) - \int_\Omega U^*(\mbox{\boldmath$\varsigma$}) {\rm d} \Omega, \end{equation} where $F^{{\bf \Lambda}}(\mbox{\boldmath$\varsigma$})$ is defined by the so-called ${\bf \Lambda}$-conjugate transformation \cite{gao-dual00, gao-optm03} \begin{equation} F^{{\bf \Lambda}}(\mbox{\boldmath$\varsigma$}) ={\rm sta} \left\{ \int_{\Omega} [{\bf \Lambda}({\mbox{\boldmath$\chi$}})\! :\! \mbox{\boldmath$\varsigma$} - \rho {\mbox{\boldmath$\chi$}} \cdot \mathbf{f} ] {\rm d} \Omega - \int_{\Gamma_t} {\mbox{\boldmath$\chi$}} \cdot {\bf t} {\rm d} \Gamma \; \big| \;\; {\mbox{\boldmath$\chi$}} \in {\cal X}_a \right\}, \end{equation} with sta indicating the stationary value at fixed $\mbox{\boldmath$\varsigma$} \in {\cal E}^*_a$. In terms of ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$, we have the following form \begin{equation} F^{{\bf \Lambda}}_{\mbox{\boldmath$\tau$}} (\mbox{\boldmath$\varsigma$}) = \int_{\Gamma_\chi} {\mbox{\boldmath$\chi$}}_0 \cdot {\mbox{\boldmath$\tau$}} \cdot {\bf n} {\rm d} \Gamma + {\rm sta} \left\{ \int_{\Omega} [{\bf \Lambda}({\mbox{\boldmath$\chi$}})\! :\! \mbox{\boldmath$\varsigma$} - (\nabla {\mbox{\boldmath$\chi$}} ): {\mbox{\boldmath$\tau$}} ] {\rm d} \Omega \; \big| \;\; {\mbox{\boldmath$\chi$}} \in {\cal X}_a \right\}. \end{equation} In finite deformation theory, \eb \Pi^d_{\mbox{\boldmath$\tau$}}(\mbox{\boldmath$\varsigma$}) = F^{{\bf \Lambda}}_{\mbox{\boldmath$\tau$}} (\mbox{\boldmath$\varsigma$}) - \int_\Omega U^*(\mbox{\boldmath$\varsigma$}) \,\mbox{d}\Oo \ee is also called the \emph{pure complementary energy functional}, which was first proposed in \cite{gao-mrc99}. \begin{thm}[Complementary-Dual Variational Principle \cite{gao-mecc99}] For a given statically admissible field ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$, the following statements are equivalent: \begin{enumerate} \item $(\bar{\mbox{\boldmath$\chi$}}, \bar{\mbox{\boldmath$\varsigma$}})$ is a critical point of $\Xi_{\mbox{\boldmath$\tau$}}({\mbox{\boldmath$\chi$}}, \mbox{\boldmath$\varsigma$})$; \item $\bar{\mbox{\boldmath$\chi$}}$ is a critical point of $\Pi({\mbox{\boldmath$\chi$}})$; \item $\bar{\mbox{\boldmath$\varsigma$}}$ is a critical point of $\Pi^d_{\mbox{\boldmath$\tau$}}(\mbox{\boldmath$\varsigma$})$. 
\end{enumerate} Moreover, we have \eb \Pi(\bar{\mbox{\boldmath$\chi$}}) = \Xi (\bar{\mbox{\boldmath$\chi$}}, \bar{\mbox{\boldmath$\varsigma$}}) = \Xi_{\mbox{\boldmath$\tau$}}(\bar{\mbox{\boldmath$\chi$}}, \bar{\mbox{\boldmath$\varsigma$}})= \Pi^d_{\mbox{\boldmath$\tau$}}(\bar{\mbox{\boldmath$\varsigma$}}). \ee \end{thm} This theorem shows that finding a critical solution of the nonconvex total potential $\Pi({\mbox{\boldmath$\chi$}})$ is equivalent to finding a critical point of its canonical dual functional $\Pi^d_{\mbox{\boldmath$\tau$}} (\mbox{\boldmath$\varsigma$})$. For a given $ {\mbox{\boldmath$\tau$}} \in {\cal T}_a$, different choices of the geometrical measure ${\bf \Lambda}({\mbox{\boldmath$\chi$}})$ lead to different, but equivalent, functionals $\Pi^d_{\mbox{\boldmath$\tau$}}(\mbox{\boldmath$\varsigma$})$ on a subset ${\cal S}_a \subset {\cal E}^*_a$. In finite deformation theory, the canonical duality relation is also known as Hill work-conjugacy and the canonical function $U({\mbox{\boldmath$\xi$}})$ is called the strain energy density. According to Hill, for a given hyper-elastic material there exists a class of strain measures ${\mbox{\boldmath$\xi$}}$ and associated canonical functions $U({\mbox{\boldmath$\xi$}})$ such that the associated stress can be defined uniquely by the canonical duality relation ${\mbox{\boldmath$\xi$}}^* = \nabla U({\mbox{\boldmath$\xi$}})$. There are many canonical strain measures in finite elasticity and many of them belong to the well-known Hill-Seth strain family \[ {\bf E}^{(\eta)} = \frac{1}{2 \eta} [ {\bf C}^{\eta} - {\bf I} ], \] where ${\bf I} $ is the identity tensor in ${\mathbb R}^{3\times 3}$ and $\eta $ is a real number. The canonical duality theory and the pure complementary energy principle for general strain measures have been studied in \cite{gao-dual00}. In this paper, we consider only the Green-St Venant strain tensor ${\bf E}^{(1)}$, simply denoted by ${\bf E} $. In this case, the geometrical operator \eb {\bf E} = \Lam({\mbox{\boldmath$\chi$}}) = \frac{1}{2} [ (\nabla {\mbox{\boldmath$\chi$}})^T (\nabla {\mbox{\boldmath$\chi$}}) - {\bf I} ] : {\cal X}_a \rightarrow {\cal E}_a \ee is a quadratic operator and its range can be defined by \eb {\cal E}_a = \{ {\bf E} \in {\cal L}^{p/2}(\Omega; {\mathbb R}^{3 \times 3}) | \; {\bf E} = {\bf E}^T, \; (2 {\bf E} + {\bf I}) \succ 0 \}. \ee We assume that the associated strain energy density $U({\bf E}):{\cal E}_a \rightarrow {\mathbb R}$ is convex, so that the conjugate stress $\mbox{\boldmath$\varsigma$}$ of ${\bf E}$, denoted by ${\bf T}$, can be defined uniquely by the constitutive law \eb {\bf T} = \nabla U({\bf E}) :{\cal E}_a \rightarrow {\cal E}^*_a . \ee This associated stress ${\bf T} $ is the well-known second Piola-Kirchhoff stress, which is well-defined on ${\cal E}^*_a = \{ {\bf T} \in {\cal L}^{p/(p-2)}(\Omega; {\mathbb R}^{3\times 3} )| \;\; {\bf T} = {\bf T}^T \}$.
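As a concrete illustration (a standard special case, stated here only for orientation and not required for the general development), for the St Venant-Kirchhoff stored energy $U({\bf E}) = \frac{\lambda}{2}({\mbox{tr}}\,{\bf E})^2 + \mu\,{\mbox{tr}}({\bf E}^2)$ treated in Section \ref{stvenant}, the canonical function is a convex quadratic form (assuming $\mu > 0$ and $3\lambda + 2\mu > 0$), so that
\eb
{\bf T} = \nabla U({\bf E}) = \lambda ({\mbox{tr}}\, {\bf E})\, {\bf I} + 2 \mu {\bf E}, \qquad {\bf E} = \nabla U^*({\bf T}) = \frac{1}{2\mu}\left[ {\bf T} - \frac{\lambda}{3\lambda + 2\mu} ({\mbox{tr}}\, {\bf T})\, {\bf I} \right],
\ee
and the Legendre conjugate is obtained explicitly as
\eb
U^*({\bf T}) = \frac{1}{4\mu}\left[ {\mbox{tr}}({\bf T}^2) - \frac{\lambda}{3\lambda + 2\mu} ({\mbox{tr}}\, {\bf T})^2 \right],
\ee
with which the canonical duality relations given above can be verified directly; in particular, the map ${\bf E} \mapsto {\bf T}$ is globally invertible for this material.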
In this case, the pure complementary energy $\Pi^d_{\mbox{\boldmath$\tau$}} $ has the form \begin{equation} \Pi^d_{\mbox{\boldmath$\tau$}} ({\bf T}) = \int_{\Gamma_\chi} {\mbox{\boldmath$\chi$}}_0 \cdot {\mbox{\boldmath$\tau$}} \cdot {\bf n} {\rm d} \Gamma -\int_\Omega \left[ \frac{1}{2}{\mbox{tr}} ( {\mbox{\boldmath$\tau$}} \cdot {\bf T}^{-1} \cdot {\mbox{\boldmath$\tau$}} + {\bf T} ) + U^*({\bf T}) \right]{\rm d} \Omega , \end{equation} which is well-defined on the canonical dual space \eb {\cal S}_a = \{ {\bf T} \in {\cal E}^*_a | \;\; {\mbox{tr}}( {\mbox{\boldmath$\tau$}} \cdot {\bf T}^{-1} \cdot {\mbox{\boldmath$\tau$}}) \in {\cal L}^1(\Omega; {\mathbb R})\; \; \forall {\mbox{\boldmath$\tau$}} \in {\cal T}_a \}. \ee Therefore, the canonical dual problem is to find a critical point $\bar{\bf T} \in {\cal S}_a$ such that \eb ({\cal{P}}^d): \;\; \Pi^d_{\mbox{\boldmath$\tau$}}(\bar{\bf T}) = {\rm sta} \{ \Pi^d_{\mbox{\boldmath$\tau$}}({\bf T}) | \; {\bf T} \in {\cal S}_a \} . \ee \begin{thm}[Analytical Solution Form \cite{gao-dual00}]\label{thm-ana} For a given ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$, if $ \bar{\bf T} $ is a critical point of $\Pi^d_{\mbox{\boldmath$\tau$}}({\bf T})$, then along any path from $\mathbf{X}_0 \in \Gamma_\chi$ to $\mathbf{X} \in \Omega$, the deformation defined by \begin{equation} \bar{{\mbox{\boldmath$\chi$}}} = \int_{\mathbf{X}_0}^{\mathbf{X}} {\mbox{\boldmath$\tau$}} \cdot \bar{\bf T}^{-1} \cdot {\rm d} \mathbf{X} + {\mbox{\boldmath$\chi$}}_0(\mathbf{X}_0) \label{eq-anasolu} \end{equation} is a critical solution to $({\cal{P}})$. Moreover, if \eb \nabla \times ({\mbox{\boldmath$\tau$}} \cdot \bar{\bf T}^{-1}) = \mathbf{0}, \label{eq-compat} \ee then $\bar{{\mbox{\boldmath$\chi$}}} $ is a closed-form solution to the boundary value problem (BVP) (\ref{eq-bvp1}). \end{thm} The proof of this theorem can be found in \cite{gao-ima98,gao-mrc99,gao-mecc99}. In fact, the criticality condition $\delta \Pi^d_{\mbox{\boldmath$\tau$}}({\bf T})= 0$ leads to the following dual tensor equation: \begin{equation} {\bf T} \cdot \left[ {\bf I} + 2 (\nabla U^*({\bf T})) \right] \cdot {\bf T} = {{\mbox{\boldmath$\tau$}}}^{\rm T} \cdot{\mbox{\boldmath$\tau$}} , \label{cdtevk} \end{equation} which is equivalent to \[ \nabla U^* (\bar{\bf T}) = \frac{1}{2} \left( ({\mbox{\boldmath$\tau$}} \cdot \bar{\bf T}^{-1})^T {\mbox{\boldmath$\tau$}} \cdot \bar{\bf T}^{-1} - {\bf I} \right) . \] This is actually the constitutive law ${\bf E} = {\bf \Lambda}(\bar{\mbox{\boldmath$\chi$}}) = \frac{1}{2} [{\bf F}^T {\bf F} - {\bf I}] = \nabla U^*(\bar{\bf T})$ subject to ${\bf F} = {\mbox{\boldmath$\tau$}} \cdot \bar{\bf T}^{-1}$. Therefore, if the compatibility condition $\nabla\times{\bf F}=\mathbf{0}$, in index notation \[ \frac{\partial F_{i\alpha}}{\partial X_\beta } = \frac{\partial F_{i\beta}}{ \partial X_\alpha} , \] holds, then ${\bf F}$ is the deformation gradient and $\bar{\mbox{\boldmath$\chi$}}$ is a solution to $(BVP)$. \begin{rem}[PDE $\Leftrightarrow$ Algebraic Equation] { Theorem \ref{thm-ana} shows that, by the pure complementary energy principle, the nonlinear partial differential equation $(BVP)$ is equivalently converted to the canonical dual tensor equation (\ref{cdtevk}), which can be solved to obtain the stress field $\bar{\bf T}$ for certain materials. From equation (\ref{cdtevk}) we know that ${\bf T} = {\bf 0} $ if ${\mbox{\boldmath$\tau$}} = {\bf 0}$.
Therefore, although ${\bf T}^{-1}$ appears in $\Pi^d_{{\mbox{\boldmath$\tau$}}}({\bf T})$, this pure complementary energy is well-defined on ${\cal S}_a$. The equation (\ref{eq-anasolu}) presents an analytical solution form to the boundary value problem in terms of the canonical dual stress field $\bar{\bf T}$ and the statically admissible ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$. Of course, this is purely formal and in general it is not easy to obtain the solution in practice unless the deformation compatibility condition (\ref{eq-compat}) holds. It has been assumed here that the relation between ${\bf T}$ and ${\bf E}$ is invertible. This certainly holds in a neighborhood of the (stress-free) reference configuration since the canonical strain energy $U({\bf E})$ is convex in such a neighborhood. It is a reasonable assumption to extend this to a sufficiently large domain that includes deformations of practical interest. Finite element implementations of nonlinear elasticity are usually based on the variables ${\bf T}$ and ${\bf E}$ and the associated tangent tensor $\partial{\bf T}/\partial{\bf E} = \nabla^2 U({\bf E})$, which is assumed to be positive definite. It is always possible to select forms of the strain-energy function $W$ such that this is the case, although the possibility of its failure for particular materials is not in general ruled out.} \end{rem} \medskip In terms of the deformation ${\mbox{\boldmath$\chi$}} \in {\cal X}_a$ and the second Piola-Kirchhoff stress ${\bf T} \in {\cal E}^*_a$, the total complementary functional $\Xi({\mbox{\boldmath$\chi$}},{\bf T})$ can be written as \eb \Xi_{\mbox{\boldmath$\tau$}}({\mbox{\boldmath$\chi$}}, {\bf T}) = \int_{\Omega} \left[{\bf E}({\mbox{\boldmath$\chi$}})\! :\! {\bf T} - U^*({\bf T}) - (\nabla {\mbox{\boldmath$\chi$}}) : {\mbox{\boldmath$\tau$}} \right] {\rm d} \Omega + \int_{\Gamma_\chi} {\mbox{\boldmath$\chi$}}_0 \cdot {\mbox{\boldmath$\tau$}} \cdot {\bf n} {\rm d} \Gamma \end{equation} which is actually the well-known Hellinger-Reissner energy if the first Piola-Kirchhoff stress is replaced by the external force field. From the nonlinear canonical dual tensor equation (\ref{cdtevk}) we know that for a given ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$, the pure complementary energy $\Pi^d_{\mbox{\boldmath$\tau$}}({\bf T})$ may have multiple critical points. In order to identify the global extremum, we need to introduce the following subspaces: \eb {\cal S}^+_a = \{ {\bf T} \in {\cal S}_a | \;\; {\bf T} \succ 0 \}, \;\; {\cal S}^-_a = \{ {\bf T} \in {\cal S}_a | \;\; {\bf T} \prec 0 \}. \ee \begin{thm} \label{thm-tri} Suppose for a given ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$, the pair $(\bar{\mbox{\boldmath$\chi$}}, \bar{\bf T})$ is an isolated critical point of $\Xi_{\mbox{\boldmath$\tau$}}({\mbox{\boldmath$\chi$}}, {\bf T})$. If $\bar{\bf T} \in {\cal S}^+_a$, then $\bar{\mbox{\boldmath$\chi$}}$ is a global minimizer of $\Pi({\mbox{\boldmath$\chi$}})$ on $ {\cal X}_a$ if and only if $\bar{\bf T} $ is a global maximizer of $\Pi^d_{\mbox{\boldmath$\tau$}}({\bf T})$ on ${\cal S}^+_a$, i.e., \eb \label{sadminmax} \Pi(\bar{\mbox{\boldmath$\chi$}}) = \min_{{\mbox{\boldmath$\chi$}} \in {\cal X}_a} \Pi({\mbox{\boldmath$\chi$}}) \;\;\Leftrightarrow \;\; \max_{{\bf T} \in {\cal S}^+_a} \Pi^d_{\mbox{\boldmath$\tau$}}({\bf T}) = \Pi^d_{\mbox{\boldmath$\tau$}}(\bar{\bf T}).
\ee If $\bar{\bf T} \in {\cal S}^-_a$, then $\bar{\mbox{\boldmath$\chi$}}$ is a local maximizer of $\Pi({\mbox{\boldmath$\chi$}})$ if and only if $\bar{\bf T}$ is a local maximizer of $\Pi^d_{\mbox{\boldmath$\tau$}}({\bf T})$, i.e., on a neighborhood ${\cal X}_o \times {\cal S}_o \subset {\cal X}_a \times {\cal S}^-_a$, \eb \Pi(\bar{\mbox{\boldmath$\chi$}}) = \max_{{\mbox{\boldmath$\chi$}} \in {\cal X}_o} \Pi({\mbox{\boldmath$\chi$}}) \;\;\Leftrightarrow \;\; \max_{{\bf T} \in {\cal S}_o} \Pi^d_{\mbox{\boldmath$\tau$}}({\bf T}) = \Pi^d_{\mbox{\boldmath$\tau$}}(\bar{\bf T}). \label{eq-dobmax} \ee If $\bar{\bf T} \in {\cal S}^-_a$ and $\nabla^2_{{\bf F}} W(\nabla \bar{\mbox{\boldmath$\chi$}}) \succ 0 $, then $ \bar{\mbox{\boldmath$\chi$}}$ is a local minimizer of $\Pi({\mbox{\boldmath$\chi$}})$. \end{thm} \begin{rem}[The Complementary Gap Function and Triality Theory] $\;$ \newline Theorem \ref{thm-tri} shows that the extremality of the primal solution ${\mbox{\boldmath$\chi$}}$ depends on its canonical dual solution ${\bf S}$. This result was first discovered by Gao and Strang in 1989 \cite{gao-strang89a}, who proved that $\bar{\mbox{\boldmath$\chi$}} (\bar{\bf S})$ is a global minimizer of $\Pi({\mbox{\boldmath$\chi$}})$ if the complementary gap function satisfies \eb G_{ap}({\mbox{\boldmath$\chi$}},\bar{\bf S}) = \int_\Omega \frac{1}{2} [(\nabla {\mbox{\boldmath$\chi$}})^T (\nabla{\mbox{\boldmath$\chi$}}) + {\bf I}]:\bar{\bf S} \,\mbox{d}\Oo \ge 0 \;\; \forall {\mbox{\boldmath$\chi$}} \in {\cal X}_a . \label{eq-gapp} \ee Since $G_{ap}({\mbox{\boldmath$\chi$}},\bar{\bf S})$ is quadratic in ${\mbox{\boldmath$\chi$}}$, this gap function is positive for any given ${\mbox{\boldmath$\chi$}} \in {\cal X}_a$ if $\bar{\bf S} \succeq 0 $. Replacing ${\bf F} = \nabla {\mbox{\boldmath$\chi$}}$ by ${\bf F} = {\mbox{\boldmath$\tau$}} \cdot {\bf S}^{-1}$, this gap function can be written as the so-called pure gap function \eb G_{ap}({\mbox{\boldmath$\chi$}}({\bf S}), {\bf S}) = \int_\Omega \frac{1}{2} {\mbox{tr}} ({\mbox{\boldmath$\tau$}} \cdot {\bf S}^{-1} \cdot {\mbox{\boldmath$\tau$}} + {\bf S}) \,\mbox{d}\Oo , \ee which is a main term in the pure complementary energy $\Pi^d_{\mbox{\boldmath$\tau$}}({\bf S}) $ in addition to $U^*({\bf S})$. Comparing $\Pi^d_{\mbox{\boldmath$\tau$}}({\bf S}) $ with $\Pi^\sharp({\mbox{\boldmath$\tau$}})$ given by (\ref{eq-Pic}), we can understand that this gap function not only recovers the duality gap in the Fenchel-Moreau duality theory, but also provides a global extremality condition for the nonconvex variational problem $({\cal{P}})$. To see this in detail, let us consider the canonical transformation $W({\bf F}) = U({\bf E}({\bf F}))$. By the chain rule we have \eb \frac{\partial^2 W({\bf F})}{\partial F^i_{\alpha} \partial F^j_\beta} = \delta^{ij} S_{{\alpha}\beta} + \sum_{\theta, \nu = 1}^3 F^i_\theta H_{\theta {\alpha}\beta \nu} F^j_\nu, \label{eq-hessian} \ee where ${\bf H} = \{ H_{\theta {\alpha}\beta \nu}\} = \nabla^2 U({\bf E})$. By the convexity of the canonical function $U({\bf E})$, we have ${\bf H} \succ 0 $. Therefore, if ${\bf S} = \{ S_{{\alpha}\beta} \}\in {\cal S}_a^+ $, the Hessian $\nabla^2 W({\bf F}) \succ 0$ and, by Gao and Strang \cite{gao-strang89a}, the associated deformation field ${\mbox{\boldmath$\chi$}}$ is a global minimizer of $\Pi({\mbox{\boldmath$\chi$}})$.
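For completeness, a short sketch of the step behind (\ref{eq-hessian}): with ${\bf E}({\bf F}) = \frac{1}{2}({\bf F}^T {\bf F} - {\bf I})$ and ${\bf S} = \nabla U({\bf E})$ symmetric, one has
\[
\frac{\partial E_{\theta\nu}}{\partial F^i_{\alpha}} = \frac{1}{2}\left( \delta_{\theta {\alpha}} F^i_\nu + F^i_\theta \delta_{\nu {\alpha}} \right),
\qquad
\frac{\partial W({\bf F})}{\partial F^i_{\alpha}} = S_{\theta\nu}\, \frac{\partial E_{\theta\nu}}{\partial F^i_{\alpha}} = F^i_\theta S_{\theta {\alpha}} ,
\]
and differentiating once more, with $H_{\theta {\alpha} \beta \nu} = \partial S_{\theta {\alpha}}/\partial E_{\beta\nu}$ and the symmetries of ${\bf H}$, reproduces (\ref{eq-hessian}).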
The statement (\ref{sadminmax}) shows that the nonconvex minimization problem $({\cal{P}})$ is equivalent to a concave maximization dual problem over a convex space ${\cal S}^+_a$, i.e., \eb \max \{ \Pi^d_{\mbox{\boldmath$\tau$}}({\bf T}) | \;\; {\bf T} \in {\cal S}_a^+ \}, \ee which is much easier than the nonconvex primal problem $({\cal{P}})$. The global optimality condition ${\bf S} \in {\cal S}_a^+$ is a stronger form of Gao and Strang's positive gap condition (\ref{eq-gapp}). Subsequently, in a study of post-buckling analysis for a nonlinear beam theory, it was found that if the dual solution ${\bar{{\bf T}}} $ is negative definite in the domain $\Omega$, the solution $\bar{{\mbox{\boldmath$\chi$}}}$ could be either a local minimizer or a local maximizer of the total potential energy. To see this, we substitute ${\bf F} = {\mbox{\boldmath$\tau$}} \cdot {\bf T}^{-1}$ into (\ref{eq-hessian}) to obtain \eb \frac{\partial^2 W({\bf F})}{\partial F^i_{\alpha} \partial F^j_\beta} = \delta^{ij} S_{{\alpha}\beta} + \sum_{\theta, \nu, \delta, \lambda = 1}^3 \tau^i_\theta S^{-1}_{\theta \delta} H_{\delta {\alpha}\beta \nu} S^{-1}_{\nu\lambda}\tau^j_{\lambda} \ee which shows that even if ${\bf T} \prec 0$, the Hessian matrix $\nabla^2 W({\bf F})$ could be either positive or negative definite, depending on the eigenvalues of ${\bf T} \in {\cal S}^-_a$. Thus, in addition to the double-max duality (\ref{eq-dobmax}), we have the so-called double-min duality \eb \Pi(\bar{\mbox{\boldmath$\chi$}}) = \min_{{\mbox{\boldmath$\chi$}} \in {\cal X}_o} \Pi({\mbox{\boldmath$\chi$}}) \;\;\Leftrightarrow \;\; \min_{{\bf T} \in {\cal S}_o} \Pi^d_{\mbox{\boldmath$\tau$}}({\bf T}) = \Pi^d_{\mbox{\boldmath$\tau$}}(\bar{\bf T}), \ee which holds under certain conditions (see \cite{gao-amma03}). For this reason, a so-called triality theory was first proposed in the post-buckling analysis of a large-deformation beam model \cite{gao-amr97}, and then in general nonconvex mechanics \cite{gao-mecc99,gao-dual00}. This triality theory reveals an important fact in nonconvex analysis: for a given statically admissible field ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$, if the canonical dual equation (\ref{cdtevk}) has multiple solutions $\{{\bf S}_k\}$ in a subset $\Omega_o \subset \Omega$, then the boundary value problem $(BVP)$ could have an infinite number of solutions $\{ {\mbox{\boldmath$\chi$}}_k ({\bf X}) \}$ in $\Omega$. The well-known Legendre-Hadamard (L-H) condition is only a necessary condition for a local minimizer, while the triality theory can identify not only the global minimizers, but also both local minimizers and local maximizers. It is known that an elliptic equation corresponds to a convex variational problem. If the boundary-value problem (\ref{eq-bvp1}) has multiple solutions $\{ {\mbox{\boldmath$\chi$}}_k ({\bf X}) \}$ at one material point ${\bf X} \in \Omega$, the total potential $\Pi({\mbox{\boldmath$\chi$}})$ is not convex and the operator $A({\mbox{\boldmath$\chi$}}) = \nabla \cdot [\nabla_{\bf F} W(\nabla {\mbox{\boldmath$\chi$}}) ] $ may not be elliptic at ${\bf X} \in \Omega$ even if the L-H condition holds at certain ${\mbox{\boldmath$\chi$}}_k ({\bf X}) $. \end{rem} The pure complementary energy principle and triality theory play a fundamental role not only in nonconvex analysis, but also in computational science and global optimization (see \cite{gao-amma03,gao-mms04,gao-cace09,gao-review14}). \section{Application to St Venant-Kirchhoff Material}\label{stvenant} For St.
Venant-Kirchhoff material, the canonical energy function $U({\bf E})$ takes the simplest form: \eb U(\mathbf{E})=\mu{\mbox{tr}}(\mathbf{E}^2)+\frac{1}{2}\lambda({\mbox{tr}}\mathbf{E})^2. \ee The second Piola-Kirchhoff stress depends linearly on the Green-St Venant strain via Hooke's law: \eb {\bf T} = \nabla U({\bf E}) = 2\mu {\bf E} +\lambda({\mbox{tr}}{\bf E} )\mathbf{I} = {\bf H} : {\bf E} , \ee where ${\bf H}$ is the Hooke tensor for St Venant-Kirchhoff material. The complementary energy is \eb U^*({\bf T})=\frac{1}{4\mu}{\mbox{tr}}({\bf T}^2)-\frac{\lambda}{4\mu(3\lambda+2\mu)}({\mbox{tr}}{\bf T})^2, \ee and hence \eb {\bf E} = \nabla U^*({\bf T}) = \frac{1}{2\mu}{\bf T} -\frac{\lambda}{2\mu(3\lambda+2\mu)}({\mbox{tr}}{\bf T})\mathbf{I}\equiv {\bf H}^{-1}:{\bf T} . \ee By the canonical dual tensor equation (\ref{cdtevk}), we have \eb {\bf T}^2+2{\bf T}( {\bf H}^{-1}: {\bf T}){\bf T} = {\bf T}^2+\frac{1}{\mu}{\bf T}^3 -\frac{\lambda}{\mu(3\lambda+2\mu)}({\mbox{tr}}{\bf T}){\bf T}^2 = {\mbox{\boldmath$\tau$}}^T {\mbox{\boldmath$\tau$}}. \ee The diagonalization of this tensor equation leads to the following coupled cubic nonlinear algebraic system: \eb T_i^2+\frac{1}{\mu} T_i^3-\frac{\lambda}{\mu(3\lambda+2\mu)}(T_1+ T_2+ T_3) T_i^2 = \tau_i^2 \;\; \quad i=1,2,3 . \label{mainsystem} \ee For convenience, we make the following substitutions in (\ref{mainsystem}): \[ T_i=\mu \varsigma_i, \;\; \tau_i^2=\mu^2 \sig_i, \; \; i=1,2,3, \] and $k = \frac{\lam}{3 \lam + 2 \mu} < 1/3$ (due to $\mu > 0$). So, the system (\ref{mainsystem}) can be written as follows \eb \label{mainsystem1} \varsigma_i^3+ \varsigma_i^2- k (\varsigma_1+ \varsigma_2+ \varsigma_3) \varsigma_i^2 = \sig_i , \;\; i=1,2,3. \ee \subsection{Auxiliary Equation} In this section we will study solutions of the following equation: \begin{equation}\label{meq} G(\varsigma,q,\sig)=\varsigma^3 + (1-k q )\varsigma^2 - \sig=0 , \end{equation} where $\sig > 0$, $0< k <\frac{1}{3}$, and $q$ is an arbitrary real number. Also, since $\sig>0$, we can assume that $\varsigma\neq 0$. Since the parameter $q$ in this section is assumed to be independent of $\varsigma$, the following results are similar to those for the one-dimensional nonlinear elasticity problems studied by Gao \cite{gao-dual00,gao-na00} and by Gao and Ogden \cite{gao-ogden-qjmam08}. \begin{lem}\label{l1}If $\varsigma_1,\varsigma_2,\varsigma_3$ are solutions of the equations $G(\varsigma,q,\sig_1)=0$, $G(\varsigma,q,\sig_2)=0$, $G(\varsigma,q,\sig_3)=0$ correspondingly, and $\varsigma_1+\varsigma_2+\varsigma_3=q$, then $\varsigma_1,\varsigma_2,\varsigma_3$ satisfy (\ref{mainsystem1}). \end{lem} \noindent {\bfseries Proof}. Obvious. \hfill $\Box$ \begin{lem}\label{l2}Equation (\ref{meq}) has exactly one positive solution. It has negative solutions iff $$q\leq \frac{1}{k}(1-3\sqrt[3]{\frac{\sig}{4}}) .$$ There is only one negative solution if and only if $q= \frac{1}{k}(1-3\sqrt[3]{\frac{\sig}{4}})$. \end{lem} \noindent {\bfseries Proof}. To check that there is exactly one positive root one can apply Descartes' rule of signs. To prove the rest, let us fix $q,\sig$ and notice that $G(\varsigma,q,\sig)=0$ has negative solutions iff it has at least two different solutions. This will happen iff the values of the function at its local minimum and local maximum have different signs. The extrema of $G$ are at $\varsigma_0=-\frac{2}{3}(1-kq)$ and 0. Since the value of $G$ at 0 is $-\sig < 0$, we determine when $G(\varsigma_0,q,\sig)\geq 0$.
Solving this inequality we get $q\leq \frac{1}{k}(1-3\sqrt[3]{\frac{\sig}{4}})$. \hfill $\Box$ \begin{cor}\label{cor2l2} The equation $G(\varsigma,0,\sig)=0$ has negative solution(s) iff \[ \sig\leq \frac{4}{27} . \] \end{cor} \noindent {\bfseries Proof}. Apply Lemma \ref{l2} to $q=0$. \hfill $\Box$ \begin{lem}\label{l3} Let us fix $\sig>0$ and assume that $\varsigma_0,q_0$ satisfy (\ref{meq}), and $\varsigma_0\neq 0$, $\varsigma_0\neq -\sqrt[3]{2\sig}$. Then there exists a unique continuously differentiable function $\varsigma(q)$, such that $\varsigma(q_0)=\varsigma_0$, the pair $(\varsigma(q),q)$ satisfies (\ref{meq}), and $$\frac{d\varsigma}{dq}=\frac{k\varsigma^3}{\varsigma^3+2\sig} .$$ \noindent Moreover, there are three possibilities (``branches") for $\varsigma(q)$: \begin{verse} (a) If $\varsigma_0 \in (-\infty,-\sqrt[3]{2\sig})$, then the range of $\varsigma(q)$ is $(-\infty,-\sqrt[3]{2\sig})$, the domain is $(-\infty, \frac{1}{k}(1-3\sqrt[3]{\frac{\sig}{4}}))$, and $\varsigma(q)$ is monotonically increasing. (b) If $\varsigma_0 \in (-\sqrt[3]{2\sig},0)$, then the range of $\varsigma(q)$ is $(-\sqrt[3]{2\sig},0)$, the domain is $(-\infty, \frac{1}{k}(1-3\sqrt[3]{\frac{\sig}{4}}))$, and $\varsigma(q)$ is monotonically decreasing. (c) If $\varsigma_0 \in (0,+\infty)$, then the range of $\varsigma(q)$ is $(0,+\infty)$, the domain is $(-\infty, +\infty)$, and $\varsigma(q)$ is monotonically increasing. \end{verse} \end{lem} \noindent {\bfseries Proof}. Let us fix $\sig$ and find $q$ from (\ref{meq}): $$q(\varsigma)=\frac{\varsigma^3+\varsigma^2-\sig}{k\varsigma^2}.$$ \noindent Since $\frac{dq}{d\varsigma}=\frac{\varsigma^3+2\sig}{k\varsigma^3}$ and $\sig>0$, it is obvious that $q(\varsigma)$ is monotonically increasing in the intervals $\varsigma\in (-\infty,-\sqrt[3]{2\sig})$ and $\varsigma\in (0,+\infty)$ and is monotonically decreasing in the interval $\varsigma\in (-\sqrt[3]{2\sig},0)$. The corresponding intervals for $q$ are $(-\infty, \frac{1}{k}(1-3\sqrt[3]{\frac{\sig}{4}}))$, $(-\infty, +\infty)$, and $(-\infty, \frac{1}{k}(1-3\sqrt[3]{\frac{\sig}{4}}))$. Also, one can easily check that $$\frac{d\varsigma}{dq}=\frac{k\varsigma^3}{\varsigma^3+2\sig}.$$ Thus, the lemma is proved. \hfill $\Box$ \begin{defi} The three branches of $\varsigma(q,\sig)$ ($\sig$ is fixed) described in Lemma \ref{l3} will be denoted as follows: \begin{description} \item[(a)] $\varsigma^1(q,\sig)$ is a positive branch with the domain $(-\infty, +\infty)$ and range $(0,+\infty)$; \item[(b)] $\varsigma^3(q,\sig)<\varsigma^2(q,\sig)$ are two negative branches with the domain $(-\infty, \frac{1}{k}(1-3\sqrt[3]{\frac{\sig}{4}}))$ and ranges $(-\infty,-\sqrt[3]{2\sig})$ and $(-\sqrt[3]{2\sig},0)$, respectively. (Note that Corollary \ref{cor2l2} implies that $\frac{1}{k}(1-3\sqrt[3]{\frac{\sig}{4}})\leq 0$.) \end{description} \end{defi} \begin{defi} Let us introduce the following notations: \begin{description} \item[(a)] $\bar{\varsigma}^i(q,\sig)=\varsigma^i(q,\sig)-\frac{q}{3}$, $i=1,2,3$; \item[(b)] $F^{i,j,k}(q,\sig_1,\sig_2,\sig_3)= \bar{\varsigma}^i(q,\sig_1)+\bar{\varsigma}^j(q,\sig_2)+\bar{\varsigma}^k(q,\sig_3)$, $i,j,k=1,2,3$.
\end{description} \end{defi} \begin{lem}\label{l4}The following statements are true: \begin{description} \item[(a)] For $i=1,2,3$ $$\bar{\varsigma}^i(q,\sig)=-\frac{(1-3k)\varsigma^i(q,\sig)^3+\varsigma^i(q,\sig)^2-\sig}{3k\varsigma^i(q,\sig)^2}$$ and $$\frac{d\bar{\varsigma}^i}{dq}=-\frac{(1-3k)\varsigma^i(q,\sig)^3+2\sig}{3(\varsigma^i(q,\sig)^3+2\sig)} .$$ \item[(b)] $\varsigma^1(0,\sig)=\bar{\varsigma}^1(0,\sig)>0$, $\varsigma^2(0,\sig)=\bar{\varsigma}^2(0,\sig)<0$, and $\varsigma^3(0,\sig)=\bar{\varsigma}^3(0,\sig)<0$. \item[(c)] For a fixed $\sig$, $\bar{\varsigma}^1(q,\sig)$ is monotonically decreasing in $q$ and \\$\lim_{q\rightarrow +\infty}\bar{\varsigma}^1(q,\sig) = -\infty$. \item[(d)] For a fixed $\sig$, \[ \lim_{q\rightarrow -\infty}\bar{\varsigma}^2(q,\sig) = +\infty \;\;\; \mbox{ and } \;\; \lim_{q\rightarrow -\infty}\bar{\varsigma}^3(q,\sig) = +\infty . \] \item[(e)] For fixed $\sig_1,\sig_2,\sig_3$, each of $F^{i,j,k}(q,\sig_1,\sig_2,\sig_3)$, $i,j,k=1,2,3$, is continuous. Moreover, $F^{1,1,1}(q,\sig_1,\sig_2,\sig_3)$ is monotonically decreasing in $q$. \end{description} \end{lem} \noindent {\bfseries Proof}. To check (a), first substitute $q(\varsigma)=\frac{\varsigma^3+\varsigma^2-\sig}{k\varsigma^2}$ into $\bar{\varsigma}^i(q,\sig)=\varsigma^i(q,\sig)-\frac{q}{3}$, $i=1,2,3$. The expression for $\frac{d\bar{\varsigma}^i}{dq}$ can be obtained either by direct differentiation of the previously obtained expression for $\bar{\varsigma}^i(q,\sig)$ or by subtracting $\frac{1}{3}$ from $\frac{d\varsigma}{dq}=\frac{k\varsigma^3}{\varsigma^3+2\sig}$.\\ \noindent (b) is obvious.\\ \noindent To prove (c), recall that $k<\frac{1}{3}$ and use the formulas from (a). \noindent To prove (d), recall that $k<\frac{1}{3}$ and use the first formula from (a). \noindent (e) immediately follows from (a) and (c). \hfill $\Box$\\ \begin{lem}\label{l5}The solutions $\varsigma^1(0,\sig), \varsigma^2(0,\sig), \varsigma^3(0,\sig)$ of the equation $G(\varsigma,0,\sig)=\varsigma^3+\varsigma^2-\sig=0$, $0<\sig\leq \frac{4}{27}$, enjoy the following properties: \begin{description} \item[(a)] If $\sig=\frac{4}{27}$ the solutions are $\varsigma^1(0,\frac{4}{27})=\frac{1}{3}$, $\varsigma^2(0,\frac{4}{27})=\varsigma^3(0,\frac{4}{27})=-\frac{2}{3}$. \item[(b)] If $0<\sig_1<\sig_2\leq \frac{4}{27}$, then $$0<\varsigma^1(0,\sig_1)<\varsigma^1(0,\sig_2)\leq\frac{1}{3}$$ and $$-1< \varsigma^3(0,\sig_1)<\varsigma^3(0,\sig_2)\leq -\frac{2}{3} \leq \varsigma^2(0,\sig_2)<\varsigma^2(0,\sig_1)<0 .$$ \item[(c)] $\varsigma^1(0,\sig)+\varsigma^2(0,\sig)<0$. \end{description} \end{lem} \noindent {\bfseries Proof}. (a) can be checked directly. \\ \noindent To prove (b), one can either apply the implicit function theorem to $H(\varsigma,\sig)=G(\varsigma,0,\sig)=0$, or, less formally, draw the graph of $y=\varsigma^3+\varsigma^2-\frac{4}{27}$ and observe what happens to its roots when the graph is shifted upward until it becomes $y=\varsigma^3+\varsigma^2$.\\ \noindent (c) Obviously, $\varsigma^1(0,\sig)+\varsigma^2(0,\sig)+\varsigma^3(0,\sig)=-1$. So, $\varsigma^1(0,\sig)+\varsigma^2(0,\sig)=-1 - \varsigma^3(0,\sig)<0$, since $\varsigma^3(0,\sig)>-1$. \hfill $\Box$ \subsection{Solutions of the St. Venant-Kirchhoff Material} We are now ready to present our main results.
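Before stating them, the following small numerical sketch (with hypothetical values of $k$ and $\sig_i$; any $0<k<1/3$ and $\sig_i>0$ may be used) illustrates how the positive branches $\varsigma^1(q,\sig_i)$ are combined through $F^{1,1,1}$ into an all-positive solution of (\ref{mainsystem1}), in the spirit of the proofs below.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def positive_branch(q, sigma, k):
    # unique positive root of s^3 + (1 - k*q)*s^2 - sigma = 0
    # (uniqueness follows from Descartes' rule of signs)
    roots = np.roots([1.0, 1.0 - k * q, 0.0, -sigma])
    real = roots[np.abs(roots.imag) < 1e-8].real
    return real[real > 0][0]

def F111(q, sig, k):
    # F^{1,1,1}(q) = sum_i varsigma^1(q, sigma_i) - q
    return sum(positive_branch(q, s, k) for s in sig) - q

k = 0.25                # hypothetical value of lambda/(3*lambda + 2*mu) < 1/3
sig = (0.3, 0.5, 0.8)   # hypothetical values of sigma_i = tau_i^2/mu^2 > 0

# F111 is continuous and decreasing, positive at q = 0 and negative for
# large q, so a single bracketed root gives the all-positive solution.
q0 = brentq(lambda q: F111(q, sig, k), 0.0, 100.0)
vs = [positive_branch(q0, s, k) for s in sig]
assert all(abs(v**3 + v**2 - k * sum(vs) * v**2 - s) < 1e-9
           for v, s in zip(vs, sig))
print(q0, vs)
\end{verbatim}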
\begin{thm}\label{p1}For any given force field ${\bf f}:\Omega \rightarrow {\mathbb R}^d$ and the surface traction ${\bf t} : {\Gamma_t} \rightarrow {\mathbb R}^d$ such that the statically admissible stress ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$ has no zero eigenvalues almost everywhere in $\Omega$, the canonical dual problem $({\cal{P}}^d)$ has a unique positive-definite critical solution ${\bf T} \in {\cal S}^+_a$. \end{thm} \noindent {\bfseries Proof}. We need to prove that for arbitrarily given $\sig_1,\sig_2,\sig_3>0$, the system of equations (\ref{mainsystem1}) has a unique positive solution $(\varsigma_1,\varsigma_2,\varsigma_3)$, such that all $\varsigma_i>0$, $i=1,2,3$. From Lemma \ref{l4}(b), it follows that $$F^{1,1,1}(0,\sig_1,\sig_2,\sig_3)>0 .$$ From Lemma \ref{l4}(c), it follows that for some $q_1>0$, large enough, $$F^{1,1,1}(q_1,\sig_1,\sig_2,\sig_3)<0 .$$ Therefore, since $F^{1,1,1}$ is continuous and monotonically decreasing in $q$ (Lemma \ref{l4}(e)), there exists a unique $q_0$, $0< q_0<q_1$, such that $$F^{1,1,1}(q_0,\sig_1,\sig_2,\sig_3)=0 ,$$ i.e., $$\varsigma^1(q_0,\sig_1)+\varsigma^1(q_0,\sig_2)+\varsigma^1(q_0,\sig_3)=q_0 . $$ So, from Lemma \ref{l1} it follows that $\varsigma^1(q_0,\sig_1)$, $\varsigma^1(q_0,\sig_2)$, $\varsigma^1(q_0,\sig_3)$ form a positive solution of (\ref{mainsystem1}), which gives, after the rescaling $T_i=\mu\varsigma_i$, the eigenvalues of the second Piola-Kirchhoff stress ${\bf T}$. Therefore, Problem $({\cal{P}}^d)$ has a unique global maximizer ${\bf T} \in {\cal S}^+_a$. \hfill $\Box$ \begin{thm}\label{p2}For any given force field ${\bf f}:\Omega \rightarrow {\mathbb R}^d$ and the surface traction ${\bf t} : {\Gamma_t} \rightarrow {\mathbb R}^d$ such that the parameters $\sig_i$, defined from the statically admissible stress tensor field ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$ through $\tau_i^2=\mu^2\sig_i$, satisfy $0<\sig_1,\sig_2,\sig_3<\frac{4}{27}$, the total complementary energy $\Pi^d_{\mbox{\boldmath$\tau$}} ({\bf T}) $ has eight negative-definite critical points ${\bf T}_k \in {\cal S}^-_a$, $k=1, \dots, 8$. \end{thm} \noindent {\bfseries Proof}. We need to prove that for arbitrarily given $0<\sig_1,\sig_2,\sig_3<\frac{4}{27}$, the system of equations (\ref{mainsystem1}) has 8 solutions $(\varsigma_1,\varsigma_2,\varsigma_3)$, such that all $\varsigma_i<0$, $i=1,2,3$. From Corollary \ref{cor2l2} it follows that each of the equations $G(\varsigma,0,\sig_i)=0$ has two negative solutions, $\varsigma^2(0,\sig_i)>\varsigma^3(0,\sig_i)$, $i=1,2,3$. From Lemma \ref{l4}(b), it follows that for $i,j,k=2,3$ $$F^{i,j,k}(0,\sig_1,\sig_2,\sig_3)<0 .$$ \noindent From Lemma \ref{l4}(d) it follows that there exists $q_1<0$ such that \[ F^{i,j,k}(q_1,\sig_1,\sig_2,\sig_3)>0 . \] Therefore, since $F^{i,j,k}$ is continuous in $q$ (Lemma \ref{l4}(e)), there exists $q_0$, $0> q_0>q_1$, such that $$F^{i,j,k}(q_0,\sig_1,\sig_2,\sig_3)=0 , $$ i.e. \[ \varsigma^i(q_0,\sig_1)+\varsigma^j(q_0,\sig_2)+\varsigma^k(q_0,\sig_3)=q_0 . \] So, from Lemma \ref{l1} it follows that $\varsigma^i(q_0,\sig_1)$, $\varsigma^j(q_0,\sig_2)$, $\varsigma^k(q_0,\sig_3)$ form a negative solution of (\ref{mainsystem1}). Since each of $i,j,k$ can be chosen independently from the set $\{2,3\}$, we have in total 8 different negative solutions.
\hfill $\Box$ \begin{thm}\label{p3} For any given force field ${\bf f}:\Omega \rightarrow {\mathbb R}^d$ and the surface traction ${\bf t} : {\Gamma_t} \rightarrow {\mathbb R}^d$ such that the parameters $\sig_i$, defined from the statically admissible stress tensor field ${\mbox{\boldmath$\tau$}} \in {\cal T}_a$ through $\tau_i^2=\mu^2\sig_i$, satisfy $0<\sig_1,\sig_2,\sig_3<\frac{4}{27}$, the total complementary energy $\Pi^d_{\mbox{\boldmath$\tau$}} ({\bf T}) $ has at least 15 mixed critical points, i.e., critical points ${\bf T}$ for which some of the scaled eigenvalues $\varsigma_i$, $i=1,2,3$, are positive and some are negative. \end{thm} \noindent {\bfseries Proof}. Each of the equations $G(\varsigma,0,\sig_i)=0$ has one positive and two negative solutions: $\varsigma^1,\varsigma^2,\varsigma^3$. \noindent Applying Lemma \ref{l5} it is easy to check that \begin{description} \item[(1)] for $i,j=2,3$, $$F^{1,i,j}(0,\sig_1,\sig_2,\sig_3)<0, \quad F^{i,1,j}(0,\sig_1,\sig_2,\sig_3)<0 ;$$ \item[(2)] $F^{2,3,1}(0,\sig_1,\sig_2,\sig_3)<0$, $F^{3,2,1}(0,\sig_1,\sig_2,\sig_3)<0$, $F^{3,3,1}(0,\sig_1,\sig_2,\sig_3)<0$; \item[(3)] $F^{1,1,2}(0,\sig_1,\sig_2,\sig_3)<0$, $F^{1,1,3}(0,\sig_1,\sig_2,\sig_3)<0$, $F^{1,3,1}(0,\sig_1,\sig_2,\sig_3)<0$, \\ $F^{3,1,1}(0,\sig_1,\sig_2,\sig_3)<0$. \end{description} \noindent For each of these 15 combinations, $F^{a,b,c}$, there exists $q_1<0$ such that $F^{a,b,c}(q_1,\sig_1,\sig_2,\sig_3)>0$. \\ Therefore, since $F^{a,b,c}$ is continuous in $q$ (Lemma \ref{l4}(e)), there exists $q_0$, $0> q_0>q_1$, such that $$F^{a,b,c}(q_0,\sig_1,\sig_2,\sig_3)=0 ,$$ that is, $$\varsigma^a(q_0,\sig_1)+\varsigma^b(q_0,\sig_2)+\varsigma^c(q_0,\sig_3)=q_0 .$$ So, from Lemma \ref{l1} it follows that $\varsigma^a(q_0,\sig_1)$, $\varsigma^b(q_0,\sig_2)$, $\varsigma^c(q_0,\sig_3)$ form a mixed solution of (\ref{mainsystem1}).\\ Obviously, these 15 combinations result in different mixed stationary points of $\Pi^d_{{\mbox{\boldmath$\tau$}}}({\bf T})$. \hfill $\Box$\\ \section{Conclusions} \label{finish} We have illustrated that by using the canonical duality theory, the nonconvex minimal potential problem $({\cal{P}})$ is canonically dual to a concave maximization problem in a convex stress space ${\cal S}_a^+$, which can be solved by well-developed numerical methods. By the pure complementary energy principle, the general nonlinear partial differential equation in nonlinear elasticity is actually equivalent to an algebraic (tensor) equation, which can be solved for certain materials to obtain all possible stress solutions. Both global and local extremal solutions can be identified by the triality theory, while the Legendre-Hadamard condition is only necessary for local minimizers. Our results show that for the St. Venant-Kirchhoff material, the nonlinear boundary value problem could have 24 solutions at each material point, but only one global minimizer if the statically admissible stress ${\mbox{\boldmath$\tau$}}$ has no zero eigenvalues. A detailed study of these solutions is left for future work. \\%forthcoming papers. \\ \noindent {\bf Acknowledgements}\\ The research of the first author was supported by the US Air Force Office of Scientific Research under the grant AFOSR FA9550-10-1-0487. Results presented in Section 3 were discussed with Professor Ray Ogden from the University of Glasgow.
\section{Introduction} Coupled laser arrays are photonic structures with great potential for a large variety of applications in optical communications, sensing and imaging. One of the main features that allows such applications is their electronically controlled operation and tunability, suggesting their functionality as active metasurfaces for the transformation of appropriately designed spatially inhomogeneous current distributions to desirable field patterns. In that sense, a pair of coupled lasers can be considered as a fundamental element (a photonic ``molecule'') from which larger and more complicated structures can be built. The properties of such a pair of coupled lasers are determined mostly by its stationary states and their stability, which can be controlled by the current injection in the two lasers. The existence of stable asymmetric phase-locked states with unequal field amplitudes and phase differences for the pair of coupled lasers crucially determines its far field patterns \cite{Choquette_2015, Choquette_13} and the capabilities of such a system as a building block for synthesizing larger controllable active structures characterized by complex dynamics \cite{Wang_88, Winful_90, Otsuka_90, Winful_92, Rogister_07, Soriano_13, Hizanidis_17, Erneux_book}. In addition to beam shaping applications \cite{Valagiannopoulos}, a pair of coupled lasers can be considered as an element of a ``photonic processor'' \cite{Yamamoto_1, Yamamoto_2}.\ A pair of coupled lasers is also a fundamental element for non-Hermitian optics, which has recently been the subject of intense research interest. In this context, laser dynamics is commonly described by coupled mode equations for the complex field amplitudes \cite{Choquette_2017, El-Ganainy_2016, Christodoulides_2014, Christodoulides_2016, Rotter_2012} and cases of $PT$-symmetric configurations have been considered \cite{PT_1, PT_2, PT_3, PT_4, PT_5, PT_6}. The essential condition for $PT$-symmetry in a linear system is that there is no detuning between the two lasers; in most cases balanced gain and loss are considered, however, this condition can be relaxed to an arbitrary gain or loss contrast \cite{Christodoulides_2016, Choquette_2017}. Similar studies have been reported for coupled microcavities \cite{Chong_2016} and coupled waveguides \cite{Ramezani_2010}. Deviation from exact $PT$-symmetry can be either necessitated by practical reasons \cite{Christodoulides_2016} or intentionally designed due to advantages related to the existence of stable Nonlinear Supermodes \cite{Kominis_2016, Kominis_2017}. \ The coupled mode equations considered in the study of $PT$-symmetric lasers commonly ignore the nonlinearity of the system due to the dynamic coupling between field amplitudes and carrier densities. This approximation is valid only when the field amplitudes have small values at or below threshold, or when a constant gain coefficient is taken at its saturated value, which presupposes knowledge of the field amplitudes \cite{Choquette_2017}. More importantly, this approximation excludes some important features of the complex dynamics of the system that can be interesting with respect to photonics applications, such as the existence of symmetric and asymmetric phase-locked states and limit cycles \cite{Winful&Wang_88, Yanchuk_04} as well as localized synchronization effects \cite{Kuske&Erneux_97, Kovanis_97}, which can be described only when carrier density dynamics is taken into account \cite{Winful&Wang_88, Choquette_2017}.
The latter allows for non-fixed but dynamically evolving gain and loss that enter the coupled field equations and introduce multiscale characteristics of the system due to the significant difference between carrier and photon lifetimes, resulting in dynamical features that have no counterpart in standard coupled oscillator systems \cite{Aranson_1990}. Moreover, carrier density dynamics introduces the role of current injection as a control mechanism for determining the dynamics of the system and its stationary states. In this work, we investigate coupled laser dynamics in terms of a model taking into account both laser coupling and carrier density dynamics. More specifically, we study the existence and stability of asymmetric phase-locked modes and investigate the role of detuning and inhomogeneous pumping between the lasers. \textit{For the case of zero detuning, we show the existence of stable asymmetric modes even when the two lasers are homogeneously pumped, i.e. the lasers are absolutely symmetric.} These modes bifurcate to stable limit cycles in an ``oscillation death'' scenario (Bar-Eli effect) although an analogous requirement for dissimilar oscillators is not fulfilled \cite{Aranson_1990}. \textit{For the case of nonzero detuning, we show the existence of phase-locked modes with arbitrary power, amplitude ratio and phase difference for appropriate selection of pumping and detuning values. } These asymmetric states are shown to be stable for large regions of the parameter space, in contrast to common coupled oscillators where asymmetric states are usually unstable \cite{Aranson_1990}. Clearly, the above differences between coupled laser dynamics and common coupled oscillator systems are a consequence of the inclusion of carrier density dynamics in the model. In all cases, the asymmetric states have carrier densities corresponding to values that are above and below threshold, resulting in gain coefficients of opposite signs in each laser, so that the respective electric fields experience gain and loss, as in the case of $PT$-symmetric configurations. However, it is shown that deviation from $PT$-symmetry, expressed by a non-zero detuning, enables the existence of asymmetric phase-locked states of arbitrary field amplitude ratio and phase difference, which are most promising for applications. This paper is organized as follows: The rate equations model for coupled diode lasers is described in Section II. In Sections III and IV we analytically solve the inverse problem of determining the phase-locked states of the system, that is, we obtain the appropriate selection of the parameters of the system in order to have a given asymmetric phase-locked state and investigate their stability, for the case of zero and non-zero detuning, respectively. In Section V we numerically investigate the deformation of the symmetric phase-locked states of the system under nonzero detuning and asymmetric pumping. In Section VI the main conclusions of this work are summarized.
\section{Rate equations model for coupled diode lasers} The dynamics of an array of $M$ evanescently coupled semiconductor lasers is governed by the following equations for the slowly varying complex amplitude of the normalized electric field $\mathcal{E}_i$ and the normalized excess carrier density $N_i$ of each laser: \begin{eqnarray} \frac{d\mathcal{E}_i}{dt}&=&(1-i\alpha)\mathcal{E}_i N_i+i\eta(\mathcal{E}_{i+1}+\mathcal{E}_{i-1}) +i\omega_i \mathcal{E}_i \nonumber \\ T\frac{dN_i}{dt}&=&P_i-N_i-(1+2N_i)|\mathcal{E}_i|^2 \hspace{7em} i=1...M \label{array} \end{eqnarray} where $\alpha$ is the linewidth enhancement factor, $\eta$ is the normalized coupling constant, $P_i$ is the normalized excess electrical pumping rate, $\omega_i$ is the normalized optical frequency detuning from a common reference, $T$ is the ratio of carrier to photon lifetimes, and $t$ is the normalized time \cite{Winful&Wang_88}. When the lasers are uncoupled ($\eta=0$), they exhibit free running relaxation with frequencies \begin{equation} \Omega_i=\sqrt{\frac{2P_i}{T}} \end{equation} Since we consider inhomogeneously pumped ($P_i \neq P_j$) lasers, we use a reference value $P$ in order to define a frequency \begin{equation} \Omega=\sqrt{\frac{2P}{T}} \label{Omega_0} \end{equation} which is further utilized in order to rescale Eqs. (\ref{array}) as \begin{eqnarray} \frac{d\mathcal{E}_i}{d\tau}&=&(1-i\alpha)\mathcal{E}_i Z_i+i\Lambda(\mathcal{E}_{i+1}+\mathcal{E}_{i-1}) +i\Omega_i \mathcal{E}_i \nonumber \\ 2P\frac{dZ_i}{d\tau}&=&P_i-\Omega Z_i-(1+2\Omega Z_i)|\mathcal{E}_i|^2 \hspace{7em} i=1...M \label{array_norm} \end{eqnarray} where \begin{equation} \tau\equiv\Omega t, Z_i \equiv N_i/\Omega, \Lambda\equiv\eta/\Omega, \Omega_i\equiv\omega_i/\Omega \end{equation} In the following, we investigate the existence and stability of asymmetric phase-locked states for a pair of coupled lasers under symmetric or asymmetric electrical pumping. By introducing the amplitude and phase of the complex electric field amplitude in each laser as $\mathcal{E}_i=X_ie^{i\theta_i}$, Eqs. (\ref{array_norm}) for $M=2$ are written as \begin{eqnarray} \frac{dX_1}{d\tau}&=&X_1Z_1-\Lambda X_2\sin\theta \nonumber \\ \frac{dX_2}{d\tau}&=&X_2Z_2+\Lambda X_1\sin\theta \nonumber \\ \frac{d\theta}{d\tau}&=&\Delta -\alpha(Z_2-Z_1)+\Lambda\left(\frac{X_1}{X_2}-\frac{X_2}{X_1}\right)\cos\theta \label{pair} \\ 2P\frac{dZ_1}{d\tau}&=&P_1-\Omega Z_1-(1+2\Omega Z_1)X_1^2 \nonumber \\ 2P\frac{dZ_2}{d\tau}&=&P_2-\Omega Z_2-(1+2\Omega Z_2)X_2^2 \nonumber \end{eqnarray} where $\Delta=\Omega_2-\Omega_1$ is the detuning, $\theta=\theta_2-\theta_1$ is the phase difference of the electric fields and we have used a reference value $P=(P_1+P_2)/2$ in order to define $\Omega$ as in Eq. (\ref{Omega_0}). As a reference case, we consider a pair of lasers with $\alpha=5$ and $T=400$, which is a typical configuration relevant to experiments, and we take $P=0.5$. For these values we have $\Omega=5 \times 10^{-2}$ and a coupling constant $\eta$ in the range of $10^{-5} \div 10^0$ corresponds to a $\Lambda$ in the range $0.5 \times 10^{-3} \div 0.5 \times 10^{2}$. The phase-locked states are the equilibria of the dynamical system (\ref{pair}), given as the solutions of the algebraic system obtained by setting the time derivatives of the system equal to zero, and their linear stability is determined by the eigenvalues of the Jacobian of the system.
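As an illustration of the model, the following minimal sketch (with hypothetical parameter values, not tied to any particular figure) integrates Eqs. (\ref{pair}) numerically; for the chosen coupling, $\eta=\Lambda\Omega=1.5$ exceeds $\alpha P/(1+2P)=1.25$, so a perturbed initial condition is expected to relax towards the in-phase state recalled below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

alpha, T = 5.0, 400.0                      # linewidth factor, lifetime ratio
P1, P2, Lam, Delta = 0.5, 0.5, 30.0, 0.0   # hypothetical pumping, coupling, detuning
P = 0.5 * (P1 + P2)
Om = np.sqrt(2.0 * P / T)                  # reference frequency Omega

def rhs(tau, u):                           # right-hand side of the rate equations
    X1, X2, th, Z1, Z2 = u
    return [X1 * Z1 - Lam * X2 * np.sin(th),
            X2 * Z2 + Lam * X1 * np.sin(th),
            Delta - alpha * (Z2 - Z1)
                  + Lam * (X1 / X2 - X2 / X1) * np.cos(th),
            (P1 - Om * Z1 - (1.0 + 2.0 * Om * Z1) * X1**2) / (2.0 * P),
            (P2 - Om * Z2 - (1.0 + 2.0 * Om * Z2) * X2**2) / (2.0 * P)]

u0 = [0.70, 0.60, 0.10, 0.0, 0.0]          # perturbed initial condition
sol = solve_ivp(rhs, (0.0, 2000.0), u0, method="LSODA", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])          # expected to approach X1 = X2 = sqrt(P), theta = 0
\end{verbatim}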
For the case of zero detuning ($\Delta =0$) and symmetric electrical pumping ($P_1=P_2=P_0$), two phase-locked states are known analytically: $X_1=X_2=\sqrt{P_0}$, $Z_1=Z_2=0$ and $\theta=0,\pi$. The in-phase state ($\theta=0$) is stable for $\eta>\alpha P_0 /(1+2P_0)$ whereas the out-of-phase state ($\theta=\pi$) is stable for $\eta<(1+2P_0)/(2\alpha T)$ \cite{Winful&Wang_88}. The phase difference $\theta$ and the electric field amplitude ratio $\rho \equiv X_2/X_1$ of a phase-locked state crucially determine the intensity response of the system. The incoherent intensity is defined as the sum of the individual laser intensities $S \equiv |\mathcal{E}_1|^2+|\mathcal{E}_2|^2=(1+\rho^2)X_1^2$ and can be measured by placing a broad detector next to the output face of the system. The coherent intensity corresponds to a coherent superposition of the individual electric fields $I \equiv |\mathcal{E}_1+\mathcal{E}_2|^2=(1+\rho^2+2\rho \cos \theta)X_1^2$ and can be measured by placing a detector at the focal plane of an external lens. The coherent intensity depends on the phase difference $\theta$; however, it does not take into account the spatial distribution of the lasers, i.e. the distance between them. The latter determines the far-field pattern of the intensity resulting from constructive and destructive interference at different directions in space. By considering, for the sake of simplicity, the lasers as two point sources at a distance $d$, the coherent intensity is given by $I_\phi=|\mathcal{E}_1+e^{i 2\pi (d/ \lambda)\sin \phi} \mathcal{E}_2 |^2=[1+\rho^2+2\rho \cos (\theta+2\pi (d/\lambda) \sin \phi)]X_1^2$ where $\phi$ is the azimuthal angle measured from the direction normal to the distance between the lasers and $\lambda$ is the wavelength \cite{Hecht}. It is clear that the asymmetry of the phase-locked state described by $\rho$ and $\theta$ along with the geometric parameter of the system $d/\lambda$ defines a specific far-field pattern $I_\phi$ with desirable characteristics. \begin{figure}[pt] \begin{center} \subfigure[]{\scalebox{\scl}{\includegraphics{L-r_a5_T400}}} \subfigure[]{\scalebox{\scl}{\includegraphics{L-r_a1p5_T400}}} \subfigure[]{\scalebox{\scl}{\includegraphics{L-r_a5_T2000}}} \caption{Stability regions of asymmetric phase-locked states in a symmetric configuration with $\Delta=0$ and $P_1=P_2=P_0$ in the $(\Lambda, \rho)$ parameter space. Dark blue and light yellow areas correspond to stability and instability, respectively. (a) $\alpha=5$ and $T=400$, (b) $\alpha=1.5$ and $T=400$, (c) $\alpha=5$ and $T=2000$ (case of \cite{Winful&Wang_88}).} \end{center} \end{figure} \begin{figure}[pt] \begin{center} \subfigure[]{\scalebox{\scl}{\includegraphics{theta_L-r_a5_T400}}} \subfigure[]{\scalebox{\scl}{\includegraphics{X0_L-r_a5_T400}}} \caption{Steady-state phase difference $\theta$ (a) and field amplitude $X_0$ (in logarithmic scale) (b) of the stable asymmetric phase-locked state in the $(\Lambda, \rho)$ parameter space. Parameter values correspond to Fig. 1(a).} \end{center} \end{figure} \section{Asymmetric phase-locked states under zero detuning} Although we cannot find analytical solutions of the system of equations that provide the field amplitudes and phase difference for a given set of laser parameters, we can solve explicitly the reverse problem: for a given phase-locked state with field amplitude ratio $\rho \equiv X_2/X_1$ and phase difference $\theta$ we can analytically solve the algebraic system of equations obtained by setting the rhs of Eq.
(\ref{pair}) equal to zero to determine the steady-state carrier densities $(Z_{1,2})$, and the appropriate detuning $(\Delta)$ and pumping rates $(P_{1,2})$, in terms of $\rho$ and $\theta$. In this section, we consider the case of zero detuning ($\Delta=0$) between the coupled lasers. It is straightforward to verify that for every $\rho$ there exists an equilibrium of the dynamical system (\ref{pair}) with a fixed phase difference $\theta$ \begin{equation} \tan\theta= \frac{1}{\alpha} \frac{\rho^2-1}{\rho^2+1} \label{theta_rho} \\ \end{equation} and \begin{eqnarray} Z_1&=&\Lambda \rho \sin\theta \nonumber \\ Z_2&=&-\frac{\Lambda}{\rho} \sin\theta\\ P_1&=&X_0^2+(1+2X_0^2)\Omega\Lambda\rho\sin\theta \nonumber \\ P_2&=&\rho^2X_0^2-(1+2\rho^2 X_0^2) \frac{\Omega\Lambda}{\rho}\sin\theta \label{P12} \end{eqnarray} where $X_0 \equiv X_1$. For the case of symmetrically pumped lasers ($P_1=P_2=P_0$) the phase-locked states have a fixed field amplitude $X_0$ given by \begin{equation} X_0^2 = \frac{\Omega \Lambda \sin\theta (\rho^2+1)}{\rho\left[(\rho^2-1)-4\Omega \Lambda \rho \sin\theta\right]} \label{X0_rho} \end{equation} and the common pumping is \begin{equation} P_0=X_0^2+(1+2X_0^2)\Omega \Lambda \rho \sin\theta \label{P0_rho} \end{equation} It is quite remarkable that an asymmetric phase-locked state with arbitrary amplitude ratio $\rho$ exists even for the case of identical coupled lasers. This asymmetric state describes a localized synchronization effect \cite{Kuske&Erneux_97, Kovanis_97}, with the degree of localization determined by the field amplitude ratio $\rho$. More interestingly, this state is stable within a large area of the parameter space as shown in Fig. 1, in contrast to what is expected \cite{Yanchuk_04}. In fact, there exist areas of the parameter space where the previously studied \cite{Winful&Wang_88} symmetric, in-phase and out-of-phase, states are unstable, so that this asymmetric state is the only stable phase-locked state of the system. The stability of this state depends strongly on the parameters $\alpha$ and $T$, as shown in Fig. 1. The asymmetric phase-locked state undergoes Hopf-bifurcations giving rise to stable limit cycles where the fields oscillate around the respective phase-locked values, similarly to the case of symmetric phase-locked states \cite{Winful&Wang_88}, but with different oscillation amplitudes in general \cite{Kovanis_97}. It is worth mentioning that this ``oscillation death'' (or Bar-Eli) effect, \cite{Aranson_1990} commonly occurring for coupled dissimilar oscillators, is taking place even for the case of identical lasers due to the consideration of the role of carrier density dynamics. The amplitude ratio $\rho$ determines the phase difference $\theta$ as shown in Eq. (\ref{theta_rho}) and Fig. 2(a). The extreme values of the phase difference are determined by the linewidth enhancement factor $\alpha$ with smaller values of $\alpha$ allowing for larger phase differences. Moreover, the amplitude of the field depends strongly on the amplitude ratio $\rho$ and the normalized coupling constant $\Lambda$ as shown in Eq. (\ref{X0_rho}) and Fig. 2(b). The appropriate symmetric pumping $P_0$ in order to have an asymmetric phase-locked state is given by Eq. (\ref{P0_rho}). It is obvious from the above equations that for $\rho=1$ the well-known symmetric states are obtained \cite{Winful&Wang_88}.
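As a small numerical illustration of Eqs. (\ref{theta_rho})-(\ref{P0_rho}) (hypothetical values of $\Lambda$ and $\rho$; $\Omega$ is simply fixed at the reference pumping $P=0.5$ of the previous section, which is an assumption of this sketch), the following code computes the phase difference, field amplitude and common pumping of an asymmetric state and verifies that it is an equilibrium of Eqs. (\ref{pair}):
\begin{verbatim}
import numpy as np

alpha, T, Lam, rho = 5.0, 400.0, 1.0, 2.0   # hypothetical coupling, amplitude ratio
Om = np.sqrt(2.0 * 0.5 / T)                 # Omega fixed at the reference P = 0.5

theta = np.arctan((rho**2 - 1.0) / (alpha * (rho**2 + 1.0)))
X0sq = Om * Lam * np.sin(theta) * (rho**2 + 1.0) / (
       rho * ((rho**2 - 1.0) - 4.0 * Om * Lam * rho * np.sin(theta)))
P0 = X0sq + (1.0 + 2.0 * X0sq) * Om * Lam * rho * np.sin(theta)

# residuals of the right-hand side of the rate equations at the constructed state
X1, X2 = np.sqrt(X0sq), rho * np.sqrt(X0sq)
Z1, Z2 = Lam * rho * np.sin(theta), -Lam * np.sin(theta) / rho
res = [X1 * Z1 - Lam * X2 * np.sin(theta),
       X2 * Z2 + Lam * X1 * np.sin(theta),
       -alpha * (Z2 - Z1) + Lam * (X1 / X2 - X2 / X1) * np.cos(theta),
       P0 - Om * Z1 - (1.0 + 2.0 * Om * Z1) * X1**2,
       P0 - Om * Z2 - (1.0 + 2.0 * Om * Z2) * X2**2]
print(theta, np.sqrt(X0sq), P0, max(abs(r) for r in res))   # residuals ~ 1e-16
\end{verbatim}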
\begin{figure}[pt] \begin{center} \subfigure[]{\scalebox{\scl}{\includegraphics{L-r_a5_T400_X0_10m3}}} \subfigure[]{\scalebox{\scl}{\includegraphics{L-r_a5_T400_X0_10m1p5}}} \subfigure[]{\scalebox{\scl}{\includegraphics{L-r_a5_T400_X0_10m1}}} \caption{Stability regions of asymmetric phase-locked states in an asymmetric configuration with $\Delta=0$ and $P_1 \neq P_2$ in the $(\Lambda, \rho)$ parameter space. Dark blue and light yellow areas correspond to stability and instability, respectively. $\alpha=5$, $T=400$, and (a) $\log X_0=-3$, (b) $\log X_0=-1.5$, (c) $\log X_0=-1$ .} \end{center} \end{figure} Phase-locked states with fixed phase difference $\theta$ but arbitrary electric field amplitude $X_0$ exist for different pumping between the two lasers, given by Eq. (\ref{P12}). The stability of these states depends crucially on the electric field amplitude $X_0$ as shown in Fig. 3. In comparison to Fig. 1(a), corresponding to the same parameter set but with $P_1=P_2=P_0$, the extent of the stability region is significantly reduced. \begin{figure}[pt] \begin{center} \subfigure[]{\scalebox{\scl}{\includegraphics{stab_rho_theta_X0_sqrt_p5_L_m2p1}}} \subfigure[]{\scalebox{\scl}{\includegraphics{stab_rho_theta_X0_sqrt_p5_L_m1p9}}}\\ \subfigure[]{\scalebox{\scl}{\includegraphics{stab_rho_theta_X0_sqrt_p5_L_m1p7}}} \subfigure[]{\scalebox{\scl}{\includegraphics{stab_rho_theta_X0_sqrt_p5_L_0}}}\\ \subfigure[]{\scalebox{\scl}{\includegraphics{stab_rho_theta_X0_sqrt_p5_L_0p5}}} \subfigure[]{\scalebox{\scl}{\includegraphics{stab_rho_theta_X0_sqrt_p5_L_2}}} \caption{Stability regions of phase-locked states of arbitrary asymmetry, characterized by the steady-state field amplitude ratio $\rho$, phase difference $\theta$ and $X_0=\sqrt{0.5}$. Dark blue and light yellow areas correspond to stability and instability, respectively. The parameters of the coupled lasers are $\alpha=5, T=400$ and $\log\Lambda=-2.1,-1.9,-1.7,0,0.5,2$ (a)-(f). The respective detuning $\Delta$ and pumping $P_{1,2}$ values are given by Eqs. (\ref{D_eq})-(\ref{P_eq}). The stability regions are symmetric with respect to the transformation $\rho \rightarrow 1/\rho$ and $\theta \rightarrow 2\pi-\theta$. The topology and the extent of the stability region depends crucially on the coupling coefficient $\Lambda$. } \end{center} \end{figure} \begin{figure}[pt] \begin{center} \subfigure[]{\scalebox{\scl}{\includegraphics{TS_fig4a_theta_09pi_rho_075}}} \subfigure[]{\scalebox{\scl}{\includegraphics{TS_fig4a_theta_09pi_rho_050}}}\\ \subfigure[]{\scalebox{\scl}{\includegraphics{TS_fig4a_theta_09pi_rho_025}}} \subfigure[]{\scalebox{\scl}{\includegraphics{TS_fig4a_theta_09pi_rho_015}}}\\ \subfigure[]{\scalebox{\scl}{\includegraphics{TS_fig4a_theta_09pi_rho_010}}} \subfigure[]{\scalebox{\scl}{\includegraphics{TS_fig4a_theta_09pi_rho_005}}} \caption{Time evolution of the electric field amplitudes and phase difference for parameters corresponding to Fig. 4(a). The initial conditions correspond to asymmetric phase-locked states with $\theta=0.9\pi\simeq2.83$ and $\rho=$0.75 (a), 0.50 (b), 0.25 (c), 0.15 (d), 0.10 (e), 0.05 (f), perturbed by random noise. In accordance with Fig. 4(a), the phase-locked states are unstable for $0.06<\rho<0.54$. Cases of stable phase-locked states are shown in (a), (f). In the case of unstable phase-locked states the system evolves either to stable limit cycles [(b), (c), (e)] or to chaotic states (d).
} \end{center} \end{figure} \section{Phase-locked states with arbitrary asymmetry under non-zero detuning} Analogously to the case of zero detuning we can find analytically the steady-state carrier densities $(Z_{1,2})$ as well as the appropriate detuning $(\Delta)$ and pumping rates $(P_{1,2})$ for an arbitrary field amplitude ratio $(\rho)$ and phase difference $(\theta)$, as follows \begin{eqnarray} Z_1&=&\Lambda \rho \sin\theta \nonumber \\ Z_2&=&-\frac{\Lambda}{\rho}\sin\theta \label{Z_eq} \end{eqnarray} \begin{eqnarray} \Delta&=&-\alpha \Lambda\sin\theta\left(\frac{1}{\rho}+\rho\right)-\Lambda\cos\theta\left(\frac{1}{\rho}-\rho\right) \label{D_eq}\\ P_1&=&X_0^2+(1+2X_0^2)\Omega\Lambda\rho\sin\theta \nonumber\\ P_2&=&\rho^2X_0^2-(1+2\rho^2 X_0^2) \frac{\Omega\Lambda}{\rho}\sin\theta \label{P_eq} \end{eqnarray} Therefore, there always exists a phase-locked state with arbitrary field amplitude asymmetry and phase difference, provided that the detuning $\Delta$ and the pumping rates $P_{1,2}$ have values given by Eqs. (\ref{D_eq}) and (\ref{P_eq}), respectively, while the steady-state carrier densities $(Z_{1,2})$ are given by Eqs. (\ref{Z_eq}). These phase-locked states exist in the whole parameter space and can have an arbitrary power $X_0$. However, their stability depends strongly on the coupling $(\Lambda)$ as well as the power $(X_0)$ and the degree of their asymmetry, characterized by $\rho$ and $\theta$, as shown in Figs. 4(a)-(f). It is worth mentioning that there is enough freedom in parameter selection in order to have a controllable configuration that supports a large variety of stable asymmetric phase-locked states with unequal field amplitudes and phase differences, with the latter crucially determining the far field patterns of the pair of coupled lasers. The asymmetric states are characterized by carrier densities having opposite signs $Z_1/Z_2=-\rho^2<0$ so that the electric fields of the two lasers experience gain and loss, respectively. For $\rho=1$ we have equal gain and loss and a phase-locked state with equal field amplitudes and a phase difference given by Eq. (\ref{D_eq}) as $\sin\theta=-\Delta/(2\alpha\Lambda)$. At the boundaries of the stability regions, the system undergoes Hopf bifurcations giving rise to stable limit cycles characterized by asymmetric synchronized oscillations of the electric fields, which can have different mean values and oscillation amplitudes. Characteristic cases for the time evolution of the electric field amplitudes $X_{1,2}$ and the phase difference $\theta$ are depicted in Fig. 5 for various degrees of asymmetry. The parameters of the system correspond to those of Fig. 4(a), with phase difference $\theta=0.9\pi\simeq2.83$ and various values of $\rho$. For $\rho=0.75$ [Fig. 5(a)] the asymmetric phase-locked state is stable and perturbed initial conditions evolve to the stable state. As $\rho$ decreases to $\rho=0.5$ [Fig. 5(b)] and $0.25$ [Fig. 5(c)], the phase-locked states become unstable and the system evolves to stable limit cycles of increasing period. Close to the center of the unstable region the system evolves to chaotic states [$\rho=0.15$, Fig. 5(d)]. Further decreasing $\rho$ results in stable limit cycles [$\rho=0.10$, Fig. 5(e)] and stable phase-locked states [$\rho=0.05$, Fig. 5(f)] corresponding to the stability region of lower $\rho$ shown in Fig. 4(a).
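As an illustration of Eqs. (\ref{Z_eq})-(\ref{P_eq}), the sketch below computes the detuning and pumping rates required to support a prescribed target state; the target $(\rho,\theta,X_0)$ and the coupling are hypothetical (chosen in the spirit of Figs. 4 and 5), and $\Omega$ is again fixed at the reference pumping $P=0.5$. Whether the designed state is actually stable must still be checked separately, e.g. from the eigenvalues of the Jacobian of Eqs. (\ref{pair}), as in Fig. 4.
\begin{verbatim}
import numpy as np

alpha, T, Lam = 5.0, 400.0, 1.0     # hypothetical parameters
Om = np.sqrt(2.0 * 0.5 / T)         # Omega at the reference pumping P = 0.5

def design(rho, theta, X0):
    """Carrier densities, detuning and pumping that support the target state."""
    Z1 = Lam * rho * np.sin(theta)
    Z2 = -Lam * np.sin(theta) / rho
    Delta = (-alpha * Lam * np.sin(theta) * (1.0 / rho + rho)
             - Lam * np.cos(theta) * (1.0 / rho - rho))
    P1 = X0**2 + (1.0 + 2.0 * X0**2) * Om * Lam * rho * np.sin(theta)
    P2 = (rho**2 * X0**2
          - (1.0 + 2.0 * rho**2 * X0**2) * Om * Lam * np.sin(theta) / rho)
    return Z1, Z2, Delta, P1, P2

rho, theta, X0 = 0.75, 0.9 * np.pi, np.sqrt(0.5)   # hypothetical target state
print(design(rho, theta, X0))
\end{verbatim}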
\begin{figure}[pt] \begin{center} \subfigure[]{\scalebox{\scl}{\includegraphics{D0_stab}}} \subfigure[]{\scalebox{\scl}{\includegraphics{D0p05_stab}}} \subfigure[]{\scalebox{\scl}{\includegraphics{D0p1_stab}}} \caption{Existence and stability regions of phase-locked states in the $(\Lambda, \Delta P)$ parameter space for $\alpha=5$ and $T=400$ when $P_0=0.5$. Dark blue and light yellow areas correspond to stability and instability, respectively. The green area corresponds to nonexistence of a phase-locked state due to non-zero detuning. (a) $\Delta =0$, (b) $\Delta =0.05$, (c) $\Delta =0.1$. } \end{center} \end{figure} \section{Deformation of symmetric phase-locked states under nonzero detuning and pumping asymmetry} For the case of a given nonzero detuning and/or asymmetrically pumped lasers $P_1=P_0+\Delta P, P_2=P_0-\Delta P$, the equilibria of the system (\ref{pair}) cannot be analytically obtained for a given set of values $(\Delta, \Delta P)$. The respective algebraic system consists of transcendental equations and is solved by utilizing a numerical continuation algorithm, according to which we start from $\Delta=0$ and $\Delta P=0$ corresponding to the symmetric case with the two known equilibria $(\theta=0,\pi)$. For each one of them, we increase $\Delta$ and/or $\Delta P$ in small steps; in each step the solution of the previous step is used as an initial guess for the iterative procedure (Newton-Raphson method) that provides the solution. For the case of zero detuning $\Delta=0$, the domain of existence of stable phase-locked states in the ($\Lambda$, $\Delta P$) parameter space is shown in Fig. 6(a). For $\Delta P =0$ the results are similar to the case considered in \cite{Winful&Wang_88}, with the in-phase state being stable for large $\Lambda$ and the out-of-phase being stable for small values of $\Lambda$. As the pumping difference increases, the stable in-phase state extends only over a quite small range of $\Delta P$, whereas the out-of-phase state extends almost in the entire range of $\Delta P$. In both cases, as $\Delta P$ increases from zero the phase difference deviates slightly from the values $\theta=0,\pi$ for $\Delta P = 0$. Surprisingly, another region of stable phase-locked states appears in the strong coupling regime (large $\Lambda$) above a threshold of pumping difference $\Delta P$. This is an out-of-phase state with phase difference close to $\pi$ that appears for values of $\Delta P$ for which no stable in-phase state exists. In fact, this stable state exists for a large part of the parameter space and extends to values $\Delta P=P_0$ for which only one of the lasers is pumped above threshold ($P_1=1$, $P_2=0$). This area of stability extending from intermediate to high values of coupling is enabled by the asymmetric pumping and indicates its stabilizing effect. The dependence of the phase difference of the stable states on the coupling $\Lambda$ and pumping difference $\Delta P$ is depicted in Fig. 7(a). In Fig. 8, the electric field amplitudes $X_{1,2}$ and carrier densities $Z_{1,2}$ of all the stable states are shown. It is clear that for $\Delta P=0$ we have $X_{1,2}=\sqrt{P_0}$ and $Z_{1,2}=0$ and for small values of $\Lambda$ we have $X_{1,2}=\sqrt{P_{1,2}}$ and $Z_{1,2}=0$, as the two lasers are essentially uncoupled. For $\Delta P>0$ and finite coupling values the electric field and carrier density distributions in the two lasers become highly asymmetric.
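A minimal sketch of the continuation procedure described above (hypothetical parameter values; scipy's root solver stands in for the Newton-Raphson iteration):
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

alpha, T, P0, Lam, Delta = 5.0, 400.0, 0.5, 30.0, 0.0   # hypothetical values

def steady_state(u, dP):
    X1, X2, th, Z1, Z2 = u
    P1, P2 = P0 + dP, P0 - dP
    Om = np.sqrt(2.0 * P0 / T)      # P = (P1 + P2)/2 = P0 is independent of dP
    return [X1 * Z1 - Lam * X2 * np.sin(th),
            X2 * Z2 + Lam * X1 * np.sin(th),
            Delta - alpha * (Z2 - Z1) + Lam * (X1 / X2 - X2 / X1) * np.cos(th),
            P1 - Om * Z1 - (1.0 + 2.0 * Om * Z1) * X1**2,
            P2 - Om * Z2 - (1.0 + 2.0 * Om * Z2) * X2**2]

# start from the known in-phase equilibrium at dP = 0 and step dP upwards,
# reusing the previous solution as the initial guess at each step
u = np.array([np.sqrt(P0), np.sqrt(P0), 0.0, 0.0, 0.0])
branch = []
for dP in np.linspace(0.0, 0.2, 41):
    u = fsolve(steady_state, u, args=(dP,))
    branch.append((dP, *u))
print(branch[-1])                   # deformed in-phase-like state at dP = 0.2
\end{verbatim}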
\begin{figure}[pt] \begin{center} \subfigure[]{\scalebox{\scl}{\includegraphics{D0_theta}}} \subfigure[]{\scalebox{\scl}{\includegraphics{D0p05_theta}}} \caption{Steady-state phase difference ($\theta$) of the stable phase-locked states in the $(\Lambda, \Delta P)$ parameter space for $\alpha=5$, $T=400$ and $P_0=0.5$. (a) $\Delta=0$, (b) $\Delta=0.05$ correspond to the cases of Fig. 6(a) and (b), respectively. } \end{center} \end{figure} \begin{figure}[pt] \begin{center} \subfigure[]{\scalebox{\scl}{\includegraphics{D0_X}}} \subfigure[]{\scalebox{\scl}{\includegraphics{D0_Z}}} \caption{Steady-state electric field amplitudes $X_{1,2}$ (a) and carrier densities $Z_{1,2}$ (b) of the stable phase-locked states in the $(\Lambda, \Delta P)$ parameter space for $\alpha=5$, $T=400$ and $P_0=0.5$, corresponding to the case of Fig. 6(a). } \end{center} \end{figure} \begin{figure}[pt] \begin{center} \subfigure[]{\scalebox{\scl}{\includegraphics{L1_theta}}} \subfigure[]{\scalebox{\scl}{\includegraphics{Lm2p2_theta}}} \caption{Steady-state phase difference ($\theta$) of the stable phase-locked states in the $(\Delta, \Delta P)$ parameter space for $\alpha=5$, $T=400$ and $P_0=0.5$. (a) strong coupling $\log \Lambda=1$, (b) weak coupling $\log \Lambda =-2.2$. } \end{center} \end{figure} A non-zero detuning ($\Delta \neq 0$) between the two lasers strongly affects the existence of a stable out-of-phase state in the weak coupling regime, as shown in Figs. 6(b), (c) for $\Delta=0.05, 0.1$, respectively. The role of detuning in terms of the phase of the stable states is clearly presented in Fig. 7(b) for $\Delta=0.05$. In comparison to Fig. 7(a), corresponding to zero detuning, it is obvious that the phase of the stable states existing for intermediate and strong coupling is hardly affected by the detuning whereas the stable state, existing in the weak coupling regime, has a phase that ranges from $\pi$ to $3\pi/2$ depending strongly on $\Lambda$ and $\Delta P$. The role of detuning is also shown in Fig. 9(a) and (b) for $\Delta =0.05$ in the case of strong ($\log \Lambda=1$) and weak ($\log \Lambda =-2.2$) coupling, respectively. The detuning introduces phase sensitivity to current injection for stable modes of the weak coupling regime that can be quite interesting for beam-steering applications \cite{Choquette_13}. \section{Conclusions} We have investigated the existence of stable asymmetric phase-locked states in a system of two coupled semiconductor lasers. The asymmetric phase-locked states are characterized by a field amplitude ratio different from unity and a non-trivial phase difference. The asymmetry is shown to be directly related to operation conditions that result in the presence of gain in one laser and loss in the other. The crucial role of carrier density dynamics has been taken into account by considering a model where the field equations are dynamically coupled with the carrier density equations, in contrast to standard coupled mode equations, commonly considered in studies on non-Hermitian photonics and PT-symmetric lasers. It has been shown that stable asymmetric states exist even in absolutely symmetric configurations and that states of arbitrary asymmetry can be supported by appropriate selection of the detuning and the pumping profile of the system. The role of the current injection suggests a dynamic mechanism for the control of the phase-locked states and, therefore, the far-field emission patterns of this fundamental photonic element consisting of two coupled lasers.
\section*{Acknowledgements} Y. K. is grateful to the School of Science and Technology of Nazarbayev University, Astana, Kazakhstan, for its hospitality during his visit to NU in November 2016. This research is partly supported by the ORAU grant entitled ``Taming Chimeras to Achieve the Superradiant Emitter'', funded by Nazarbayev University, Republic of Kazakhstan. This work was also partially supported by the Ministry of Education and Science of the Republic of Kazakhstan via Contract No. 339/76-2015. \clearpage
\section{Introduction} In general, automorphism groups of algebras are difficult to determine. In fact, some algebras contain wild automorphisms, see \cite{Joseph1976, Shestakov2004}. This leads to a question: what invariants of an algebra control its automorphism group? Yakimov \cite{Yakimov2014} proved the Andruskiewitsch-Dumas conjecture \cite{Andruskiewitsch2008}, which concerns the automorphism groups of the quantum nilpotent algebras $\mathcal{U}_q^+(\mathfrak{g})$ for all simple Lie algebras $\mathfrak{g}$. His proof exhibits a general classification method for automorphism groups of related algebras such as quantum cluster algebras, algebras defined by iterated Ore extensions \cite{Goodearl2000}, and so on. The key strategy is that $\mathrm{Aut}\left(\mathcal{U}_q^+(\mathfrak{g})\right)$ is controlled by the group of certain continuous bi-finite automorphisms of completed quantum tori. Ceken et al. developed the discriminant method to determine automorphism groups of certain noncommutative algebras such as quantum Weyl algebras; see \cite{Ceken2015, Ceken2016, Chan2018}. For other related work, please refer to \cite{Li2017, Chen2013}. As for the Hopf algebra automorphisms of pointed Hopf algebras, Panaite et al. \cite[Lemma 1]{Panaite1999} and Shilin Yang \cite{Yang2007} made use of the fact that the spaces of group-like and skew primitive elements are invariants of Hopf algebra automorphisms. Musson determined the Hopf automorphism group of the quantized enveloping algebras $U_q(\mathfrak{g})$ defined over $\Bbb{Q}(q)$ with $q$ transcendental and the Lie algebra $\mathfrak{g}$ semisimple, according to its coradical filtration \cite{Chin1996}. For similar results, please refer to \cite{Braverman1994, Twietmeyer1992}. Radford proved that the group ${\rm Aut}_{\rm Hopf}(H)$ of a Hopf algebra $H$ is finite if $H$ is a semisimple Hopf algebra over a field of characteristic $0$, or if $H$ is a semisimple cosemisimple involutory Hopf algebra over a field of characteristic $p>\dim H$ \cite{Radford1990}. In this paper, we view Yetter-Drinfeld modules as invariants of Hopf algebra automorphisms. Let $H$ be a Hopf algebra. Given any finite-dimensional Yetter-Drinfeld module $V$ over $H$ and any Hopf algebra automorphism $\psi$ of $H$, we can build a Yetter-Drinfeld module $V^{\psi}$. We have $\dim V=\dim V^{\psi}$ and ${\rm Supp} (V^\psi)=\psi({\rm Supp}(V))$, see Lemma \ref{KeyLemma}. Hence it can be efficient to calculate $\mathrm{Aut}_{\mathrm{Hopf}}(H)$, provided a classification of the simple Yetter-Drinfeld modules over $H$ is available. The paper is organized as follows. In Section 1, we introduce the background of the paper. In Section 2, we present the necessary background on the Suzuki Hopf algebras. In Section 3, all Hopf algebra automorphisms of the Suzuki Hopf algebras are calculated. \section{The Suzuki Hopf algebras $A_{Nn}^{\mu\lambda}$} Let $\Bbbk$ be an algebraically closed field of characteristic $0$; this assumption is made because our results rely on the classification of simple Yetter-Drinfeld modules over the Suzuki Hopf algebras \cite{Shi2020even, Shi2020odd}, which is carried out over an algebraically closed field of characteristic $0$. Suzuki introduced a family of cosemisimple Hopf algebras $A_{Nn}^{\mu\lambda}$, parametrized by integers $N\geq 1$, $n\geq 2$ and $\mu$, $\lambda=\pm 1$, and investigated various properties and structures of them \cite{Suzuki1998}. Wakui studied the Suzuki algebras from different perspectives \cite{Wakui2010a, Wakui2019, Wakui2003}.
The author studied the Nichols algebras over simple Yetter-Drinfeld modules of $A_{Nn}^{\mu\lambda}$, see \cite{Shi2020even, Shi2020odd}. The Suzuki Hopf algebra $A_{Nn}^{\mu\lambda}$ is generated by $x_{11}$, $x_{12}$, $x_{21}$, $x_{22}$ subject to the relations: \begin{align*} &x_{11}^2=x_{22}^2,\quad x_{12}^2=x_{21}^2,\quad \chi _{21}^n=\lambda\chi _{12}^n, \quad \chi _{11}^n=\chi _{22}^n,\\ &x_{11}^{2N}+\mu x_{12}^{2N}=1,\quad x_{ij}x_{kl}=0\,\, \text{whenever $i+j+k+l$ is odd}, \end{align*} where $\chi _{11}^m$, $\chi _{12}^m$, $\chi _{21}^m$ and $\chi _{22}^m$ are defined as follows for $m\in \Bbb Z^+$: $$\chi _{11}^m:=\overbrace{x_{11}x_{22}x_{11}\ldots\ldots }^{\textrm{$m$ }},\quad \chi _{22}^m:=\overbrace{x_{22}x_{11}x_{22}\ldots\ldots }^{\textrm{$m$ }},$$ $$\chi _{12}^m:=\overbrace{x_{12}x_{21}x_{12}\ldots\ldots }^{\textrm{$m$ }},\quad \chi _{21}^m:=\overbrace{x_{21}x_{12}x_{21}\ldots\ldots }^{\textrm{$m$ }}.$$ The comultiplication, counit and antipode of $A_{Nn}^{\mu\lambda}$ are given by \begin{equation}\label{eq5.3} \Delta (\chi_{ij}^k)=\chi_{i1}^k\otimes \chi_{1j}^k+\chi_{i2}^k\otimes \chi_{2j}^k,\quad \varepsilon(x_{ij})=\delta_{ij}, \quad S(x_{ij})=x_{ji}^{4N-1}, \end{equation} for $k\geq 1$, $i,j=1,2$. Let $\overline{i,i+j}=\{i,i+1,i+2,\cdots,i+j\}$ be an index set. Then the basis of $A_{Nn}^{\mu\lambda}$ can be represented by \begin{equation}\label{eq5.2} \left\{x_{11}^s\chi _{22}^t,\ x_{12}^s\chi _{21}^t \mid s\in\overline{1,2N}, t\in\overline{0,n-1} \right\}. \end{equation} Thus for $s,t\geq 0$ with $s+t\geq 1$, \begin{align*} \Delta (x_{11}^s\chi _{22}^t) &=x_{11}^s\chi _{22}^t\otimes x_{11}^s\chi _{22}^t +x_{12}^s\chi _{21}^t\otimes x_{21}^s\chi _{12}^t,\\ \Delta (x_{12}^s\chi _{21}^t) &=x_{11}^s\chi _{22}^t\otimes x_{12}^s\chi _{21}^t +x_{12}^s\chi _{21}^t\otimes x_{22}^s\chi _{11}^t. \end{align*} The cosemisimple Hopf algebra $A_{Nn}^{\mu\lambda}$ is decomposed to the direct sum of simple subcoalgebras such as \[A_{Nn}^{\mu\lambda} =\bigoplus_{g\in G}\Bbbk g\oplus\bigoplus_{ \substack{s\in\overline{0,N-1},\,\, t\in\overline{1,n-1}}}C_{st}, \] see \cite[Theorem 3.1]{Suzuki1998} and \cite[Proposition 5.5]{Wakui2010a}, where \begin{align*} G&=\left\{x_{11}^{2s}\pm x_{12}^{2s}, x_{11}^{2s+1}\chi_{22}^{n-1}\pm \sqrt{\lambda}x_{12}^{2s+1}\chi_{21}^{n-1}\mid s\in\overline{1,N}\right\},\\ C_{st}&=\Bbbk x_{11}^{2s}\chi_{11}^t+\Bbbk x_{12}^{2s}\chi_{12}^t+ \Bbbk x_{11}^{2s}\chi_{22}^t+\Bbbk x_{12}^{2s}\chi_{21}^t,\quad s\in\overline{1,N}, t\in\overline{1,n-1}. \end{align*} The set $\left\{\Bbbk g\mid g\in G\right\}\cup \left\{\Bbbk x_{11}^{2s}\chi_{11}^t +\Bbbk x_{12}^{2s}\chi_{21}^t \mid s\in\overline{1,N}, t\in\overline{1,n-1} \right\}$ is a full set of non-isomorphic simple left $A_{Nn}^{\mu\lambda}$-comodules, where the coactions of the comodules listed above are given by the coproduct $\Delta$. Denote the comodule $\Bbbk x_{11}^{2s}\chi_{11}^t +\Bbbk x_{12}^{2s}\chi_{21}^t $ by $\Lambda_{st}$. That is to say the comodule $\Lambda_{st}=\Bbbk w_1+\Bbbk w_2$ is defined as \begin{align*} \rho\left(w_1\right) = x_{11}^{2s}\chi_{11}^t\otimes w_1 +x_{12}^{2s}\chi_{12}^t\otimes w_2,\quad \rho\left(w_2\right) = x_{11}^{2s}\chi_{22}^t\otimes w_2 +x_{12}^{2s}\chi_{21}^t\otimes w_1 . \end{align*} We define the support of $\Lambda_{st}$ as \begin{align} {\rm Supp}(\Lambda_{st})=\Bbbk x_{11}^{2s}\chi_{11}^t+\Bbbk x_{12}^{2s}\chi_{12}^t+ \Bbbk x_{11}^{2s}\chi_{22}^t+\Bbbk x_{12}^{2s}\chi_{21}^t=C_{st}. 
\end{align} \section{Automorphism groups of the Suzuki Hopf algebras} \begin{lemma}\cite[Lemma 6.1]{MR1780094}\label{KeyLemma} Let $H$ be a Hopf algebra, $\psi: H\rightarrow H$ an automorphism of Hopf algebras, $V$, $W$ Yetter-Drinfeld modules over $H$. Let $V^\psi$ be the same space underlying $V$ but with action and coaction $$h\cdot_\psi v=\psi(h)\cdot v,\quad \rho^\psi (v) =\left(\psi^{-1}\otimes \mathrm{id}\right)\rho(v), \quad h\in H, v\in V.$$ Then $V^\psi$ is also a Yetter-Drinfeld module over $H$. If $T: V\rightarrow W$ is a morphism in ${}_H^H\mathcal{YD}$, then $T^\psi: V^\psi\rightarrow W^\psi$ also is. Moreover, the braiding $c: V^\psi\otimes W^\psi\rightarrow W^\psi \otimes V^\psi$ coincides with the braiding $c: V\otimes W\rightarrow W\otimes V$. \end{lemma} \begin{remark} We have ${\rm Supp} (V^\psi)=\psi({\rm Supp}(V))$. \end{remark} \begin{lemma} Let $n=2m+1$ be odd and $\psi$ be any Hopf algebra automorphism of $A_{Nn}^{\mu\lambda}$, then there exist some $s\in\overline{1,N}$ and $t\in\overline{1,n-1}$ such that $\psi(C_{N1})=C_{st}$. \end{lemma} \begin{proof} According to \cite[Theorem 3.1 and Table 1]{Shi2020odd}, there are exactly $8N^2m(m+1)$ pairwise non-isomorphic Yetter-Drinfeld modules over $A_{N\, 2m+1}^{\mu\lambda}$ of dimension 2: \begin{enumerate} \item $\mathscr{C}_{jk,p}^{st}$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $\frac j2\in\overline{1,m}$, $k\in\overline{0,N-1}$, $p\in\Bbb{Z}_2$, $\mathscr{C}_{jk,p}^{st}\cong \Lambda_{s, 2t+2}$ as comdules; \item $\mathscr{D}_{jk,p}^{st}$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $\frac j2\in\overline{1,m}$, $k\in\overline{0,N-1}$, $p\in\Bbb{Z}_2$, $\mathscr{D}_{jk,p}^{st}\cong \Lambda_{s, 2t+2}$ as comdules; \item $\mathscr{E}_{jk,p}^{s}$, $s\in\overline{1,N}$, $\frac j2\in\overline{1,m}$, $k\in\overline{0,N-1}$, $p\in\Bbb{Z}_2$, $\mathscr{E}_{jk,p}^{s}\cong \Bbbk g_s^+\oplus \Bbbk g_s^-$ as comodules; \item $\mathscr{F}_{k,p}^{st}$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $k\in\overline{0,N-1}$, $p\in\Bbb{Z}_2$, $\mathscr{F}_{k,p}^{st}\cong \Lambda_{s, 2t+2}$ as comodules; \item $\mathscr{G}_{k,p}^{st}$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $k\in\overline{0,N-1}$, $p\in\Bbb{Z}_2$, $\mathscr{G}_{k,p}^{st}\cong \Lambda_{s, 2t+1}$ as comodules; \item $\mathscr{H}_{jk,p}^{st}$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $\frac j2\in\overline{1,m}$, $k\in\overline{0,N-1}$, $p\in\Bbb{Z}_2$, $\mathscr{H}_{jk,p}^{st}\cong \Lambda_{s, 2t+1}$ as comodules; \item $\mathscr{I}_{jk,p}^{st}$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $\frac j2\in\overline{1,m}$, $k\in\overline{0,N-1}$, $p\in\Bbb{Z}_2$, $\mathscr{I}_{jk,p}^{st}\cong \Lambda_{s, 2t+1} $ as comodules in this situation; \item $\mathscr{I}_{jk,p}^{st}$, $s\in\overline{1,N}$, $t=m$, $\frac j2\in\overline{1,m}$, $k\in\overline{0,N-1}$, $p\in\Bbb{Z}_2$, $\mathscr{I}_{jk,p}^{st}\cong \Bbbk h_s^+\oplus \Bbbk h_s^-$ as comodules in this situation. \end{enumerate} Here $g_s^{\pm}=x_{11}^{2s}\pm x_{12}^{2s}$, $h_s^{\pm}=x_{11}^{2s+1}\chi_{22}^{2m} \pm\sqrt{\lambda} x_{12}^{2s+1}\chi_{21}^{2m}$ are group-likes of $A_{N\, 2m+1}^{\mu\lambda}$. Set $W=\mathscr{G}_{k,p}^{N0}$, then ${\rm Supp}(W)=C_{N1}=\Bbbk x_{11}+\Bbbk x_{12}+\Bbbk x_{21}+\Bbbk x_{22}$. Since $C_{N1}$ does not contain any group-likes, so $\psi(C_{N1})=\psi({\rm Supp}(W))={\rm Supp} (W^\psi)=C_{s\,\, 2t+1}$ or $C_{s\,\, 2t+2}$ for some $s\in\overline{1,N}$, $t\in\overline{0,m-1}$. 
\end{proof} \begin{lemma} Let $n=2m$ be even and $\psi$ be any Hopf algebra automorphism of $A_{Nn}^{\mu\lambda}$, then there exist some $s\in\overline{1,N}$ and $t\in\overline{1,n-1}$ such that $\psi(C_{N1})=C_{st}$. \end{lemma} \begin{proof} According to \cite[Theorem 3.1 and Table 1]{Shi2020even}, there are exactly $2N^2(4m^2-1)$ non-isomorphic Yetter-Drinfeld modules over $A_{N\, 2m}^{\mu\lambda}$ of dimension 2: \begin{enumerate} \item $\mathscr{B}_{01k}^s$, $s\in\overline{1,N}$, $k\in\overline{0,N-1}$, $\mathscr{B}_{01k}^s\cong \Bbbk g_s^+\oplus \Bbbk g_s^-$ as comodules; \item $\mathscr{C}_{ijk,p}^{st}$, $ij=00$ or $01$, $k\in\overline{0,N-1}$, $p\in \Bbb{Z}_2$, $s\in\overline{1,N}$, $t\in\overline{0,m-2}$, $\mathscr{C}_{ijk,p}^{st}\cong \Lambda_{s\, 2t+2}$ as comodules in this situation; \item $\mathscr{C}_{ijk,p}^{st}$, $i=0$, $j=\left\{\begin{array}{ll} i+1,&\text{if}\,\lambda=1,\\i,&\text{if}\,\lambda=-1,\end{array}\right.$ $k\in\overline{0,N-1}$, $s\in\overline{1,N}$, $p=0$, $t=m-1$, $\mathscr{C}_{ijk,p}^{st}\cong \Bbbk h_s^+\oplus \Bbbk h_s^-$ as comodules in this situation; \item $\mathscr{D}_{jk,p}^{st}$, $\frac j2\in\overline{1,m-1}$, $k\in\overline{0,N-1}$, $p\in \Bbb{Z}_2$, $s\in\overline{1, N}$, $t\in\overline{0,m-1}$, $\mathscr{D}_{jk,p}^{st}\cong \left\{\begin{array}{ll} \Lambda_{s\, 2t+2}, & t\neq m-1, \\ \Bbbk h_s^+\oplus \Bbbk h_s^-, & t=m-1, \\ \end{array}\right.$ as comodules; \item $\mathscr{E}_{jk,p}^{st}$, $\frac j2\in\overline{1,m-1}$, $k\in\overline{0,N-1}$, $p\in \Bbb{Z}_2$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $\mathscr{E}_{jk,p}^{st}\cong \left\{\begin{array}{ll} \Lambda_{s\, 2t}, & t\neq 0, \\ \Bbbk g_s^+\oplus \Bbbk g_s^-, & t=0, \\ \end{array}\right.$ as comodules; \item $\mathscr{G}_{jk,p}^{st}$, $\left\{\begin{array}{ll}\frac j2\in\overline{1,m-1}, &\text{if}\,\lambda=1,\vspace{1mm}\\ \frac {j+1}2\in\overline{1,m}, &\text{if}\,\lambda=-1, \end{array}\right.$ $k\in\overline{0,N-1}$, $p\in \Bbb{Z}_2$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $\mathscr{G}_{jk,p}^{st}\cong \Lambda_{s\, 2t+1}$ as comdules; \item $\mathscr{H}_{jk,p}^{st}$, $\left\{\begin{array}{ll}\frac j2\in\overline{1,m-1}, &\text{if}\,\lambda=1,\vspace{1mm}\\ \frac {j+1}2\in\overline{1,m}, &\text{if}\,\lambda=-1, \end{array}\right.$ $k\in\overline{0,N-1}$, $p\in \Bbb{Z}_2$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $\mathscr{H}_{jk,p}^{st}\cong \Lambda_{s\, 2t+1}$ as comdules; \item $\mathscr{P}_{ijk,p}^{st}$, $ij=00$ or $01$, $k\in\overline{0,N-1}$, $p\in \Bbb{Z}_2$, $s\in\overline{1,N}$, $t\in\overline{0,m-1}$, $\mathscr{P}_{ijk,p}^{st}\cong \Lambda_{s\, 2t+1}$ as comdules. \end{enumerate} Here $g^{\pm}_s=x_{11}^{2s}\pm x_{12}^{2s}$, $h^{\pm}_s=x_{11}^{2s}\chi_{11}^{2m} \pm\sqrt{\lambda} x_{12}^{2s}\chi_{12}^{2m}$ are goup-likes of $A_{N\, 2m}^{\mu\lambda}$. Set $W=\mathscr{G}_{k,p}^{N0}$, then ${\rm Supp}(W)=C_{N1}=\Bbbk x_{11}+\Bbbk x_{12}+\Bbbk x_{21}+\Bbbk x_{22}$. Since $C_{N1}$ does not contain any group-likes, so $\psi(C_{N1})=\psi({\rm Supp}(W))={\rm Supp} (W^\psi)=C_{st}$ for some $s\in\overline{1,N}$, $t\in\overline{1,n-1}$. \end{proof} Let $\phi$ be any Hopf algebra automorphism of $A_{Nn}^{\mu\lambda}$, then $\phi(C_{N1})=C_{st}$ for some $s\in\overline{1,N}$, $t\in\overline{1,n-1}$. 
So we can suppose that \begin{align*} \phi(x_{11})&=a_1x_{11}^{2s}\chi_{11}^t+(1-a_1)x_{11}^{2s}\chi_{22}^t +a_2x_{12}^{2s}\chi_{12}^t+a_3x_{12}^{2s}\chi_{21}^t,\\ \phi(x_{22})&=b_1x_{11}^{2s}\chi_{11}^t+(1-b_1)x_{11}^{2s}\chi_{22}^t +b_2x_{12}^{2s}\chi_{12}^t+b_3x_{12}^{2s}\chi_{21}^t,\\ \phi(x_{12})&=d_1x_{11}^{2s}\chi_{11}^t-d_1 x_{11}^{2s}\chi_{22}^t +d_2x_{12}^{2s}\chi_{12}^t+d_3x_{12}^{2s}\chi_{21}^t,\\ \phi(x_{21})&=e_1x_{11}^{2s}\chi_{11}^t-e_1 x_{11}^{2s}\chi_{22}^t +e_2x_{12}^{2s}\chi_{12}^t+e_3x_{12}^{2s}\chi_{21}^t, \end{align*} where $a_i$, $b_i$, $d_i$, $e_i\in\Bbbk$ for $i\in\overline{1,3}$. \begin{lemma} \begin{enumerate} \item In case that $t$ is odd, then \begin{align} \phi\left(x_{11}^2\right)=\phi\left(x_{22}^2\right) &\Leftrightarrow \left\{\begin{array}{ll} (a_1-b_1)(1-a_1-b_1)=0,&\\ a_2^2+a_3^2=b_2^2+b_3^2,&\\ a_2a_3=b_2b_3, &\text{if}\,\, t\neq \frac n2,\\ (1+\lambda)a_2a_3=(1+\lambda)b_2b_3, &\text{if}\,\, t= \frac n2. \end{array}\right. \label{eq1}\\ \phi\left(x_{12}^2\right)=\phi\left(x_{21}^2\right) &\Leftrightarrow \left\{\begin{array}{ll} d_1^2=e_1^2, &\vspace{1mm}\\ d_2^2+d_3^2=e_2^2+e_3^2,&\\ d_2d_3=e_2e_3, &\text{if}\,\, t\neq \frac n2,\\ (1+\lambda)d_2d_3=(1+\lambda)e_2e_3, &\text{if}\,\, t= \frac n2. \end{array}\right. \label{eq2}\\ \phi(x_{11}x_{12})=0 &\Leftrightarrow \left\{\begin{array}{ll} (1-2a_1)d_1=0,\quad a_2d_2+a_3d_3=0,&\\ d_1=0,\quad a_2d_3=0,\quad a_3d_2=0,&\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ a_2d_3+\lambda a_3d_2=0, &\text{if}\,\, t=\frac n2. \end{array}\right. \label{eq3}\\ \phi(x_{22}x_{12})=0 &\Leftrightarrow \left\{\begin{array}{ll} (1-2b_1)d_1=0,\quad b_2d_2+b_3d_3=0,&\\ d_1=0,\quad b_2d_3=0,\quad b_3d_2=0,&\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ b_2d_3+\lambda b_3d_2=0, &\text{if}\,\,t=\frac n2. \end{array}\right. \label{eq4}\\ \phi(x_{11}x_{21})=0 &\Leftrightarrow \left\{\begin{array}{ll} (1-2a_1)e_1=0,\quad a_2e_2+a_3e_3=0,&\\ e_1=0,\quad a_2e_3=0,\quad a_3e_2=0,&\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ a_2e_3+\lambda a_3e_2=0, &\text{if}\,\,t=\frac n2. \end{array}\right. \label{eq5}\\ \phi(x_{22}x_{21})=0 &\Leftrightarrow \left\{\begin{array}{ll} (1-2b_1)e_1=0,\quad b_2e_2+b_3e_3=0,&\\ e_1=0,\quad b_2e_3=0,\quad b_3e_2=0,&\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ b_2e_3+\lambda b_3e_2=0, &\text{if}\,\,t=\frac n2. \end{array}\right. \label{eq6} \end{align} \item In case that $t$ is even, then \begin{align} \phi\left(x_{11}^2\right)=\phi\left(x_{22}^2\right) &\Leftrightarrow \left\{\begin{array}{ll} (a_1-b_1)(1-a_1-b_1)=0, &\\ a_2a_3=b_2b_3,&\\ a_1=b_1,\quad a_2^2=b_2^2,\quad a_3^2=b_3^2, &\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ a_2^2+\lambda a_3^2=b_2^2+\lambda b_3^2, &\text{if}\,\, t= \frac n2. \end{array}\right. \label{eq7}\\ \phi\left(x_{12}^2\right)=\phi\left(x_{21}^2\right) &\Leftrightarrow \left\{\begin{array}{ll} d_1^2=e_1^2, &\\ d_2d_3=e_2e_3,&\\ d_2^2=e_2^2,\quad d_3^2=e_3^2, &\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ d_2^2+\lambda d_3^2=e_2^2+\lambda e_3^2, &\text{if}\,\, t= \frac n2. \end{array}\right. \label{eq8}\\ \phi(x_{11}x_{12})=0 &\Leftrightarrow \left\{\begin{array}{ll} (1-2a_1)d_1=0,\quad a_2d_3+a_3d_2=0,&\\ d_1=0,\quad a_2d_2=0,\quad a_3d_3=0,&\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ a_2d_2+\lambda a_3d_3=0, &\text{if}\,\,t=\frac n2. 
\end{array}\right.\label{eq9}\\ \phi(x_{22}x_{12})=0 &\Leftrightarrow \left\{\begin{array}{ll} (1-2b_1)d_1=0,\quad b_2d_3+b_3d_2=0,&\\ d_1=0,\quad b_2d_2=0,\quad b_3d_3=0,&\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ b_2d_2+\lambda b_3d_3=0, &\text{if}\,\,t=\frac n2. \end{array}\right.\label{eq10}\\ \phi(x_{11}x_{21})=0 &\Leftrightarrow \left\{\begin{array}{ll} (1-2a_1)e_1=0,\quad a_2e_3+a_3e_2=0,&\\ e_1=0,\quad a_2e_2=0,\quad a_3e_3=0,&\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ a_2e_2+\lambda a_3e_3=0, &\text{if}\,\,t=\frac n2. \end{array}\right. \label{eq11}\\ \phi(x_{22}x_{21})=0 &\Leftrightarrow \left\{\begin{array}{ll} (1-2b_1)e_1=0,\quad b_2e_3+b_3e_2=0,&\\ e_1=0,\quad b_2e_2=0,\quad b_3e_3=0,&\text{if}\,\, t\neq \frac n2,\vspace{1mm}\\ b_2e_2+\lambda b_3e_3=0, &\text{if}\,\,t=\frac n2. \end{array}\right.\label{eq12} \end{align} \end{enumerate} \end{lemma} \begin{lemma} \begin{align} \Delta\phi\left(x_{11}^{2s}\chi_{11}^t\right) =(\phi\otimes \phi)\Delta\left(x_{11}^{2s}\chi_{11}^t\right) &\Leftrightarrow \left\{\begin{array}{ll} a_1^2+d_1e_1=a_1,& a_2a_1+d_2e_1=0,\\ a_1a_2+d_1e_2=a_2,& a_2^2+d_2e_2=0,\\ a_1a_3+d_1e_3=0,& a_2a_3+d_2e_3=a_1,\\ a_3a_1+d_3e_1=a_3,& a_3a_2+d_3e_2=1-a_1,\\ a_3^2+d_3e_3=0.& \end{array}\right. \label{eq13}\\ \Delta\phi\left(x_{12}^{2s}\chi_{12}^t\right) =(\phi\otimes \phi)\Delta\left(x_{12}^{2s}\chi_{12}^t\right) &\Leftrightarrow \left\{\begin{array}{ll} a_1d_1+d_1b_1=d_1, & a_1d_2+d_1b_2=d_2,\\ a_1d_3+d_1b_3=0, & a_2d_1+d_2b_1=0,\\ a_2d_2+d_2b_2=0,&a_2d_3+d_2b_3=d_1,\\ a_3d_1+d_3b_1=d_3,&a_3d_2+d_3b_2=-d_1,\\ a_3d_3+d_3b_3=0.& \end{array}\right. \label{eq14} \end{align} \end{lemma} \begin{theorem} \label{MainProof} The group of Hopf algebra automorphisms of $A_{Nn}^{\mu\lambda}$ is consist of the following automorphisms. \begin{enumerate} \item $\Psi_{d_2}^{s,t}$ is a Hopf algebra automorphism of $A_{Nn}^{\mu\lambda}$ such that \begin{align*} \Psi_{d_2}^{st}\left(x_{11}\right)&=x_{11}^{2s}\chi_{11}^t,\vspace{1mm} & \Psi_{d_2}^{st}\left(x_{22}\right)&=x_{11}^{2s}\chi_{22}^t,\vspace{1mm}\\ \Psi_{d_2}^{st}\left(x_{12}\right)&=d_2x_{12}^{2s}\chi_{12}^t,\vspace{1mm}& \Psi_{d_2}^{st}\left(x_{21}\right)&=d_2^{-1}x_{12}^{2s}\chi_{21}^t. \end{align*} And one of the following conditions is satisfied. \begin{enumerate} \item $t$ is odd, $t\neq \frac n2$, $(N, 2s+t)=1$, $d_2^{2N}=1$ in case that $n$ is even or $d_2^2=1$ in case that $n$ is odd. \item $t=\frac n2=1$, $d_2^{2N}=1$ and $(2s+1, N)=1$. \end{enumerate} \item $\Phi_{d_3}^{s,t}$ is a Hopf algebra automorphism of $A_{Nn}^{\mu\lambda}$ such that \begin{align*} \Phi_{d_3}^{s,t}\left(x_{11}\right)&=x_{11}^{2s}\chi_{22}^t,\vspace{1mm}& \Phi_{d_3}^{s,t}\left(x_{22}\right)&=x_{11}^{2s}\chi_{11}^t,\vspace{1mm}\\ \Phi_{d_3}^{s,t}\left(x_{12}\right)&=d_3x_{12}^{2s}\chi_{21}^t,\vspace{1mm}& \Phi_{d_3}^{s,t}\left(x_{21}\right)&=d_3^{-1}x_{12}^{2s}\chi_{12}^t. \end{align*} And one of the following conditions is satisfied. \begin{enumerate} \item $t$ is odd, $t\neq \frac n2$, $(N, 2s+t)=1$, $d_3^{2N}=1$ in case that $n$ is even or $d_3^2=1$ in case that $n$ is odd. \item $t=\frac n2=1$, $d_3^{2N}=1$ and $(2s+1, N)=1$. 
\end{enumerate} \item If $n=2$, $\lambda=1$, $\mu=1$ and $(2s+1,N)=1$, $\theta_1,\theta_2\in\{\pm 1\}$, then $\Gamma_{\theta_1,\theta_2,s}$ is a Hopf algebra automorphism of $A_{Nn}^{\mu\lambda}$ such that \begin{align*} \Gamma_{\theta_1,\theta_2,s}(x_{11})&=\frac12\left[x_{11}^{2s}\left(\chi_{11}^t+\chi_{22}^t\right) +\theta_2 x_{12}^{2s}\left(\chi_{12}^t+\chi_{21}^t\right)\right],\\ \Gamma_{\theta_1,\theta_2,s}(x_{22})&=\frac12\left[x_{11}^{2s}\left(\chi_{11}^t+\chi_{22}^t\right) -\theta_2 x_{12}^{2s} \left(\chi_{12}^t+\chi_{21}^t\right)\right],\\ \Gamma_{\theta_1,\theta_2,s}(x_{12})&=\frac{\theta_1}2\left[x_{11}^{2s}\left(\chi_{11}^t-\chi_{22}^t\right) -\theta_2 x_{12}^{2s} \left(\chi_{12}^t-\chi_{21}^t\right)\right],\\ \Gamma_{\theta_1,\theta_2,s}(x_{21})&=\frac{\theta_1}2\left[x_{11}^{2s}\left(\chi_{11}^t-\chi_{22}^t\right) +\theta_2 x_{12}^{2s} \left(\chi_{12}^t-\chi_{21}^t\right)\right]. \end{align*} \end{enumerate} \end{theorem} \begin{proof} (1) Suppose $t$ is odd and $t\neq \frac n2$, then $d_1=e_1=0$. The first line of the formula \eqref{eq13} implies that $a_1=0$ or $1$. \begin{enumerate}[(a)] \item Suppose $a_1=1$, then $a_2=a_3=0=d_2e_2$ from the formula \eqref{eq13} and $d_3=0$ from the second line of the formula \eqref{eq14}. $d_1=d_3=0$ implies that $d_2\neq 0$, so $e_2=0$. The second line of the formula \eqref{eq1} $b_2^2+b_3^2=a_2^2+a_3^2=0$ implies that $b_2=b_3=0$. Since $d_1=0\neq d_2$, the second line of the formula \eqref{eq14} implies $b_1=0$. Now it is easy to see that $\phi=\Psi_{d_2}^{s,t}$. It is easy to check that \begin{align*} \phi\left(x_{11}^{2k}+\mu x_{12}^{2k}\right) =x_{11}^{2k(2s+t)}+\mu d_2^{2k}x_{12}^{2k(2s+t)}&\Rightarrow (N, 2s+t)=1, d_2^{2N}=1,\\ S\phi(x_{12})=\phi S(x_{12})&\Rightarrow d_2^{4N-2}=1. \end{align*} $\phi\left(\chi_{12}^n\right)=\lambda \phi\left(\chi_{21}^n\right)$ holds in case that $n$ is even, or $d_2^2=1$ and $n$ is odd. \item Suppose $a_1=0$, then $a_2=a_3=0$, $d_3e_2=1$, $d_2=0=e_3$ from the formula \eqref{eq13}. $b_2=b_3=0$ since $b_2^2+b_3^2=a_2^2+a_3^2=0$. According to the fourth line of formula \eqref{eq14}, $b_1=1$. Hence $\phi=\Phi_{d_3}^{s,t}$. It is easy to check that \begin{align*} \phi\left(x_{11}^{2k}+\mu x_{12}^{2k}\right) =x_{11}^{2k(2s+t)}+\mu d_3^{2k}x_{12}^{2k(2s+t)} &\Rightarrow (N, 2s+t)=1, d_3^{2N}=1,\\ S\phi=\phi S &\Rightarrow d_3^{4N}=1. \end{align*} $\phi\left(\chi_{12}^n\right)=\lambda \phi\left(\chi_{21}^n\right)$ holds in case that $n$ is even, or $d_3^2=1$ and $n$ is odd. \end{enumerate} (2) Suppose $t$ is odd and $t= \frac n2$. If $a_1\neq \frac12$, then $d_1=e_1=0$. The first line of the formula \eqref{eq13} implies that $a_1=0$ or $1$. \begin{enumerate}[(a)] \item Suppose that $a_1=1$, it is easy to see that $\phi=\Psi_{d_2}^{s,t}$. Since $\phi(x_{11}x_{22})=\phi(x_{22}x_{11})$ and $\phi(x_{12}x_{21})=\phi(\lambda x_{21}x_{12})$, we have $n=2$. Let $\theta=\pm 1$, and $k\in\overline{1,N}$, then \begin{align*} \phi\left(x_{11}^2+\theta x_{12}^2\right) &=x_{11}^{2(2s+1)}+\theta d_2^2 x_{12}^{2(2s+1)},\\ \phi\left(x_{11}^{2k}+\mu x_{12}^{2k}\right) &= \phi\left[\left(x_{11}^{2}+x_{12}^{2}\right)^{k-1} \left(x_{11}^2+\mu x_{12}^2\right)\right] =x_{11}^{2(2s+1)k}+\mu d_2^{2k}x_{12}^{2(2s+1)k}. \end{align*} So $d_2^{2N}=1$ and $(2s+1, N)=1$. $\phi S=S\phi\Rightarrow d_2^{4N}=1$. \item Suppose that $a_1=0$, it is easy to see that $\phi=\Phi_{d_3}^{s,t}$. Since $\phi(x_{11}x_{22})=\phi(x_{22}x_{11})$ and $\phi(x_{12}x_{21})=\phi(\lambda x_{21}x_{12})$, we have $n=2$. 
Let $\theta=\pm 1$, and $k\in\overline{1,N}$, then \begin{align*} \phi\left(x_{11}^2+\theta x_{12}^2\right)&=x_{11}^{4s+2}+\theta d_3^2 x_{12}^{4s+2},\\ \phi\left(x_{11}^{2k}+\mu x_{12}^{2k}\right) &= \phi\left[\left(x_{11}^{2}+x_{12}^{2}\right)^{k-1} \left(x_{11}^2+\mu x_{12}^2\right)\right] =x_{11}^{2(2s+1)k}+\mu d_3^{2k}x_{12}^{2(2s+1)k}. \end{align*} So $d_3^{2N}=1$ and $(2s+1, N)=1$. $\phi S=S\phi\Rightarrow d_3^{4N}=1$. \item Suppose that $a_1=\frac12$, then $b_1=\frac12$ from the formula \eqref{eq1}. The first line of the formula \eqref{eq13} implies $d_1e_1=a_1-a_1^2=\frac 14$. Since $d_1^2=e_1^2$, $d_1=e_1=\frac12$ or $d_1=e_1=-\frac12$. Denote $d_1=e_1=\frac{\theta_1}2$, $\theta_1=\pm 1$. Then the formulas \eqref{eq13} implies that \[ a_2=-d_2\theta_1=e_2\theta_1,\quad a_3=-e_3\theta_1=d_3\theta_1,\quad a_2a_3=\frac14. \] According to the formula \eqref{eq14}, we have $d_2=b_2\theta_1$ and $d_3=-b_3\theta_1$. From the second line of the formulas \eqref{eq3}, we can deduce $a_2^2=a_3^2$, which implies that $a_2=a_3=\frac{\theta_2}{2}$, $\theta_2=\pm 1$. Now we have $\phi=\Gamma_{\theta_1,\theta_2,s}$. Since $\phi(x_{11}x_{22})=\phi(x_{22}x_{11})$ and $\phi(x_{12}x_{21})=\phi(x_{21}x_{12})$, we have $n=2$ and $\lambda=1$. According to Lemma \ref{N2}, we have $\mu=1$ and $(2s+1,N)=1$. \end{enumerate} \par \noindent (3) Suppose $t$ is even and $t\neq \frac n2$, then $a_1=b_1$, $d_1=e_1=0$. The first line of the formulas \eqref{eq13} implies that $a_1=0$ or $1$. If $a_1=0$, then $a_2=a_3=0$ from the formulas \eqref{eq13}. So $b_1=b_2=b_3=0$. It is a contradiction since $\phi(x_{11})=\phi(x_{22})$. If $a_1=1$, then $a_2=a_3=0$ from the formulas \eqref{eq13}. So $b_1=1$, $b_2=b_3=0$. It is a contradiction. \par\noindent (4) Suppose $t$ is even and $t= \frac n2$. If $a_1\neq \frac12$, then $d_1=e_1=0$ from the formulas \eqref{eq9} and \eqref{eq11}. The first line of the formulas \eqref{eq13} implies that $a_1=0$ or $1$. \begin{enumerate}[(a)] \item Suppose $a_1=1$. From formulas \eqref{eq13} and \eqref{eq14}, it is easy to see that $\phi=\Psi_{d_2}^{s,t}$. Since $\phi(x_{11}x_{22})=\phi(x_{22}x_{11})$ and $\phi(x_{12}x_{21})=\phi(x_{21}x_{12})$, we have $n=2$ and $\lambda=1$. It is a contradiction. \item Suppose $a_1=0$. From formulas \eqref{eq13} and \eqref{eq14}, it is easy to see that $\phi=\Phi_{d_3}^{s,t}$. Since $\phi(x_{11}x_{22})=\phi(x_{22}x_{11})$ and $\phi(x_{12}x_{21})=\phi(x_{21}x_{12})$, we have $n=2$ and $\lambda=1$. It is a contradiction. \item Suppose $a_1=\frac12$, then $b_1=\frac{1}2$ from the first line of \eqref{eq7}. The first line of the formulas \eqref{eq13} implies $d_1e_1=a_1-a_1^2=\frac 14$. Since $d_1^2=e_1^2$, we have $d_1=e_1=\frac{\theta_1}{2}$, $\theta_1=\pm 1$. Then the formulas \eqref{eq13} implies that \[ a_2=-d_2\theta_1=e_2\theta_1,\quad a_3=-e_3\theta_1=d_3\theta_1,\quad a_2a_3=\frac14. \] According to the formula \eqref{eq14}, we have $d_2=b_2\theta_1$ and $d_3=-b_3\theta_1$. From the second line of the formulas \eqref{eq3}, we can deduce $a_2^2=a_3^2$, which implies that $a_2=a_3=\frac{\theta_2}{2}$, $\theta_2=\pm 1$. The third line of \eqref{eq9} deduce $\lambda=1$. Now we have $\phi=\Gamma_{\theta_1,\theta_2,s}$. Since $\phi(x_{11}x_{22})=\phi(x_{22}x_{11})$ and $\phi(x_{12}x_{21})=\phi(x_{21}x_{12})$, we have $n=2$. It is a contradiction. 
\end{enumerate} \end{proof} \begin{lemma}\label{N2} Let $\theta_1,\theta_2\in\{\pm 1\}$ and suppose that the Hopf algebra automorphism $\phi$ of $A_{N2}^{\mu+}$ is given by $\Gamma_{\theta_1,\theta_2,s}$. Then $\mu=1$ and $(2s+1, N)=1$. \end{lemma} \begin{proof} Let $\theta=\pm 1$ and $k\in\overline{1,N}$. Then \begin{align*} \phi\left(x_{11}^2+\theta x_{12}^2\right) &=\frac{1+\theta}2\left(x_{11}^{4s+2}+x_{12}^{4s+2}\right) +\frac{1-\theta}2\left(x_{11}^{4s+1}x_{22}+x_{12}^{4s+1}x_{21}\right),\\ \phi\left(x_{11}^{2k}+\mu x_{12}^{2k}\right) &= \phi\left[\left(x_{11}^{2}+x_{12}^{2}\right)^{k-1} \left(x_{11}^2+\mu x_{12}^2\right)\right]\\ &=\left(x_{11}^{(4s+2)(k-1)}+x_{12}^{(4s+2)(k-1)}\right) \phi\left(x_{11}^2+\mu x_{12}^2\right)\\ &=\left\{\begin{array}{ll} x_{11}^{2(2s+1)k}+x_{12}^{2(2s+1)k}, &\mu=1,\vspace{1mm}\\ x_{11}^{(4s+2)(k-1)+4s+1}x_{22}+x_{12}^{(4s+2)(k-1)+4s+1}x_{21}, &\mu=-1. \end{array}\right. \end{align*} So $1=\phi\left(x_{11}^{2k}+\mu x_{12}^{2k}\right)$ implies that $\mu=1$ and $(2s+1, N)=1$. Moreover, for any $l\in\Bbb{Z}^+$, by induction we have \[ \left(x_{11}+\theta x_{22}\right)^{2l+1} =2^{2l}x_{11}^{2l}\left(x_{11}+\theta x_{22}\right),\quad \left(x_{12}+\theta x_{21}\right)^{2l+1} =2^{2l}x_{12}^{2l}\left(x_{12}+\theta x_{21}\right), \] \begin{align*} \phi S(x_{11})&=\phi\left(x_{11}^{4N-1}\right)\\ &=\frac1{2^{4N-1}}\left[x_{11}^{2s(4N-1)}\left(x_{11} +x_{22} \right)^{4N-1} +\theta_2 x_{12}^{2s(4N-1)}\left(x_{12} +x_{21} \right)^{4N-1}\right]\\ &=\frac1{2}\left[x_{11}^{2s(4N-1)+4N-2}\left(x_{11} +x_{22} \right) +\theta_2 x_{12}^{2s(4N-1)+4N-2}\left(x_{12} +x_{21} \right)\right]\\ &=S\phi(x_{11}). \end{align*} Similarly, $\phi S(x_{22})=S\phi(x_{22})$, $\phi S(x_{12})=S\phi(x_{12})$, $\phi S(x_{21})=S\phi(x_{21})$. \end{proof} Denote $\Bbb{G}_n=\{\xi\in \Bbbk\mid \xi^n=1\}$. As a summary of Theorem \ref{MainProof}, we have \begin{theorem} \begin{enumerate} \item If $n> 2$, the group of Hopf algebra automorphisms of $A_{Nn}^{\mu\lambda}$ consists of $\Psi_{\xi}^{s,t}$, $\Phi_{\xi}^{s,t}$, where $s\in\overline{1, N}$, $t\in\overline{1, n-1}$, $t\neq \frac n2$, $t\equiv 1({\rm mod }\,\, 2)$, $(2s+t, N)=1$, $\xi\in\Bbb{G}_{2N}$ in case that $n$ is even and $\xi=\pm 1$ in case that $n$ is odd. \item If $(\mu, \lambda)\neq (1, 1)$, the group of Hopf algebra automorphisms of $A_{N2}^{\mu\lambda}$ consists of $\Psi_{\xi}^{s,1}$, $\Phi_{\xi}^{s,1}$, where $s\in\overline{1, N}$, $(2s+1, N)=1$, $\xi\in\Bbb{G}_{2N}$. \item The group of Hopf algebra automorphisms of $A_{N2}^{++}$ consists of $\Psi_{\xi}^{s,1}$, $\Phi_{\xi}^{s,1}$, $\Gamma_{\theta_1,\theta_2,s}$, where $s\in\overline{1, N}$, $(2s+1, N)=1$, $\xi\in\Bbb{G}_{2N}$, $\theta_1, \theta_2\in\{\pm 1\}$. \end{enumerate} \end{theorem} \begin{remark} The Kac-Paljutkin algebra $H_8$ \cite{MR0208401} is isomorphic to $A_{12}^{+-}$ as Hopf algebras. The automorphism group of $H_8$ was obtained in \cite{MR2879228} and was used to determine the isomorphism classes of Hopf algebras over $H_8$ \cite{Shi2019}. \end{remark}
\section{Introduction} \label{sec:introduction} Large-scale velocity gradients have been observed in clouds and their clumps and cores since the 1970s, and have usually been interpreted as evidence of rotation of the clouds \citep[e.g.,] [] {Belloche2013}. In particular, \citet{Fleck.Clark1981} found that the angular velocity, $\Omega$, has a dependence on the radius of the cloud of the form $\Omega (R) \propto R^{p}$, with $p \sim -2/3$. These authors attributed this result to the turbulent cascade generated from the galactic differential rotation, driven by the shearing motions in the galactic disk, while \citet{Goldsmith.Arquilla85} found that for their sample, the specific angular momentum (hereafter SAM), $j = J / M$, where $J$ is the total angular momentum (AM) and $M$ is the cloud's mass, scales with the radius as $R^{1.4}$, so that $\Omega \propto R^{-0.6}$, in agreement with the results of \citet{Fleck.Clark1981}. The interpretation by \citet{Goldsmith.Arquilla85} was that this relation is evidence of the loss of AM during the contraction and fragmentation of the clouds, suggesting that this loss is due to the redistribution of AM in the orbital motions of the fragments. These works illustrate what is known as the ``angular momentum problem'' \citep[e.g.,] [] {Spitzer78, Bodenheimer95}, which consists in the apparent loss of SAM from molecular cloud scales ($\sim 10$ pc) to the scales of dense cores ($\sim 0.1$ pc) and protostellar disks ($\lesssim 0.01$ pc), as illustrated in Fig.\ \ref{fig:data}. Therefore, the angular momentum problem is actually the problem of how AM is redistributed as a cloud contracts gravitationally and fragments. At the scales of giant molecular clouds, \citet{Imara.Blitz2011} have shown that the position angles of the velocity gradients of molecular gas and of the diffuse gas are highly divergent from each other and, when interpreted as rotation, have magnitudes implying that the SAM of the cloud is less than that of the surrounding medium. At the scale of dense cores, the decreasing nature of the SAM implied by the velocity gradients has been known since the work of \citet[] [hereafter G93] {Goodman+93}, with a scaling $j \sim R^{1.6}$. Moreover, assuming that {\it i)} the ratio $\beta$ of rotational to gravitational energy was constant for all cores, {\it ii)} a linewidth-size scaling relation of the form $\sigma \propto R^{1/2}$ \citep{Larson81} holds for the cores, and {\it iii)} the cores are in approximate virial equilibrium, G93 were able to analytically derive a scaling relation of the form $j \sim R^{3/2}$. On the even smaller scales of protoplanetary discs, \citet{Pineda+2019} have found that, for three sources in Perseus containing young stellar objects, the radial profile of SAM at radii $\sim 10^{3}$--$10^{4}$ AU scales as $R^{1.8 \pm 0.04}$. More recently, using a sample of 11 objects from the {\it IRAM CALYPSO} catalog of dense gas observations, \citet{Gaudel+2020} have found that the radial profile of SAM on scales from $50$ to $5000$ AU exhibits two regimes inside the protostellar envelope: one in which the SAM scales with radius as $j \propto R^{1.6 \pm 0.2}$ on scales $\gtrsim 10^3$ AU, and another in which $j$ tends to be constant, on scales $\sim 50$--$10^3$ AU. The angular momentum problem has received various tentative solutions within the context of different models for molecular clouds and star formation (SF).
In the model of magnetically-supported clouds, with star formation mediated by ambipolar diffusion \citep[see, e.g., the reviews by] [] {Shu+87, Mouschovias91}, the preferred mechanism to explain the redistribution of AM was magnetic braking \citep[see the review by] [and references therein] {Bodenheimer95}. This mechanism consists of the torsion applied, by the rotation of the structure, to the magnetic field lines that permeate both the structure and the surrounding medium, therefore transporting the AM of the structure outwards through the tension of the field lines. However, at present it is known that the magnetic support theory presents a series of problems, and has been superseded by the so-called ``gravoturbulent'' scenario, in which clouds are supported against their self-gravity by the pressure exerted by their internal supersonic turbulence \citep[e.g.,] [] {MacLow.Klessen2004}. The compressions, however, produce density enhancements that may locally exceed their own Jeans masses, and proceed to collapse \citep[e.g.,] [] {VS+03, MacLow.Klessen2004}. Within the context of an inhomogeneous, self-gravitating, rotating cloud, \citet{Larson84} proposed that gravitational torques exerted by non-radial gravitational forces originating from sheared density fluctuations could be responsible for the AM transfer. More recently, \citet{Li_P+2004} have shown that the cores formed in super-Alfv\'enic turbulent simulations follow the relation $j \sim R^{3/2}$. However, \citet{Jappsen.Klessen04} showed that non-magnetic, continuously-driven turbulent numerical simulations also exhibit a $j \sim R^{3/2}$ scaling, supporting the view that this relation, and the associated AM transfer mechanism, are not due to the magnetic field. Like \citet{Larson84}, these authors also invoked gravitational torques as the mechanism responsible for AM transfer, although no tests were performed to support this claim. On the other hand, \citet{VS+19} have recently discussed a number of problems of the gravoturbulent scenario. Chief among them is the inconsistency \citep{IM+16} between the velocity dispersion-size scaling expected for turbulence ($\sigma \propto R^{1/2}$) and the observed scaling, $\sigma \propto (\Sigma R)^{1/2}$, which involves in addition a dependence of the velocity dispersion on the column density, $\Sigma$. The latter scaling is expected for either virial equilibrium \citep{Heyer+09} or collapse \citep{BP+11}. However, because this scaling is observed across all scales from giant molecular clouds to massive dense cores \citep[e.g.,] [] {Heyer+09, BP+11, Kauffmann+13, Leroy+15, Miville+17, Traficante+18, BP+18}, it does not appear feasible that {\it all} size and density scales are virialized while simultaneously producing (non-virialized) density fluctuations in their interiors. Therefore, it has been suggested that molecular clouds and their substructures are all in a state of non-homologous gravitational contraction and fragmentation \citep[e.g.,] [] {Hartmann_Burkert07, VS+09, VS+19, BP+11, IM+16}, in a regime of global, hierarchical collapse \citep[GHC;] [] {VS+19}. However, the origin of the observed $j$-$R$ scaling and the AM transfer mechanism have not been investigated within the context of the GHC scenario and numerical simulations.
This is the objective of the present work, in which we follow the evolution of the AM in dense clumps defined and tracked over time in different ways in a numerical simulation of GHC, in order to understand the mechanism of AM exchange and determine the degree of consistency with the observed scaling. The structure of the paper is as follows. In Sec.\ \ref{sec:AM_transf_mech} we first revisit the sources of torques acting on a finite-size fluid parcel with respect to a given origin, emphasizing the role of turbulent eddy viscosity. Next, in Sec.\ \ref{sec:observational data}, we collect data from several observational studies concerning the $j$-$R$ scaling for structures over two orders of magnitude in size, as a guideline for the expected scaling. In Sec.\ \ref{Sec:Numerical Data} we describe the main features of the numerical simulation used in this work and the prescriptions for defining and time-tracking the clumps. In Sec.\ \ref{sec:results} we present our measurements, and in Sec.\ \ref{sec:discussion} we discuss the implications of our results, revisit G93's derivation of the $j$-$R$\ scaling within the context of GHC, and discuss the possible origin of the near independence of $\beta$ from $R$. Finally, in Sec.\ \ref{sec:conclusions}, we present our main conclusions. \section{On the nature of the AM transfer mechanism: the available torques} \label{sec:AM_transf_mech} In order to put the remainder of the paper in context, we start by writing the equation governing the evolution of the AM of a fluid parcel of volume $V$ with respect to some coordinate origin. This is formally done by taking the cross product of the momentum conservation equation with the position vector ${\bm r}$ and integrating over $V$: \begin{align} \int_{V} {\bm r} \times \frac{\partial {(\rho {\bm u})}}{\partial t} dV = - \int_{V} {\bm r} \times \nabla \cdot (\rho {\bm u} {\bm u}) dV - \int_{V} {\bm r} \times \nabla P dV \nonumber & \\ - \int_{V} {\bm r} \times \rho \nabla \phi dV + \int_{V} {\bm r} \times \mu (\nabla^{2} {\bm u} + \nabla \nabla \cdot {\bm u}) dV \nonumber & \\ + \int_{V} {\bm r} \times \frac{1}{4 \pi} (\nabla \times {\bm B}) \times {\bm B} dV. \label{eq:total torque} \end{align} In this equation, the terms on the right-hand side are the torques acting on the fluid parcel, such as gravitational torques (third term), viscous torques (fourth term), pressure-gradient torques (second term), magnetic torques (last term), and what we shall call ``ram-pressure'' or ``hydrodynamic'' torques, given by the first term. This term originates from the advection term of the momentum equation, which in general represents the transfer of momentum in the $i$ direction by the velocity in the $j$ direction, where $i$ and $j$ are any two of the coordinate axes. In eq.\ (\ref{eq:total torque}), this translates to the torque exerted by these momentum exchanges when referred to some coordinate origin. When an averaging of the momentum equation is performed to separate the mean flow from the turbulent fluctuations, the nonlinear term gives rise to the appearance of the so-called Reynolds stress term, which is responsible for the loss of {\it linear} momentum from the mean motion \citep[see, e.g.,] [Ch.\ 4] {Lesieur08}. In principle, one can take the cross product with the averaged equations and then compute the torques due to the Reynolds stresses. Thus, the hydrodynamic torque term in eq.\ \eqref{eq:total torque} is related to the torques exerted by the Reynolds stresses. Another noteworthy feature of eq.
(\ref{eq:total torque}) is that it can be seen as an intermediate, vector expression between the scalar and tensor forms of the virial theorem (VT), since all of these involve products of the position vector with the momentum equation, integrated over volume. The scalar VT involves the dot product, to obtain the work done by the forces, while the tensor VT involves the direct product (dyadic) between ${\bm r}$ and the momentum equation. Equation (\ref{eq:total torque}) involves the cross product, giving a vector, which is the net torque on the fluid parcel $V$, although it also has dimensions of energy. The role of hydrodynamic and pressure gradient torques seems to have been somewhat neglected in the framework of molecular clouds, clumps and cores. In the study of accretion disks, the AM transfer mechanism constitutes the fundamental process underlying the ability of the disk to transfer mass to the star. Viscous torques are known to be far too small to be important \citep[e.g.,] [] {Hartmann09}, and therefore turbulent, Reynolds-stress torques are generally invoked \citep{SS73}. Since the actual turbulent velocity dispersion and eddy size scale are unknown in the disks, the effect of these torques is modeled via an eddy viscosity $\nu_v$, which is a coefficient relating the Reynolds stresses to the mean flow, and is given by \begin{equation} \nu_v = \alpha c_{\rm s} H, \label{eq:alpha_model} \end{equation} where $c_{\rm s}$ is the sound speed, $H$ is the scale height of the disk \citep{SS73}, and $\alpha$ is an adjustable parameter. However, since Keplerian disks are Rayleigh stable, the rotational instability cannot be expected to drive the turbulence, and alternative driving mechanisms must be sought, most notable among them being the magneto-rotational instability \citep{Balbus_Hawley91}. On the other hand, in molecular clouds and their clumps and cores, the AM transfer mechanism is not clear, since, similarly to the case of disks, the molecular viscosity is negligible. In a seminal paper, \citet{Larson84} argued against eddy viscosity on the basis that no mechanism capable of sustaining the turbulence in a cloud or clump was known at the time, and argued in favour of gravitational torques instead. These were assumed to arise from the gravitational force of the density fluctuations within the cloud or clump. However, at present there is a consensus that the clouds, clumps and cores are in general turbulent, so there is no shortage of turbulence in these objects, even if the precise mechanism responsible for driving it is still a matter of debate. One likely source of turbulence is the very gravitational contraction of these objects \citep[e.g.,] [] {Vazquez-Semadeni+98, Klessen_Hennebelle10, Robertson_Goldreich12, Murray_Chang15, Xu_Lazarian20, Guerrero.Vazquez2020}, which may operate from the scale of dense cores and downwards \citep[within the context of the gravoturbulent scenario of the clouds;] [] {MacLow.Klessen2004}, or even starting at the scale of whole GMCs \citep[within the context of the global hierarchical collapse scenario, GHC;] [] {VS+19}. \citet{Guerrero.Vazquez2020} have estimated through numerical simulations that turbulence driven by gravitational contraction may contain roughly half as much kinetic energy as the infall motions.
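In the discrete (SPH) representation employed later in this paper, the volume integrals in eq.\ (\ref{eq:total torque}) become sums over particles, so the net torque exerted by any single force about the centre of mass of a fluid parcel is simply $\sum_i m_i\,({\bm r}_i-{\bm r}_{\rm CM})\times{\bm a}_i$, with ${\bm a}_i$ the corresponding per-particle acceleration. The following Python fragment is only a schematic illustration of such an evaluation; it assumes that the per-particle accelerations of each force (e.g., gravity or the thermal-pressure gradient) have been extracted from the code output, which is not something we actually do in the present analysis.
\begin{verbatim}
import numpy as np

def torque_about_com(pos, mass, acc):
    """Net torque (= dJ/dt) about the centre of mass of a particle set,
    for one force contribution given as per-particle accelerations.

    pos  : (N, 3) particle positions
    mass : (N,)   particle masses
    acc  : (N, 3) accelerations due to a single force
    """
    r_cm = np.average(pos, axis=0, weights=mass)
    return np.sum(mass[:, None] * np.cross(pos - r_cm, acc), axis=0)

# Illustrative usage, with placeholder arrays acc_grav and acc_press:
#   tau_grav  = torque_about_com(pos, mass, acc_grav)
#   tau_press = torque_about_com(pos, mass, acc_press)
# Comparing the magnitudes of such terms would quantify which torque
# dominates the AM exchange of the parcel.
\end{verbatim}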
In the remainder of this paper, we will investigate the transfer of AM among fluid parcels in the cloud in SPH numerical simulations. Although we do not explicitly demonstrate that the mechanism is ram and thermal pressure gradient torques, there appears to be no {\it a priori} reason to rule them out either. In a future contribution we intend to compare simulations with and without self-gravity, in order to determine the role of each type of torque acting on the fluid. \section{The observed $j$-$R$\ scaling} \label{sec:observational data} \begin{figure} \includegraphics[width=\columnwidth]{j_R_data.png} \caption{The observed $j$-$R$\ relation for molecular structures of sizes $\sim 0.01$--10 pc. The solid line represents the best fit. The set of light-gray lines represents the 1-$\sigma$ error. } \label{fig:data} \end{figure} In order to obtain a statistically significant observational guideline against which to compare the $j$-$R$ relation we will obtain from our numerical simulation, in Fig.\ \ref{fig:data} we have compiled the measurements of $j$ presented by \citet{Goldsmith.Arquilla85}, \citet{Goodman+93}, \citet{Chen_Hope+2019b}, \citet{Chen_X+2007}, \citet{Caselli+2002}, \citet{Pigorov+2003}, and \citet{Tatematsu+16}, for clouds and clumps of sizes ranging from $\lesssim 0.01$ pc to $\gtrsim 10$ pc. A least squares fit to the data in this figure gives the expression \begin{equation} j = 10^{22.7 \pm 0.002} \left(\frac{R}{1 {\rm pc}}\right)^{1.52 \pm 0.06} {\rm cm}^2~ {\rm s}^{-1}, \label{eq:observational fit} \end{equation} which is represented by the black line in the figure. The associated $1\sigma$ error in the fitting parameters is indicated by the shaded region. It should be noted that these data were obtained from the measurement of velocity centroid gradients in the clouds, which are usually interpreted as rotation \citep{Kutner+77, Phillips99, Rosolowsky+03}, although they can just as well be interpreted as due to expansion, contraction or shear, or combinations thereof \citep{Belloche2013,Tobin_J+2012a}. In particular, it is important to note that at least part of this gradient {\it must} correspond to converging motions, as convergence of the velocity field is required by the continuity equation in order to produce the density enhancement constituting a clump. Therefore, in what follows, we shall assume that some undetermined but non-negligible fraction of the observed large-scale velocity gradient in the clouds corresponds to rotation, while the rest of the kinetic energy may correspond to convergence or to turbulent motions. Note also that the methods and molecular tracers used to obtain the values of the radius and SAM may differ between the observational samples collected in this section. Thus, for example, while \citet{Goodman+93} obtained the velocity gradient by fitting a solid-body rotation to the observed $v_{\rm LSR}$ map of a cloud, \citet{Tatematsu+16} measured the velocity gradient using position--velocity diagrams passing through core centers, and made sinusoidal fits against the position angle. This gives rise to the possibility that the same clump has two different values of $j$ and $R$ in two different samples, which implies that the same object may appear more than once in Fig.\ \ref{fig:data}.
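The fit in eq.\ (\ref{eq:observational fit}) is a standard least-squares regression in log-log space. A minimal Python sketch is given below; the mock arrays are placeholders generated around the quoted relation purely for illustration, and do not reproduce the actual measurements plotted in Fig.\ \ref{fig:data}.
\begin{verbatim}
import numpy as np

# Mock data standing in for the compiled measurements:
# radii in pc and specific angular momenta in cm^2 s^-1.
rng   = np.random.default_rng(0)
R_pc  = 10**rng.uniform(-2, 1, 60)                      # 0.01 -- 10 pc
j_cgs = 10**(22.7 + 1.5*np.log10(R_pc)
             + rng.normal(0.0, 0.3, R_pc.size))         # scatter about j ~ R^1.5

# Power-law fit j = 10^b (R / 1 pc)^m, performed in log-log space.
m, b = np.polyfit(np.log10(R_pc), np.log10(j_cgs), 1)
print(f"j ~ 10^{b:.2f} (R / 1 pc)^{m:.2f} cm^2 s^-1")
\end{verbatim}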
\section[]{Numerical data} \label{Sec:Numerical Data} In this section we describe the numerical simulation, the procedures we used to follow the evolution of the SAM of the selected clumps, and the way in which we determine the physical properties of the clumps. \subsection{The simulation} \label{subsec: Simulation} We use a simulation of decaying turbulence in the warm neutral atomic gas first presented in \citet{Heiner+15}. The simulation was performed using \textsc{Gadget-2} \citep{Springel+01}, a smoothed-particle hydrodynamics (SPH) code, using $296^{3} \approx 2.6 \times 10^{7}$ particles in a box of $256$ pc per side. With a constant mass per particle set at $0.06 M_{\odot}$, the total mass in the box is $1.58 \times 10^{6} M_{\odot}$. The initial density and temperature were set at $n(t=0) = 3\, {\rm cm}^{-3}$ and $T(t=0) = 730$ K, respectively. This initial density is intended to represent the density typical in a Galactic spiral arm, and the initial temperature corresponds to thermal equilibrium between heating and cooling at this density. In this version of the code, the prescription for sink particles from \citet{Jappsen+05} was used, setting the density threshold for sink particle formation at $3.2 \times 10^{6}\, {\rm cm}^{-3}$. Also included were the cooling and heating functions from \citet{KI02}, with the typographical correction given by \citet{Vazquez-Semadeni+07}, as well as the rpSPH algorithm from \citet{Abel11}, which improves the treatment of physical instabilities such as Kelvin-Helmholtz and Rayleigh-Taylor, eliminating some non-physical effects present in the standard SPH prescription. No prescription for stellar feedback is included, so we only considered clumps with a low enough sink particle content that the omission of feedback does not render them unrealistic (cf.\ Sec.\ \ref{subsec: Def and search}). The simulation is initially driven with purely solenoidal modes in the wavenumber range $1 \le k \le 4$ until $t=0.65$ Myr, and then left to decay. At that time, a maximum velocity dispersion of $\sigma \approx 18~ {\rm km~s}^{-1}$ is reached. Although this velocity dispersion is somewhat high compared to actual values \citep[$\sim 8$--$10~ {\rm km~s}^{-1}$; e.g.,] [] {HT03}, this is partially compensated by the purely solenoidal character of the fluctuations, which induces weaker compressions than a mixture of solenoidal and compressible modes \citep[e.g.,] [] {VS+96, Federrath+08}. In addition, by the timesteps at which the simulation is analysed, the velocity dispersion has dropped to $\sim 4~ {\rm km~s}^{-1}$. \subsection{Clump definition} \label{subsec: Def and search} In the context of this paper, we will use the term ``clump'' in a generic way to denote any overdensity above a specified threshold ($n_{\text{th}}$). Thus, clumps defined at the highest thresholds (and generally more compact) will correspond to ``cores'', while clumps defined at the lowest thresholds (and usually more extended) will correspond to ``clouds''. The clumps are initially defined using the algorithm introduced in \citet{Camacho+16}, as a ``connected'' set of SPH particles (i.e., particles lying within one another's smoothing lengths) above some specified density threshold, around a local maximum of density. Clumps defined in this way can contain substructures; i.e., a single ``cloud'' may contain several ``cores''.
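As a concrete (though simplified) illustration of this clump definition, the Python fragment below groups the particles above the density threshold into connected sets by linking any two such particles that lie within a fixed linking length of each other; this fixed length is a stand-in for the actual smoothing-length connectivity of \citet{Camacho+16}, the association with a local density maximum is omitted, and the array names are placeholders.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def find_clumps(pos, dens, n_th, link_length):
    """Group particles with dens > n_th into 'connected' sets, linking
    any two such particles closer than link_length. Returns a list of
    arrays with the indices of the member particles of each clump."""
    idx = np.where(dens > n_th)[0]
    tree = cKDTree(pos[idx])
    parent = np.arange(idx.size)

    def root(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in tree.query_pairs(link_length):
        parent[root(a)] = root(b)

    labels = np.array([root(i) for i in range(idx.size)])
    return [idx[labels == lab] for lab in np.unique(labels)]
\end{verbatim}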
Our procedure is similar to the {\sc Dendrograms} algorithm, except that we do not produce a structure tree with the ``lineage'' of the clumps, and the thresholds are arbitrary, rather than being set exactly at the level where a large structure fragments into smaller ones. \citet{Camacho+20} have shown that varying the threshold levels basically changes the types of object selected, but does not significantly affect the general trend of the ensemble of objects. \subsection{The numerical clump sample} \label{subsec:sample} We select four timesteps in the simulation after the formation of the first sink ($t=14.74$ Myr) to analyze the $j$--$R$ relation of a sample of clumps over the entire numerical box at each time. The timesteps correspond to $t=17.92$, $t=19.92$, $t=23.24$ and $t=25.23$ Myr. We apply the clump-finding algorithm at these times with thresholds $n_{\text{th}} = 10^{3}$, $3 \times 10^{3}$, $10^{4}$, $3 \times 10^{4}$, and $10^{5}$ cm${}^{-3}$, rejecting those clumps with fewer than 60 particles (1.5 times the number of particles within the smoothing radius) in order to guarantee that the retained ones are sufficiently resolved \citep{Bate_Burkert97}. This implies a mass of at least $3.6$ $M_{\odot}$ per clump. Furthermore, we only keep the clumps with column density $\Sigma \gtrsim 30$ M${}_{\odot}$pc${}^{-2}$, since both observations \citep[e.g.,] [] {Keto_Myers86, Leroy+15, Traficante+18} and simulations \citep[e.g.,] [] {Camacho+16, Mejia-Ibanes+2016} suggest that clumps with lower column densities are mostly dominated by turbulence, while above $\Sigma \gtrsim 30$ M${}_{\odot}$pc${}^{-2}$ gravity is probably dominant. \begin{figure} \includegraphics[width=\columnwidth]{box.png} \caption{Parallelepiped defined by the minimum and maximum coordinates along each axis of the SPH particles belonging to the clump (red dots). Any sinks (e.g., the cyan point) within this parallelepiped will be considered as a contribution to $M_*$ in eq.\ (\ref{eq:efficiency}).} \label{fig:box} \end{figure} Since the simulation does not include any form of feedback, it is necessary to establish a criterion to avoid including clumps in which the stellar content would be expected to alter the dynamics significantly. We thus restrict our sample to clumps whose star formation efficiency (SFE) satisfies \begin{equation} \centering \frac{M_{*}}{M_{\text{tot}}} \equiv \frac{M_{*}}{M_{*}+M_{\text{gas}}} < 30 \%, \label{eq:efficiency} \end{equation} where $M_{*}$ is the mass in stars (sinks). A sink will be associated with a clump if it is within the box defined by the minimum and maximum values of the positions of the particles on the three coordinate axes, and it will be considered in the calculation of the SFE as a contribution to $M_{*}$ (as can be seen in Fig. \ref{fig:box}). \subsection{Time-tracking of lagrangian particle sets and of regular clumps (overdensities)} \label{sec:time_tracking} In order to study the evolution of the SAM of the clumps and their constituent SPH particles, we use two different approaches. In the first, we consider a few randomly-chosen clumps (originally defined as connected overdensities in the flow) from the full sample, and follow the (fixed) set of their constituent particles over time. We refer to these as {\it lagrangian} SPH particle sets. The time tracking was performed over a few megayears both towards the past and towards the future of the time $t_{\rm def}$ at which the sets were defined. The tracking from the past was carried out over $2.65$ Myr.
The tracking to the future was carried out only until the last snapshot before a sink formed within the particle set. This was done because, once a sink forms, it begins to accrete both mass (SPH particles) and AM from the lagrangian set, and thus the variation of the AM ceases to be caused exclusively by exchanges with the set's neighbouring gas parcels. It should be noted that, for a threshold density of $n_{\text{th}} = 10^{3}$ cm${}^{-3}$, the smallest threshold used to define clumps in this work, the free fall time corresponds to $\sim 1.44$ Myr. Thus, the tracking time intervals we use guarantee that all clumps have ample time to evolve dynamically. The second approach is the traditional one, in which we define the clumps as connected overdensities at all times during the tracking. It is very important to note that, with this definition, {\it the clumps do not consist of the same particles at the various times}. In fact, the clumps, defined over the same density threshold $n_{\rm th}$ at the various times, tend to increase their mass. We considered three clumps at times $t=17.26$ and $17.93$ Myr. The characteristics of all the clumps tracked in time in this work are compiled in Table \ref{tab:all clumps}. \begin{table*} \caption{Characteristics of clumps tracked over time} \hspace{-1.1cm} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{} & \multicolumn{2}{c|}{Type} & \multicolumn{2}{c|}{Tracking} \\ \hline Name & $t_{\rm def}$ (Myr) & $n_{\rm th}$ (cm${}^{-3}$) & \hspace{-1cm} \begin{tabular}[c]{@{}c@{}}Lagrangian \\ set\end{tabular} & \hspace{-1cm} \begin{tabular}[c]{@{}c@{}}Conected \\ clump \\ above \\ $n_{\rm th}$\end{tabular} & \hspace{-1cm} \begin{tabular}[c]{@{}c@{}}From \\ past ($t<t_{\rm def}$) \\ to \\ present ($t=t_{\rm def}$)\end{tabular} & \hspace{-1cm} \begin{tabular}[c]{@{}c@{}}From \\ present ($t=t_{\rm def}$)\\ to \\ future ($t>t_{\rm def}$)\end{tabular} \\ \hline C1 & $19.92$ & $10^{3}$ & X & & X & \\ \hline C2 & $19.92$ & $3 \times 10^{3}$ & X & & X & \\ \hline C3 & $19.92$ & $10^{4}$ & X & & X & \\ \hline C4 & $19.92$ & $ 3 \times 10^{4}$ & X & & X & \\ \hline C5 & $19.92$ & $10^{5}$ & X & & X & \\ \hline C6 & $25.23$ & $10^{3}$ & X & & X & \\ \hline C7 & $25.23$ & $ 3 \times 10^{3}$ & X & & X & \\ \hline C8 & $25.23$ & $10^{4}$ & X & & X & \\ \hline C9 & $25.23$ & $3 \times 10^{4}$ & X & & X & \\ \hline C10 & $25.23$ & $10^{5}$ & X & & X & \\ \hline C11 & $17.26$ & $10^{3}$ & X & & & X \\ \hline C12 & $19.92$ & $3 \times 10^{3}$ & X & & & X \\ \hline C13 & $19.92$ & $10^{4}$ & X & & & X \\ \hline C14 & $23.24$ & $3 \times 10^{4}$ & X & & & X \\ \hline C15 & $23.24$ & $10^{3}$ & X & & & X \\ \hline C16 & $17.26$ & $3 \times 10^{3}$& & X & & X \\ \hline C17 & $17.93$ & $3 \times 10^{3}$ & & X & & X \\ \hline C18 & $17.93$ & $3 \times 10^{3}$ & & X & & X \\ \hline \multicolumn{7}{l}{$t_{\rm def}$: definition time; $n_{\rm th}$: definition threshold density} \end{tabular} \label{tab:all clumps} \end{table*} As an illustration of the appearance and mass distribution of the clumps that we will be studying in this work, in Fig.\ \ref{fig:muestra} we show a sample of hierarchically-nested clumps (clumps C1-C5 in Table \ref{tab:all clumps}) that were defined through different density thresholds from $n_{\rm th} = 10^3 {\rm cm}^{-3}$ to $n_{\rm th} = 10^5 {\rm cm}^{-3}$ at time $t = 19.92 $ Myr in the simulation. It can be seen that the clumps span sizes from over 10 to a few tenths of parsec across this density range. 
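Operationally, the lagrangian tracking only requires storing the identifiers of the member SPH particles at $t_{\rm def}$ and selecting the same identifiers in every other snapshot. The Python fragment below sketches this step; the snapshot structure (a dictionary with \texttt{time}, \texttt{ids}, \texttt{pos} and \texttt{vel} entries) is an assumed placeholder for the actual \textsc{Gadget-2} outputs, not the format we read in practice.
\begin{verbatim}
import numpy as np

def track_lagrangian_set(member_ids, snapshots):
    """Follow a fixed set of SPH particle IDs (defined at t_def) through
    a list of snapshots; each snapshot is assumed to be a dict with keys
    'time', 'ids', 'pos' and 'vel' holding the corresponding arrays."""
    member_ids = np.asarray(member_ids)
    history = []
    for snap in snapshots:
        sel = np.isin(snap['ids'], member_ids)
        history.append({'time': snap['time'],
                        'pos':  snap['pos'][sel],
                        'vel':  snap['vel'][sel]})
    return history
\end{verbatim}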
\begin{figure} \includegraphics[width=\columnwidth]{muestra_paper} \caption{Sequence of hierarchically-nested clumps (clumps C1-C5 in Table \ref{tab:all clumps}) defined at time $t=19.92$ Myr at various density thresholds: (a) $n_{\text{th}} = 10^{3} n_{0}$, (b) $n_{\text{th}} = 3 \times 10^{3} n_{0}$, (c) $n_{\text{th}} = 10^{4} n_{0}$, (d) $n_{\text{th}} = 3 \times 10^{4} n_{0}$ and (e) $n_{\text{th}} = 10^{5} n_{0}$. All clumps are sub-structures of the same cloud. The axes represent the size in pc.} \label{fig:muestra} \end{figure} \subsection{Size estimation} \label{subsec:radius} As can be seen in Fig. \ref{fig:muestra}, clumps have an amorphous structure, so specifying a radius for them can be quite an ambiguous task. As a first approximation, we will calculate the radius of a clump as that of a sphere with the same volume; i.e., $R =(3V/4 \pi)^{1/3}$. In turn, for a discrete set of SPH particles, the total volume can be calculated as \citep{Camacho+16} \begin{equation} V = \sum_{i = 1}^{N_{c}} V_{i} = \sum_{i = 1}^{N_{c}} \frac{m_{p}}{\rho_{i}}=m_{p} \sum_{i = 1}^{N_{c}} \rho_{i}^{-1}. \label{eq:total vol} \end{equation} It should be noted that this way of computing the radius only applies at the time when all the member particles of the clump are ``connected''; that is, when all the particles are within the smoothing radius of other particles in the set. However, when following a set of particles over time (either to the past or the future), some particles of the set may ``disconnect'', ceasing to be within the smoothing radius of any other particle of the set. In this case, particles that were not part of the set initially will now be located in between the original member particles. We will refer to these non-member particles as ``intruders''. Note that the intruders may well be part of a new clump defined by means of a density threshold at the new time of observation, but they were not part of the originally defined clump. Once intruder particles have penetrated among the original set of member particles, the radius calculated from eq. (\ref{eq:total vol}) will not reflect the true extent of the volume containing the original member particles. In this case, we will determine the radius as the geometric average of half the maximum difference in position of the constituent particles of the clump along each of the coordinate axes; that is, \begin{equation} R \approx {\left( \frac{x_{max}-x_{min}}{2} \times \frac{y_{max}-y_{min}}{2} \times \frac{z_{max}-z_{min}}{2}\right)}^{1/3}. \label{eq:geometric average} \end{equation} \subsection{Calculation of the specific angular momentum} \label{subsec:j} The AM of each clump is calculated from the position and velocity vectors of the SPH particles making up the clump, with respect to its own center of mass, so that \begin{equation} \mathbf{J} = \sum_{i = 1}^{N_{c}} \mathbf{r}_{i,\text{CM}} \times \mathbf{p}_{i,\text{CM}} = m_{\text{p}} \sum_{i = 1}^{N_{c}} \mathbf{r}_{i,\text{CM}} \times \mathbf{v}_{i,\text{CM}}, \label{eq:ang_mom} \end{equation} where the subscript CM denotes quantities measured with respect to the clump's center of mass and the sum runs over the $N_{c}$ member particles. In this way, the SAM will be simply given by $\mathbf{J}/M_{\text{gas}}$. It is worth noting that, in some cases, while tracking a clump over time, its member SPH particles can cross the periodic boundaries of the numerical box, appearing at the other side. Failing to take this into account greatly increases the measured SAM and radius of the clump.
To avoid this problem, for each clump we perform a coordinate translation to move it to the center of the box. This solves the problem because, once placed at the center of the box at its definition time, no clump is large enough or moves fast enough to reach the boundary at any time during its evolution. \section[]{Results} \label{sec:results} In this section we investigate several aspects of the distribution and evolution of the AM. We first investigate the instantaneous distribution of the numerical clump sample in the $j$-$R$\ diagram, combining data from four different snapshots. Next, in order to understand the redistribution of the AM during fragmentation, we follow the evolution of the SAM of particle sets either as ``lagrangian sets'' (i.e., consisting of the same set of SPH particles at all times) or as connected regions above a threshold at all times. The tracking over time is done either from the past or towards the future of the time $t_{\rm def}$ at which the clumps are defined as connected regions above a threshold. \subsection{$j$-$R$\ relation for the numerical clump sample at fixed times} \label{subsec:fixed times} \begin{figure} \includegraphics[width=\columnwidth]{j_R_S3e1.png} \caption{The $j$-$R$\ relation for the numerical clump sample, considering clumps defined as connected regions above a threshold density $n_{\rm th}$, at times $t=17.92$, $t=19.92$, $t=23.24$ and $t=25.23$ Myr. The black line represents the fit to the observations compiled in Fig.\ \ref{fig:data}, while the red line is the fit for the simulation clump sample. The symbols are colored according to the density threshold used to define the clumps: $n_{\text{th}} = 10^{3}$ (blue), $3 \times 10^{3}$ (cyan), $10^{4}$ (magenta), $3 \times 10^{4}$ (green), and $10^{5}$ cm${}^{-3}$ (yellow). Clumps meet the condition $\Sigma > 30$ M${}_{\odot}$pc${}^{-2}$. The different symbols represent the time at which the clumps were defined, as given in the figure legend. The clumps in the numerical sample are seen to exhibit a trend similar to that of the observations. } \label{fig:fixed times} \end{figure} Figure \ref{fig:fixed times} shows the $j$-$R$\ relation for the numerical clump sample, with the radius $R$ calculated from the clump's volume according to eq.\ (\ref{eq:total vol}). In this figure, the color code corresponds to the density threshold used to define clumps, and the different symbols denote the clump definition time, $t_{\rm def}$. The red line represents the fit to the data for the numerical sample, given by \begin{equation} j = 10^{22.9 \pm 0.03} \left(\frac{R}{1 {\rm pc}}\right)^{1.52 \pm 0.06} {\rm cm}^2~ {\rm s}^{-1}, \label{eq:numerical fit} \end{equation} while the black line shows the fit to the observational sample shown in Fig. \ref{fig:data}. It can be seen that the numerical sample exhibits a slope and intercept remarkably close to those of the observational sample. This suggests that the GHC simulation adequately represents the AM redistribution processes taking place in actual clouds and clumps. \subsection{Specific angular momentum evolution of lagrangian particle sets} \label{subsec:lagrangian} We now discuss the temporal evolution of the SAM of a few lagrangian particle sets, each of which constituted a connected clump at the time $t_{\rm def}$ when they were defined. The particles are tagged, and we follow them as they advance either from the past ($t < t_{\rm def}$) or to the future ($t > t_{\rm def}$).
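For reference, the size estimates of eqs.\ (\ref{eq:total vol}) and (\ref{eq:geometric average}), the angular momentum of eq.\ (\ref{eq:ang_mom}), and the re-centering used to handle the periodic boundaries can be implemented along the lines of the following minimal numpy sketch. It assumes equal-mass particles and a cubic periodic box of side box\_size; all names are illustrative rather than taken from the actual analysis pipeline:
\begin{verbatim}
import numpy as np

def recenter(pos, box_size):
    """Translate the clump to the box center, unwrapping periodic images."""
    ref = pos[0]                                   # any member particle
    d = (pos - ref + 0.5 * box_size) % box_size - 0.5 * box_size
    return d + 0.5 * box_size                      # clump now near box center

def clump_radius(pos, rho, m_p, connected):
    if connected:                                  # eq. (total vol)
        volume = m_p * np.sum(1.0 / rho)
        return (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
    half = 0.5 * (pos.max(axis=0) - pos.min(axis=0))
    return np.prod(half) ** (1.0 / 3.0)            # eq. (geometric average)

def specific_angular_momentum(pos, vel, m_p):
    """j = |J| / M_gas, with J measured about the center of mass."""
    r = pos - pos.mean(axis=0)
    v = vel - vel.mean(axis=0)
    J = m_p * np.sum(np.cross(r, v), axis=0)       # eq. (ang_mom)
    return np.linalg.norm(J) / (m_p * len(pos))
\end{verbatim}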
When followed from the past, we then present the forward evolution starting from the earliest time reached, and finishing at the definition time $t_{\rm def}$. When followed to the future, we also show the forward evolution {\it starting} from $t_{\rm def}$. At every time during their evolution, we compute the AM of the sets of particles according to eq.\ (\ref{eq:ang_mom}), and their size according to eq.\ (\ref{eq:geometric average}), recalling that, at times $t \ne t_{\rm def}$, the particles in general do not constitute a connected set, and therefore the size of the region they occupy cannot be computed from the sum of their individual volumes as in eq.\ (\ref{eq:total vol}), and instead, eq.\ (\ref{eq:geometric average}) must be used (cf.\ Sec.\ \ref{subsec:radius}). \subsubsection{Tracking nested lagrangian particle sets from the past} \label{subsubsec: same cloud} \begin{figure*} \includegraphics[width=18cm, height=8cm]{muestra_one} \caption{Past evolution of the density and spatial distribution of the lagrangian set of particles making up clump C3 (see Table \ref{tab:all clumps}), defined at $t_{\rm def} = 19.92$ Myr (rightmost image) and $n_{\rm th} = 10^{4}\, {\rm cm}^{-3}$. The total time span is 2.66 Myr. The set of particles is seen to become denser by nearly two orders of magnitude and to contract from a size of several parsecs to $\sim 1$ pc. The colored dots show the SPH particles, with their color indicating their density. The grey dots indicate their projection on the bounding planes.} \label{fig:muestra one} \end{figure*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{am_evolution_sc2.png} \caption{Evolutionary tracks from the past (since $t=17.27$ Myr) up to $t_{\rm def} = 19.92$ Myr for the five lagrangian sets C1-C5, which make up the nested clumps shown in Fig.\ \ref{fig:muestra} at $t_{\rm def}$. The lagrangian sets evolve from right to left in this diagram, with the red filled square representing the final time, $t_{\rm def}$. The vertical lines mark the radius of each set at which its track changes slope, with their color indicating their corresponding lagrangian set. The black line represents the fit made to the observational sample shown in Fig.\ \ref{fig:data}. For the three smallest clumps, two main periods of evolution can be identified: an early period with an evolution along a slope similar to the observed $j$-$R$\ relation, and a late period with $j$ approximately constant. For the two largest clumps, only the $j \sim$ cst.\ stage is observed.} \label{fig:evo same clump} \end{figure} We consider the five hierarchically-nested clumps shown in Fig.\ \ref{fig:muestra} (clumps C1-C5 in Table \ref{tab:all clumps}), which were defined with five different density thresholds at time $t_{\rm def} = 19.92$ Myr, and we track them from the past---i.e., from $t < t_{\rm def}$---as lagrangian sets . The tracking was performed over a period of $2.65$ Myr. Figure \ref{fig:muestra one} thus shows the evolution of the clump defined at threshold $n_{th} = 10^{4} {\rm cm}^{-3}$ from $t = t_{\rm def} - 2.65 = 17.27$ Myr to $t_{\rm def}$. It can be seen that, at the earliest time, the set of particles was nearly two orders of magnitude less dense and several times more extended. The evolution of each lagrangian set in the $j$-$R$\ diagram over the 2.65 Myr prior to $t_{\rm def}$ is shown in Fig. \ref{fig:evo same clump}. 
In this figure, the clumps evolve from right to left, as they shrink and become denser; the colors correspond to different density thresholds, and the vertical lines indicate the radius at which the slope of the evolutionary track changes for the lagrangian set of the corresponding color. The red filled square at the extreme left of each track corresponds to $t_{\rm def}$, which in this case is the {\it final} time. The black line represents the fit to the observational data from Fig. \ref{fig:data}. For the three smallest particle sets, two main periods of evolution can be identified: an early period in which the evolutionary track has a slope similar to that of the observational relation, and a late one over which $j$ remains approximately constant. The transition to $j \sim $ cst. occurs when the clump is already very compact. The period of AM loss occurs when the lagrangian set of particles is very scattered, and presumably contains many ``intruders''. For the two largest clumps, defined at the lowest density thresholds, $j \sim$ cst. over the entire tracking period. The above result leads us to suggest that the change in slope observed in Fig.\ \ref{fig:evo same clump} may be due to the more scattered state of the clump member particles in the past, with a larger number of intruder particles interspersed among them. The member particles can then transfer their AM to the intruders, and be able to contract. Instead, at later times, the member particles form a denser, more connected ensemble without many partners to exchange their AM with, and thus tend to conserve it. \begin{figure} \includegraphics[width=\columnwidth]{mass_evolution_2_norm.png} \caption{Ratio of the number of intruder particles within the minimal rectangular box that encloses each of the lagrangian sets C1-C5 at the indicated time $t$ to the number of intruders at $t = t_{\rm def}$ (denoted by the red vertical dotted line). The solid vertical lines represent the time at which the slope of the corresponding evolutionary track in Fig.\ \ref{fig:evo same clump} changes, and their color indicates the corresponding clump. Note that the green vertical line is coincident with the vertical yellow line, and so it is not visible. } \label{fig:extra particles mass} \end{figure} In order to test this hypothesis, we define a ``minimal rectangular box'' as a rectangular volume enclosing the lagrangian set at each time, whose sides are placed at the coordinates of the most extreme particles along each coordinate axis (as in Fig. \ref{fig:box}), and determine the fraction of intruder particles in this volume. It is important to note that, if a lagrangian set has significant protrusions in several directions, it will define a very large volume, which will have a high percentage of intruder particles. To minimize the impact of this geometric effect, we consider the ratio of the number of intruder particles within the minimal box at time $t$, $N(t)$, to the number of intruders within the corresponding minimal box at the final time ($N(t_{\rm def})$). Figure \ref{fig:extra particles mass} shows the evolution of this ratio for the five lagrangian sets considered. The colored vertical lines represent the time at which the slope of the evolutionary track for each clump in Fig.\ \ref{fig:evo same clump} changes, if it does. The green line coincides with the yellow line, and thus it is not visible. The vertical red dotted line indicates the definition time, $t_{\rm def}$.
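The normalized intruder fraction plotted in Fig.\ \ref{fig:extra particles mass} can be computed, for example, as in the following schematic numpy version; the function names are illustrative, and member\_idx is assumed to be an integer array indexing the lagrangian set within the full particle arrays:
\begin{verbatim}
import numpy as np

def intruders_in_minimal_box(all_pos, member_idx):
    """Number of non-member particles inside the minimal rectangular
    box that encloses the lagrangian set."""
    members = all_pos[member_idx]
    lo, hi = members.min(axis=0), members.max(axis=0)
    inside = np.all((all_pos >= lo) & (all_pos <= hi), axis=1)
    inside[member_idx] = False        # exclude the member particles
    return int(inside.sum())

def normalized_intruder_ratio(snapshots, member_idx, i_def):
    """N(t)/N(t_def) for a list of (N_all, 3) position arrays."""
    counts = np.array([intruders_in_minimal_box(p, member_idx)
                       for p in snapshots], dtype=float)
    return counts / counts[i_def]
\end{verbatim}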
Figure \ref{fig:extra particles mass} shows that, on average, the particle sets exhibiting an evolutionary track segment parallel to the observational slope in the $j$-$R$\ diagram (yellow, green and purple lines) had, a few Myr in the past, a normalized intruder fraction larger than three times the value at the final time. Instead, the clumps that do not exhibit such a stage (blue and cyan lines) had an initial value of this ratio at most twice the final value. This result supports the hypothesis that a lagrangian set of SPH particles can lose AM as long as it has a companion set of particles to transfer it to, and that it instead evolves at roughly constant $j$ when evolving mostly as an isolated entity. Also, this argues against the dominant torques being gravitational, since these should act over long distances, and thus should not require the companions to be nearby. They are also not expected to be important when the densities of the interacting fluid parcels are similar. \subsubsection{Tracking independent lagrangian particle sets from the past} \begin{figure} \includegraphics[width=\columnwidth]{am_evolution_paper2.png} \caption{Evolution of the lagrangian sets C6-C10, defined in different regions of the numerical box with the same five density thresholds used in Fig.\ \ref{fig:evo same clump} ($n_{\rm th} = 10^{3}$--$10^{5}\, {\rm cm}^{-3}$) at time $t_{\rm def} = 25.23$ Myr (red filled squares), and followed to earlier times by $2.7$ Myr. The direction of evolution is therefore from right to left. The black line represents the fit made to the observational sample shown in Fig. \ref{fig:data}.} \label{fig:evo same dens} \end{figure} In the previous subsection we followed the evolution in the $j$-$R$\ diagram of the lagrangian sets corresponding to five hierarchically-nested clumps originally defined with various threshold densities within the same global density enhancement. The fact that they are hierarchically nested raises the question of whether the observed two-stage evolution might be a peculiarity of the chosen parent clump, and so here we track the (past) evolution of five lagrangian sets (clumps C6-C10 in Table \ref{tab:all clumps}), originally defined as independent clumps at various random locations in the simulation using the same five density thresholds as in Fig.\ \ref{fig:evo same clump}, and at a definition time of $t_{\rm def} = 25.23$ Myr. The evolution of these five lagrangian particle sets during the 2.7 Myr prior to $t_{\rm def}$ is shown in Fig.\ \ref{fig:evo same dens}. It can be seen that the behavior of these clumps is similar to that of the clumps in Fig.\ \ref{fig:evo same clump}, since the change in slope in the $j$-$R$\ diagram occurs for thresholds $\gtrsim 10^{4}\, {\rm cm}^{-3}$. This supports the conclusion that the behavior observed in Fig.\ \ref{fig:evo same clump} is not a special feature of the parent structure of the particle sets shown there.
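The radii at which the tracks of Figs.\ \ref{fig:evo same clump} and \ref{fig:evo same dens} change slope (the colored vertical lines) can be located, for instance, with a simple two-segment fit in the log-log plane. The sketch below shows one possible way of doing this, under the assumption of clean, monotonically ordered tracks; it is not necessarily the procedure used to draw the figures:
\begin{verbatim}
import numpy as np

def slope_break(R, j, min_pts=3):
    """Radius at which a two-segment linear fit of log j vs. log R
    has the smallest total squared residual."""
    x, y = np.log10(R), np.log10(j)
    best = (np.inf, None)
    for k in range(min_pts, len(x) - min_pts):
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            sse += np.sum((ys - np.polyval(coef, xs)) ** 2)
        if sse < best[0]:
            best = (sse, R[k])
    return best[1]
\end{verbatim}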
\subsubsection{Tracking lagrangian sets at different density thresholds to the future} \label{subsubsec:future} \begin{figure*} \centering \includegraphics[width=12.5cm]{muestra_one_future.png}\\ \includegraphics[width=12.5cm]{c2456_visual.png}\\ \includegraphics[width=12.5cm]{c2083_visual.png}\\ \includegraphics[width=12.5cm]{c5119_visual.png}\\ \includegraphics[width=12.5cm]{c2437_visual.png} \caption{Density and spatial distribution evolution of the set of member particles of five clumps defined by density thresholds $n_{\rm th} = 10^{3}\, {\rm cm}^{-3}$ (C11), $n_{\rm th} = 3 \times 10^{3}\, {\rm cm}^{-3}$ (C12), $n_{\rm th} = 10^{4}\, {\rm cm}^{-3}$ (C13), $n_{\rm th} = 3 \times 10^{4}\, {\rm cm}^{-3}$ (C14), and $n_{\rm th}= 10^{3}\, {\rm cm}^{-3}$ (C15), at times $t_{\rm def} = 17.26$ Myr (C11), $t_{\rm def} = 19.92$ Myr (C12 and C13), and $t_{\rm def} = 23.24$ Myr (C14 and C15). These clumps were tracked toward the future until they formed sinks. It can be seen that, for clumps C11-C14, the innermost part of the particle set undergoes collapse, while the outermost particles decrease their density and appear to escape from the clump. Instead, C15 turns out to be a transient clump that disperses as time passes.} \label{fig:muestra one future} \end{figure*} We now consider the evolution {\it towards the future} of five lagrangian sets of particles defined by density thresholds $n_{\rm th} = 10^{3}\, {\rm cm}^{-3}$ (C11 in Fig.\ \ref{fig:muestra one future}), $n_{\rm th} = 3 \times 10^{3}\, {\rm cm}^{-3}$ (C12), $n_{\rm th} = 10^{4}\, {\rm cm}^{-3}$ (C13), $n_{\rm th} = 3 \times 10^{4}\, {\rm cm}^{-3}$ (C14) and $n_{\rm th}= 10^{3}\, {\rm cm}^{-3}$ (C15). C11 was defined at time $t_{\rm def} = 17.26$ Myr, C12 and C13 at $t_{\rm def} = 19.92$ Myr, and C14 and C15 at $t_{\rm def} = 23.24$ Myr. The spatial distribution and density evolution for all clumps are shown in Fig. \ref{fig:muestra one future}. It can be seen that, as time proceeds, some of the set member particles of clumps C11 to C14 appear to {\it disperse} (red dots), while another group of particles proceeds to collapse, becoming denser and much more compact (cyan, blue and purple dots). Regarding clump C15, it is seen that the entire set of particles disperses, indicating that this is an example of a transient, dispersing clump. In Fig.\ \ref{fig:evo to future} we show the evolutionary tracks in the $j$-$R$\ diagram for these five clumps. In these tracks, the direction of time evolution is from left to right. Note, however, that, because of the tracking to the future, in this case the formation of sink particles among the set members cannot be prevented---in the cases of tracking to the past, the absence of sinks at $t_{\rm def}$ guaranteed that no sinks were present during the earlier evolution towards $t_{\rm def}$ either---and indeed a sink appears after 1.32 Myr in C11. Once a sink appears and accretes several of the SPH particles, it absorbs part of the AM of the set. Since we are interested only in the AM exchanges between fluid particles, the evolutionary tracks in this figure are limited to the time before a sink appears. Figure \ref{fig:evo to future} shows that none of the evolutionary tracks undergo an abrupt slope change like those seen in Fig.\ \ref{fig:evo same clump}, nor exhibit a period of evolution at constant $j$. This suggests that the dynamics of these sets of particles is always dominated by interactions with intruder particles.
More important, however, is the observation that a fraction of the member particles reduces its density to values below the definition threshold and recedes from the clump, in spite of having been initially all above the threshold density. We have observed this phenomenon in all sets of particles tracked to the future that develop a collapse center. {\it This suggests that losing a fraction of the clump's mass to carry away the AM is a necessary condition for the rest of the clump to be able to collapse,} in a manner similar to the process of disk accretion towards a central object. In the latter case, it is well known that some orbital AM must be transferred outwards in order for the rest of the material to be able to flow towards the center. It is important to note that this mechanism of AM transfer does not require the presence of a magnetic field to operate, and instead operates directly among neighboring fluid particles, probably via ram pressure (or eddy) torques (first term on the right-hand side of eq.\ [\ref{eq:total torque}]) among them. \begin{figure} \centering \includegraphics[width=\columnwidth]{am_evolution_future.png} \caption{Evolution of the specific angular momentum of the five lagrangian sets of particles shown in Fig. \ref{fig:muestra one future}. These clumps were tracked toward the future until they formed sinks. The direction of evolution in the figure is from left to right, with the red filled squares denoting $t_{\rm def}$. The black line represents the fit to the observational data shown in Fig. \ref{fig:data}.} \label{fig:evo to future} \end{figure} \subsection{Specific angular momentum evolution of a regular clump (always defined as a connected overdensity)} \label{subsec:variable clump} \begin{figure*} \includegraphics[width=18cm, height=8cm]{same_thresh_paper.png} \caption{Spatial evolution of clump C16, defined as a connected object above $n_{\rm th} = 3 \times 10^{3}\, {\rm cm}^{-3} $, over a period of $0.4$ Myr starting at $t_{\rm def} = 19.52$ Myr. In this case, the clump-finding algorithm was used to define the clump at all times over the region denoted by the boxes. Due to this form of definition, the clump does not consist of the same fluid particles at all times, but rather grows in mass and size due to accretion from its environment, while shedding some of its other particles, in spite of harboring a local process of gravitational collapse.} \label{fig:muestra same tresh} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{am_evolution_p2.png} \caption{Specific angular momentum evolution of clumps C16-C18 (C16 is shown in Fig.\ \ref{fig:muestra same tresh}), defined at times $t=17.26$ (C16) and $17.93$ (C17 and C18) as connected regions above $n_{\rm th} = 3 \times 10^{3}\, {\rm cm}^{-3}$, each one tracked until it forms sinks. The black line represents the slope of the fit to the observational data shown in Fig.\ \ref{fig:data}. As time advances, clumps grow in size and become more massive, and their SAM increases as well, although they evolve near the observational slope over the whole period}. \label{fig:evo same tresh} \end{figure} We now consider the evolution of clumps defined in the traditional way (clumps C16-C18 in Table \ref{tab:all clumps}); that is, as connected sets of particles above a density threshold throughout their evolution. That is, in this case, the set of SPH particles that compose it is not lagrangian. Instead, the clumps continually exchange fluid particles with their environment. 
Although both accretion and particle loss occur, the former dominates, and so the clumps grow in mass, size and mean density. Specifically, the clumps are defined at a threshold $n_{\rm th} = 3 \times 10^3\, {\rm cm}^{-3}$ at times $t=17.26$ (C16) and $17.93$ Myr (for C17 and C18). As an illustration, Fig.\ \ref{fig:muestra same tresh} shows the evolution of the spatial distribution, density, and mass of C16. In Fig.\ \ref{fig:evo same tresh} we show the evolution of these three clumps in the $j$-$R$\ diagram while they grow in size and mass. For all three of them, the increase in both radius and SAM appears to occur along evolutionary tracks close to the locus of the observational sample in this diagram. It is important to note that this way of following the clumps is similar to how they would be observed in practice, for example through the emission of a tracer with some specific effective or critical excitation density \citep[e.g.,] [] {Shirley15}, which allows observation only above a threshold. This suggests that the clumps that make up observational samples, such as the one shown in Fig.\ \ref{fig:data}, do not correspond to a sequence of objects that have contracted coherently as a single unit, but rather undergo a true fragmentation process such as that observed in Fig.\ \ref{fig:evo to future}, in which one part of the clump is lost, carrying AM with it, and allowing the remainder to contract. \section{Discussion and implications} \label{sec:discussion} \begin{figure*} \centering \includegraphics[width=0.3\textwidth]{am_evolution_separ_3e3.png} \includegraphics[width=0.3\textwidth]{am_evolution_separ_e4.png} \includegraphics[width=0.3\textwidth]{am_evolution_separ_3e4.png} \hfill \includegraphics[width=0.3\textwidth]{J_evolution_separ_3e3.png} \includegraphics[width=0.3\textwidth]{J_evolution_separ_e4.png} \includegraphics[width=0.3\textwidth]{J_evolution_separ_3e4.png} \hfill \includegraphics[width=0.3\textwidth]{mass_evolution_separ_3e3.png} \includegraphics[width=0.3\textwidth]{mass_evolution_separ_e4.png} \includegraphics[width=0.3\textwidth]{mass_evolution_separ_3e4.png} \caption{Evolution of the specific angular momentum $j$ (SAM, top row), the total angular momentum $J$ (AM, middle row) and the mass ($M$, bottom row), over a period of $1.33$ Myr, of the member particles of C11 (shown in Fig.\ \ref{fig:muestra one future}) that are above and below the density thresholds $n_{\rm th} = 3 \times 10^{3}$ (left column), $10^{4}$ (middle column) and $3 \times 10^{4}\, {\rm cm}^{-3}$ (right column). Note that, in the right column, there are no particles above the threshold before $t = 18.2$ Myr. For each threshold, the mass of the dense subset (light green) is seen to increase faster than its total AM, causing its SAM to decrease after some early transients. The opposite occurs to the low-density subset (orange), which undergoes a steady increase in its SAM during the second half of the time interval considered. The open dark green circles in the second row represent the total AM of the clump. The low-density region is thus seen to contain the vast majority of the total set's AM. 
The AM of each subset was calculated with respect to the center of mass of the full set.} \label{fig:separ} \end{figure*} \subsection{Interpreting the results} \label{sec:interp} In the previous sections we have shown that the lagrangian sets of particles that make up a clump at some {\it final} time $t_{\rm def}$ tended to evolve close to the observational slope in the $j$-$R$\ diagram at relatively distant earlier times, when their member particles were significantly scattered, and the volume they occupied contained a high fraction of intruder particles. However, at times closer to $t_{\rm def}$, when they already constituted a nearly connected set, with relatively few intruder particles, the particle sets tended to evolve with $j \sim$ cst. On the other hand, true ``clumps'', defined as connected particle sets above a certain density threshold at all times during their evolution, evolve along the observational slope at all times. Finally, when tracking lagrangian sets towards the future, we found that not all of their particles participate in the collapse, and instead some of them (mostly the ones on the periphery) disperse away and decrease their density below the threshold density initially used to define the clump, effectively leaving the clump. The above results suggest that a lagrangian set of SPH particles modifies its AM through the interaction with other neighboring particles, especially those interspersed among the member particles. Since the SPH particles essentially sample the flow at their locations, their masses are very small, and the density does not necessarily vary greatly among them, this form of AM exchange seems to correspond to hydrodynamic torques, rather than (or at least in addition to) the gravitational torques suggested by \citet{Larson84} and \citet{Jappsen+05}. These torques are exerted by shearing and compressive (in general, turbulent) stresses among the fluid parcels, as well as by the thermal pressure gradient (in general, the first and second terms on the right-hand side of eq.\ [\ref{eq:total torque}]). This mechanism of AM exchange through hydrodynamic torques is supported by the observation that only particle sets for which, at some previous time $t < t_{\rm def}$, the number of intruders within the minimal rectangular box was larger than $3\times$ the number at $t_{\rm def}$ exhibit a period of evolution along the observational slope in the $j$-$R$\ diagram. This interpretation is also supported by the observation that the clump defined as a connected set above a density threshold throughout its evolution (Fig. \ref{fig:muestra same tresh}), so that it contains different sets of SPH particles at different times, evolves essentially parallel to the observational slope in the $j$-$R$\ diagram at all times (Fig. \ref{fig:evo same tresh}). Interestingly, however, since the defining density threshold is held constant, this clump actually increases its mass, size and AM (both total and specific) over time, due to accretion from its environment, in spite of harboring a local center of collapse.
But in addition, since not all of the clump's particles participate in the collapse, and instead part of the clump is dispersed, it appears that the redistribution of AM occurs in a manner similar to that in accretion disks: if the whole object is subject to its self-gravity at all times, but prevented from contraction by the net rotation, then any local loss of AM in some subregion will allow it to contract, at the expense of transferring it to the rest of the parent structure. In this sense, the process of AM exchange is akin to fragmentation: the parcels that lose AM are the ones that contract. This in turn suggests that it is incorrect to think of a dense core as the result of the monolithic gravitational contraction of a larger clump, a process which should conserve AM. Instead, we suggest that the objects making up the observational data in Fig.\ \ref{fig:data} are precisely the {\it fragments} of larger structures subject to strong self-gravity that have managed to contract gravitationally because they have shed some of their AM via interactions with their neighboring fluid parcels. In this case, the observed $j$-$R$\ relation does not pose a ``problem'', but is just the natural result of turbulent torques being applied among fluid parcels, all subject to strong self-gravity, so that the parcels that lose AM are able to contract further. An important implication of this proposed mechanism is that a clump can never collapse in its entirety. Instead, only a fraction of it can collapse, while the rest of its mass must be expelled, in order to carry the excess AM with it. This may impose an upper limit on the mass efficiency of fragmentation into denser units. In the case of accretion disks, it is well known that most of the mass is accreted onto the central object, while a vanishingly small amount of mass migrates outwards, carrying most of the AM. In the next section we measure the corresponding mass fractions in the case of the fragmentation of a clump. \subsection{Testing the AM transfer mechanism} \label{sec:testing_mechanism} In Sec.\ \ref{sec:interp} we have proposed that the observed $j$-$R$\ relation is the result of the exchange of AM among the fluid parcels making up a turbulent clump which is under the influence of its self-gravity, so that the portion of the clump that loses AM contracts, while the part that gains it expands. This mechanism is consistent with the GHC scenario \citep{VS+19}, in which molecular clouds are globally dominated by their self-gravity, although they still contain moderate turbulence that produces density fluctuations. In this scenario, the global mean Jeans mass decreases over time due to the global contraction \citep{Hoyle1953}, causing fluctuations of ever smaller masses to begin their own local collapse process as time proceeds. Here we suggest that the turbulence also causes AM transfer among the fluid parcels making up the clumps, and that conservation of the total AM limits the fraction of the cloud material that can continue to contract, since it must shed part of its mass to expel part of its AM. This mechanism is thus similar to that operating in accretion disks, but it operates in the amorphous, gaseous phase. To support this suggestion, in Fig.\ \ref{fig:separ} we show the evolution of the AM of the dense (light green lines and dots) and diffuse (orange lines and dots) parts of the lagrangian particle set labeled C11 in Fig.\ \ref{fig:muestra one future}.
At each temporal snapshot (indicated by the dots along the lines), we separately consider the particles with densities larger and smaller than the thresholds $3 \times 10^{3}$ (left column), $10^{4}$ (middle column) and $3 \times 10^{4}\, {\rm cm}^{-3}$ (right column), computing the SAM (top row), total AM (middle row), and mass (bottom row) of the high- and low-density subsets. The AM is computed for each group with respect to the center of mass of the entire clump. It can be seen that not only do the low-density particles have values of $j$ several times higher than the dense ones, but in fact the low-density particle set tends to increase its SAM over time, while the high-density set tends to reduce it in general, except for a few transients. This reinforces the suggestion that, as the particles exchange AM through turbulent torques, the ones losing it can fall deeper into the gravitational potential well, becoming denser and more compact. \subsection{A gravity-driven model for the constancy of $\beta$ and the $j$-$R$ scaling} In the previous sections we have presented evidence that the observed apparent loss of SAM is the result of the turbulent exchange of AM among the fluid parcels of a clump. However, the origin of the numerical value of the slope observed in the $j$-$R$ plot is still not well understood. A few decades ago, \citet{Goodman+93} proposed a semi-analytical derivation of the dependence of the AM on the clump's radius based on assuming the \citet{Larson81} linewidth-size relation (which was thought to arise from virial equilibrium) and the empirical observation that $\beta$, the ratio of the rotational kinetic energy to the gravitational energy, appears to be independent of the clump's radius. This property is approximately observed in both our compiled observational data and our numerical sample, as shown in Fig.\ \ref{fig:numerical sample 2}. From these properties, they obtained a scaling relation of the form $j \propto R^{3/2}$, which was very close to the scaling found in their dense core sample (with exponent $\sim 1.6$) and to the slope fitted in this work for our compilation of observational data ($\sim 1.43$). \begin{figure} \centering \includegraphics[width=.48\textwidth]{beta_data.png} \includegraphics[width=.48\textwidth]{R_beta_S3e1.png} \caption{{\it Top panel:} $\beta$, the ratio of the rotational to the gravitational energy, {\it vs.}\ the radius $R$, for the observational sample. The black line represents a least squares fit to these data. {\it Bottom panel:} The same plot for the numerical sample. The black and red lines respectively represent the fits to the observational and the numerical data. Given the large scatter, the two samples appear to have very similar distributions.} \label{fig:numerical sample 2} \end{figure} However, at present it is generally accepted that the Larson relations have been superseded by the \citet{Heyer+09} relation, of the form \begin{equation} \sigma \propto (\Sigma R)^{1/2}, \label{eq:Heyer09} \end{equation} from which Larson's linewidth-size and density-size relations follow for samples of objects selected so that their column density is approximately constant \citep{BP+11, BP+12}.
Additionally, within the context of the GHC scenario, assumed in this work, this scaling arises not from the virial equilibrium condition, $E_{\rm k} \approx |E_{\rm g}|/2$, but from the free fall condition, $E_{\rm k} \approx |E_{\rm g}|$, and applies not only to molecular clouds, but to the massive clumps and dense cores within the clouds \citep{BP+11, BP+18}. The self-similar, scale-free nature of the gravitational contraction process in the range of scales above the Jeans length may possibly explain the observed independence of $\beta$ from the clump radius \citep{Goodman+93, Xu-Xuefang+2020b} as follows. Let us assume that all regions larger than the Jeans length are attempting to contract due to the domination of self-gravity, and consider a region of fixed mass $M$. Its gravitational energy per unit mass, $e_{\rm g}$, then, scales with radius as \beq e_{\rm g} \approx \frac{GM} {R} \propto R^{-1}, \label{eq:eg-R} \eeq while its specific rotational energy is \beq e_{\rm r} \approx \frac{1} {2} \frac{I \omega^2} {M} \approx \frac{1} {2} \vrot^2. \label{eq:vrot} \eeq On the other hand, if the AM is conserved during the contraction, \beq J \approx {\rm cst.} \approx I \omega \approx M R^2 \frac{\vrot} {R} = M R \vrot, \label{eq:cst_J} \eeq implying that $\vrot \propto R^{-1}$, and therefore \beq e_{\rm r} \propto R^{-2}. \label{eq:erot-R} \eeq We thus see that, as a fluid parcel contracts due to gravity, the ratio of its rotational energy to its gravitational energy $\beta$ would tend to increase if its AM were conserved. This could only continue until $e_{\rm r} \sim e_{\rm g}$, at which point the collapse should be halted by rotation. However, if the parcel sheds its AM via turbulent torques, then the contraction can continue. That is, on the one hand, the combined action of AM conservation and gravitational contraction tends to increase the $\beta$ ratio, while on the other hand, the exchange of AM between the fluid parcel and its neighbours tends to counter this growth. It thus seems plausible that the competition between these two processes tends to keep $\beta$ approximately constant, or at least, independent of radius. In a future contribution, we plan to investigate the dependence of the rate of AM transfer on the gradient of the rotational energy density. We can then perform a calculation similar to that of \citet{Goodman+93}, but without assuming Larson's velocity dispersion-size relation, and instead using the gravitational energy directly. We start by explicitly writing $\beta$ as the ratio of the rotational to gravitational energies, both per unit mass, denoted by $e_{\rm r}$ and $e_{\rm g}$. Noting that $e_{\rm r} \approx (1/2) I \omega^2/M \approx (1/2) j\omega$ and $e_{\rm g} \approx GM/R = \pi G R \Sigma$, where $\omega$ is a representative angular velocity for the clump and $\Sigma \approx M/(\pi R^2)$, we have \begin{equation} \beta \equiv \frac{e_{\rm r}}{e_{\rm g}} \approx \frac{j \omega} {2 \pi G R \Sigma}. \label{eq:beta_GHC} \end{equation} Under the assumption that $\beta \approx$ cst., we can then solve for $\omega$ as \begin{equation} \omega \approx \frac{2 \pi \beta G R \Sigma} {j}. \label{eq:omega} \end{equation} On the other hand, the SAM is $j = I \omega / M \approx R^2 \omega$. Substituting eq.
(\ref{eq:omega}) in this expression for $j$, we finally get \begin{equation} \label{eq:final} j \approx (2 \pi \beta G \Sigma)^{1/2} R^{3/2}, \end{equation} so that in the context of the GHC scenario and the \citet{Heyer+09} relation, we recover the dependence of $j$ on $R$, assuming that $\Sigma$ does not exhibit any specific trend with $R$, and that rotation draws its energy from the gravitational one---in agreement with the fundamental premise of the GHC scenario---and loses it by AM transfer. \begin{figure*} \centering \includegraphics[width=\textwidth]{hist_dens.png} \caption{Evolution of the normalized cumulative density histogram for clumps C11 to C14 (shown in Fig.\ \ref{fig:muestra one future}), respectively defined with density thresholds $n_{\rm th} = 10^{3}\, {\rm cm}^{-3}$ (C11), $n_{\rm th} = 3 \times 10^{3}\, {\rm cm}^{-3}$ (C12), $n_{\rm th} = 10^{4}\, {\rm cm}^{-3}$ (C13), and $n_{\rm th} = 3 \times 10^{4}\, {\rm cm}^{-3}$ (C14), at times $t_{\rm def} = 17.26$ Myr for C11, $t_{\rm def} = 19.92$ Myr for C12 and C13, and $t_{\rm def} = 23.24$ for C14. It is seen that, at the final time (which represents the last snapshot prior to the formation of sinks) in the various cases, a fraction between $\sim 5$ and $60\%$ of the mass acquires a density below the definition threshold density, so that only the remaining mass can continue to collapse.} \label{fig:hist_dens} \end{figure*} \begin{comment} \begin{figure*} \centering \centering \includegraphics[width=0.3\textwidth]{Goodman_j_R_data.png} \includegraphics[width=0.3\textwidth]{Goodman_sigma_R_data.png} \includegraphics[width=0.3\textwidth]{Goodman_j_sigma_R_data.png} \hfill \centering \includegraphics[width=0.3\textwidth]{Chen_j_R_data.png} \includegraphics[width=0.3\textwidth]{Chen_sigma_R_data.png} \includegraphics[width=0.3\textwidth]{Chen_j_sigma_R_data.png} \hfill \centering \includegraphics[width=0.3\textwidth]{Tatematsu_j_R_data.png} \includegraphics[width=0.3\textwidth]{Tatematsu_sigma_R_data.png} \includegraphics[width=0.3\textwidth]{Tatematsu_j_sigma_R_data.png} \hfill \centering \includegraphics[width=0.3\textwidth]{Golds_Arq_j_R_data.png} \includegraphics[width=0.3\textwidth]{Golds_Arq_sigma_R_data.png} \includegraphics[width=0.3\textwidth]{Golds_Arq_j_sigma_R_data.png} \hfill \centering \includegraphics[width=0.3\textwidth]{Caselli_j_R_data.png} \includegraphics[width=0.3\textwidth]{Caselli_sigma_R_data.png} \includegraphics[width=0.3\textwidth]{Caselli_j_sigma_R_data.png} \hfill \centering \includegraphics[width=0.3\textwidth]{Chen_X_j_R_data.png} \includegraphics[width=0.3\textwidth]{Chen_X_sigma_R_data.png} \includegraphics[width=0.3\textwidth]{Chen_X_j_sigma_R_data.png} \hfill \allowdisplaybreaks \centering \includegraphics[width=0.3\textwidth]{Pigorov_j_R_data.png} \includegraphics[width=0.3\textwidth]{Pigorov_sigma_R_data.png} \includegraphics[width=0.3\textwidth]{Pigorov_j_sigma_R_data.png} \caption{} \label{fig:data compile} \end{figure*} \end{comment} \subsection{A limit on the mass available for collapse due to angular momentum conservation} In Secs.\ \ref{subsubsec:future} and \ref{sec:testing_mechanism}, when tracking the member particles of a clump to the future, we found that a fraction of them always leaves the clump, decreasing their density and dispersing to more distant locations. This mechanism takes AM away from the clump, and allows the remaining particles to become denser. 
But, in addition, this mechanism imposes an upper bound to the efficiency of the contraction mechanism, since not all of the clump's mass can reach a higher-density state. To quantify this upper bound, in Fig.\ \ref{fig:hist_dens} we show the evolution, from $t_{\rm def}$ to the last snapshot before the formation of a sink, of the mass fraction below the indicated density for the lagrangian particle sets corresponding to clumps C11 to C14 from Fig.\ \ref{fig:muestra one future}. These clumps were defined with a different density threshold each. It is seen from Fig.\ \ref{fig:hist_dens} that, for all four lagrangian sets, the mass fraction below the clump's defining threshold density increases monotonically in time, reaching $\sim 20\%$ of the total mass of the lagrangian set for C11 after 1.33 Myr, $\sim 5\%$ for C12 after 0.66 Myr, $\sim 5\%$ for C13 after $0.27$ Myr, and $\sim 60\%$ for C14, also after 0.27 Myr. Therefore, the mass loss needed to conserve AM may account for at least part of the observed core-formation efficiency of $\sim 30\%$ in star-forming regions \citep[e.g.,][]{Motte+98, Bontemps+2010, Palau+2013, Palau+2015}. \section{Conclusions} \label{sec:conclusions} In this paper we have investigated the angular momentum exchange mechanisms among sets of fluid particles in an SPH simulation of dense cloud formation in the turbulent warm atomic ISM. Our strategy has profited from the particle nature of the SPH scheme, which allowed us to track over time the particle sets that constituted a ``clump'' (a connected set of particles above some threshold density $n_{\rm th}$) at some time $t_{\rm def}$. We referred to these as the ``member (or lagrangian) particle sets''. The tracking was performed either from the past ($t < t_{\rm def}$) or towards the future ($t > t_{\rm def}$), allowing us to see where the clump member particles came from, and how they evolve subsequently, and to measure their total (AM) and specific (SAM, denoted $j$) angular momentum. For comparison, we also tracked clumps defined in the traditional way (as connected particle sets above the threshold density) throughout their evolution. We also compared our results with the properties of a sample of observed clumps in various clouds, taken from the literature. Our results can be summarized as follows: \begin{itemize} \item The numerical clump sample from our simulation of globally and hierarchically contracting clouds driven by self gravity reproduces with remarkable accuracy the observed $j$-$R$\ relation, showing that the GHC scenario produces clouds and clumps with a realistic AM content and scaling. \item The loss of AM in molecular clouds and their substructures (referred to generically as ``clumps'') can be provided by the various torques acting on a fluid parcel of the cloud, described by the terms on the right-hand side of eq.\ \eqref{eq:total torque}, including the widely discussed magnetic, gravitational, and viscous torques. However, in an inherently turbulent medium such as molecular clouds, the torques due to the Reynolds stress and the pressure gradient terms in the momentum equation (first and second terms in eq.\ [\ref{eq:total torque}]), cannot be neglected as alternative important AM exchange channels. These may likely even be dominant in the interaction among fluid parcels of similar densities at small scales. 
\item The lagrangian particle sets tracked from the past often exhibit two different possible evolutionary regimes: an early stage ($t$ significantly smaller than $t_{\rm def}$) of evolution along a trajectory roughly parallel to the locus of the observational sample of clumps in the $j$-$R$\ diagram, (thus losing SAM) and a later one ($t$ closer to $t_{\rm def}$) in which $j$ remains approximately constant. \item The lagrangian sets tracked from the past that exhibited an AM loss stage were found to contain a large number (over three times the number at $t_{\rm def}$) of ``intruder'' particles within the minimal rectangular 3D box containing all the particles of the member set. Instead, the particle sets that evolved with $j \sim $ cst.\ throughout the tracking period contained a smaller (less than twice the number at $t_{\rm def}$) number of intruder particles in the minimal rectangular box since the start of the tracking period. This suggests that the evolution at $j \sim $ cst.\ occurs when the particle set does not have enough companion fluid particles to exchange the AM with. Also, this argues against the dominant torques being gravitational, since these should act over long distances, and thus should not require the companions to be nearby. \item In all the lagrangian sets tracked to the future, the SPH particles that reduced their density also moved away from the center of mass, effectively being lost from the dense clump, even if the latter contains a collapse center. Moreover, these particles contained most of the AM of the particle set. This mechanism is therefore qualitatively similar to that taking place in accretion disks, in which part of the mass moves outwards, carrying AM out, and allowing the rest of the mass to fall further inwards. In addition, the evolution in the $j$-$R$ diagram of these sets does not contain an abrupt change in slope as in the case of tracking from the past, lacking a constant-$j$ evolution period, thus suggesting that their dynamics is dominated by self-interactions between their constituent particles. \item Contrary to the case of accretion disks, we have found that the fraction of mass that is lost from the member particle sets ranges from a few to over 50\%. It thus appears that the AM removal process in molecular clumps may be significantly more mass-consuming than the equivalent process in accretion disks, possibly contributing importantly to the low observed core formation efficiency. Further investigation is necessary to establish the relative importance of this mechanism in setting the efficiency. \item On the other hand, clumps defined as connected sets above a density threshold throughout the tracking time seem to always evolve along the observed relation $j$-$R$, increasing both their radius and their mass while doing so. This is not surprising, as these clumps are equivalent to the parts of lagrangian sets that are able to contract by giving their AM to their neighbors. \item At the time of their definition as connected particle sets above a threshold, the clumps in the simulation do not exhibit a significant dependence of $\beta$, the ratio of the rotational to the gravitational energy, with their size $R$, similarly to most observational samples. 
\item We have suggested that the near independence of $\beta$ with the size $R$ of the clumps may be due to the tendency of the rotational energy to increase faster than the gravitational energy during gravitational contraction if $J \sim$ cst.\ (cf.\ eqs.\ [\ref{eq:erot-R}] and [\ref{eq:eg-R}]), causing a tendency of $\beta$ to increase. If the efficiency of AM exchange increases as the rotational energy density gradient in a clump increases, this may tend to bring $\beta$ to a stationary value, independently of size. \end{itemize} The above results are consistent with the evolution of all parcels within a dense cloud being driven by their self-gravity, but only those that can transfer some of their angular momentum to their neighboring fluid parcels are able to contract gravitationally, while those that receive it are expelled from the clumps, reducing their density, and failing to participate in the collapse. This process is therefore the proposed mechanism within the GHC scenario for molecular clouds, in which the whole cloud is dominated by its self-gravity, and thus prone to develop collapse at multiple scales within it. This interpretation would then suggest that the so called ``angular momentum problem'' is nonexistent, because the collection of objects entering the observed $j$-$R$\ relation have not contracted monolithically to reach their present sizes, but rather have {\it fragmented} out of larger objects, and contracted precisely because they are the parts that have lost some of their AM to their neighbours. In this sense, the observation of the densest fragments in a cloud or clump constitutes a selection effect that picks up precisely the fragments which have lost AM. \section*{Acknowledgements} We thank Javier Ballesteros-Paredes for pointing out the similarity of the proposed AM transfer mechanism in the gaseous phase with that operating in accretion disks, and Gilberto G\'omez for useful discussions. This work has been supported in part by a CONACYT graduate fellowship for G.A.-C.
\section{Introduction} The number of regions of a hyperplane arrangement can be calculated with its characteristic polynomial \cite{zasl}. This result is useful in solving problems of enumerative combinatorics. For example, in \cite{irm}, \cite{irm2}, \cite{irm3} a special hyperplane arrangement with several related geometric constructions was introduced in order to obtain an asymptotic estimate of the number of threshold functions and the number of singular $\pm 1$-matrices. In the paper \cite{gre-zasl} the connection between the chromatic polynomial of a graph and the number of acyclic orientations of that graph is presented as a property of a specific hyperplane arrangement. This arrangement is called a graphical arrangement, and the set of all edges of the initial graph is used for its construction. The characteristic polynomial of the graphical arrangement is equal to the chromatic polynomial of the initial graph, and the number of regions of the graphical arrangement is equal to the number of acyclic orientations in the initial graph. In the paper \cite{edmonds} a linear program for the maximum matching problem on a general graph was constructed. A matching polyhedron was defined by the set of its inequalities, and the correspondence between its vertices and matchings of the initial graph was shown. The concept of LP-orientations is connected to linear programming as well \cite{halt-klee}. A matching arrangement was defined in the paper \cite{bol}. That paper also describes some of the matching arrangement's properties and properties of its characteristic polynomial. This paper contains a description of a connection between the matching arrangement and the matching polyhedron. A bijection between regions of the matching arrangement and LP-orientations of the matching polyhedron is constructed. This bijection makes it possible to calculate the number of LP-orientations of the matching polyhedron with the characteristic polynomial of the matching arrangement. \section{Basic definitions} \begin{definition} \textit{A hyperplane arrangement} is a finite set of affine hyperplanes in some vector space V. In this paper $V = R^n$. \end{definition} \begin{definition}If all hyperplanes intersect in one point, an arrangement is called \textit{central}. \end{definition} \begin{definition} \textit{A region} of an arrangement is a connected component of the complement of the union of hyperplanes. \end{definition} \begin{definition} Let $(r_1,r_2,r_3,...,r_n)$ be a sequence of edges, such that $\forall i$ edges $r_i$ and $r_{i+1}$ are incident to the same vertex. If $r_n$ and $r_1$ are incident to the same vertex, then such a sequence is called \textit{a cycle}, otherwise it is called \textit{a path}. A path or a cycle is called simple if there are no pairs of edges that are incident to the same vertex other than ($r_i$,$r_{i+1}$) and ($r_1$,$r_n$). The \textit{length} of a path or a cycle is the number of edges in it. \end{definition} \begin{definition} Let $G(V,E), |E|=n$ be a graph without loops and parallel edges, and let $N:E\rightarrow \{1,2,..,n\}$ be a numeration of the edges of G. Let $P=(r_1,r_2,r_3,...,r_k)$ be a sequence of edges that form a simple path or a simple cycle with an even number of edges in the graph G. The hyperplane that corresponds to the sequence P is the hyperplane with equation $x_{N(r_1)}-x_{N(r_2)}+x_{N(r_3)}-...+(-1)^{k+1}x_{N(r_k)}=0$. Let F be the set of all sequences of edges in G that form a simple path or a simple cycle with an even number of edges in G.
Then the matching arrangement is the set of all hyperplanes that correspond to elements of F. A matching arrangement will be denoted as MA(G,N). \end{definition} \begin{definition} \cite{edmonds} Let $G(V,E), |E|=n$ be a graph without loops and parallel edges. A matching polyhedron is a convex polyhedron in $R^n$ that is defined by a system of inequalities that consists of inequalities of the following three types: \begin{enumerate} \item $\forall e_i \in E : x_i \geq 0$ \item $\forall v \in V : \sum\limits_{e_i \text{ is incident to } v} x_i \leq 1$ \item $\forall S=\{v_{j_1},v_{j_2},..., v_{j_{2k+1}} \}, S\subseteq V : \sum\limits_{e_i=(v_p,v_q) \in E,v_p \in S, v_q \in S} x_i\leq k$ \end{enumerate} \end{definition} Each matching of a graph $G(V,E), |E| = n$ corresponds to a vertex of an n-dimensional boolean cube in the following way. Let $M$ be a matching in $G$. Then the corresponding vertex of the boolean cube is $(x_1, x_2, x_3, ..., x_n)$, where $x_i = 1$ if the edge $e_i$ belongs to $M$, and $x_i = 0$ otherwise. Therefore, a bijection between the set of all matchings in $G$ and a subset $D$ of the set of vertices of the boolean cube is constructed. \begin{theorem}\cite{edmonds} $D$ is the set of vertices of the matching polyhedron. \end{theorem} Let $P$ be a polyhedron in $R^n$, and let $\phi: R^n \rightarrow R$ be a linear function that takes different values on adjacent vertices of $P$. Then the graph of $P$ can be oriented by $\phi$ in the following way. An edge $(v_i, v_j)$ that connects vertices $v_i$ and $v_j$ is oriented from $v_i$ to $v_j$ if $\phi(v_i) < \phi(v_j)$. If an orientation can be obtained from a linear function in this way, then this orientation is called an LP-orientation. \section{Main result} \begin{theorem} Let $G(V,E) , |E| = n$ be a graph without loops and parallel edges, and let $P(G)$ be the matching polyhedron of $G$. Let $L(P(G))$ be the set of LP-orientations of $P(G)$. The mapping $F:L(P(G)) \rightarrow 2^{R^{n}}$ is defined in the following way. For every LP-orientation $O$, \[F(O)= \{(\alpha_1, \alpha_2, ..., \alpha_n) \in R^n \mid f(x_1,x_2,...,x_n) = \alpha_{1}x_1 + \alpha_{2}x_2 + ... + \alpha_{n}x_n \text{ induces } O\}.\] Let $RG(MA(G))$ be the set of regions of the matching arrangement of G, $RG(MA(G)) = \{R_1, R_2,...,R_m\}$. Then: \begin{enumerate} \item $\forall O \in L(P(G))$, $F(O)$ is a region of MA(G), that is, $F(O)\in RG(MA(G))$. Therefore, F can be considered as a mapping from $L(P(G))$ to $RG(MA(G))$; \item $F: L(P(G))\rightarrow RG(MA(G))$ is a bijection. \end{enumerate} \end{theorem} \begin{proof} Let $\Pi(v)$ be the matching that corresponds to a vertex $v$ of the matching polyhedron. Matching polyhedra have the following property \cite[Theorem 25.3]{schrijver}: vertices $v_1$ and $v_2$ of a matching polyhedron $P(G)$ are adjacent if and only if $\Pi(v_1)\triangle\Pi(v_2)$ is either a simple path or a simple cycle of even length. Let O be an LP-orientation of $P(G)$, and let $a = (a_1,...,a_n)$ and $b=(b_1,...,b_n)$ be elements of $F(O)$. Let's assume that $a$ and $b$ belong to different regions of MA(G). Then there is either a simple path or a simple cycle of even length $(e_{i_1}, e_{i_2},...,e_{i_k})$, $e_{i_j} \in E$, such that points $a$ and $b$ are separated by the hyperplane $x_{i_1} - x_{i_2} + x_{i_3}-...+(-1)^{k+1}x_{i_k} = 0$. Without loss of generality, let $a_{i_1} - a_{i_2} + a_{i_3}-...+(-1)^{k+1}a_{i_k} > 0$, $b_{i_1} - b_{i_2} + b_{i_3}-...+(-1)^{k+1}b_{i_k} < 0$.
By construction, the sets of edges $\{e_{i_1},e_{i_3},e_{i_5},...\}$ and $\{e_{i_2}, e_{i_4}, e_{i_6},...\}$ are matchings, and the corresponding vertices in $P(G)$ are connected by an edge $t$ of $P(G)$. The edge $t$ is oriented differently in the LP-orientations obtained from the functions $f_a(x)=a_1 x_1 + a_2 x_2 +...+ a_n x_n$ and $f_b(x)=b_1 x_1 + b_2 x_2 +...+ b_n x_n$; therefore $a$ and $b$ cannot belong to $F(O)$ at the same time. Assume now that $a$ belongs to the boundary of a region $R$, which means it belongs to a hyperplane $x_{i_1} - x_{i_2} + x_{i_3}-...+(-1)^{k+1}x_{i_k} = 0$. Then the function $f_a$ has the same value on the vertices of $P(G)$ that correspond to the matchings $\{e_{i_1},e_{i_3},e_{i_5},...\}$ and $\{e_{i_2}, e_{i_4}, e_{i_6},...\}$ and are connected by an edge in $P(G)$. This means that an LP-orientation cannot be obtained from $f_a$, and $a$ cannot belong to $F(O)$. As a result, if $O$ is an element of $L(P(G))$, then $F(O)$ is a subset of some region $R_i$. Let $a \in F(O)$ and $b\in R_i$ with $b \notin F(O)$. Since $b$ does not belong to any hyperplane of the arrangement MA(G), the values of $f_b$ on adjacent vertices of $P(G)$ are different, which means that an LP-orientation $\hat O$ can be obtained from $f_b$. Assume that an edge $t$ is oriented differently in $O$ and in $\hat O$. Let $v_1$ and $v_2$ be the vertices of $t$, $\Pi(v_1) = \{e_{j_1},e_{j_3},...\}$, $\Pi(v_2) = \{e_{j_2},e_{j_4},...\}$. Then $H:x_{j_1}-x_{j_2} + x_{j_3}-... = 0$ is a hyperplane from MA(G), and $a$ and $b$ are separated by H, which contradicts the assumption that $a$ and $b$ belong to the same region $R_i$. Therefore $O = \hat O$, $F(O)=R_i$, and $F: L(P(G))\rightarrow RG(MA(G))$ is injective. Let $R$ be a region of MA(G) and $b \in R$. Since $b$ does not belong to any hyperplane from MA(G), the values of $f_b$ on adjacent vertices of $P(G)$ are different, which means that an LP-orientation $\bar O$ can be obtained from $f_b$. Then, as proved above, $F(\bar O) = R$. This means that $F$ is surjective. Therefore, $F$ is a bijection. \end{proof} The author is grateful to A.~A.~Irmatov for the formulation of the problem and for valuable discussions of these results. \clearpage
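To make the correspondence in the theorem concrete, the following small Python sketch (not part of the original note; the test graph, the sampling loop, and all identifiers are illustrative choices) builds the hyperplanes of MA(G,N) directly from simple paths and even simple cycles, builds the vertices and edges of the matching polyhedron from matchings and the adjacency criterion of \cite[Theorem 25.3]{schrijver}, and then checks on randomly sampled weight vectors that vectors in the same region induce the same LP-orientation, while different regions induce different LP-orientations.
\begin{verbatim}
from itertools import combinations
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # hypothetical small test graph
n = len(edges)

def degrees(idx_set):
    deg = {}
    for i in idx_set:
        for v in edges[i]:
            deg[v] = deg.get(v, 0) + 1
    return deg

def connected(idx_set):
    idx = list(idx_set)
    adj = {}
    for i in idx:
        u, v = edges[i]
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    start = edges[idx[0]][0]
    seen, stack = {start}, [start]
    while stack:
        for y in adj[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return len(seen) == len(adj)

def alternating_normal(idx_set):
    """Normal of the hyperplane of MA(G,N) attached to a simple path or an even
    simple cycle, given as a set of edge indices; None if it is neither."""
    deg = degrees(idx_set)
    if any(d > 2 for d in deg.values()) or not connected(idx_set):
        return None
    ends = [v for v, d in deg.items() if d == 1]
    if not ends and len(idx_set) % 2 == 1:      # odd cycle: no hyperplane
        return None
    remaining = set(idx_set)
    v = ends[0] if ends else edges[min(remaining)][0]
    normal, sign = [0] * n, 1
    while remaining:                            # walk along the path / cycle
        i = next(i for i in remaining if v in edges[i])
        normal[i] = sign
        sign = -sign
        remaining.discard(i)
        u, w = edges[i]
        v = w if u == v else u
    return tuple(normal)

hyperplanes = [h for k in range(1, n + 1) for s in combinations(range(n), k)
               for h in [alternating_normal(set(s))] if h is not None]

matchings = [frozenset(s) for k in range(n + 1) for s in combinations(range(n), k)
             if all(d <= 1 for d in degrees(set(s)).values())]

# adjacent polyhedron vertices: symmetric difference is one simple path/even cycle
polytope_edges = [(A, B) for A, B in combinations(matchings, 2)
                  if A ^ B and alternating_normal(A ^ B) is not None]

def lp_orientation(alpha):
    val = {M: sum(alpha[i] for i in M) for M in matchings}
    return tuple(val[A] < val[B] for A, B in polytope_edges)

def region(alpha):
    return tuple(sum(a * c for a, c in zip(alpha, h)) > 0 for h in hyperplanes)

random.seed(0)
region_to_orientation = {}
for _ in range(2000):
    alpha = [random.uniform(-1, 1) for _ in range(n)]
    o = lp_orientation(alpha)
    assert region_to_orientation.setdefault(region(alpha), o) == o
assert len(set(region_to_orientation.values())) == len(region_to_orientation)
print(len(region_to_orientation), "regions sampled, each inducing its own LP-orientation")
\end{verbatim}
The sampling is, of course, only a numerical check on one toy graph, not a proof; it simply exercises the two constructions that the bijection $F$ relates.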
\section{Introduction} This note deals with the regularity of the optimal transportation map when the distributions under consideration are close to restricted Gaussians. From the work of Ma, Trudinger and Wang ([MTW], [TW]), regularity holds for arbitrary smooth distributions on nice domains when the cost satisfies the MTW A3s condition. It is established by Loeper [L] that without this MTW condition on the cost function, one cannot expect regularity for arbitrary smooth distributions, and the question of regularity is wide open. Here we show that we can find smooth optimal transportation, at least for some very nice distributions. We show two results. The first is that when the transportation problem involves distributions somewhat like the standard Gaussian restricted to the unit ball, then if the cost function is close enough to the Euclidean distance squared cost, the map must be regular. As a corollary, given two points and any cost which is smooth near these points, we can find very focused Gaussians, restricted to very small balls near the points, so that the optimal transport is regular. Our method yields a way to compute precisely how close the cost function need be to Euclidean, or relatedly, how small the balls must be around the given points. Recently other perturbative results for regularity of optimal transport have appeared: Delano\"{e} and Ge [DG] show regularity for certain densities on metrics near constant curvature. Caffarelli, Gonzalez and Nguyen [CGN] present estimates when the cost is Euclidean distance raised to powers other than $2.$ Specifically, let $f,\bar{f}$ be functions on regions $\Omega,\bar{\Omega}\subset\mathbb{R}^{n},$ satisfying on $\Omega$ \begin{align} |Df| & \leq1\tag{a1}\\ 1 & \leq\delta\leq D^{2}f\leq2\tag{a2}\\ |D^{3}f| & \leq1 \tag{a3} \end{align} and similarly for $\bar{f}$ on $\bar{\Omega}.$ We define the following mass distributions \begin{equation} m=e^{-f(x)}\chi_{\Omega} \label{mass} \end{equation} \begin{equation} \bar{m}=e^{-\bar{f}(\bar{x})}\chi_{\bar{\Omega}} \label{massbar} \end{equation} where we may add a constant to $f$ so that both distributions have the same total mass. The region $\Omega$ will be required to have a defining function $h$ so that on $\Omega=\left\{ h\leq0\right\} ,$ $h$ satisfies the same three conditions (a1-3) as $f,$ as well as, along the boundary $\partial\Omega$ \begin{equation} |Dh| \geq1/2, \label{beta} \end{equation} which implies the second fundamental form of the set $\partial\Omega=\left\{ h=0\right\} $ is bounded by $4.$ Similarly define $\bar{h},~\bar{\Omega}.$ A solution of the optimal transportation equation for these densities and a given cost function $c(x,\bar{x})$ is a function $u(x)$ which satisfies \begin{align} \det w_{ij} & =e^{-f(x)}e^{\bar{f}\left( T(x,Du)\right) }|\det c_{is}(x,T(x,Du))|\label{OT}\\ T(x,Du)\left( \Omega\right) & =\bar{\Omega} \end{align} where \begin{equation} w_{ij}=u_{ij}(x)-c_{ij}(x,T(x,Du))=c_{si}T_{j}^{s} \label{this is w} \end{equation} and $T(x,Du)=(T^{1},T^{2},\ldots,T^{n})\in\bar{\Omega}$ is determined by \[ u_{i}(x)=c_{i}(x,T(x,Du)). \] (Such a solution must also be $c$-convex. In our setting, the two notions of convexity are very close, so we won't belabour this point here; see Lemma \ref{c convexity lemma}.) We will use the following convention: the derivatives of the cost function in the first variable $x$ will be denoted by indices $i,j,k$, etc. 
The second variable $\bar{x}$ will be denoted by indices $p,s,t,$ etc. Also, an upper index denotes the inverse, i.e.\ $c^{is}=(c_{is})^{-1}$. The cost $c(x,\bar{x})$ will satisfy the standard conditions (A1) and (A2) but not (A3) (see for example [MTW], Section 2). We will require further that the second derivatives of the cost satisfy the following assumptions \begin{equation} \left\Vert \left( c^{is}-I\right) \right\Vert \leq\epsilon_{0}\leq1/20 \tag{c-a1} \end{equation} \begin{equation} C(n)\left( \left\Vert D^{3}c\right\Vert +\left\Vert D^{4}c\right\Vert \right) \leq\epsilon_{0}\leq1/20 \tag{c-a2} \end{equation} where $C(n)$ is a dimensional constant, and the derivative norms are with respect to both barred and unbarred directions. Finally we will require that the densities are somewhat close to uniform \begin{equation} e^{-f(x)}e^{\bar{f}\left( \bar{x}\right) }|\det c_{is}|\in\lbrack \Lambda^{-1},\Lambda] \tag{cm-a3} \end{equation} for all $x,\bar{x}$ $\in\Omega\times\bar{\Omega}$ with \begin{equation} \Lambda\leq\left( \frac{n}{3/2}\right) ^{n}. \tag{cm-a3b} \end{equation} We are now ready to state our result. \begin{theorem} Let $m,\bar{m}$ be the mass densities defined by (\ref{mass}), (\ref{massbar}) with $f,\bar{f}$ satisfying assumptions (a1-3) on regions $\Omega,\bar{\Omega}$ whose defining functions also satisfy (a1-3). There exists an $\epsilon_{0}(n)$ such that if the cost function satisfies the standard assumptions (A1) and (A2), and (c-a1), (c-a2) and (cm-a3) hold, then the optimal map transporting $m$ to $\bar{m}$ is regular. \end{theorem} \begin{remark} These conditions are nonvacuous. For example take $f,h,\bar{f},\bar{h}$ all to be \[ \frac{2}{3}|x|^{2}-\frac{1}{4}, \] and \[ c(x,\bar{x})=-x\cdot\bar{x}. \] One can check that all the assumptions are satisfied with plenty of room to perturb any of the problem's components. \end{remark} The following theorem will follow by a change of coordinates and rescaling. \begin{theorem} Let $x_{0},\bar{x}_{0}$ be two points in manifolds $X,\bar{X}$ such that near $(x_{0},\bar{x}_{0})$ the cost function is smooth and satisfies the standard nondegeneracy conditions (A1), (A2). Then there exists a large $\lambda$, depending on the cost function, so that the optimal map from the Gaussian (after a choice of coordinates) \[ e^{-\lambda^{2}|x-x_{0}|^{2}/2}\chi_{B_{1/\lambda}(x_{0})} \] to \[ e^{-\lambda^{2}|\bar{x}-\bar{x}_{0}|^{2}/2}\chi_{B_{1/\lambda}(\bar{x}_{0})} \] is smooth. \end{theorem} \begin{remark} We do not attempt to obtain any sharp results; rather, the convenient smallness assumptions are to minimize crunchiness of the proof. Inspection of the proof will show that our choice of assumptions is robust. There is a rather large gap between what is covered here and the counterexamples, and we have no reason to suspect that these results are near sharp. \end{remark} \begin{remark} We would like to obtain a similar result for complete Gaussians, as Caffarelli obtained in the Euclidean case in \cite{C2}. In fact, it was an attempt to generalize the calculation in \cite{C2} that led to this result. A limitation of our current method is that we cannot force (cm-a3) to hold on large regions. \end{remark} \subsection{Proof Heuristic} We will solve the problem by continuity, starting with Euclidean cost, obtaining second derivative estimates using the approach of Urbas \cite{U} and Trudinger and Wang \cite{TW}, making use of the Ma, Trudinger and Wang \cite{MTW} calculation together with the calculation of Caffarelli \cite{C2}. 
Making these methods work in the absence of the MTW condition, we use the following observation: the bound $M$ on the second derivatives will satisfy the following type of inequality \begin{equation} \delta M^{2}-tM^{n+1}-1\leq0. \label{heu} \end{equation} When $t$ is zero, this bounds $M,$ so $M$ is initially bounded. If $t$ is small, it follows that $M(t)$ lies either in a relatively small compact interval containing $[-1/2,1/2]$ or in a noncompact interval. The bound $M(t)$ changes continuously with $t$; thus the interval it lies in cannot change, and from the initial bound we may conclude that for all $t$ in some interval of fixed size, $M(t)$ is bounded. The quadratic coefficient $\delta$ in (\ref{heu}) (same $\delta$ as in (a2)) arises when the target distribution is log-concave, as is the case with Gaussians. This fact is essential to the proof. \section{Calculations} Recall the symmetric tensor $w$ (\ref{this is w}). We use the quantities defined as follows \begin{align*} W(x) & =\sum w_{ii}\sim\max_{i}w_{ij}\sim\left\vert T_{j}^{s}\right\vert \\ \bar{W}(x) & =\sum w^{ii}\sim1/\min_{i}w_{ii}\\ C_{3} & \geq\left\Vert D^{3}c\right\Vert C(n)\\ C_{4} & \geq\left\Vert D^{4}c\right\Vert C(n)\\ \frac{1}{C_{2}}\left\vert \xi\right\vert ^{2} & \leq-c^{si}\xi_{i}\xi_{s}\leq C_{2}\left\vert \xi\right\vert ^{2} \end{align*} From (cm-a3) and the Newton-McLaurin inequalities, it follows that \begin{align} \bar{W},W & \geq n\frac{1}{\Lambda^{1/n}}\label{2.1}\\ \bar{W} & \leq\frac{1}{n^{n-2}}\Lambda W^{n-1}\label{2.2}\\ W & \leq\frac{1}{n^{n-2}}\Lambda\bar{W}^{n-1}\label{2.3} \end{align} and plugging in (cm-a3b) \begin{equation} W,\bar{W}\geq3/2. \end{equation} Notice that (a1), (a2), (c-a1), (c-a2) imply the following inequality for any vector $\xi\in\mathbb{R}^{n}$ \begin{equation} \left( \bar{h}_{st}-c^{kp}c_{kst}\bar{h}_{p}\right) \xi_{s}\xi_{t}\geq \frac{9}{10}|\xi|^{2}.\label{cconvex} \end{equation} Throughout this section we will be assuming we have a smooth solution $u$ to the equation (\ref{OT}) on $\Omega.$ Our goal is to prove second derivative estimates. We make use of the linearized operator at a solution $u,$ from \cite{TW}, defined by \[ Lv=w^{ij}v_{ij}-\left( w^{ij}c_{ij,s}c^{sk}+\bar{f}_{s}(T(x,Du))c^{sk}+c^{is}c_{si,p}c^{pk}\right) v_{k}. \] The following has an immediate consequence when maxima occur in the interior, and is also crucial in the boundary estimates in Section 4. The proof is a moderately long calculation and follows by the arguments in \cite{MTW}. \begin{lemma} \label{Lw} Suppose $u\left( x\right) $ is a solution to (\ref{OT}). Then \begin{align*} Lw_{11} & =w^{ij}\left[ 2c_{ijs1}T_{1}^{s}+c_{ijst}T_{1}^{s}T_{1}^{t}-2c_{11is}T_{j}^{s}-c_{11st}T_{i}^{s}T_{j}^{t}\right] \\ & -c_{11p}c^{kp}[-f_{k}+\bar{f}_{s}T_{k}^{s}+c^{is}c_{ist}T_{k}^{t}-c_{ijs}w^{ij}T_{k}^{s}-c_{skj}c^{sj}-c^{ti}c^{sj}c_{kst}w_{ij}]\\ & +\bar{f}_{st}T_{1}^{s}T_{1}^{t}-f_{11}+c^{is}(c_{is11}+2c_{ist1}T_{1}^{t}+c_{istp}T_{1}^{t}T_{1}^{p})\\ & +(c_{1}^{is}+c_{t}^{is}T_{1}^{t})(c_{is1}+c_{isp}T_{1}^{p})\\ & +\left( w^{ij}c_{ijp}+\bar{f}_{p}+c^{ij}c_{isp}\right) c^{pk}\left( c_{11s}T_{k}^{s}-c_{k1,s}T_{1}^{s}-c_{ks,1}T_{1}^{s}-c_{kst}T_{1}^{s}T_{1}^{t}\right) \\ & -w_{1}^{ij}w_{ij1}. \end{align*} \end{lemma} Applying the maximum principle, we obtain the following. \begin{corollary} If the largest eigenvalue $W$ of $w$ is attained in the interior, it must satisfy \begin{equation} \frac{\bar{\delta}}{C_{2}}W^{2}-\left( C_{4}+C_{3}+C_{3}|Df|\right) W^{n+1}-|D^{2}f|-C(C_{3},C_{4})\leq0. \label{split1} \end{equation} \end{corollary} 
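To see numerically how an inequality of the form (\ref{split1}) pins $W$ to a compact interval for small $C_{3}+C_{4}$ (the mechanism described in the Proof Heuristic), the following sketch is included. It is purely illustrative and not part of the argument: all constants are collapsed into a single parameter $t$, and the choices $n=2$, quadratic coefficient $1$, and the grid are arbitrary.
\begin{verbatim}
# Toy model of the root structure behind the continuity argument:
# p_t(M) = M^2 - t*M^(n+1) - 1 <= 0.
import numpy as np

n = 2
grid = np.linspace(0.0, 60.0, 600001)

def admissible_components(t):
    """Endpoints of the components of {M >= 0 : p_t(M) <= 0} on the grid."""
    p = grid**2 - t * grid**(n + 1) - 1.0
    roots = grid[np.where(np.diff(np.sign(p)) != 0)[0]]
    if len(roots) == 0:      # p_t <= 0 everywhere: no a priori bound at all
        return None, None
    if len(roots) == 1:      # t = 0: only the compact component [0, r1]
        return roots[0], None
    return roots[0], roots[1]

for t in [0.0, 0.05, 0.1, 0.2, 0.3, 0.4]:
    r1, r2 = admissible_components(t)
    if r1 is None:
        print(f"t = {t:4.2f}:  components have merged; no bound on M")
    else:
        print(f"t = {t:4.2f}:  compact component [0, {r1:5.2f}]"
              + ("" if r2 is None else f", noncompact component [{r2:5.2f}, inf)"))
\end{verbatim}
For small $t$ the compact component containing the initial bound stays well separated from the noncompact one, which is exactly what allows the continuity argument in Section 5 to keep $M(t)$ bounded.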
\bigskip The next computation is implicit throughout \cite{TW}, Sections 2, 3 and 4. We state it for concreteness. \begin{lemma} \label{Lv} Let $v(x)=F(x,T(x,Du)).$ Then \begin{align} Lv & =w^{ij}F_{ij}+2F_{is}c^{is}+F_{st}c^{is}c^{jt}w_{ij}\label{eq:Lv}\\ & +F_{p}\left( -c^{pk}f_{k}-c^{pk}c_{ks,j}c^{js}-c_{kst}c^{pk}c^{is}c^{jt}w_{ij}\right) \nonumber\\ & -F_{k}\left( w^{ij}c_{ijs}c^{sk}+\bar{f}_{s}c^{sk}+c^{is}c_{sit}c^{tk}\right) . \nonumber \end{align} \end{lemma} \begin{corollary} \label{Lh} Given conditions (c-a1), (c-a2) and (a1), (a2) on the functions $f,\bar{f},$ $h,$ and $\bar{h},$ we have \[ Lh\geq\frac{9}{10}\delta\bar{W}-\frac{11}{10} \] \[ L\bar{h}(T(x,Du))\geq\frac{9}{10}\delta W-\frac{11}{10}. \] \end{corollary} \bigskip \subsection{Obliqueness} We follow the argument from [TW], Section 2. Defining \begin{align*} \gamma & =Dh\\ \beta & =\bar{h}_{s}c^{si}\partial_{i} \end{align*} we let \[ \chi=h_{k}\bar{h}_{s}c^{sk}=\gamma\cdot\beta. \] From Lemma \ref{Lv} with our assumptions we have \[ L\chi\leq\bar{W}(\left\vert D^{3}h\right\vert +C_{3}+C_{4})+W(\left\vert D^{3}\bar{h}\right\vert +C_{3}+C_{4})+C_{5}(n). \] Then Corollary \ref{Lh} gives \[ L\left\{ \chi-\lambda h-\lambda\bar{h}\circ T(x)\right\} \leq\bar{W}(\frac{11}{10}-\frac{9}{10}\lambda)+W(\frac{11}{10}-\frac{9}{10}\lambda)+2\cdot\frac{11}{10}\lambda+C_{5}(n) \] which is negative for $\lambda$ reasonably chosen. (Throughout we are using bounds (\ref{2.1}) etc., and our initial assumptions.) This function will then have a minimum at the boundary, precisely at the point where $\chi$ achieves a minimum on the boundary, and at this point we have \[ \left\{ D\chi-\lambda D(\bar{h}\circ T)-\lambda Dh\right\} \cdot\frac{\gamma}{|\gamma|}\leq0 \] or \begin{equation} D\left\{ \chi-\lambda\bar{h}\circ T\right\} =\tau\gamma\label{tau} \end{equation} for some $\tau\leq\lambda.$ Now computing (following \cite[2.31-2.33]{TW}), using (\ref{cconvex}) and (\ref{this is w}) with our other assumptions including (\ref{beta}), we conclude \begin{align} D\chi\cdot\beta & =c^{ti}\bar{h}_{t}\left( h_{ki}c^{sk}\bar{h}_{s}+h_{k}\left( c_{i}^{sk}+c_{p}^{sk}T_{i}^{p}\right) \bar{h}_{s}+h_{k}c^{sk}\bar{h}_{sp}T_{i}^{p}\right) \nonumber\\ & =h_{ki}\beta^{k}\beta^{i}+c^{ti}\bar{h}_{t}h_{k}\bar{h}_{s}c_{i}^{sk}+c^{ti}\bar{h}_{t}h_{k}T_{i}^{p}(\bar{h}_{sp}c^{sk}-\bar{h}_{s}c^{sm}c^{rk}c_{mrp})\nonumber\\ & =h_{ki}\beta^{k}\beta^{i}+c^{ti}\bar{h}_{t}h_{k}\bar{h}_{s}c_{i}^{sk}+h_{k}\bar{h}_{t}T_{a}^{t}c^{pa}c^{rk}(\bar{h}_{rp}-\bar{h}_{s}c^{sm}c_{mrp})\label{adf}\\ & \geq|\beta|^{2}\delta-C_{3}\geq\frac{1}{5}\delta.\nonumber \end{align} The third term in (\ref{adf}) can be expressed as an inner product $g$ of the gradients of the functions $h(x)$ and $\bar{h}\circ T(x),$ which are both multiples of the outward normal, where \[ g(\xi,\nu)=(\bar{h}_{rp}-\bar{h}_{s}c^{sm}c_{mrp})c^{rk}c^{pa}\xi_{k}\nu_{a}. \] Thus \begin{align*} \tau\gamma\cdot\beta & =D\chi\cdot\beta-\lambda D(\bar{h}\circ T)\cdot\beta\\ & \geq\delta/5-\lambda\bar{h}_{s}T_{i}^{s}c^{it}\bar{h}_{t}\\ & =\delta/5-\lambda w_{\beta\beta}. \end{align*} Thus from $\tau \leq \lambda,$ \begin{equation} \lambda\chi\geq\delta/5-\lambda w_{\beta\beta}. \label{chi1} \end{equation} Using symmetry (replacing all quantities with barred quantities, we find the problem does not change; again see \cite{TW} and Lemma \ref{c convexity lemma}), we may assume \begin{equation} \lambda\chi\geq\delta/5-\lambda\bar{w}_{\gamma\gamma}. 
\label{chi2} \end{equation} Then, using the Urbas formula \cite{U}, \cite[2.13]{TW}, \[ (\beta\cdot\gamma)^{2}=w^{ij}\gamma_{i}\gamma_{j}w_{\beta\beta} \] or \begin{equation} \chi^{2}=\bar{w}_{\gamma\gamma}w_{\beta\beta}, \label{chi3} \end{equation} and combining (\ref{chi1}), (\ref{chi2}) and (\ref{chi3}) we have \begin{equation} \chi\geq\frac{\delta}{10\lambda}=\theta. \label{theta} \end{equation} \begin{corollary} \label{delta} The following bound holds for the angle between $\beta$ and $\gamma$: \[ \angle(\beta,\gamma)\leq\Delta<\pi/2. \] \end{corollary} \subsection{Cost-convexity} \begin{lemma} \label{c convexity lemma} Suppose $u\left( x\right) $ is a solution to (\ref{OT}) on a domain in $\mathbb{R}^{n}$. If $D^{2}u\geq2\epsilon_{0},$ and the cost function differs from the Euclidean cost function by less than $\epsilon_{0}$ in $C^{2},$ then $u$ is $c$-convex, and the mapping $T(x,Du)$ is one to one. \end{lemma} \begin{proof} It suffices to consider $c=-x\cdot y+\phi(x,y),$ where $\phi$ is small in $C^{2}(\Omega).$ At a point $x_{0}$, we have $Du(x_{0})=Dc\left( x_{0},T(x_{0},Du)\right) = -T(x_{0},Du)+D\phi(x_{0},T(x_{0})).$ At another point $x_{1}$, \[ \langle Du(x_{1})-Du(x_{0}),x_{1}-x_{0}\rangle\geq2\epsilon_{0}|x_{1}-x_{0}|^{2}. \] Now suppose that $u$ is not strictly $c$-convex. Clearly the issue would have to be nonlocal, as locally \[ D^{2}u-D^{2}c\geq2\epsilon_{0}-\epsilon_{0}>0. \] Thus we can assume that there is a point $x_{0}$ and a locally supporting cost function \[ c_{y_{0}}(x)=-x\cdot T(x_{0})+\phi(x,T(x_{0})) \] which contacts $u$ from below near $x_{0}$ but touches $u$ (possibly transversely) at a point $x_{1}.$ It follows that \[ \langle Dc_{y_{0}}(x_{1})-Dc_{y_{0}}(x_{0}),x_{1}-x_{0}\rangle\geq\langle Du(x_{1})-Du(x_{0}),x_{1}-x_{0}\rangle \] that is \[ \left\Vert D^{2}\phi\right\Vert _{C^{1,1}}|x_{1}-x_{0}|^{2}\geq2\epsilon_{0}|x_{1}-x_{0}|^{2}, \] a contradiction. It follows that $u$ is $c$-convex and $T$ is one to one. \end{proof} \bigskip \subsection{Boundary Estimate} \bigskip Let \[ M=\max_{\left\vert e\right\vert =1,\ e\in T_{x}\Omega}w_{ee} \] be the maximum of all eigenvalues $W$ over all of $\Omega$. Throughout this section we will assume that the maximum occurs on the boundary. Recalling (\ref{2.3}) and Lemma \ref{Lv}, we may choose a $C_{6}$ so that \[ L\left( C_{6}M^{(n-2)/(n-1)}h-\bar{h}(T(x,Du))\right) \geq0. \] Since $h,\bar{h}$ both vanish on the boundary, the derivatives must satisfy \[ D_{\beta}\bar{h}\circ T(x,Du)\leq C_{6}M^{(n-2)/(n-1)} \] that is \[ \bar{h}_{s}T_{i}^{s}\beta_{i}=\bar{h}_{s}c^{sj}w_{ij}\bar{h}_{t}c^{ti}=w_{\beta\beta}\leq C_{6}M^{(n-2)/(n-1)}. \] \begin{lemma} \label{brendle} At a point $x_{0}$ on the boundary $\partial\Omega,$ suppose $w_{ee}\leq M$ for unit directions $e$ which are tangential to the boundary. If $z$ is any vector in $T_{x_{0}}\Omega,$ then \[ w_{zz}\leq M|\hat{z}|^{2}+\frac{1}{\theta^{2}}\langle z,\nabla h\rangle^{2}w_{\beta\beta}, \] where \[ \hat{z}=z-\frac{\gamma\cdot z}{\gamma\cdot\beta}\beta=z-y, \] and $\theta$ is defined by (\ref{theta}). \end{lemma} \begin{proof} Dotting with $\gamma$ verifies that $\hat{z}$ is tangential, thus \[ 0=\partial_{\hat{z}}\bar{h}\circ T(x,Du)=\bar{h}_{s}T_{j}^{s}\hat{z}_{j}=\bar{h}_{s}c^{is}w_{ij}\hat{z}_{j}. \] Now \[ w_{zz}=w_{\hat{z}\hat{z}}+2w_{\hat{z}y}+w_{yy} \] but \[ w_{\hat{z}y}=w_{ij}\hat{z}_{j}\bar{h}_{s}c^{is}=0 \] so \[ w_{zz}\leq M|\hat{z}|^{2}+\left( \frac{\gamma\cdot z}{\gamma\cdot\beta}\right) ^{2}w_{\beta\beta}. 
\] \end{proof} Now suppose that the maximum tangential derivative $w_{11}=M^{T}$ happens at a point $x_{0}$, where $e_{1}$ is a tangential direction. Define the function \[ \eta=w_{11}-M^{T}|\hat{e}_{1}(x)|^{2}-C_{6}\frac{1}{\theta^{2}}\langle e_{1},\nabla h(x)\rangle^{2}M^{(n-2)/(n-1)}+C_{7}(M+1)(h+\bar{h}\circ T) \] where \[ |\hat{e}_{1}(x)|^{2}=\left\vert e_{1}-\frac{h_{1}(x)}{\xi(\bar{h}_{s}(T)c^{sk}h_{k}(x,T))}\beta\right\vert ^{2} \] with $\xi$ a smooth function satisfying $\xi(t)=t$ for $t>\theta/2,$ and $\xi(t)\geq\theta/4.$ Now computing, using Lemma \ref{Lw} and (\ref{2.2}), \begin{align*} L\eta & \geq\delta w_{11}^{2}-\left( C_{4}+C_{3}\right) \bar{W}W^{2}-C(n)-M\left\vert L|\hat{e}_{1}(x)|^{2}\right\vert -C_{6}\frac{1}{\theta^{2}}|L\langle e_{1},\nabla h(x)\rangle^{2}|M^{(n-2)/(n-1)}\\ & +C_{7}(M+1)\left\{ \frac{9}{10}\delta\left( \bar{W}+W\right) -2(1+\mu)\right\} \end{align*} and using (considering Lemma \ref{Lv}) \begin{align*} \left\vert L|\hat{e}_{1}(x)|^{2}\right\vert & \leq C_{8}(\bar{W}+1+W)\\ \left\vert LC_{6}\langle e_{1},\nabla h(x)\rangle^{2}\right\vert & \leq C_{8}(\bar{W}+1+W) \end{align*} we may choose \[ C_{7}=C_{8}+\left( C_{4}+C_{3}\right) \left( \bar{M}+M\right) \] so that \[ L\eta\geq0. \] Next we show a lower bound on $D_{\beta}w_{11}(x_{0}).$ First, observe that due to Lemma \ref{brendle}, $\eta$ has a maximum at $x_{0}.$ It follows from the Hopf maximum principle that $D\eta\cdot\beta=\nu\gamma\cdot\beta\geq0.$ Thus (recalling $h_{1}(x_{0})=0$) \begin{align} D_{\beta}w_{11}(x_{0}) & \geq M^{T}D_{\beta}|\hat{e}_{1}|^{2}+D_{\beta}C_{6}\langle e_{1},\nabla h(x)\rangle^{2}M^{(n-2)/(n-1)}\label{eq:42}\\ & -\left\{ C_{8}+\left( C_{4}+C_{3}\right) \left( \bar{M}+M\right) \right\} M(D_{\beta}h+D_{\beta}H)\nonumber\\ & \geq-C(n)M^{T}-\left\{ C_{8}+\left( C_{4}+C_{3}\right) \left( \bar{M}+M\right) \right\} C_{6}(n)(1+M^{(2n-3)/(n-1)}).\nonumber \end{align} \bigskip Finally we will derive a relation between the maximum $M$ of all eigenvalues of $w$ and the maximum $M^{T}$ over tangential directions. Consider the point where the maximum of all eigenvalues of $w$ is attained. (Again, in this section we assume this happens along the boundary.) We diagonalize $w={\rm diag}(M,\lambda_{2},\ldots,\lambda_{n})$ with respect to some coordinates $e_{1},\ldots,e_{n},$ choosing $e_{1}\cdot\gamma\geq0.$ Now \[ w_{\beta\beta}=(\beta\cdot e_{1})^{2}M+(\beta\cdot e_{2})^{2}\lambda_{2}+\ldots+(\beta\cdot e_{n})^{2}\lambda_{n}\leq C_{6}(n)M^{(n-2)/(n-1)} \] thus \begin{equation} (\beta\cdot e_{1})^{2}\leq C_{6}(n)M^{-1/(n-1)}.\label{alternative:1} \end{equation} It follows that there is a $C_{10}$ depending on $C_{6}(n)$ and $\Delta$ (recall Corollary \ref{delta}), such that if $M\geq C_{10},$ then \[ \left\vert \angle(\beta,e_{1})-\pi/2\right\vert <\frac{1}{2}(\pi/2-\Delta), \] in particular \[ \angle(\gamma,e_{1})\geq\frac{1}{2}(\pi/2-\Delta). \] Thus the length of the projection of the maximum eigenvector of $w$ onto the tangent plane is at least some value $\sigma M$ depending on $\Delta.$ So we may assume that either $M\leq C_{10},$ or the maximum tangential value $M^{T}$ satisfies $M^{T}\geq\sigma M.$ \begin{proposition} Suppose that the global maximum for $w$ is attained along the boundary. 
Then if $M\geq C_{10}$, $M$ must satisfy \begin{equation} M^{2}-\left( C_{4}+C_{3}\right) M^{n+1}\leq C_{11}. \end{equation} \end{proposition} \begin{proof} Differentiating $\bar{h}\circ T(x,Du)$ twice tangentially, \begin{align} \partial_{11}\bar{h}\circ T(x,Du) & =\bar{h}_{s}T_{11}^{s}+\bar{h}_{st}T_{1}^{s}T_{1}^{t}=-\langle\nabla\bar{h}\circ T,II(1,1)\rangle\label{diff 11}\\ & =\bar{h}_{p}\left( c^{pk}w_{11,k}+c^{pk}c_{11,s}T_{k}^{s}-c^{pk}c_{k1,s}T_{1}^{s}-c^{pk}c_{ks,1}T_{1}^{s}-c_{kst}c^{pk}T_{1}^{s}T_{1}^{t}\right) \nonumber\\ & +\bar{h}_{st}c^{si}w_{i1}c^{tj}w_{j1} \end{align} using \cite[4.11]{MTW}. Now using $\bar{h}_{p}c^{pk}w_{11,k}=w_{11,\beta}$, (\ref{eq:42}) and the discussion in the previous paragraph, we conclude that if $M\geq C_{10},$ \begin{align*} & \delta\sigma M^{2}-C(n)M^{T}-\left\{ C_{8}+\left( C_{4}+C_{3}\right) \left( \bar{M}+M\right) \right\} C_{6}(n)M^{(2n-3)/(n-1)}-C_{3}W^{2}\\ & \leq C_{6}M^{(n-2)/(n-1)}. \end{align*} Using Young's inequality to clean up the expression, we have \begin{equation} M^{2}-\left( C_{4}+C_{3}\right) M^{n+1}\leq C_{11}.\label{split:2} \end{equation} \end{proof} \section{Proof of Theorem} We now go through the alternatives and make our choice of constants, in order to bound $w$ and consequently $D^{2}u$. First, if the maximum happens in the interior, then (\ref{split1}) gives \begin{equation} M^{2}-\left( C_{4}+C_{3}\right) M^{n+1}\leq C_{12}.\label{b1} \end{equation} If not, then either (\ref{split:2}) \begin{equation} M^{2}-\left( C_{4}+C_{3}\right) M^{n+1}\leq C_{11}\label{b2} \end{equation} or \begin{equation} M\leq C_{10},\label{b3} \end{equation} by the discussion surrounding (\ref{alternative:1}). So we simply must choose $\left( C_{4}+C_{3}\right) $ small enough, say \[ \left( C_{4}+C_{3}\right) \leq\varepsilon_{0}, \] so that the noncompact region defined by (\ref{b1}) does not intersect the compact regions defined by (\ref{b2}) and (\ref{b3}); similarly for the noncompact region defined by (\ref{b2}). Further, in order to have $c$-convexity, we must assume that the conditions of Lemma \ref{c convexity lemma} are satisfied. The upper bounds in the above alternatives provide lower bounds on the Hessian, so we choose $C_{3}$ small enough so that Lemma \ref{c convexity lemma} is satisfied. Now by the theory of Delano\"{e} [D], Caffarelli [C1] and Urbas [U] we have a classical solution to the problem for the distance squared cost \[ c^{0}(x,y)=|x-y|^{2}/2 \] in Euclidean space. We use the method of continuity. Openness is provided by Theorem 17.6 in [GT], where we set \[ G:C^{2,\alpha}(\Omega)\times\lbrack0,1]\rightarrow C^{0,\alpha}\left( \Omega\right) \times C^{1,\alpha}\left( \partial\Omega\right) \] with \begin{align*} G(u,t) & =\\ & \left( \begin{array}[c]{c} \ln\det\left[ u_{ij}-c_{ij}^{(t)}(x,T^{(t)}(x,Du))\right] -h(x)+\bar{h}(T^{(t)}(x,Du))-\ln\det\left[ c_{is}^{(t)}(x,T^{(t)}(x,Du))\right] ,\\ \bar{h}(T^{\left( t\right) }(x,Du)) \end{array} \right) \end{align*} where the cost function changes from Euclidean to $c$ as \[ c^{(t)}=(1-t)c^{0}+tc \] and $T^{(t)}$ is defined by \[ Dc^{(t)}(x,T^{(t)}(x,Du))=Du. \] Our initial solution $u_{0}$ is smooth, so it satisfies the above estimates ((\ref{b1}), etc.) with $C_{3},C_{4}=0.$ These bounds change continuously with $t$, so $D^{2}u$ must stay in the compact components of (\ref{b1}), (\ref{b2}) and (\ref{b3}). 
As is standard for this problem, we cite [LT] to obtain the $C^{2,\alpha}$ estimates. By [GT], Theorem 17.6, we have openness in $t,$ and the estimates give us closedness as long as $\left\vert D^{4}c^{\left( t\right) }\right\vert ,\left\vert D^{3}c^{\left( t\right) }\right\vert \leq\varepsilon_{0}$. This completes the proof of Theorem 1.1. \section{Theorem 2} \bigskip First we employ a change of coordinates so that \[ c_{is}(x_{0},\bar{x}_{0})=-I_{n}. \] \begin{proof} Then, on a product of very small balls $B_{1/\lambda}(x_{0})\times B_{1/\lambda}(\bar{x}_{0})$ we have \[ \frac{1}{C_{2}}\left\vert \xi\right\vert ^{2}\leq-c^{si}\xi_{i}\xi_{s}\leq C_{2}\left\vert \xi\right\vert ^{2} \] for some $C_{2}$ near $1,$ and $\left\vert D^{3}c\right\vert ,\left\vert D^{4}c\right\vert \leq C$, which may be large but finite. We now rescale and consider the following problem on $B_{1}(0)\times B_{1}(\bar{0})$: Let \[ c^{(\lambda)}(y,\bar{y})=\lambda^{2}c\left(\frac{y}{\lambda},\frac{\bar{y}}{\lambda}\right) \] be the cost function, and let the distributions to be transported be Gaussians, satisfying (a1-3) on $B_{1}(0),B_{1}(\bar{0}).$ This cost function $c^{(\lambda)}$ now satisfies the conditions in our first theorem, as we see that choosing $\lambda$ large enough will make the third and fourth derivatives arbitrarily small. It follows by Theorem 1.1 that the solution to this rescaled optimal transportation problem is smooth. However, the coordinate change and ``change of currency'' do not change the underlying optimal transportation problem. Thus we also have smoothness for the solution of the problem sending \[ m=e^{-\lambda^{2}|x-x_{0}|^{2}/2}\chi_{B_{1/\lambda}(x_{0})} \] to \[ \bar{m}=e^{-\lambda^{2}|\bar{x}-\bar{x}_{0}|^{2}/2}\chi_{B_{1/\lambda}(\bar{x}_{0})}. \] This completes the proof. \end{proof}
\section{Introduction} In the nonlinear $\sigma$ model description of the two-dimensional (2D) Heisenberg model,\cite{csh} the low-energy and low-temperature properties of the system are completely determined by three ground state parameters; the sublattice magnetization $M$, the spin stiffness constant $\rho_s$, and the spinwave velocity $c$. Their values are not given by the theory, however, but have to be determined starting from the microscopic Hamiltonian. A large number of calculations of the ground state parameters have been carried out. The antiferromagnetically ordered ground state, which has been established rigorously only for $S > 1/2$,\cite{s1order} was first convincingly confirmed also for $S=1/2$ in a quantum Monte Carlo (QMC) study by Reger and Young.\cite{reger} The sublattice magnetization obtained this way, $M \approx 0.30$ (in units where the N\'eel state has $M=1/2$), also indicated that spinwave theory \cite{spinwave1,spinwave2} gives a surprisingly good quantitative description of the ground state. The same conclusion was reached by Singh,\cite{singh} who carried out a series expansion around the Ising limit, and found $M \approx 0.30$, $\rho_s \approx 0.18J$, and $c \approx 1.7J$ ($J$ is the nearest-neighbor exchange coupling), all in good agreement with spinwave theory including the $1/S$ corrections. \cite{spinwave2} Subsequent higher-order spinwave calculations showed that the $1/S^2$ corrections to $M$, $\rho_s$ and $c$ indeed are small. \cite{hamer,igarashi,canali} Several other QMC simulations, \cite{barnes,carlson,gross,trivedi,liang,runge1,runge2,sauer,wiese,beard} exact diagonalizations,\cite{schulz1,schulz2,einarsson} as well as series expansions to higher orders,\cite{weihong} have confirmed and improved on the accuracy of the above estimates. The presently most accurate calculations \cite{runge2,wiese,beard,weihong} indicate that the true values of the ground state parameters deviate from their $1/S^2$ spinwave values by only 1-2\% or less. For most practical purposes (such as extracting $J$ for a system from experimental data), the ground state parameters of the 2D Heisenberg model are now known to quite sufficient accuracy. However, there are still reasons to go to even higher precision. One is that the model is one of the basic ``prototypic'' many-body models in condensed matter physics. It has become a testing ground for various analytical and numerical methods for strongly correlated systems, thus making it important to accurately establish its properties. Another reason is the very detailed predictions that have resulted from field theoretical studies, such as renormalization group calculations for the nonlinear $\sigma$ model,\cite{csh,critical,neuberger,fisher} and chiral perturbation theory.\cite{hasenfratz} Apart from giving the low-energy properties in the thermodynamic limit, these theories also predict the system size dependence of various quantities.\cite{neuberger,fisher,hasenfratz} This is important from the standpoint of numerical calculations such as exact diagonalization and QMC, which are necessarily restricted to relatively small lattices. Finite-size scaling approaches have been very successful in the study of 1D quantum spin systems, having convincingly confirmed various predictions from bosonization and conformal field theory. 
For example, critical exponents and logarithmic corrections have been extracted from the size dependence of ground state energies and finite-size gaps,\cite{lancscaling} and from correlation functions.\cite{dmrgscaling} With the concrete predictions now available, similar studies show great promise for testing theories also in 2D. For the standard Heisenberg model, finite-size scaling has been used extensively and successfully in extrapolating, e.g., the sublattice magnetization for small lattices to infinite system size, \cite{reger,barnes,carlson,gross,trivedi,liang} but only a few studies have so far been accurate enough for reliably addressing the validity of the theoretical predictions for the size {\it corrections}. \cite{runge1,runge2,wiese,beard} In one dimension, exact diagonalization, and more recently the density matrix renormalization group method,\cite{dmrg} enable highly accurate calculations for systems sufficiently large to approach the limit where the asymptotic scaling forms are valid.\cite{lancscaling,dmrgscaling} Calculations with these methods in two dimensions cannot reach linear dimensions large enough to verify the details of the predicted scaling forms, however. Some of the expected leading finite-size behavior has been seen in exact diagonalization studies including systems with up to $6 \times 6$ spins,\cite{schulz1,schulz2} but constants extracted from the size dependence are typically not consistent with other calculations. For example, $c$ extracted from the scaling of $E$ deviates by 15\% from other estimates.\cite{schulz1} There are hence clear indications that these small systems are not yet in the regime where only the dominant corrections are important. QMC can reach significantly larger lattices at the cost of statistical errors which are often relatively large, making it difficult to accurately extract the scaling behavior. Runge carried out Green's function Monte Carlo (GFMC) simulations of $L \times L$ Heisenberg systems with $L$ up to $16$, and found a reasonable consistency with the leading $T=0$ size dependence of the energy and the sublattice magnetization.\cite{runge1,runge2} He also noted the presence of a subleading correction to the energy,\cite{runge2} but the accuracy of the GFMC data was not high enough to extract its order, and furthermore $c$ extracted from the leading correction was sensitive to the subleading one. The extrapolated ground state energy obtained in this study is nevertheless the most accurate estimate reported so far.\cite{runge2} Chiral perturbation theory has recently enabled calculations of scaling forms for finite size {\it and} finite temperature for various quantities. \cite{hasenfratz} Such forms have been used in combination with QMC data in recent work by Wiese and Ying,\cite{wiese} and Beard and Wiese.\cite{beard} Their calculations employed, respectively, methods based on the ``loop-cluster algorithm'' suggested by Evertz {\it et al.} \cite{evertz}, and a continuous-time variant of that method developed by Beard and Wiese.\cite{beard} These algorithms are based on global flips of loops of spins, and overcome the problems with long autocorrelation times typical of standard Suzuki-Trotter \cite{suzuki1,suzuki2} or worldline \cite{worldline} QMC methods (the continuous-time approach furthermore avoids the systematic discretization error of the Trotter approximation). 
Considerably more accurate finite-$T$ data could therefore be generated, and the leading-order scaling forms of chiral perturbation theory were convincingly verified, both in the ``cubic'' regime $T/c \approx 1/L$ (Ref.~\onlinecite{wiese}) and the ``cylindrical'' regime $T/c \ll 1/L$ (Ref.~\onlinecite{beard}). The extrapolated $M$, $\rho_s$ and $c$ are the most accurate reported to date, although there are some minor discrepancies between the two results for $M$ (on the border line of what could be expected within statistical errors alone). In this paper, a finite-size scaling study of $T=0$ data is reported. Using the Stochastic Series Expansion (SSE) QMC algorithm, \cite{sse1,sse2} energy results of unprecedented accuracy are obtained for $L \times L$ lattices with $L$ up to $16$. The relative statistical errors are as low as $\approx 10^{-5}$. Employing a recently suggested data analysis scheme which takes into account covariance among calculated quantities, \cite{covar} very accurate results for the sublattice magnetization are also obtained. Furthermore, the spin stiffness and the long-wavelength susceptibility $\chi (q=2\pi/L)$ are also calculated directly in the simulations. Assuming polynomials in $1/L$ for the size dependences, constrained by scaling forms for $E$ and $M$ predicted from chiral perturbation theory, \cite{hasenfratz} all the computed quantities are included in a coupled $\chi^2$ fit. The quality of the QMC data for $E$ and $M$ is high enough that size corrections {\it beyond the subleading terms} have to be included. The leading-order corrections are fully consistent with the predictions. From a careful statistical analysis of the fits, bounds for the subleading terms are estimated. The subleading energy correction is found to agree with the prediction of chiral perturbation theory to within a statistical error of 5\% (the subleading correction for $M$ is also estimated, but has not yet been calculated analytically). This is the first numerical confirmation of chiral perturbation theory to subleading order. The extrapolated ground state energy, $E=-0.669437(5)$, is the most accurate estimate reported to date, with a statistical error six times smaller than the GFMC result by Runge,\cite{runge2} and is slightly lower than his result. Comparing the finite-size data of the two calculations, a clear tendency to over-estimation of the energy is seen in the GFMC results. This is likely due to a bias originating from ``population control'' in GFMC (a small effect of this nature was in fact anticipated by Runge\cite{runge2}). The results for the sublattice magnetization, $M=0.3070(3)$, and the spin-stiffness, $\rho_s=0.175(2)$, are both slightly lower than the estimates from the finite-$T$ scaling by Beard and Wiese\cite{beard} [$M=0.3083(2)$ and $\rho_s=0.185(2)$]. Although it is at this point difficult to definitely conclude which calculation is more reliable, it can again be noted that the high accuracy of the QMC data for $E$ and $M$ used in the fits carried out in this paper necessitates the inclusion of size corrections beyond the orders considered by Beard and Wiese.\cite{beard} Hence, any remaining effects of neglected corrections of even higher order should be smaller. Quantitative estimates of such effects on the extrapolations performed here support that they are smaller than the statistical errors indicated above. 
For the spinwave velocity the two calculations agree, the result obtained here being $c=1.673(7)$ and the value reported in Ref.~\onlinecite{beard} being $c=1.68(1)$. The outline of the rest of the paper is the following. In Sec.~II the SSE algorithm and the covariance error reduction scheme are outlined. The absence of systematic errors is demonstrated in comparisons with exact results for $4\times 4$ and $6\times 6$ lattices. The fitting procedures and the results of these are discussed in Sec.~III. The study is summarized in Sec.~IV. Some other problems where the methods applied here should be useful are also mentioned. \section{Numerical Methods} The standard 2D Heisenberg model is defined by the Hamiltonian \begin{equation} \hat H = J\sum\limits_{\langle i,j\rangle} {\bf S}_i \cdot {\bf S}_j, \quad (J > 0), \end{equation} where ${\bf S}_i$ is a spin-$1/2$ operator at site $i$ on a square lattice with $N=L\times L$ sites, and ${\langle i,j\rangle}$ denotes a pair of nearest-neighbor sites. Below, in II-A, the SSE approach to QMC simulation of this model is outlined. More details of the algorithm are discussed in Refs.~\onlinecite{sse2} and \onlinecite{chapter}. The SSE method has recently been applied to a variety of spin models, \cite{bilayer1,bilayer2,dimrand,chains} as well as 1D Hubbard-type electronic models.\cite{ssefermions} It was recently noted that correlations between measurements of different observables can be used to significantly increase the accuracy of certain quantities calculated in SSE simulations.\cite{covar} This covariance scheme for analyzing the data is of crucial importance in the present work, and therefore this method is also described below in Sec.~II-B. The high accuracy of the procedures is demonstrated by comparing results for $4\times 4$ and $6 \times 6$ lattices with the exact diagonalization data available for these systems. \subsection{Stochastic Series Expansion} Based on the exact power series expansion of e$^{-\beta\hat H}$, the SSE method \cite{sse1,sse2} can be considered a generalization of Handscomb's QMC scheme.\cite{handscomb,lee} It is the first ``exact'' method proposed for QMC simulations of general lattice Hamiltonians at finite temperature (with the usual caveat of being in practice restricted to models for which the sign problem can be avoided). It is hence not based on a controlled approximation, such as the Trotter formula used in standard worldline methods, \cite{worldline} and directly gives results accurate to within statistical errors. Despite being formulated at finite $T$, temperatures low enough for studying the ground state can easily be reached for moderate-size lattices. As in Handscomb's method for the $S=1/2$ antiferromagnet,\cite{lee} the SSE approach for this model starts from the Hamiltonian written as ($J=1$) \begin{equation} \hat H = -{1\over 2} \sum\limits_{b=1}^{2N} [\hat H_{1,b} - \hat H_{2,b}] + {N\over 2}, \label{ham2} \end{equation} where $b$ is a bond connecting a pair of nearest-neighbor sites $\langle i(b),j(b)\rangle$, and the operators $\hat H_{1,b}$ and $\hat H_{2,b}$ are defined as \begin{mathletters} \begin{eqnarray} \hat H_{1,b} & = & 2[\hbox{$1\over 4$} - S^z_{i(b)}S^z_{j(b)}] \\ \hat H_{2,b} & = & S^+_{i(b)}S^-_{j(b)} + S^-_{i(b)}S^+_{j(b)} . 
\end{eqnarray} \end{mathletters} An exact expression for an operator expectation value \begin{equation} \langle \hat A \rangle = {1\over Z} {\rm Tr}\lbrace \hat A {\rm e}^{-\beta \hat H} \rbrace ,\quad Z = {\rm Tr}\lbrace {\rm e}^{-\beta \hat H} \rbrace , \end{equation} at inverse temperature $\beta = J/T$, is obtained by Taylor expanding e$^{-\beta \hat H}$ and writing the traces as sums over diagonal matrix elements in the basis $\lbrace |\alpha \rangle = |S^z_1,\ldots,S^z_N \rangle \rbrace$. The partition function is then \cite{sse1} \begin{equation} Z = \sum\limits_\alpha \sum\limits_{n=0}^\infty \sum\limits_{S_n} {(-1)^{n_2} \beta^n \over n!} \left \langle \alpha \left | \prod\limits_{i=1}^n \hat H_{a_i,b_i} \right | \alpha \right \rangle , \label{partition} \end{equation} where $S_n$ is a sequence of index pairs defining the operator string $\prod_{i=1}^n \hat H_{a_i,b_i}$, \begin{equation} S_n = [a_1,b_1][a_2,b_2]\ldots [a_n,b_n],\quad a_i \in \lbrace 1,2\rbrace ,\ b_i \in \lbrace 1,\ldots ,2N \rbrace , \label{sn} \end{equation} and $n_2$ denotes the total number of index pairs (operators) $[a_i,b_i]$ with $a_i = 2$ ($n=n_1+n_2$). Eq.~(\ref{partition}) deviates from Handscomb's method,\cite{handscomb,lee} which relies on exact evaluation of the traces of the operator sequences and therefore is limited to models for which this is possible. The Heisenberg model considered here is such a model (one of the very few), but the more general SSE approach of further expanding over a set of basis states is preferable also in this case, for reasons that will be discussed below. The objective now is to develop a scheme for importance sampling of the terms in the partition function (\ref{partition}). A term, or configuration, $(\alpha ,S_n)$ is specified by a basis state $| \alpha \rangle$ and an operator sequence $S_n$. The operators $\hat H_{1,b}$ and $\hat H_{2,b}$ can act only on states where the spins at sites $i(b)$ and $j(b)$ are antiparallel. The diagonal $\hat H_{1,b}$ leaves such a state unchanged, whereas the off-diagonal $\hat H_{2,b}$ flips the spin pair. Defining a propagated state \begin{equation} | \alpha (p) \rangle = \prod\limits_{i=1}^p \hat H_{a_i,b_i} |\alpha \rangle , \quad | \alpha (0) \rangle = | \alpha \rangle , \label{propagated} \end{equation} a configuration $(\alpha ,S_n)$ must clearly satisfy the periodicity condition $|\alpha (n) \rangle = | \alpha (0) \rangle$ in order to contribute to the partition function. For a lattice with $L \times L$ sites and $L$ even, this implies that the total number $n_2$ of the off-diagonal operators must be even, and hence that all terms in (\ref{partition}) are positive and can be used as relative probabilities in a Monte Carlo importance sampling procedure (this is true for any non-frustrated system). For a finite system at finite $\beta$, the powers $n$ contributing significantly to the partition function are restricted to within a well defined regime, and the sampling space is therefore finite in practice. In order to construct an efficient updating scheme for the index sequence it is useful to explicitly truncate the Taylor expansion at some self-consistently chosen upper bound $n=l$, high enough to cause only an exponentially small, completely negligible error.\cite{sse1} One can then define a sampling space where the length of the sequence is {\it fixed}, by inserting a number $l-n$ of unit operators, denoted $\hat H_{0,0}$, in the operator strings. 
The terms in the partition function (\ref{partition}) are divided by ${l \choose n}$, in order to compensate for the number of different ways of inserting the unit operators. The summation over $n$ in (\ref{partition}) is then implicitly included in the summation over all sequences $S_l$ of length $l$, with $[a_i ,b_i]=[0,0]$ as an allowed operator. Denoting by $W(\alpha ,S_l)$ the weight of a configuration $(\alpha ,S_l)$, the partition function is then \begin{equation} Z = \sum\limits_{\alpha} \sum\limits_{S_l} W(\alpha ,S_l) . \end{equation} Since all non-zero matrix elements in (\ref{partition}) equal one, the weight is (when non-vanishing) \begin{equation} W(\alpha ,S_l ) = \left ({\beta\over 2} \right )^n {(l-n)!\over l!}, \label{wl} \end{equation} where $n$ still is the expansion power of the term, i.e., the number of non-$[0,0]$ operators in $S_l$. The following is only a brief outline of the actual sampling scheme. More details can be found in Refs.~\onlinecite{sse2} and \onlinecite{chapter}. During the simulation, $S_l$ and one of the states $|\alpha (p)\rangle$ are stored. Other propagated states are generated as needed. The simulation is started with a randomly generated state $| \alpha (0)\rangle$, with an index sequence $S_l$ containing only $[0,0]$ operators (unit operators), and with some arbitrary (small) $l$. The truncation $l$ is adjusted as the simulation proceeds, as will be discussed further below. With the fixed-length scheme, all updates of the operator sequence can be formulated in terms of substitutions of one or several operators. The simplest involves a diagonal operator at a single position; $[0,0] \leftrightarrow [1,b]$. This update can be carried out consecutively at all positions $p$ for which $a_p \in \lbrace 0,1\rbrace$. In the $\rightarrow$ direction, the bond index $b$ is chosen at random and the update is rejected if the spins connected by $b$ are parallel in the current state $|\alpha (p-1)\rangle$. The Metropolis acceptance probabilities \cite{metropolis} required to satisfy detailed balance are obtained from (\ref{wl}), where the power $n$ is changed by $\pm 1$. Updates involving the off-diagonal operators $[2,b]$ are carried out with $n$ fixed. The simplest is of the type $[1,b][1,b] \leftrightarrow [2,b][2,b]$, involving two operators acting on the same bond. These two sequence updates can generate all configurations with spin flips on retracing paths on the lattice, and are the only ones required for a 1D system with open boundary conditions. For a 2D system, configurations associated with spin flips around any closed loop are possible, and an additional type of update is required. It is sufficient to consider substitutions on a plaquette, of the type $[2,b_1][2,b_2] \leftrightarrow [2,b_3][2,b_4]$, where $b_1 ,\ldots, b_4$ is a permutation of the four bonds of a plaquette. For systems with periodic boundary conditions, updates involving cyclic spin flips on loops wrapping around the whole system are required (sampling of different winding number sectors), and cannot be accomplished by the above local sequence alterations. For the square lattice considered here, the winding number can be changed by substituting $L/2$ operators according to $[2,b_1]\ldots [2,b_{L/2}] \leftrightarrow [2,b_{L/2+1}]\ldots [2,b_L]$, where the set of bonds $b_1 ,\ldots ,b_L$ is a permutation of bonds forming a closed ring around the system in the $x$- or $y$-direction. 
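As a concrete illustration of the simplest of these substitutions, the following Python sketch implements one sweep of the diagonal update. It is a schematic, single-configuration version and not the actual production code: the data layout (a spin array, a list of $[a_p,b_p]$ pairs, and a bond table) and the function name are choices made here, and the acceptance ratios are derived from the weight (\ref{wl}) under the assumption that the bond of an inserted operator is drawn uniformly from the $2N$ bonds.
\begin{verbatim}
import random

def diagonal_update(spins, op_string, bonds, beta, rng=random):
    """One sweep of the [0,0] <-> [1,b] substitutions.
    spins[i] = +1 or -1, bonds[b] = (i(b), j(b)), op_string[p] = [a_p, b_p]
    with a_p in {0, 1, 2}.  Off-diagonal operators (a_p = 2) are left untouched
    but propagate the stored state, so each acceptance test sees |alpha(p-1)>.
    The acceptance ratios follow from Eq. (wl) with a uniform bond proposal."""
    l = len(op_string)
    n = sum(1 for a, b in op_string if a != 0)
    n_bonds = len(bonds)                      # = 2N on the L x L lattice
    for p in range(l):
        a, b = op_string[p]
        if a == 0:                            # try to insert a diagonal operator
            b_new = rng.randrange(n_bonds)
            i, j = bonds[b_new]
            if (spins[i] != spins[j]
                    and rng.random() < beta * n_bonds / (2.0 * (l - n))):
                op_string[p] = [1, b_new]
                n += 1
        elif a == 1:                          # try to remove it again
            if rng.random() < 2.0 * (l - n + 1) / (beta * n_bonds):
                op_string[p] = [0, 0]
                n -= 1
        else:                                 # a == 2: flip the antiparallel pair
            i, j = bonds[b]
            spins[i], spins[j] = spins[j], spins[i]
    # by the periodicity condition the propagated state has returned to |alpha(0)>
    return n                                  # current expansion power
\end{verbatim}
In the full scheme a sweep of this kind is followed by the off-diagonal bond, plaquette and ring substitutions at fixed $n$, which are more involved and are not sketched here.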
Updating the operator sequence with the four types of operator substitutions described above suffices for generating all possible configurations within a sector of fixed magnetization, $m^z=\sum_{i=1}^N S^z_i$. In the grand canonical ensemble, global spin flips changing the magnetization are also required. Here $T \to 0$ will be considered (i.e., $T$ is much lower than the finite-size gap), and since the ground state is a singlet\cite{lieb} the canonical ensemble with $m^z=0$ is appropriate. It can be noted that in Handscomb's method the sampling is (in principle) automatically over all magnetization sectors, and therefore a restriction to, e.g., $m^z=0$ is not possible. In practice, this causes problems at low temperatures, and Handscomb's method has therefore been used for the antiferromagnetic Heisenberg model mostly at relatively high temperatures ($T/J \agt 0.4$ in 2D).\cite{lee} The SSE method with the restriction $m^z=0$ can be used at arbitrarily low $T$. In order to determine a sufficiently high truncation of the expansion, the fluctuating power $n$ is monitored during the equilibration part of the simulation. If $n$ exceeds some threshold $l-\Delta_l/2$, the cut-off is increased, $l \to l + \Delta_l$, by inserting additional $\hat H_{0,0}$ operators at random positions. In practice $\Delta_l \approx l/10$ leads to a rapid saturation of $l$ at a value sufficient to cause no detectable truncation errors. The growth of $l$ during equilibration is illustrated for a $4 \times 4$ system in Fig.~\ref{figlen}. The distribution of $n$ during a subsequent simulation is shown in Fig.~\ref{figdist}, and clearly demonstrates that the truncation of the expansion is no approximation in practice. A Monte Carlo step (MC step) is defined as a series of the single-(diagonal)operator substitutions attempted consecutively at each position in $S_l$ (where possible), followed by a series of off-diagonal updates carried out on each bond, plaquette, and ring. Due to the locality of the constraints in these updates, the number of operations (the CPU time) per MC step scales linearly with $N$ and $\beta$.\cite{sse2,chapter} However, the acceptance rate for the ``ring update'' that changes the winding number decreases rapidly with increasing system size. It is therefore sometimes useful to increase the number of attempted ring updates with the system size, which then leads to a faster growth of CPU time with $N$. The acceptance rate of the ring update currently used becomes too low for $L \agt 16$, and simulations of larger systems therefore in practice have to be restricted to the sector with zero winding number. It has recently been noted\cite{patrik} that in fact exact results are obtained as $T \to 0$ even for simulations restricted this way. However, compared to simulations with fluctuating winding numbers, lower temperatures are required for the system observables to saturate at their ground state values.\cite{patrik} Here only systems with $L \le 16$ are considered, and the update changing the winding number is always included. Measurements of physical observables are carried out using the index sequences $S_n$ obtained by omitting the $[0,0]$ operators in the generated $S_l$. These are then, of course, distributed according to the weight function corresponding to Eq.~(\ref{partition}). 
One can show that the internal energy per spin is simply given by the average of $n$ [with the constant term in Eq.~(\ref{ham2}) neglected]: \cite{handscomb,sse1} \begin{equation} E = -{\langle n\rangle \over N \beta} . \label{energy} \end{equation} This expression also shows that the average power, and hence the sequence length $l$, scales as $\beta N$ at low temperatures. A spin-spin correlation function, \begin{equation} C(i,j)=C({\bf r}_i-{\bf r}_j)=\langle S^z_i S^z_j \rangle , \end{equation} is obtained averaging the correlations in the propagated states $| \alpha (p) \rangle$ defined in Eq.~(\ref{propagated}). Further defining \begin{equation} S^z_i[p]= \langle \alpha (p) | S^z_i | \alpha (p) \rangle, \end{equation} the correlation function is given by \cite{sse1} \begin{equation} C(i,j) = \left \langle {1\over n+1} \sum\limits_{p=0}^n S^z_i[p]S^z_j[p] \right \rangle . \label{correl} \end{equation} The corresponding static susceptibility, \begin{equation} \chi (i,j) = \int\limits _0^\beta d\tau \langle S^z_i (\tau) S^z_j (0) \rangle , \end{equation} involves correlations between all the propagated states:\cite{sse2} \begin{equation} \chi (i,j)=\left \langle {\beta\over n(n+1)} \left ( \sum\limits_{p=0}^{n-1} S^z_i[p] \right ) \left ( \sum\limits_{p=0}^{n-1} S^z_j[p] \right ) + {\beta\over (n+1)^2} \left ( \sum\limits_{p=0}^{n} S^z_i[p]S^z_j[p] \right ) \right \rangle . \label{susc} \end{equation} Off-diagonal correlation functions can be easily calculated for operators that can be expressed in terms of the spin-flipping operators $\hat H_{2,b}$, each of which is a sum of two terms; $\hat H^+_b = S^+_{i(b)}S^-_{j(b)}$ and $\hat H^-_b=S^-_{i(b)}S^+_{j(b)}$. The spin stiffness constant involves a static susceptibility defined in terms of these operators. Although the simulation scheme is formulated with $\hat H_{2,b}=H^+_b + H^-_b $, one can still access the terms individually since only one of them can propagate a given state. One can show that an equal-time correlation function, \begin{equation} F_{\sigma \sigma'} (b,b') = \langle \hat H^{\sigma}_{b} \hat H^{\sigma '}_{b'} \rangle , \end{equation} is given by \cite{sse2} \begin{equation} F_{\sigma \sigma'} (b,b') = \left \langle {n-1\over (\beta/2)^2 } N(b\sigma ; b'\sigma ') \right \rangle , \label{equaloff} \end{equation} where $N(b\sigma ; b'\sigma ')$ is the number of times the operators $\hat H^{\sigma}_{b}$ and $\hat H^{\sigma '}_{b'}$ appear next to each other in $S_n$, in the given order. The corresponding static susceptibility, \begin{equation} \chi_{\sigma \sigma'} (b,b') = \int\limits_0^\beta d\tau \langle \hat H^{\sigma}_{b}(\tau) \hat H^{\sigma '}_{b'}(0) \rangle , \label{offsusdef} \end{equation} is given by the remarkably simple formula\cite{sse2} \begin{equation} \chi_{\sigma \sigma'} (b,b') = 4 \left \langle N(b\sigma) N(b'\sigma') - \delta_{bb'} \delta_{\sigma\sigma'} N(b\sigma) \right \rangle / \beta, \label{offsus} \end{equation} where $N(b\sigma)$ is the total number of operators $\hat H_b^\sigma$ in $S_n$. Now a direct estimator for the spin stiffness can be constructed. The stiffness, $\rho_s$, is defined as the second derivative of the ground state energy with respect to a twist $\Phi$ in the boundary condition, around an axis perpendicular to the direction of the broken symmetry. For a finite lattice, where the symmetry is not broken, a factor $3/2$ has to be included in order to account for rotational averaging. 
Distributing the twist equally over all interacting spin pairs $\langle i,j\rangle_x$ in the $x$-direction, the finite-size definition for $\rho_s$ is hence \begin{equation} \rho_s = {3\over 2} {1\over L^2} {\partial^2 E_0 (\phi) \over \partial \phi^2} \Bigl | _{\phi=0}, \label{stiffderiv} \end{equation} where $\phi = \Phi/L$. An expression which is only dependent on the ground state at $\phi=0$ is obtained by expanding the Hamiltonian to second order in $\phi$. The Hamiltonian in the presence of the twist is \begin{equation} \hat H(\phi) = \sum\limits_{~\langle i,j\rangle_x} {\bf S}_i \cdot R(\phi) {\bf S}_j + \sum\limits_{~\langle i,j\rangle_y} {\bf S}_i \cdot {\bf S}_j, \end{equation} where $R(\phi)$ is the rotation matrix \begin{equation} R(\phi) = \left ( \begin{array}{ccc} \cos{(\phi)} & \sin{(\phi)} & 0 \\ -\sin{(\phi)} & \cos{(\phi)} & 0 \\ 0 & 0 & 1 \end{array} \right ). \end{equation} Expanding to second order in $\phi$ results in \begin{equation} \hat H (\phi) - \hat H (0) = -\frac{1}{2} \sum\limits_{~\langle i,j\rangle_x} \left [ \phi^2 (S^x_iS^x_j + S^y_iS^y_j) + i\phi (S^+_iS^-_j - S^-_iS^+_j) \right ]. \end{equation} The first term is proportional to $\hat H(0)$ (for the rotationally invariant case considered here). The expectation value of the second term vanishes, but it gives a contribution quadratic in $\phi$ in second order perturbation theory. Defining the spin current operator \begin{equation} j_s = \frac{i}{2}\sum\limits_{~\langle i,j\rangle_x} (S^+_iS^-_j - S^-_iS^+_j), \end{equation} and the current-current correlation function at Matsubara frequency $\omega_m = 2\pi mT$, \begin{equation} \Lambda _s (\omega_m)= \frac{1}{L^2} \int\limits_0^\beta d\tau {\rm e}^{-i\omega_m \tau} \langle j_s (\tau) j_s (0) \rangle , \label{lambdas} \end{equation} the stiffness is given by \begin{equation} \rho_s = - \hbox{$3\over 2$} [\hbox{$1\over 3$}E + \Lambda _s (0)] , \label{stiffdef} \end{equation} where $E$ is the ground state energy per spin. The QMC estimate for the energy is given by Eq.~(\ref{energy}). The current-current correlator $\Lambda_s \equiv \Lambda_s (0)$ is a sum of integrals of the form (\ref{offsusdef}). Denoting by $N^+_x$ and $N^-_x$ the number in $S_n$ of operators $S^+_iS^-_j$ and $S^-_iS^+_j$, respectively, with $\langle i,j\rangle$ a bond in the $x$-direction, Eqs.~(\ref{lambdas}) and (\ref{offsus}) give \begin{equation} \rho_s = {3/2 \over \beta N } \left \langle ( N^+_x - N^-_x)^2 \right\rangle , \end{equation} i.e., the terms linear in $N^+_x$ and $N^-_x$ cancel. Defining the winding numbers $w_x$ and $w_y$ in the $x$ and $y$ direction: \begin{equation} w_\alpha = (N^+_\alpha - N^-_\alpha) \bigr / L ,\quad (\alpha = x,y), \end{equation} the stiffness can also be written as \begin{equation} \rho_s = \hbox{$3\over 4$} \left \langle w_x^2 + w_y^2 \right \rangle \bigr / \beta. \label{rhow} \end{equation} This definition is clearly valid only for a simulation that samples all winding number sectors. With a restriction to the subspace with $w_x = w_y = 0$, $\rho_s$ can be calculated using the long-wavelength limit of a current-current correlator involving a twist field with a spatial modulation.\cite{scalapino} The above method of calculating the stiffness directly from the winding number fluctuations is clearly strongly related to methods used for the superfluid density in simulations of boson models.\cite{pollock} 
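A minimal sketch of the winding-number measurement behind Eq.~(\ref{rhow}) is given below. It is illustrative only: the data layout (bond table with the second site taken as the neighbor in the positive $x$ or $y$ direction, a direction label per bond) and the function names are assumptions made here, not the actual implementation; the overall sign convention of the accumulated transport is irrelevant since only $w_\alpha^2$ enters.
\begin{verbatim}
def winding_numbers(op_string, spins, bonds, bond_dir, L):
    """Winding numbers w_x, w_y of one configuration, Eq. (w_alpha).
    Each off-diagonal operator [2,b] moves one up spin across bond b;
    bonds[b] = (i, j) with j the neighbor of i in the positive direction,
    bond_dir[b] in {'x', 'y'}.  The net transport equals +-(N^+ - N^-)."""
    spins = list(spins)                      # propagated state |alpha(p)>
    transport = {'x': 0, 'y': 0}
    for a, b in op_string:
        if a == 2:
            i, j = bonds[b]
            transport[bond_dir[b]] += 1 if spins[i] > spins[j] else -1
            spins[i], spins[j] = spins[j], spins[i]
    return transport['x'] / L, transport['y'] / L

def spin_stiffness(winding_list, beta):
    """Eq. (rhow): rho_s = (3/4) < w_x^2 + w_y^2 > / beta over the
    measured configurations."""
    avg = sum(wx * wx + wy * wy for wx, wy in winding_list) / len(winding_list)
    return 0.75 * avg / beta
\end{verbatim}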
\subsection{Error Reduction Using Covariance} In Monte Carlo simulations, fluctuations (statistical errors) of different physical observables are often correlated with each other. These covariance effects can sometimes be used to obtain improved estimators for certain quantities.\cite{covar} In some cases one may have exact knowledge of some quantity independently of the QMC calculation. If there are strong correlations between a known quantity $A$ and some other, unknown quantity $B$, the accuracy of $B$ can be improved via its covariance with the measured $A$, by calculating the average and statistical error under the condition that $A$ equals its known value. In other cases, it may be possible to calculate a quantity in more than one way in the same simulation. If one of the estimates, $A_1$, is more accurate than the other, $A_2$, a covariance between $A_2$ and some other quantity $B$ can again be used to improve the estimate of $B$. With the SSE method the internal energy of the rotationally invariant Heisenberg model can be calculated in two different ways: $E_1$ from the average power of the series expansion according to Eq.~(\ref{energy}), and $E_2=6C(1,0)$ from the nearest-neighbor correlation function $C(1,0)$ calculated according to Eq.~(\ref{correl}). The manifestly rotationally invariant estimator $E_1$ is significantly less noisy than $E_2$. Results for quantities with fluctuations correlated to those of $C(1,0)$, such as $C({\bf r})$ with $r > 1$, can therefore be improved with the aid of $E_1$. For the purpose of accurately measuring correlations between the fluctuations of two different quantities, the so-called ``bootstrap method'' is a useful tool.\cite{bootstrap} With the simulation data divided, as usual, into $M$ ``bin'' averages, a ``bootstrap sample'' $\bar A_R$ is defined as an average over $M$ randomly selected bins (i.e., the same number as the total number of bins, allowing, of course, multiple selections of the same bin). With $r(i)$ denoting the $i$th randomly chosen bin, \begin{equation} \bar A_R = {1 \over M} \sum\limits_{i=1}^M A_{r(i)} . \label{randomav} \end{equation} The statistical error can be calculated on the basis of $M_R$ bootstrap samples $\bar A_{R_i}$, according to\cite{bootstrap} \begin{equation} \sigma^2 = {1 \over M_R} \sum\limits_{i=1}^{M_R} (\bar A_{R_i} - \bar A )^2 , \label{booterror} \end{equation} where $\bar A$ is the regular average over all bins. Note that Eq.~(\ref{booterror}) lacks the factor $(M_R-1)^{-1}$ present in the conventional expression for the variance of an average calculated on the basis of $M_R$ bins. The bootstrap method is in general more accurate (due to a better realization of a Gaussian distribution for the bootstrap samples), in particular if $A$ is not measured directly in the simulation but is some nonlinear function of measured quantities (in which case $A$ should be calculated on the basis of bootstrap samples, not individual bins). Sets of bootstrap samples $\lbrace \bar A_{R_i} \rbrace$ and $\lbrace \bar B_{R_i} \rbrace$ generated on the basis of the same randomly selected bins are well suited for evaluating correlations between the statistical fluctuations of $A$ and $B$, and are used in the covariance error reduction scheme described next. Here this method will be illustrated using simulation results for the staggered structure factor $S(\pi,\pi)$ and the staggered susceptibility $\chi(\pi ,\pi)$.
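Before turning to that illustration, the resampling itself can be summarized in a minimal sketch of Eqs.~(\ref{randomav}) and (\ref{booterror}) (Python; the array \texttt{bins} is assumed to hold the $M$ bin averages of some measured quantity):
\begin{verbatim}
import numpy as np

def bootstrap_samples(bins, M_R, rng=None):
    """M_R bootstrap samples, Eq. (randomav): each is the average of M bins
    drawn at random, with replacement, from the M bin averages."""
    bins = np.asarray(bins, dtype=float)
    rng = np.random.default_rng() if rng is None else rng
    M = len(bins)
    picks = rng.integers(0, M, size=(M_R, M))   # random bin indices r(i)
    return bins[picks].mean(axis=1)

def bootstrap_error(bins, M_R=2000, rng=None):
    """Statistical error, Eq. (booterror): sigma^2 = <(A_R - A_bar)^2>,
    with A_bar the plain average over all bins (no 1/(M_R - 1) factor)."""
    bins = np.asarray(bins, dtype=float)
    A_R = bootstrap_samples(bins, M_R, rng)
    return np.sqrt(np.mean((A_R - bins.mean())**2))
\end{verbatim}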
The structure factor and susceptibility are defined according to \begin{mathletters} \begin{eqnarray} S(\pi,\pi) &=& {1\over N} \sum\limits_{i,j} (-1)^{x_j-x_i+y_j-y_i}C(i,j) \label{spi} \\ \chi(\pi,\pi)&=&{1\over N} \sum\limits_{i,j} (-1)^{x_j-x_i+y_j-y_i}\chi (i,j), \label{xpi} \end{eqnarray} \end{mathletters} with $C(i,j)$ and $\chi (i,j)$ given by Eqs.~(\ref{correl}) and (\ref{susc}). The structure factor is of particular interest, since it defines the sublattice magnetization squared of a finite system. The fluctuations of $S(\pi ,\pi)$ are strongly correlated to those of $C(1,0)$, and $S(\pi ,\pi)$ can therefore be calculated to an accuracy significantly higher than if only the direct estimator (\ref{correl}) is used. The susceptibility $\chi(\pi,\pi)$ is only weakly correlated with $C(1,0)$, however, and only a modest gain in accuracy can be achieved for this quantity. First some results for a $6\times 6$ lattice are discussed. This is the largest system for which Lanczos results have been obtained.\cite{schulz2} Comparing with these exact results, the accuracy of the QMC technique and the covariance method can be rigorously checked. The temperature used in the simulation has to be low enough for the calculated quantities to have saturated at their ground state values. In order to check for temperature effects, several calculations were carried out. Results at inverse temperatures $\beta=24$ and $48$ are indistinguishable within error bars, indicating that these temperatures are sufficiently low for $L=6$. The results presented below are for $\beta=48$. The simulation was divided into bins of $5 \times 10^5$ MC steps each, and a total of $600$ bins were generated. Fig.~\ref{figcov6} shows the covariance between the measured nearest-neighbor correlation function ($E_2$) and $S(\pi,\pi)$. The plot was generated on the basis of $2000$ bootstrap samples. Strong linear correlations between the two quantities are evident. Hence further knowledge of $E$ can improve the estimate of $S$. The conventional average and error of $S(\pi,\pi)$ are calculated on the basis of all the points, i.e., the distribution obtained by projecting the points onto the $S$-axis. Having a better estimate $E_1 \pm \sigma_1$ for $E$, an improved estimate of $S$ can be calculated by weighting the points by a Gaussian centered at $E_1$ and with a width equal to the error $\sigma_1$. In this case, the reduced statistical error is $\approx 1/12$ of the conventional error. Note that the conventional estimates of both $S$ and $E_2$ lie outside the exact results by $\approx 1.5$ standard deviations (not an unlikely situation statistically). The improved estimate of $S$ is nevertheless within one standard deviation of the exact result, reflecting the fact that this is the case for the more accurate energy estimate $E_1$ used in the procedure. In fact, this correcting property of the covariance method can even eliminate certain systematic errors, such as those originating from finite-$T$ effects in calculations aimed at ground state properties.\cite{covar} Besides illustrating the use of the covariance method to reduce the statistical errors, Fig.~\ref{figcov6} also clearly demonstrates, to a high accuracy, the absence of detectable systematic errors in the QMC data. This confirms that the SSE method indeed produces unbiased results. Table~\ref{tab1} summarizes the comparisons with the exact results for both $4\times 4$ and $6\times 6$ lattices.
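The Gaussian reweighting used for the improved estimate can be summarized as follows (a minimal sketch in Python; \texttt{S\_boot} and \texttt{E2\_boot} are assumed to be bootstrap samples of $S(\pi,\pi)$ and $E_2$ generated from the same randomly selected bins, \texttt{E1} and \texttt{sigma1} the more accurate energy estimate and its error; the weighted spread is used here as a simple stand-in for the reduced error):
\begin{verbatim}
import numpy as np

def covariance_improved(S_boot, E2_boot, E1, sigma1):
    """Weight each bootstrap point (E2_i, S_i) by a Gaussian in E2 centered
    at the accurate estimate E1 with width sigma1; return the weighted mean
    of S and the weighted spread as a rough error estimate."""
    S = np.asarray(S_boot, dtype=float)
    E = np.asarray(E2_boot, dtype=float)
    w = np.exp(-0.5 * ((E - E1) / sigma1) ** 2)
    w /= w.sum()
    S_mean = np.sum(w * S)
    S_err = np.sqrt(np.sum(w * (S - S_mean) ** 2))
    return S_mean, S_err
\end{verbatim}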
As the system size increases, the fluctuations in $S(\pi,\pi)$ as computed in the standard way increase, and accurate estimates become increasingly difficult to obtain. This is typical of algorithms utilizing local updates. The fluctuations in the energy per site as calculated from $\langle n\rangle$ actually decrease, however (due to self-averaging). Hence, the gain in accuracy achieved with the covariance effect increases with the system size. Fig.~\ref{figcov16} shows $L=16$ results for the staggered structure factor. For this system size the error in the energy estimate $E_1$ is negligible on the scale of the fluctuations of $E_2$, and the error in the improved $S(\pi,\pi)$ is essentially the width of the elongated shape in the vertical direction. In this case the covariance method leads to error bars $\approx 1/100$ of those calculated in the standard way. Unfortunately, not all quantities exhibit a strong covariance with $C(1,0)$. Fig.~\ref{figcovsus} shows results for the staggered susceptibility (\ref{xpi}) of a $6 \times 6$ system. In this case there is only a very weak covariance, and hardly any gain in accuracy can be achieved. It is easy to understand why the covariance with $C(1,0)$ is particularly strong for $S(\pi ,\pi)$ (or indeed any equal-time spin correlation): The system is rotationally invariant, but the simulation generates configurations in a representation where the $z$-direction is singled out, and only this component of the correlation function is measured (the other components are not easily measurable, which is the case also with standard worldline methods). Measurements based on a particular set of configurations (a single bin or a bootstrap sample) will inevitably be affected by some deviations from perfect rotational invariance. This is manifested as amplitude fluctuations in the particular spin component measured, and cause the covariance effects seen in the data discussed above. The ability of the local Monte Carlo updates to rotate the direction of the antiferromagnetic order in spin space diminishes with increasing size, leading to large statistical fluctuations in the conventional estimate of the correlations. The fact that the energy fluctuations do not increase with the system size can, in the same way, be traced to the rotationally invariant nature of the estimator (\ref{energy}). A somewhat more formal discussion of the covariance error reduction scheme can be found in Ref.~\onlinecite{covar}. \section{Results} Simulations of $L \times L$ systems with $L \le 16$ (only even $L$) were carried out at inverse temperatures $\beta =4L$ and $8L$. Within statistical errors the results are indistinguishable, indicating that in both cases the ground state completely dominates the behavior of the calculated quantities. This can also be checked using the finite-size singlet-triplet gap scaling predicted from chiral perturbation theory.\cite{hasenfratz} For $L=16$ and $\beta=128$, this gives an estimate of $\sim 10^{-7}$ for the relative error in the calculated ground state energy due to excited states (note that since the simulations are carried out in the canonical ensemble, only $m^z=0$ states are mixed in). For the smaller systems the errors are even smaller (the gap scales as $1/N$). All results discussed here are for $\beta=8L$. \cite{wastenote} The statistical errors of the calculated energies are as small as $\approx 10^{-5}$ for all $L$ studied. 
This accuracy exceeds by a factor of $5-6$ the most accurate results previously reported for $L=4-16$, namely the GFMC results by Runge.\cite{runge2} The two sets of results agree for $L \le 8$, but for the larger sizes the GFMC data is consistently higher by $2-3$ GFMC error bars. Given the agreement to a relative accuracy better than $10^{-5}$ between the SSE result for $L=6$ and the exact result, and the non-approximate nature of the algorithm, it is hard to see why there should be any systematic errors in the SSE data for the larger lattices. Note that any remaining finite-temperature effects would lead to an overestimation of the energy, and hence could not explain the discrepancy with the GFMC data. As discussed above, care has been taken to verify that in fact the finite temperature effects are well below the statistical errors. GFMC calculations, on the other hand, are in general expected to be affected by a small systematic error originating from ``population control'' of the varying number of ``random walkers'' used in that type of simulation. In Runge's calculation, attempts were made to remove such bias more effectively than in previous GFMC calculations.\cite{carlson,trivedi} However, a small remaining systematic error could not be ruled out, and the effect was expected to be an over-estimation of the energy.\cite{runge2} The discrepancy found here is therefore not completely surprising. The energies obtained with the SSE algorithm are listed in Table~\ref{tab2}. It is gratifying to note that the SSE energy is indeed nicely self-averaging --- despite the considerably smaller number of MC steps performed for the larger systems, the relative errors do not differ much from those for the smaller sizes. Two definitions of the sublattice magnetization have been frequently used in previous studies,\cite{reger} and will be used here as well. The first definition is in terms of the staggered structure factor, \begin{equation} M^2_1(L) = 3S(\pi,\pi)/L^2 , \label{defm1} \end{equation} and the second one uses the spin-spin correlation function at the largest separation on the finite lattice, \begin{equation} M^2_2(L) = 3C(L/2,L/2). \label{defm2} \end{equation} The factors of $3$ are included to account for rotational averaging of the $z$-component of the correlations. $M_1(L)$ and $M_2(L)$ should, of course, scale to the same sublattice magnetization in the thermodynamic limit. Using covariance error reduction, both were determined to within statistical errors of $\approx 10^{-4}$ (slightly larger for the largest systems). This accuracy also exceeds that of previous studies. The results for both $S(\pi,\pi)$ and $C(L/2,L/2)$ are listed in Table~\ref{tab2}. As discussed in Sec.~II, the spin stiffness can be obtained directly by measuring the square of the fluctuating winding number. However, as pointed out recently by Einarsson and Schulz,\cite{einarsson} the two terms in Eq.~(\ref{stiffdef}) have different leading size-corrections: $\sim 1/L^3$ for $E$ and $\sim 1/L$ for $\Lambda _s$. Therefore, $\Lambda _s$ is also calculated separately. There is a small discrepancy between the QMC results for $L=4$ and $L=6$, and the Lanczos results reported in Ref.~\onlinecite{einarsson}. Adjusting for different factors in the definitions, the Lanczos results are $\Lambda_s(4)=0.04832$ and $\Lambda_s(6)=0.06723$, whereas the QMC results obtained here are $\Lambda_s(4)=0.04841(2)$ and $\Lambda_s(6)=0.06791(3)$.
The reason for the discrepancy is not clear, but carrying out $4\times 4$ exact diagonalizations with weak twist-fields included in the Hamiltonian, and subsequently calculating the derivative in Eq.~(\ref{stiffderiv}) numerically, gives $\Lambda_s(4)=0.04840$, in good agreement with the QMC result. Hence, there is reason to believe that the QMC results are correct. The uniform susceptibility has typically been obtained in numerical studies via a definition in terms of the singlet-triplet excitation gap. \cite{gross,runge2,schulz2} Here a different approach is taken, not requiring simulations in the $S=1$ sector. The ${\bf q}$-dependent susceptibility, \begin{equation} \chi(q_x,q_y)={1\over N} \sum\limits_{j,k} {\rm e}^{i(q_xj+q_yk)} \chi (j,k), \label{chiq} \end{equation} is calculated using Eq.~(\ref{susc}), and its value at the longest wavelength, $q_1=2\pi/L$, is taken as the definition of the finite-size uniform susceptibility [due to the finite-size gap and the conserved magnetization, $\chi (q=0)$ of course vanishes identically]. In order to give the correct transverse susceptibility of a system with broken symmetry in the thermodynamic limit, the result has to be adjusted by factor $3/2$. Hence, the definition is \begin{equation} \chi_\perp (L) = \hbox{$3 \over 2$}\chi (2\pi/L,0). \end{equation} The spinwave velocity can be obtained from the infinite-size values of $\rho_s$ and $\chi_\perp$ according to the general hydrodynamic relation \begin{equation} c = \sqrt{\rho_s /\chi_\perp} . \label{hydro} \end{equation} The above quantities will now be scaled to the thermodynamic limit using $\chi^2$ fits to appropriate scaling forms. Chiral perturbation theory gives the following scaling behavior for the ground state energy and the sublattice magnetization defined according to Eq.~(\ref{defm1}) [parameters without the argument $L$ will henceforth denote the infinite-size values]: \begin{mathletters} \begin{eqnarray} E(L) & = & E + \beta c {1\over L^3} + {c^2 \over 4 \rho_s}{1\over L^4} + \ldots , \label{escale} \\ M^2_1(L) & = & M^2 + \alpha {M^2 \over c \chi_\perp}{1\over L} + \ldots , \label{mscale} \end{eqnarray} \label{emscale} \end{mathletters} where $\alpha = 0.62075$ and $\beta=-1.4377$.\cite{hasenfratz} The leading corrections have been obtained also from renormalization group calculations for the nonlinear $\sigma$ model,\cite{neuberger,fisher} and their orders also agree with spinwave theory.\cite{huse} Spinwave theory gives that the leading corrections to $\Lambda_s$ and $M_2$ are $\sim 1/L$,\cite{einarsson,huse} and this should be the case also for $\chi_\perp (L)$ due to the linear spinwave spectrum for small $q$. To the author's knowledge, there are no more detailed predictions for the scaling behavior of these quantities. Individually fitting all the parameters, it is found that the high accuracy of $E(L)$ necessitates the inclusion also of a term $\sim 1/L^5$ in Eq.~(\ref{escale}). Both $M^2_1(L)$ and $M^2_2(L)$ require corrections up to order $1/L^3$. The QMC results for $\Lambda_s(L)$ and $\chi_\perp (L)$ are less accurate, and only linear and quadratic terms are needed. 
Hence, the following size dependences are assumed \begin{mathletters} \begin{eqnarray} E(L) & = & E + {e_3 \over L^3} + {e_4 \over L^4} + {e_5 \over L^5} \\ M^2_1(L) & = & M^2 + {m_1 \over L} + {m_2 \over L^2} + {m_3 \over L^3} \\ M^2_2(L) & = & M^2 + {n_1 \over L} + {n_2 \over L^2} + {n_3 \over L^3} \\ \Lambda_s(L) & = & \Lambda_s + {l_1 \over L} + {l_2 \over L^2} \\ \chi_\perp(L) & = & \chi_\perp + {x_1 \over L} + {x_2 \over L^2}. \end{eqnarray} \label{allscale} \end{mathletters} The predicted scaling forms (\ref{emscale}), together with the hydrodynamic relation (\ref{hydro}) and the expression (\ref{stiffdef}) for the spin stiffness $\rho_s$, imply the following constraints among the parameters and size-corrections: \begin{mathletters} \begin{eqnarray} \Lambda_s & = & -(1/3)[E + 2 \alpha M^2 e_3 /(\beta m_1)] \label{const2} \\ \chi_\perp & = & \alpha \beta M^2 /(m_1 e_3) \label{const3} \\ e_4 & = & m_1 e_3 /(4\alpha\beta M^2) \label{const1} . \end{eqnarray} \label{constraints} \end{mathletters} All the scaling forms (\ref{allscale}) are hence coupled to a high degree, and a good simultaneous fit of all parameters will strongly support the field theoretical predictions (\ref{emscale}). Data for all sizes $L=4-16$ can be included in fits with good values of $\chi^2$ per degree of freedom ($\chi^2/$DOF), except in the case of $\chi_\perp (L)$ for which $L=4$ has to be excluded (not surprising, since the smallest wave-vector $q_1$ used in the definition is as large as $\pi /2$ for $L=4$). Both $\Lambda_s(16)$ and $\chi_\perp (16)$ have error bars too large to be useful, and are therefore also excluded. Extrapolating the infinite-size parameters from fits to a small number of points, one has to take into account the fact that there are higher-order corrections present, which by necessity have been neglected in the scaling forms used. The statistical errors of the extrapolated parameters may be smaller than the systematic errors introduced due to this neglect (even though the fit may be good). In order to minimize this type of subtle errors, the $L=4$ data were excluded from all the fits discussed in the following. This leads to larger statistical fluctuations but should significantly reduce the risk of underestimating the errors (the largest neglected correction to $E$ is $11$ times larger for $L=4$ than for $L=6$, and for the other quantities $3-5$ times larger). Before considering the full fit (\ref{allscale}) with all the constraints (\ref{constraints}), it is instructive to consider first the results of individual, unconstrained fits to all the different quantities. The effects of including the constraints can then be judged in light of these results. Completely independent fits give $E=-0.66943(2)$, $M=0.3062(6)$ [from $M_1(L)$], $M=0.3068(9)$ [from $M_2(L)$], $\rho_s = 0.179(4)$, and $\chi_\perp = 0.063(1)$. Note that $M_1(L)$ and $M_2(L)$ give the same sublattice magnetization $M$ within statistical errors, as they should. Using (\ref{hydro}) the spinwave velocity is $c=1.69(2)$. These parameters are in good general agreement with previous calculations, except that the energy is slightly lower than the best GFMC estimate,\cite{runge2} due to the discrepancies in the finite-size data discussed above. The sublattice magnetization is a bit lower than the recent result by Beard and Wiese \cite{beard} [$M=0.3085(2)$]. The consistency with the scaling forms (\ref{emscale}) can of course also be tested with these independent fits. 
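As an illustration of how such an individual, unconstrained fit can be carried out, the following is a minimal sketch for the energy scaling form in Eq.~(\ref{allscale}) using \texttt{scipy.optimize.curve\_fit} (Python; the arrays \texttt{L\_vals}, \texttt{E\_vals} and \texttt{E\_errs} are assumed to hold the finite-size energies and error bars of Table~\ref{tab2}; the constrained simultaneous fit used for the final results requires a joint $\chi^2$ minimization over all five quantities instead):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def E_scaling(L, E_inf, e3, e4, e5):
    """Finite-size energy: E(L) = E + e3/L^3 + e4/L^4 + e5/L^5."""
    return E_inf + e3 / L**3 + e4 / L**4 + e5 / L**5

# L_vals, E_vals, E_errs: finite-size data (L = 6, 8, ..., 16) and error bars.
# popt, pcov = curve_fit(E_scaling, L_vals, E_vals, sigma=E_errs,
#                        absolute_sigma=True, p0=[-0.67, -2.4, 4.0, 0.0])
# E_inf, e3, e4, e5 = popt
# chi2 = np.sum(((E_vals - E_scaling(L_vals, *popt)) / E_errs) ** 2)
# chi2_per_dof = chi2 / (len(L_vals) - len(popt))
\end{verbatim}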
The leading energy correction is found to be $e_3 = -2.43(11)$, whereas Eq.~(\ref{escale}) in combination with the above estimate for $c$ gives $e_3 = \beta c = -2.43(3)$. The constant of the linear term in $M_1(L)$ is $m_1 = 0.574(9)$, whereas the right hand side of the scaling form (\ref{mscale}) gives $m_1 = \alpha M^2/(c\chi_\perp) = 0.550(10)$. The subleading energy correction of the fit is $e_4 = 4(1)$, and Eq.~(\ref{escale}) gives $e_4 = c^2 /(4 \rho_s) = 4.0(1)$. Hence, there is good consistency with the predicted scaling forms, within a few percent for the leading terms, but with a statistical uncertainty as large as $\approx 25$\% for the subleading energy correction. The leading corrections have been derived in several different ways.\cite{neuberger,fisher,hasenfratz} Given the good numerical agreement found above, it is now reasonable to enforce the constraints (\ref{const2}) and (\ref{const3}) which involve these terms. From such a fit, with the constraint on the subleading energy correction (\ref{const1}) left unenforced, one can get a better estimate of the size of the subleading term. The constraint that $M_1(L)$ and $M_2(L)$ extrapolate to the same $M$ is also enforced. One of the early nonlinear $\sigma$ model calculations of the finite-size behavior indicated that the subleading correction to $E$ was $\sim 1/L^5$, not $\sim 1/L^4$.\cite{fisher} Previous numerical calculations were not accurate enough to distinguish between these forms. However, Runge noted that the value of $c$ extracted from the leading correction was in rather poor agreement with other estimates if a $1/L^5$ subleading correction was used, and that a slightly better value was obtained using $1/L^4$. Even with the accuracy of the QMC results for $E(L)$ obtained here, individual fits using the two different subleading corrections (and including also the next higher-order correction in both cases) cannot by themselves definitively rule out the absence of the $1/L^4$ term, although the fit including it is better. However, it is not at all possible to obtain a good fit constrained by (\ref{const2}) and (\ref{const3}) without the $1/L^4$ term, $\chi^2/$DOF being as high as $\approx 50$ in this case. With the $1/L^4$ term, $\chi^2/$DOF$\approx 0.9$. Hence, knowing the constraints on the leading corrections, the present data unambiguously require that the subleading energy term is $\sim 1/L^4$. The parameters obtained in the partially constrained fit are: $E=-0.669436(5)$, $M=0.3071(3)$, $\rho_s = 0.176(2)$, $\chi_\perp = 0.0623(10)$, and $c= 1.681(14)$. The statistical errors are here significantly reduced relative to the previous unconstrained fits, and the two sets of parameters are consistent with each other. The leading corrections to $E$ and $M$ are now, of course, in complete agreement with the theoretical prediction. The subleading energy correction of the fit is $e_4 = 4.17(23)$, whereas Eq.~(\ref{escale}) with the above parameters gives $e_4 = c^2 /(4 \rho_s) = 4.01(7)$.
Hence, it is now also confirmed, at the $5$\% accuracy level, that the size of the subleading energy correction agrees with the chiral perturbation theory prediction by Hasenfratz and Niedermayer.\cite{hasenfratz} It is remarkable that the derivation of this very detailed theoretical result is based purely on symmetry and dimensionality considerations.\cite{hasenfratz} With the subleading energy correction now firmly established, the last constraint (\ref{const1}) can also be enforced, and the parameters of this fit are taken as the final results. This coupled fit still has $\chi^2/$DOF$\approx 0.9$ (the total number of parameters is $14$, and a total of $28$ data points were used). Looking at $\chi ^2$ for the five individual curves, they all represent good fits to their respective data sets. Hence, the constrained fit is in all respects a good one. All the ground state parameters obtained from the fit are listed in Table~\ref{tab3}, along with the leading and subleading corrections to the energy and the sublattice magnetization. There are only minor changes relative to the previous partially constrained fit, the main improvement in accuracy being for the spinwave velocity. The errors of the parameters were calculated using the bootstrap method,\cite{bootstrap} i.e., fits were carried out for a large number of bootstrap samples of the QMC data, and the error is defined as one standard deviation of the parameters of those fits. As explained in Sec.~II-B, this should be a very accurate method for calculating statistical errors even in highly nonlinear situations such as the constrained $\chi ^2$ fit. The QMC data used, along with the fitted curves, are shown in Figs.~\ref{figenergy}--\ref{figsusc}. In the figures, it can be observed that even though the $L=4$ data are not included in the fit, the fitted curves extrapolated to $L=4$ are quite close to these QMC points, except in the case of $\chi_\perp (4)$ (for which this point cannot even be included in an individual fit). In fact, calculating also the statistical errors of the extrapolations to $L=4$ gives strong further support to the reliability of the procedures used. For all quantities except the energy, the statistical errors are found to be comparable at $L=4$ and $L=\infty$. For the energy the fluctuations are more than $20$ times larger at $L=4$, due to the high order of the leading correction. Both $E(4)$ and $\Lambda_s(4)$ are within one standard deviation of the extrapolated results. The sublattice magnetizations $M_1(4)$ and $M_2(4)$ both deviate by $2.5$ standard deviations, and $\chi_\perp$ deviates by $3$ standard deviations. These rather small deviations clearly indicate that there are only minor effects of neglected higher-order corrections. The extrapolations to $L = \infty$ should of course be significantly less sensitive to systematic errors, since the neglected corrections rapidly vanish for the larger sizes. As already discussed above, the neglected corrections are several times larger at $L=4$ than at $L=6$ (the smallest size considered in the fits). Hence, any remaining systematic errors in the infinite-size extrapolations listed in Table~\ref{tab3} should be well below the indicated statistical errors. \section{Summary and Discussion} An extensive study of the ground state parameters of the 2D Heisenberg model has been presented. 
Using the Stochastic Series Expansion QMC method \cite{sse1,sse2} in combination with a data analysis scheme utilizing covariance effects,\cite{covar} results of unprecedented accuracy were obtained for the ground state energy and the sublattice magnetization for systems of linear dimensions up to $L=16$. The long-wavelength susceptibility and the spin stiffness were also directly calculated in the simulations. The QMC data was extrapolated to the thermodynamic limit using scaling forms predicted from chiral perturbation theory,\cite{hasenfratz} supplemented by higher-order terms necessary to obtain good fits. Both the leading and subleading corrections were found to agree in magnitude with the theoretical predictions to within a few percent. This is the first numerical verification of the predictions of chiral perturbation theory \cite{hasenfratz} to subleading order. The ground state energy extracted from the fit is the most accurate estimate obtained to date, and is slightly lower than the best Green's function Monte Carlo result.\cite{runge2} This discrepancy is most likely due to a ``population control'' bias in the GFMC calculation.\cite{runge2} The spin stiffness and the sublattice magnetization are both lower than the results of a recent low-temperature loop algorithm QMC study.\cite{beard} The discrepancy appears to be marginally larger than what could be explained by statistical fluctuations alone. For the spinwave velocity the results are consistent with each other. In the QMC study by Beard and Wiese \cite{beard} the size and temperature dependence of the uniform and staggered susceptibilities were fitted to scaling forms from chiral perturbation theory. Hence, the underlying theory for analyzing the data is the same as used in the present study, but the physical quantities used are different, as is the temperature regime (low but finite $T$ versus $T=0$ in this study). Since both QMC algorithms are ``exact'', the discrepancies must originate from the scaling procedures. In this paper the effects of neglected higher-order corrections were discussed, and attempts were made to minimize these as much as possible. Furthermore, as a quantitative check of remaining effects of this nature, the calculated scaling functions were extrapolated to lattices {\it smaller} than the smallest size used in the fit ($L=6$). The close agreement with the actual calculated results, along with the high orders of the largest neglected corrections, shows that any effects of the higher-order terms on the extrapolations to infinite size should be well below the carefully computed statistical errors. It can be noted that Beard and Wiese also included subleading corrections,\cite{beard} but not to the same high orders as was necessary in the present study (due to the high accuracy of the SSE results for the energy and the sublattice magnetization). Another reason to believe that the present study is more reliable is that the fit involves in a direct manner the $T=0$ finite-size definitions of the same infinite-size parameters sought, not functions of those parameters. In combination with the covariance error reduction scheme,\cite{covar} the SSE method is a very efficient method for calculating correlation functions of isotropic spin models, as exemplified by the results presented here. The covariance scheme is most efficient in cases where there are strong long-ranged correlations (where the ``bare'' estimator for the correlation function does not behave well).
It is currently being applied in a study of the temperature dependence of the correlation length of the weakly coupled Heisenberg bilayer, for larger lattices and lower temperatures than previously possible.\cite{bilayer1} This is motivated by recent results obtained from a mapping to a nonlinear $\sigma$ model for this system,\cite{yin} predicting a much faster divergence of the correlation length as $T\to 0$ than for the single layer. The SSE algorithm has also proven useful in the case of critical, or near-critical, systems, such as the bilayer Heisenberg model close to its quantum critical point. More accurate finite-size scalings than previously reported for this,\cite{bilayer2} as well as other models exhibiting quantum critical behavior,\cite{dimrand} are possible with the data enhanced using the covariance method. QMC methods based on the loop-cluster algorithm \cite{evertz} have proven to be very efficient in several studies of $S=1/2$ Heisenberg models,\cite{wiese,beard,loopstudies} and are clearly more efficient than the SSE method in many cases. For example, it was shown here that the covariance method cannot significantly improve the accuracy of calculated static susceptibilities, which appear to be very accurately given in loop algorithm simulations.\cite{beard} The method used here for calculating the spin stiffness directly from the winding number fluctuations would probably also be more accurate with loop algorithms. However, it is not clear whether loop algorithms can easily produce more accurate results for equal-time correlation functions or energies than those presented in this paper. The methods discussed here can also be easily extended to higher-spin models. In fact, the covariance scheme has an additional advantage for $S > 1/2$, in that the on-site correlation $(S^z_i)^2$ is known exactly, but fluctuates in the simulation and exhibits strong covariance with other correlation functions.\cite{covar} Detailed studies of various higher-spin models should therefore now be feasible also in $2D$. \section{Acknowledgments} I would like to thank B. Beard, D. Scalapino, R. Sugar, and U.-J. Wiese for discussions. This work was supported by the National Science Foundation under Grant No.~DMR-89-20538.
\section{introduction} The controlled modification of graphene properties is essential for its proposed electronic applications \cite{1,2,3,4,5}. Ion irradiation is widely used for this aim (see, for example, \cite{6,7,8,9}) due to the ability to control the energy of the ions and the irradiation dose with high accuracy. Irradiation of pristine graphene results in an increase of disorder due to the introduced structural defects, which influences its electrical and optical properties. Ion irradiation as a method to introduce disorder is also interesting due to the possible reversibility caused by annealing of the radiation damage. Many works have been devoted to the annealing of mono- and multi-layered graphene films. However, in most previous papers, annealing was applied to pristine, non-irradiated graphene as a procedure for overcoming unintentional doping and removing polymer residues, which remain after wet graphene transfer to the substrate or after photolithography used in the device processing \cite{10,11,12,13,14}. In a few papers, annealing was applied to samples preliminarily irradiated with ions \cite{15,16}, plasma \cite{17} and UV light \cite{18}. Usually, measurements of the Raman scattering (RS) spectra are considered an effective tool for probing the structure of disordered graphene films and the density of introduced defects \cite{19,20,21}. Typical RS spectra of disordered graphene consist of three main lines. The G-line at 1600 cm$^{-1}$ is common to different carbon-based materials, including carbon nanotubes, mono- and multilayered graphene and graphite. The 2D-line at 2700 cm$^{-1}$ is related to an inter-valley two-phonon mode; it fully satisfies momentum conservation and is emitted from intact crystalline regions removed from any structural defects. The ``defect-connected'' D-line at 1350 cm$^{-1}$ is related to an inter-valley single-phonon scattering process which is forbidden in the perfect graphene lattice due to momentum conservation, but is possible in the vicinity of a lattice defect (edge, vacancies, etc.). Therefore, the intensity of the D-line is used (in the form of the dimensionless ratio of the amplitudes of the D- and G-lines, $\alpha=I_{D}/I_{G}$) as a measure of disorder in graphene layers. Correspondingly, the normalized intensity of the 2D-line, $\beta=I_{2D}/I_{G}$, can be considered a measure of the non-destroyed part of the lattice. In this work, we report the results of measurements of RS in monolayer graphene samples irradiated with different doses $\Phi$ of heavy (Xe) and light (C) ions and annealed at different temperatures in vacuum and in forming gas (95\%Ar+5\%H$_{2}$). \section{samples} Details of the sample preparation, ion irradiation and measurements of RS in our samples before annealing were reported in our previous papers \cite{22, 23}. Two initial large-scale monolayer graphene specimens (5$\times$5 mm) were supplied by Graphenea Inc. Monolayer graphene was produced by CVD on a copper catalyst and transferred to a 300 nm SiO$_{2}$/Si substrate using a wet transfer process. Graphene specimens of such a large size are not monocrystalline; they are polycrystalline films with an average microcrystal size of about 10 microns \cite{22}. On the surface of the first specimen, six groups of micro-samples (0.2$\times$0.2 mm) were prepared by means of electron-beam lithography. Each group of samples was irradiated with a different dose $\Phi$ of carbon ions C$^{+}$ with an energy of 35 keV. On the surface of the second specimen, micro-samples were not fabricated.
Six areas of 2$\times$1 mm$^{2}$ each on this specimen were directly irradiated with different doses $\Phi$ of Xe$^{+}$ ions with the same energy of 35 keV. As a result, two series of samples irradiated by heavy (Xe$^{+}$) and light (C$^{+}$) ions were obtained. In the RS measurements, excitation was provided by a laser beam with wavelength $\lambda$ = 532 nm and a power of less than 2 mW to avoid heating and film destruction. It was shown in \cite{23} that the dependences $\alpha(\Phi)$ and $\beta(\Phi)$ for both series of samples merge if plotted not as a function of $\Phi$ but as a function of the density of defects $N_{D}$ = $k\Phi$ introduced by irradiation, where the coefficient $k$ depends on the energy and mass of the incident ion and reflects the average number of carbon vacancies created in the graphene lattice per ion impact. It was found that for the C$^{+}$-series, $k \approx 0.08$, while for the Xe$^{+}$-series, $k \approx 0.8$ \cite{24}. The dependences $\alpha(N_{D})$ and $\beta(N_{D})$ for both series of samples before annealing are shown in Fig. 1. The alignment of both dependences plotted on this scale allows us to attribute all changes in the RS spectra observed after annealing to the different annealing conditions. In non-irradiated samples (for these samples we assume that $N_{D} \approx 10^{11}$ cm$^{-2}$), $\alpha$ is very small and $\beta$ is maximal. With increasing $N_{D}$, $\alpha$ increases while $\beta$ decreases. However, with further increase of $N_{D}$, $\alpha$ reaches a maximum and then decreases. This non-monotonic behavior of $\alpha$ is explained by the theoretical model of Ref. \cite{25}, based on the assumption that a single ion impact leads to the formation of a completely destroyed ``defective'' area ($S$-area) in the immediate vicinity of the defect, surrounded by a more extended ``activated'' area ($A$-area) in which the graphene lattice is preserved but the proximity to the defect causes a breakdown of the selection rules and gives rise to emission of the D-peak, attributed to single-phonon scattering. An increasing number of $A$-areas obviously results in an increase of $\alpha$ and a decrease of $\beta$. However, an increase of $N_{D}$ is accompanied by a decrease of the mean distance between defects, $L_{D} = (N_{D})^{-1/2}$, and when $L_{D}$ becomes shorter than the size of an $A$-area, the $A$-areas begin to overlap with each other and with the $S$-areas. As a result, the value of $\alpha$ reaches a maximum and then decreases. Additionally, Fig. 1 shows that $\alpha$ and $\beta \to 0$ at $N_{D} = 0.5\times10^{14}$ cm$^{-2}$. The disappearance of both Raman scattering lines can be explained by the fact that at this $N_{D}$, the mean distance between defects, $L_{D} = (N_{D})^{-1/2} \approx 1.5$ nm, becomes smaller than the Raman relaxation length, 2 nm \cite{26}. Annealing of the samples from the Xe-series was performed in high vacuum ($2-4\times10^{-6}$ Torr), while the samples from the C-series were annealed in mixed forming gas, 95\%Ar+5\%H$_{2}$ (800 sccm). Before turning on the gas flow, the tube was pumped and purged to a pressure of about 100 Torr. Samples were heated at a rate of 15$^{\circ}$C/min to different annealing temperatures $T_{a}$ and then annealed for 1 hour. Cooling of the samples was performed by shutting off the heater and letting the samples cool naturally.
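Before turning to the annealing results, note that the conversion from dose to defect density and mean defect distance used above is simple arithmetic; a short illustrative sketch (Python, with the $k$ values quoted above):
\begin{verbatim}
# Dose-to-defect-density conversion, N_D = k * Phi, and mean defect distance.
K_IONS = {"C+": 0.08, "Xe+": 0.8}   # vacancies per ion impact at 35 keV

def defect_density(dose_cm2, ion):
    """N_D = k * Phi, in defects per cm^2."""
    return K_IONS[ion] * dose_cm2

def mean_defect_distance_nm(n_d_cm2):
    """L_D = N_D^(-1/2), converted from cm to nm (1 cm = 1e7 nm)."""
    return 1e7 / n_d_cm2 ** 0.5

# Example: N_D = 0.5e14 cm^-2 gives L_D ~ 1.4 nm, below the ~2 nm Raman
# relaxation length, consistent with the vanishing D- and 2D-lines.
\end{verbatim}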
\begin{figure}[H] \includegraphics[scale=0.33]{Fig1} \caption{(Color online) Normalized amplitude of the D-line, $\alpha = I_{D}/I_{G}$ (1, 2), and of the 2D-line, $\beta = I_{2D}/I_{G}$ (3, 4), for samples irradiated with different doses $\Phi$ of C$^{+}$ ions (1, 3) and Xe$^{+}$ ions (2, 4), as a function of the density of introduced defects $N_{D} = k\Phi$. For non-irradiated samples, $N_{D}$ is taken to be 10$^{11}$ cm$^{-2}$.} \label{Layout1} \end{figure} \section{results and discussion} Figures 2 and 3 show the RS measurements of samples annealed in vacuum and in forming gas, respectively, at different $T_{a}$. All spectra are normalized to the intensity of the G-line, which is taken as 1. One can see that annealing leads to changes in the amplitudes of the D- and 2D-lines, as well as to shifts of the peak positions. \begin{figure}[H] \includegraphics[scale=0.35]{Fig2} \caption{(Color online) Raman spectra of samples annealed in vacuum. From bottom to top: red (1) - before annealing, purple (2) and green (3) - after annealing at 550$^{\circ}$C and 1000$^{\circ}$C. $N_{D}$ (in units of 10$^{13}$ cm$^{-2}$): (a) - 0.01 (non-irradiated), (b) - 0.4, (c) - 0.8, (d) - 1.6. All spectra are shifted for clarity and normalized to the intensity of the G-line, $I_{G} = 1$.} \label{Layout2} \end{figure} \begin{figure}[H] \includegraphics[scale=0.33]{Fig3} \caption{(Color online) Raman spectra of samples annealed in forming gas. From bottom to top: 1 - before annealing, 2, 3, 4 - after annealing at 200$^{\circ}$C, 600$^{\circ}$C and 1000$^{\circ}$C, correspondingly. $N_{D}$ (in units of 10$^{13}$ cm$^{-2}$): (a) - 0.01 (non-irradiated), (b) - 0.4, (c) - 1.6, (d) - 3.2. All spectra are shifted for clarity and normalized to the intensity of the G-line, $I_{G} = 1$.} \label{Layout3} \end{figure} Figures 4a,b show the values of $\alpha$ and $\beta$ for samples annealed in vacuum at different $T_{a}$. One can see that $\alpha$ decreases significantly with increasing $T_{a}$, with most of the change occurring at 550$^{\circ}$C. The decrease of $\alpha$ can be interpreted as a removal of irradiation-induced defects. One might expect that the decrease of $\alpha$ would lead to a corresponding increase of $\beta$. However, Fig. 4b shows that after annealing at $T_{a} = 550^{\circ}$C, $\beta$ decreases for pristine and slightly irradiated samples, and increases only with further increase of $T_{a}$. In Ref. \cite{27}, a simultaneous decrease of the D- and 2D-lines was observed in defective graphene with increasing doping. We may therefore suggest that in our experiment, doping is induced by vacuum annealing at low $T_{a}$ followed by exposure of the annealed samples to ambient air. A further increase of $T_{a}$ up to 1000$^{\circ}$C leads to an increase of $\beta$ up to values almost equal to those before annealing for lightly irradiated samples, and up to relatively large values for strongly disordered samples where the 2D-line was initially suppressed by ion irradiation (see insert in Fig. 4b). This can be attributed to effective defect removal and partial reconstruction of the lattice structure. However, after the final annealing at $T_{a}$ = 1000$^{\circ}$C, the D-line still remains, which means that the defects are not fully removed. We note that in samples that were more disordered before annealing, the values of $\alpha$ remaining after full annealing are larger (see insert in Fig. 4a).
\begin{figure}[H] \includegraphics[scale=0.33]{Fig4a} \includegraphics[scale=0.33]{Fig4b} \caption{(Color online) $\alpha=I_{D}/I_{G}$ (a) and $\beta=I_{2D}/I_{G}$ (b) after annealing in vacuum at different temperatures $T_{a}$. The numbers near each curve correspond to the sample number shown in the insert in (a). In the inserts, curves (1) show $\alpha$ and $\beta$ before annealing, curves (2) - after annealing at 1000$^{\circ}$C. The lines are guides to the eye.} \label{Layout4} \end{figure} Figures 5a,b show similar dependences of $\alpha$ and $\beta$ as a function of $T_{a}$ for samples annealed in forming gas. This annealing is characterized by different features: all data oscillate strongly, so that only general trends can be noted. The ratio $\alpha$ decreases slightly for weakly disordered samples and remains more or less the same for strongly irradiated samples, so that after annealing at $T_{a}$ = 1000$^{\circ}$C all samples have a distribution $\alpha(N_{D})$ with a maximum, similar to that before annealing (see insert in Fig. 5a and compare with insert in Fig. 4a). The ratio $\beta$ decreases slightly but continuously with increasing $T_{a}$ for weakly disordered samples and increases for strongly disordered samples in which the 2D-line was initially suppressed. As a result, after the final annealing at 1000$^{\circ}$C, the values of $\beta$ are small and lower than those obtained in the case of vacuum annealing (compare the insert in Fig. 5b with the insert in Fig. 4b). This leads to the conclusion that annealing in forming gas is less effective in reconstructing the damaged graphene lattice. \begin{figure}[H] \includegraphics[scale=0.33]{Fig5a} \includegraphics[scale=0.33]{Fig5b} \caption{(Color online) $\alpha=I_{D}/I_{G}$ (a) and $\beta=I_{2D}/I_{G}$ (b) after annealing in forming gas at different temperatures $T_{a}$. The numbers near each curve correspond to the sample number shown in the insert. $N_{D}$ (in units of 10$^{13}$ cm$^{-2}$): 1 - 0.01 (non-irradiated), 2 - 0.4, 3 - 0.8, 4 - 1.6, 5 - 3.2. In the inserts, curves (1) show $\alpha(N_{D})$ and $\beta(N_{D})$ before annealing, curves (2) - after annealing at 1000$^{\circ}$C. The lines are spline interpolations.} \label{Layout5} \end{figure} Vacuum annealing also results in the appearance of a broad band in the RS, centered at approximately 1300 cm$^{-1}$, near the position of the D-line (Fig. 6). In Ref. \cite{13}, a similar band was observed in graphene annealed at 400$^{\circ}$C in an oxygen-free atmosphere and was attributed to amorphous carbon (aC) on the surface of graphene \cite{28}. This can appear because of carbonization of organic traces of the PMMA polymer used in the wet transfer of the CVD-grown graphene film onto the SiO$_{2}$/Si substrate \cite{12}. Due to the existence of the aC-band, the amplitude of the D-line was always obtained after subtraction of the aC-band, using the Lorentzian decomposition shown in Fig. 6. In samples annealed in forming gas, the aC-band was not observed, which shows that the polymer traces are efficiently removed by this annealing. \begin{figure}[H] \includegraphics[scale=0.3]{Fig6} \caption{(Color online) Lorentzian decomposition of the Raman spectrum for the sample with $N_{D} = 0.4\times10^{13}$ cm$^{-2}$, annealed in vacuum at $T_{a} = 550^{\circ}$C. The black solid curve is the experiment; the red dashed curve is the calculated sum of three Lorentzian curves.} \label{Layout6} \end{figure}
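A minimal sketch of such a Lorentzian decomposition (Python with \texttt{scipy.optimize.curve\_fit}; the arrays \texttt{shift} and \texttt{intensity} are assumed to hold the measured spectrum in the D/G region, normalized to $I_{G}=1$, and the initial guesses for positions and widths are rough values only):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    """Single Lorentzian peak: height amp, center x0, half-width gamma."""
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def three_lorentzians(x, a1, x1, g1, a2, x2, g2, a3, x3, g3):
    """Sum of aC-band, D-line and G-line contributions."""
    return (lorentzian(x, a1, x1, g1) + lorentzian(x, a2, x2, g2)
            + lorentzian(x, a3, x3, g3))

# shift, intensity: measured Raman spectrum (cm^-1, intensity with I_G = 1),
# restricted to the D/G region.
# p0 = [0.3, 1300, 150,  0.5, 1350, 30,  1.0, 1600, 20]  # rough initial guesses
# popt, _ = curve_fit(three_lorentzians, shift, intensity, p0=p0)
# I_D = popt[3]   # D-line amplitude after subtraction of the aC-band
\end{verbatim}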
Annealing in forming gas and in vacuum results in a blue shift of the positions of all three lines, D, G and 2D. There are two reasons for the blue shift of the Raman peaks after annealing: (i) hole doping caused by an enhanced ability to adsorb oxygen and water molecules after vacuum annealing and subsequent exposure of the samples to ambient air \cite{27,29,30}, or (ii) compressive stress caused by the difference in thermal expansion of graphene and the substrate, slipping of the graphene film over the substrate during heating, and pinning of the film annealed at high $T_{a}$ during cooling back to room temperature \cite{16,31,32,33,34,35}. It was shown in Ref. \cite{33} that these two causes of the blue shift can be distinguished by plotting the frequency (position) of the 2D-peak, $P_{2D}$, versus the position of the G-peak, $P_{G}$. When the doping effect dominates, the slope of this dependence has to be equal to 0.7, while the presence of strain leads to an increase of this slope up to 2.2. Fig. 7 shows this dependence measured at different $T_{a}$ for both series of samples. The slope is approximately 1.0, which shows that compressive strain plays an important role in the blue shift. There is another argument to justify the contribution of lattice strain to the blue shift. The lattice deformation induced by strain leads to a change in the phonon energy, and therefore the shift of the 2D-line, connected with the emission of two equal phonons, has to be twice the shift of the D-line. In Fig. 8, the shifts $\Delta P$ of the positions of the RS lines after annealing at different $T_{a}$ are shown. One can see that below $T_{a}$ = 500$^{\circ}$C the shifts of all lines are equal, which indicates that the shift is caused mainly by doping, while with further increase of $T_{a}$ the shift of the 2D-line is larger, and the final shift of the 2D-line after annealing at 1000$^{\circ}$C is, indeed, approximately two times larger than that of the D-line. This shows that at high annealing temperatures the main reason for the shift is compressive stress of the graphene lattice. One can see from Fig. 8 that the positions of both peaks oscillate with increasing $T_{a}$. Figure 5 shows that the intensities of the D- and 2D-lines also oscillate when plotted as a function of $T_{a}$. Taking into account that both changes are caused by annealing-induced lattice deformation and doping, one can expect a possible correlation between these quantities. Fig. 9 shows that, indeed, there is some correlation between the variation of the intensities of the RS lines and their peak positions. In Fig. 9, the line amplitudes are normalized to the corresponding values at room temperature (RT). Such correlated variation can originate from a heterogeneous distribution of the doping and the lattice strain. This can be caused by grain boundaries in the initial polycrystalline large-scale graphene specimens. \begin{figure}[H] \includegraphics[scale=0.33]{Fig7} \caption{(Color online) Positions of the 2D-peak (1, 2) and D-peak (3, 4) plotted versus the position of the G-peak for samples annealed at different $T_{a}$ in forming gas (1, 3) and in vacuum (2, 4).} \label{Layout7} \end{figure} \begin{figure}[H] \includegraphics[scale=0.3]{Fig8} \caption{(Color online) Shift of the average position of the 2D-peak and D-peak for samples annealed in forming gas at different $T_{a}$.
Error bars show the spread of the peak positions for samples with different $N_{D}$.} \label{Layout8} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[scale=0.33]{Fig9} \end{center} \caption{(Color online) Normalized amplitudes (3a, 4a) and peak positions (3b, 4b) of the D-line (a), G-line (b) and 2D-line (c) for samples 3 and 4 annealed in forming gas at different $T_{a}$. The amplitudes are normalized to the corresponding values at room temperature (RT) before annealing. The lines are spline interpolations.} \label{Layout9} \end{figure} \section{conclusions} In conclusion, the following principal features of the annealing of irradiated graphene in vacuum and in forming gas are revealed:\\ 1) Vacuum annealing below 500$^{\circ}$C leads to a simultaneous decrease of both the D-line and the 2D-line, which can be explained by unintentional doping caused by exposure of the annealed samples to ambient air. A further increase of $T_{a}$ up to 1000$^{\circ}$C leads to partial removal of defects and reconstruction of the damaged lattice. Annealing in forming gas is less effective in reconstructing the damaged lattice, even at 1000$^{\circ}$C.\\ 2) Annealing in vacuum also leads to the appearance of a broad band near the position of the D-line, which is attributed to the formation of amorphous carbon on the surface of graphene caused by carbonization of traces of the polymer used in the wet transfer of the CVD-grown graphene film onto the SiO$_{2}$/Si substrate. Annealing in forming gas does not lead to the appearance of such a band, which indicates that the polymer residues are removed by annealing in forming gas but only agglomerate in vacuum.\\ 3) Annealing in vacuum and in forming gas is accompanied by a blue shift of all RS lines, which is due to unintentional doping and to compressive stress caused by the different thermal expansion of monolayer graphene and the substrate. Doping is the main mechanism of the shift for annealing below 500$^{\circ}$C, while stress dominates at high $T_{a}$, up to 1000$^{\circ}$C.\\ 4) Fluctuations of the intensity and peak position of the RS lines are correlated and indicate an inhomogeneous distribution of doping and strain across the samples, which may be caused by the location of the samples on the polycrystalline large-scale graphene specimen. \section{acknowledgements} E.Z. and A.S. were supported by the ISF (grant No. 569/16).
\section{Introduction} Let $A$ be a given finite subset of the integers. For any integer $N \geq 1$, we are interested in determining the $N$-fold sumset of $A$, \[ NA : = \{a_1 + \cdots + a_N : a_1 , \ldots , a_N \in A\} , \] where the $a_i$'s are not necessarily distinct. For simplicity we may assume without loss of generality that the smallest element of $A$ is $0$, and that the gcd of its elements is $1$.\footnote{Since if we translate $A$ then we translate $NA$ predictably, as $N(A+\tau)=NA+N\tau$, and since if $A=g\cdot B:=\{ gb: b\in B\}$ then $NA=g\cdot NB$.} Under these assumptions we know that \[ 0\in A\subset 2A\subset 3A\subset \cdots \subset \mathbb N, \] where $\mathbb N$ denotes the natural numbers, defined to be the integers $\geq 0$. Moreover there exist integers $m_1,\ldots,m_k$ such that $m_1 a_1 + \cdots + m_k a_k = 1$, and therefore \[ \mathcal P(A) = \left\{ \sum_{a\in A} n_aa:\ \text{ Each } n_a\in \mathbb{N}\right\} =\lim_{N\to \infty} NA = \mathbb N \setminus \mathcal E(A) \] for some finite \emph{exceptional set} $\mathcal E(A)$.\footnote{We give a simple proof that $\mathcal E(A)$ is finite in section \ref{sec: basics}.} One very special case is the \emph{Frobenius postage stamp problem} in which we wish to determine which exact postage costs one can make up from an unlimited supply of $a$-cent and $b$-cent stamps. In other words, we wish to determine $\mathcal P(A)$ for $A=\{ 0,a,b\}$. It is a fun challenge for a primary school student to show that $\mathcal E(\{ 0,3,5\})=\{ 1,2,4,7\}$, and more generally, \cite{Syl}, that \[ \max \mathcal E(\{ 0,a,b\}) = ab - a - b, \text{ and } |\mathcal E(\{ 0,a,b\})| = \tfrac12 (a-1)(b-1). \] Erd\H{o}s and Graham \cite{EG} conjectured precise bounds for $\max \mathcal E(A)$; see also Dixmier \cite{Di}. In this article we study the variant in which we only allow the use of at most $N$ stamps; that is, can we determine the structure of the set $NA$? If $b = \max A$, then $NA\subset \{0 , \ldots , bN\} \cap \mathcal P(A)= \{0 , \ldots , bN\} \setminus \mathcal E(A)$. Moreover, we can use symmetry to determine a complementary exceptional set:\ Define the set $b-A:=\{ b-a:\ a\in A\}$. Then $NA=Nb-N(b-A)$ and so $NA$ cannot contain any elements $Nb-e$ where $e\in \mathcal E(b-A)$. Therefore \[ NA \subset \{0 , \ldots , bN\} \setminus (\mathcal E(A) \cup (bN - \mathcal E(b-A))). \] We ask when equality holds. \begin{thm} \label{Thm1} Let $A$ be a given finite subset of the integers, with smallest element $0$ and largest element $b$, in which the gcd of the elements of $A$ is $1$. If $N\geq 2[\tfrac b2]$ and $0 \leq n\leq Nb$ with $n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A))$ then $n\in NA$. Equivalently, we have $$NA = \{0 , \ldots , bN\} \setminus (\mathcal E(A) \cup (bN - \mathcal E(b-A))).$$ \end{thm} In the next section we will show that if $A$ has just three elements then Theorem \ref{Thm1} holds for all integers $N\geq 1$ (which does not seem to have been observed before). However this is not true for larger $A$: If $A=\{ 0,1,b-1,b\}$ then $\mathcal{E}(A) = \mathcal{E}(b-A)=\emptyset$ and $b-2\in (b-2)A$ but $b-2\notin (b-3)A$, in which case Theorem \ref{Thm1} can only hold for $N\geq b-2$.
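These small examples are easy to check by brute force. A minimal sketch (Python) that computes $NA$ and the exceptional sets directly is given below; the window used to read off $\mathcal E(A)$ must be chosen large enough to contain it (for $A=\{0,3,5\}$, Sylvester's formula quoted above gives $\max\mathcal E(A)=7$, so a window of $20$ is ample), and the final check verifies the description of $NA$ in Theorem~\ref{Thm1} for this example:
\begin{verbatim}
def n_fold_sumset(A, N):
    """The N-fold sumset NA = {a1 + ... + aN : ai in A} (here 0 in A)."""
    S = {0}
    for _ in range(N):
        S = {s + a for s in S for a in A}
    return S

def exceptional_set(A, limit):
    """E(A) intersected with [0, limit]: any representable n <= limit uses
    at most `limit` summands, so the limit-fold sumset suffices."""
    P = n_fold_sumset(A, limit)
    return {n for n in range(limit + 1) if n not in P}

A = {0, 3, 5}
print(exceptional_set(A, 20))              # {1, 2, 4, 7}
b, N = max(A), 4
bA = {b - a for a in A}                    # b - A = {0, 2, 5}
expected = set(range(N * b + 1)) - exceptional_set(A, 20) \
           - {N * b - e for e in exceptional_set(bA, 20)}
print(n_fold_sumset(A, N) == expected)     # True (three-element sets work
                                           # for every N >= 1)
\end{verbatim}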
We conjecture that one should be able to obtain the lower bound ``$N\geq b-2$'' (which would then be best possible) in place of ``$N\geq 2[\tfrac b2]$'' in Theorem \ref{Thm1}.\footnote{Bearing in mind the example $A=\{ 0,1,N+1,N+2,\dots,b\}$, we can refine this conjecture to ``$N\geq b+2-\#A$'' whenever $\#A\geq 4$.} It is feasible that one could develop our methods to show this, but it seems to us that this would be a formidable task. Theorem \ref{Thm1} seems to have first been proved, but with the bound $N\geq b^2(\# A-1)$, by Nathanson \cite{Nath} in 1972, which was improved to $N\geq \sum_{a\in A,\ a\ne 0} (a-1)$ in \cite{WCC}.\footnote{\cite{WCC} claim that their result is ``best possible,'' but this is a consequence of how they formulate their result. Indeed Theorem \ref{Thm1} yields at least as good a bound \emph{for all} sets $A$ with $\# A\geq 4$, and is better in all but a couple of families of examples.} We will generalize Theorem \ref{Thm1} to sets $A$ in arbitrary dimension. Here we assume that $0\in A \subset \mathbb{Z}^n$. The \emph{convex hull} of the points in $A$ is given by $$ H(A) = \left\{\sum_{a \in A} c_a a : \sum_{a\in A} c_a = 1, \text{ each } c_a\geq 0\right\}, $$ so that \[ C_A := \left\{\sum_{a \in A} c_a a : \text{Each } c_a \geq 0\right\} = \lim_{N\to \infty} N H(A), \] is the \emph{cone} generated by $A$. Let $\mathcal P (A) $ be the set of sums in $C_A$ where each $c_a\in \mathbb N$, so that $\mathcal P (A)\subset C_A \cap\mathbb{Z}^n$. We define the \emph{exceptional set} to be \[ \mathcal E (A) : = (C_A \cap \mathbb{Z}^n) \setminus \mathcal P (A), \] the integer points that lie in the cone generated by $A$ and yet are not an element of $NA$ for any integer $N\geq 1$. With this notation we can formulate our result: \begin{thm} \label{Thm2} Let $0\in A \subset \mathbb{Z}^n$ such that $A$ spans $\mathbb Z^n$ as a vector space over $\mathbb Z$. There exists a constant $N_{A}$ such that if $N\geq N_{A}$, \[ NA = (NH(A)\cap \mathbb Z^n) \setminus \mathcal E_N(A) \text{ where } \mathcal E_N(A):= \bigg( \mathcal E(A)\cup \bigcup_{a\in A} (aN-\mathcal E(a-A) ) \bigg) . \] \end{thm} We have been unable to find exactly this result in the literature. It would be good to obtain an upper bound on $N_A$, presumably in terms of the geometry of the convex hull of $A$. In Theorem \ref{Thm1}, when $A\subset \mathbb N^1$, the sets $\mathcal E(A)$ are finite, and so can be viewed as finite unions of $0$-dimensional objects. In the two dimensional example \begin{equation}\label{ex1} A = \{(0,0) , (2,0) , (0,3) , (1,1)\},\end{equation} we find that $\mathcal{E} (A)$ is infinite; explicitly, \begin{align*} \mathcal{E} (A)&=\quad \{ (0,1), (1,0), (1,2) \} + \mathcal P(\{ (0,0),(2,0)\}) \\ & \cup \quad \{ (0,1), (0,2), (1,0), (1,2), (2,1), (3,0)\} + \mathcal P(\{ (0,0),(0,3)\}), \end{align*} the union of nine one-dimensional objects. More generally we prove the following: \begin{thm} \label{Thm3} Let $0\in A \subset \mathbb{Z}^n$ such that $A$ spans $\mathbb Z^n$ as a vector space over $\mathbb Z$. Then $\mathcal E(A)$ is a finite union of sets of the form \[ \bigg\{ v+\sum_{b\in B} m_b b: m_b\in \mathbb Z_{\geq 0}\bigg\} = v+\mathcal P(B\cup\{ 0\}) \] where $v\in C_A\cap \mathbb Z_{\geq 0}^n$, $B\subset A$ contains at most $n-1$ elements, and the vectors in $B-0$ are linearly independent. \end{thm} We deduce from Theorem \ref{Thm3} that \begin{equation}\label{E} \# \mathcal{E}_N (A) = O(N^{n-1}).
\end{equation} Theorem \ref{Thm3} also implies that there is a bound $B_A$ such that every element of $C_A \cap \mathbb{Z}^n$ which is further than a distance $B_A$ from its boundary is an element of $\mathcal{P} (A)$ (and so not in $\mathcal{E} (A)$). The most remarkable result in this area is the 1992 theorem of Khovanskii \cite[Corollary 1]{Khov}, who proved that $\# NA$ is a polynomial of degree $n$ in $N$ for $N$ sufficiently large, where the leading coefficient is $ {\rm Vol}(H(A))$. His extraordinary proof proceeds by constructing a finitely-generated graded module $M_1,M_2,\ldots$ over $\mathbb C[t_1,\dots,t_k]$ with $k=\#A$, where each $M_N$ is a vector space over $\mathbb C$ of dimension $|NA|$. One then deduces that $|NA|=\text{dim}_{\mathbb C} M_N$ is a polynomial in $N$, for $N$ sufficiently large, by a theorem of Hilbert. Nathanson \cite{Nath2} showed that this can be generalized to sums $N_1A_1+\cdots+N_kA_k$ when all the $N_i$ are sufficiently large. This was all reproved by Nathanson and Ruzsa \cite{NathRuz} using elementary, combinatorial ideas (several of which are in common with ours). Moreover it can also be deduced from Theorems \ref{Thm2} and \ref{Thm3}. In section~\ref{easy} we look at the case where $A$ has three elements, showing that the result holds for all $N\geq 1$. This easier case introduces some of the ideas we will need later. In section~\ref{notsoeasy} we prove Theorem \ref{Thm1}. Obtaining the bound $N\geq 2b-2$ is not especially difficult, but improving this to $N\geq 2[\tfrac b2]$ becomes complicated and so we build up to it in a number of steps. In section~\ref{high} we begin the study of a natural higher dimensional analog. The introduction of even one new dimension creates significant complications, as the exceptional set $\mathcal{E} (A)$ is no longer necessarily finite. In the next subsection we indicate how one begins to attack these questions. \subsection{Representing most elements of $\mathbb Z_{\geq 0}^n$} \label{sec: basics} If $A=\{ 0,3,5\}$ one can represent \[ 8=1\times 3+1\times 5, \quad 9 =3\times 3\ \text{ and }\ 10=2\times 5 \] and then every integer $n\geq 11$ is represented by adding a positive multiple of 3 to one of these representations, depending on whether $n\equiv 2,0$ or $1 \mod 3$, respectively. In effect we find representatives $r_1=10, r_2=8, r_3=9$ of $\mathbb Z/3\mathbb Z$ that belong to $\mathcal P(A)$, and then $\mathbb Z_{\geq 8} =\{ r_1,r_2,r_3\}+3\mathbb Z_{\geq 0} \subset \mathcal P(A)$, which implies that $\mathcal E(A)\subset \{ 0,\dots,7\}$. We can generalize this to arbitrary finite $A\subset \mathbb Z_{\geq 0}$ with $\text{gcd}(a:\ a\in A)=1$, as follows: Let $b\geq 1$ be the largest element of $A$ (with $0$ the smallest). Since $\text{gcd}(a:\ a\in A)=1$ there exist integers $m_a$, some positive, some negative, for which $\sum_{a\in A} m_aa=1$. Let $m:=\max_{a\in A} (-m_a)$ and $N:=bm\sum_{a\in A} a$, so that \[ r_k:=N+k= \sum_{a\in A} (bm+km_a)a\in \mathcal P(A) \text{ for } 1\leq k\leq b \] (as each $bm+km_a\geq bm-km\geq 0$) and $r_k\equiv k \mod b$. But then \[ \mathbb Z_{> N}=N+ \mathbb Z_{\geq 1} =N+\{ 1,\dots, b\} +b\mathbb Z_{\geq 0}= \{ r_1,\dots, r_{b}\}+b\mathbb Z_{\geq 0} \subset \mathcal P(A), \] which implies that $\mathcal E(A)\subset \{ 0,\dots,N\}$. We can proceed similarly in $\mathbb Z_{\geq 0}^n$ with $n>1$, most easily when $C_A$ is generated by a set $B$ containing exactly $n$ non-zero elements (for example, $B:=\{(0,0), (2,0), (0,3)\}\subset A$, in the example from \eqref{ex1}).
Let $\Lambda_B$ be the lattice of integer linear combinations of elements of $B$. We need to find $R\subset \mathcal P(A)$, a set of representatives of $\mathbb Z^n/\Lambda_B$, and then $(R+C_B)\cap \mathbb Z^n\subset \mathcal P(A)$. In the example \eqref{ex1} we can easily represent $\{ (m,n)\in \mathbb Z^2: 4\leq m\le 5,\ 3\leq n\leq 5\}$. Therefore if $(r,s)\in \mathcal E(A)$ then either $0\leq r\leq 3$ or $0\leq s\leq 2$, and so we see that $\mathcal E(A)$ is a subset of a finite set of translates of one-dimensional objects. \section{Classical postage stamp problem with at most $N$ stamps}\label{easy} It is worth pointing out explicitly that if, for given coprime integers $0 < a < b$, we have $n\in N\{0,a,b\}$ so that $n = ax + by$ with $x + y \leq N$ then\footnote{In this displayed equation, and throughout, we write ``$r \times a$'' to mean $r$ copies of the integer $a$.} \[ (N - x - y)\times b + x \times (b-a) = bN - n\] so that $bN - n \in N\{ 0,b-a,b\}$. \begin{thm}[Postage Stamp with at most $N$ stamps] \label{Thm0} Let $0 < a < b$ be coprime integers and $A = \{0,a,b\}$. If $N\geq 1$ then \[ NA = \{0 , \ldots , bN \} \setminus (\mathcal E(A) \cup (bN - \mathcal E(b-A))).\] \end{thm} In other words, $NA$ contains all the integers in $[0,bN]$, except a few unavoidable exceptions near the endpoints of the interval. \begin{proof} Suppose that $n \in \{0 , \ldots , bN\}$, $n \notin \mathcal E(A)$ and $bN-n \notin \mathcal E(b-A)$, so that there exist $r,s ,r',s'\in \mathbb N$ such that \begin{equation}\label{E1} ra + sb = n, \end{equation} and \begin{equation}\label{E2} r' (b-a) + s' b = bN - n. \end{equation} We may assume $0 \leq r,r' \leq b-1$, as we may replace $r$ with $r- b$ and $s$ with $s + a$, and $r'$ with $r'- b$ and $s'$ with $s'+ b-a$. Now reducing \eqref{E1} and \eqref{E2} modulo $b$, we have \[ ra \equiv n \Mod b , \ \ \ -r' a \equiv -n \Mod b. \] Since $(a,b) = 1$, we deduce $r\equiv r' \Mod b$. Therefore $r=r'$ as $|r-r'|<b$, and so adding \eqref{E1} and \eqref{E2} we find \[ r b + s b + s' b = bN. \] This implies that $r+s+s'=N$ and so $r+s\leq N$ which gives $n\in NA$, as desired. \end{proof} \section{Arbitrary postage problem with at most $N$ stamps}\label{notsoeasy} \subsection{Sets with three or more elements} Let $$A=\{ 0=a_1<a_2<\ldots <a_k=b\} \subset \mathbb{Z},$$ with $(a_1,\ldots,a_k)=1$. In general we have $n\in NA$ if and only if $Nb-n\in N(b-A)$, since \[ n = \sum_{i=1}^k m_i a_i \text{ if and only if } Nb-n = \sum_{i=1}^k m_i (b- a_i) \] where we select $m_1$ so that $\sum_{i=1}^k m_i=N$. For $0\leq a\leq b-1$ define \[ n_{a,A}:=\min\{ n\geq 0:\ n\equiv a \Mod b \text{ and } n\in \mathcal P(A)\} \] and \[ N_{a,A}:=\min\{ N\geq 0:\ n_{a,A}\in NA\} . \] We always have $n_{0,A}=0$ and $N_{0,A}=0$. Neither $0$ nor $b$ can be a term in the sum for $n_{a,A}$ else we can remove it and contradict the definition of $n_{a,A}$. But this implies that $n_{a,A}\leq N_{a,A} \cdot \max_{c\in A: c<b} c \leq (b-1)N_{a,A}$. \begin{lemma} \label{lem1} If $n\equiv a \Mod b$ then $n\in \mathcal P(A)$ if and only if $n\geq n_{a,A}$. \end{lemma} \begin{proof} If $n<n_{a,A}$ then $n\not \in \mathcal P(A)$ by the definition of $n_{a,A}$. Write $n_{a,A}=\sum_{c\in A} n_cc$ where each $n_c\geq 0$. If $n\equiv a \Mod b$ and $n\geq n_{a,A}$ then $n=n_{a,A}+rb$ for some integer $r\geq 0$ and so $n=\sum_{c\in A, c\ne b} n_cc + (n_b+r)b\in \mathcal P(A)$.
\end{proof} We deduce that \[ \mathcal E(A)=\bigcup_{a=1}^{b-1}\ \{ 1\leq n< n_{a,A}:\ n\equiv a \Mod b\} ; \] We also have the following: \begin{cor} \label{cortolem1} Suppose that $0\leq n\leq bN$ and $n\equiv a \pmod b$. Then \[ n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A)) \text{ if and only if } n_{a,A}\leq n\leq bN-n_{b-a,b-A} .\] Thus there are such integers $n$ if and only if $N\geq N_{a,A}^*:= \tfrac 1b (n_{a,A}+n_{b-a,b-A})$. \end{cor} \begin{lemma}\label{Lem: Indn} Suppose that $N_0\geq N_{a,A}^*$. Assume that if $0\leq n\leq bN_0$ with $n\equiv a \pmod b$, and $n\not\in \mathcal E(A)\cup (N_0b- \mathcal E(b-A))$ then $n\in N_0A$. Then for any integer $N\geq N_0$ we have $n\in NA$ whenever $0\leq n\leq bN$ with $n\equiv a \pmod b$, and $n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A))$. \end{lemma} \begin{proof} By induction. By hypothesis it holds for $N=N_0$. Suppose it holds for some $N\geq N_0$. If $n\equiv a \pmod b$ with $a\leq n\leq b(N+1)-n_{b-a,b-A}$ then either $a\leq n\leq bN-n_{b-a,b-A}$ so that $n\in NA\subset (N+1)A$, or $n=b+(bN-n_{b-a,b-A})\in b+NA\subset (N+1)A$. \end{proof} If $n_{a,A}=a_1+\cdots+a_N$ where $N=N_{a,A}$ then $$bN_{a,A}-n_{a,A}=(b-a_1)+\cdots+(b-a_N)\geq n_{b-a,b-A},$$ by definition. Therefore \[ N_{a,A} \geq \tfrac 1b (n_{a,A}+n_{b-a,b-A})=N_{a,A}^*, \] and the analogous argument implies that $N_{b-a,b-A} \geq N_{a,A}^*$. \begin{cor} \label{cor2tolem1} Given a set $A$, fix $a \Mod b$. The statement ``For all integers $N\geq 1$, for all integers $n\in [0,Nb]$ with $n\equiv a \Mod b$ we have $n\in NA$ if and only if $n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A))$'' holds true if and only if $N _{a,A}=N_{a,A}^*$. \end{cor} \begin{proof} There are no such integers $n$ if $N< N_{a,A}^*$ by Corollary \ref{cortolem1}, so the statement is true. If the statement is true for $N= N_{a,A}^*$ then it holds for all $N\geq N_{a,A}^*$ by Lemma \ref{Lem: Indn}. Finally for $N= N_{a,A}^*$, the statement claims (only) that $n_{a,A}\in NA$. This happens if and only if $N=N_{a,A}^*\geq N _{a,A}$. The result follows since we just proved that $N_{a,A} \geq N_{a,A}^*$. \end{proof} In fact one can re-run the proof on $bN-a$ to see that if $N _{a,A}=N_{a,A}^*$ then $N _{b-a,b-A}=N_{a,A}^*$. Suppose $A$ has just three elements, say $A=\{0,c,b\}$ with $(c,b)=1$. For any non-zero $a \Mod b$ we have an integer $r, 1\leq r\leq b-1$ with $a\equiv cr \Mod b$, and one can easily show that $n_{a,A}=cr$ while $N_{a,A}=r$. Now $b-A=\{0,b-c,b\}$ so that $n_{b-a,b-A}=(b-c)r$ while $N_{b-a,b-A}=r$. Therefore $N_{a,A}=N_{b-a,b-A}=N_{a,A}^*=\tfrac 1b(n_{a,A}+n_{b-a,b-A})$ for every $a$, and so we recover Theorem \ref{Thm0} from Corollary \ref{cor2tolem1}. However Theorem \ref{Thm1} does not hold for all $N\geq 1$ for some sets $A$ of size $4$. For example, if $A=\{ 0,1,b-1,b\}$ then $b-A=A$. We have $n_{a,A}=a$ for $1\leq a\leq b-1$, and so $N_{a,A}^*=1$, but $N_{a,A}=a$ for $1\leq a\leq b-2$, and so Theorem \ref{Thm1} does not hold for all $N\geq 1$ by Corollary \ref{cor2tolem1}. In fact since $N_{b-2,A}=b-2>N_{b-2,A}^*=1$, if the statement ``if $n\leq Nb$ and $n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A))$ then $n\in NA$'' is true then $N\geq b-2$. It would be interesting to have a simple criterion for the set $A$ to have the property that $N _{a,A}=N_{a,A}^*$ for all $a \Mod b$ (so that Corollary \ref{cor2tolem1} takes effect).
Certainly many sets $A$ do not have this property; for example, if there exists an integer $a,\ 1\leq a\leq b-1$ such that $a\not\in A$ but $a, b+a\in 2A$, then $n_{a,A}=a,\ n_{b-a,b-A}=b-a$, so that $N _{a,A}=2$ and $N_{a,A}^*=1$. \subsection{Proving a ``sufficiently large'' result} We begin getting bounds by proving the following. \begin{prop} \label{keyprop} Fix $0\leq a\leq b-1$ and suppose $N\geq N_{a,A}+N_{b-a,b-A}$. If $0 \leq n\leq Nb$ with $n\equiv a \Mod b$ and $n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A))$ then $n\in NA$. \end{prop} \begin{cor} \label{Cor2} If $0 \leq n\leq Nb$ and $n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A))$ then $n\in NA$, whenever $N\geq \max_{1\leq a\leq b-1} (N_{a,A}+N_{b-a,b-A})$. \end{cor} To prove Proposition \ref{keyprop}, we need the following. \begin{prop} \label{P1} Fix $1\leq a\leq b-1$. If $n\leq (N-N_{a,A})b$ with $n\equiv a \Mod b$ and $n\not\in \mathcal E(A)$ then $n\in NA$. \end{prop} \begin{proof} If $n\not\in \mathcal E(A)$ then $n\geq n_{a,A}$ by the definition of $n_{a,A}$. Therefore $n=n_{a,A}+kb$ where $0\leq kb\leq n \leq (N-N_{a,A})b$, so that $0\leq k\leq N-N_{a,A}$ and $kb\in (N-N_{a,A})A$. Now $n_{a,A}\in N_{a,A}A$ and so $n=n_{a,A}+ kb \in N_{a,A}A + (N-N_{a,A})A=NA$. \end{proof} \begin{proof} [Proof of Proposition \ref{keyprop}] This is trivial for $a=0$. Otherwise, by hypothesis $n\not\in \mathcal E(A)$ and $bN-n\not\in \mathcal E(b-A)$. Moreover either $n\leq (N-N_{a,A})b$ or $bN-n\leq (N-N_{b-a,b-A})b$, else \[ bN=n+(bN-n)>(N-N_{a,A})b+(N-N_{b-a,b-A})b=(2N-N_{a,A}-N_{b-a,b-A})b\geq Nb, \] which is impossible. Therefore Proposition \ref{keyprop} either follows by applying Proposition \ref{P1} to $A$, or by applying Proposition \ref{P1} to $b-A$ to obtain $Nb-n\in N(b-A)$ which implies $n\in NA$. \end{proof} It remains to bound $N_{a,A}$. We start with the following. \begin{lemma}\label{lem2} We have $N_{a,A}\leq b-1$. If $A=\{ 0,1,b\}$ then $N_{b-1,A}=b-1$. \end{lemma} \begin{proof} Suppose that $n_{a,A}=a_1+a_2+\cdots+a_r$ with each $a_i\in A$, and $r$ minimal. We have $r< b$ else two of $0,a_1,a_1+a_2,\ldots,a_1+\cdots +a_b$ are congruent mod $b$ by the pigeonhole principle, so their difference, which is a subsum of the $a_i$'s, is $\equiv 0 \Mod b$. If these $a_i$'s are removed from the sum then we obtain a smaller element of $ \mathcal P(A)$ that is $\equiv a \Mod b$, contradicting the definition of $n_{a,A}$. We deduce that $N_{a,A}\leq b-1$. If $A= \{0,1,b\}$ then $b-1 \notin (b-2)A$ and so $N_{b-1,A}\geq b-1$. \end{proof} \begin{cor} \label{cor1} Suppose that $N\geq 2b-2$. If $n\leq Nb$ and $n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A))$ then $n\in NA$. \end{cor} \begin{proof} Insert the bounds $N_{a,A}, N_{b-a,b-A}\leq b-1$ from Lemma \ref{lem2} into Corollary \ref{Cor2}. \end{proof} \subsection{The proof of Theorem \ref{Thm1}} With more effort we now prove Theorem \ref{Thm1}, improving upon Corollary~\ref{cor1} by a factor of 2, and getting close to the best possible bound $b-2$ (which, as we have seen, is as good as can be attained when $A=\{ 0,1,b-1,b\}$). One cannot obtain a better consequence of Corollary \ref{Cor2} since we have the following examples: \smallskip If $A=\{ 0, 1, b-1, b\}$ then $N_{[\tfrac b2],A}+N_{b-[\tfrac b2],b-A}=2[\tfrac b2]$. If $A=\{ 0, 1, 2, b\}$ with $b$ even then $N_{b-1,A}+N_{1,b-A}=b$. This is a particularly interesting case as one can verify that one has ``If $n\leq Nb$ and $n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A))$ then $n\in NA$'' \emph{for all} $N\geq 1$.
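The quantities $n_{a,A}$, $N_{a,A}$ and $N_{a,A}^*$ appearing in these examples are easy to compute by brute force. The following short script (ours, purely illustrative and not part of the arguments; the function name and search bound are our own choices) computes them for a given $A$ and residue $a$, using the bounds $N_{a,A}\leq b-1$ and $n_{a,A}\leq (b-1)N_{a,A}$ established above, and checks the first example for $b=6$.
\begin{verbatim}
# Illustrative brute-force computation of n_{a,A} and N_{a,A} (not part of the proofs).
def n_and_N(A, a):
    b, X = max(A), (max(A) - 1) ** 2       # n_{a,A} <= (b-1) N_{a,A} <= (b-1)^2
    INF = float("inf")
    count = [INF] * (X + 1)                # count[n] = least number of summands giving n
    count[0] = 0
    for n in range(1, X + 1):
        options = [count[n - c] + 1 for c in A if 0 < c <= n and count[n - c] < INF]
        count[n] = min(options, default=INF)
    n_a = min(n for n in range(X + 1) if n % b == a % b and count[n] < INF)
    return n_a, count[n_a]                 # (n_{a,A}, N_{a,A})

b = 6
A = [0, 1, b - 1, b]                       # the first example above, with b = 6
a = b // 2
n_a, N_a = n_and_N(A, a)
n_d, N_d = n_and_N([b - x for x in A], b - a)
print(N_a + N_d, (n_a + n_d) // b)         # prints "6 1": the sum is 2[b/2], N* = 1
\end{verbatim}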
\smallskip We can apply Corollary \ref{Cor2} to obtain Theorem \ref{Thm1} provided $N_{a,A},N_{b-a,b-A}\leq [\tfrac b2]$ for each $a$. Therefore we need to classify those $A$ for which $N_{a,A}> \tfrac b2$. Let $(t)_b$ be the least non-negative residue of $t \Mod b$. Suppose that $1\leq a\leq b-1$, and write $n_a=n_{a,A}= a_1+\cdots+a_{m}$ where $m=N_{a,A}$ is minimal. No subsum of $a_1+\cdots+a_{m}$ can sum to $\equiv 0 \Mod b$ else we remove this subsum from the sum to get a smaller sum of elements of $A$ which is $\equiv a \Mod b$, contradicting the definition of $n_a$. Also the complete sum cannot be $\equiv 0 \Mod b$ else $a=0$ and $m=0$. Let $k=m+1$ and $a_k= -(a_1+\cdots+a_{m})$, so that $a_1+\cdots+a_{k}\equiv 0 \Mod b$ and no proper subsum is $0 \pmod b$; we call this a \emph{minimal zero-sum}. The Savchev-Chen structure theorem \cite{Sav} states that if $k\geq [\tfrac b2]+2$ then $a_1+\cdots+a_{k}\equiv 0 \Mod b$ is a minimal zero-sum if and only if there is a reduced residue $w \Mod b$ and positive integers $c_1,\dots,c_k$ such that $\sum_j c_j=b$ and $a_j\equiv wc_j \Mod b$ for all $j$. \begin{theorem}\label{SC1} If $N_{a,A}> \tfrac b2$ then $n_{a,A}$ is the sum of $N_{a,A}$ copies of some integer $h, 1\leq h\leq b-1$ with $(h,b)=1$. Moreover if $k\in A$ with $k\ne h$ then $(k/h)_b\geq N_{a,A}+1$. \end{theorem} \begin{proof} Above we have $k=m+1=N_{a,A}+1\geq [\tfrac b2]+2$ so we can apply the Savchev-Chen structure theorem. Some $c_j$ with $j\leq m$ must equal $1$ else $b\geq \sum_{j=1}^m c_j\geq 2m>b$, a contradiction. Hence $h\in A$ where $h=(w)_b$. Let $n:=\#\{ j\in [1,m]: c_j=1\}=\#\{ j\in [1,m]: a_j=h\}\geq 1$. If $(\ell h)_b\in A$ where $2\leq \ell <b$ then $n\leq \ell -1$ else we can remove $\ell $ copies of $h$ from the original sum for $n_{a,A}$ and replace them by one copy of $(\ell h)_b$. If $(\ell h)_b<\ell h$ then this makes the sum smaller, contradicting the definition of $n_a$. Otherwise this makes the number of summands smaller contradicting the definition of $N_{a,A}$. Therefore if $k$ is the smallest $c_j$-value $>1$, with $1\leq j\leq m$, then $(k h)_b\in A$ so that $k\geq n+1$, and so \[ b-1\geq \sum_{j=1}^m c_j \geq n\times 1+(m-n)\times k=m+(m-n)(k-1)\geq m+(m-n)n. \] If $1\leq n\leq m-1$ then this gives $b-1\geq m+(m-1)>b-1$, a contradiction. Hence $n=m$; that is, $n_a=h+h+\cdots+h$. Therefore $hm\equiv a \Mod b$. Moreover if $(\ell h)_b\in A$ with $\ell \ne 1$ then $\ell \geq n+1= m+1$. \end{proof} We now give a more precise version of the argument in Proposition \ref{P1}. \begin{prop} \label{P3} Fix $0\leq a\leq b-1$ and suppose $N\geq \max\{ N_{a,A}, N_{b-a,b-A}\}$. For all $0 \leq n\leq Nb$ with $n\equiv a \Mod b$ and $n\not\in \mathcal E(A)\cup (Nb- \mathcal E(b-A))$ we have that $n\in NA$, except perhaps if $ n=n_{a,A}+jb$ where \begin{equation} \label{eq: missingrange} N-N_{a,A}<j<N_{b-a,b-A}-\frac 1b( n_{a,A}+n_{b-a,b-A}). \end{equation} \end{prop} \begin{proof} Since $n_{a,A}\in N_{a,A}A$, we have \[ n_{a,A}+jb \in (N_{a,A}+j)A\subset NA \text{ whenever } 0\leq j\leq N-N_{a,A}. \] The analogous statement for $b-A$ implies that \[ bN_{b-a,b-A}-n_{b-a,b-A}+ib \in NA \text{ whenever } 0\leq i\leq N-N_{b-a,b-A}.\qedhere \] \end{proof} \begin{proof} [Proof of Theorem \ref{Thm1}] Suppose that $N\geq N_0:=2[\tfrac b2]\geq b-1$. We will prove the result now for $N=N_0$; the result for all $N\geq N_0$ follows from Lemma \ref{Lem: Indn}. If $N_{a,A},N_{b-a,b-A}\leq [\tfrac b2]$ then the result follows from Proposition \ref{keyprop}.
Hence we may assume that $N_{a,A}> [\tfrac b2]$ (if necessary changing $A$ for $b-A$). Theorem \ref{SC1} implies there exists an integer $h, 1\leq h\leq b-1$ with $(h,b)=1$ such that $n_{a,A}=N_{a,A}\times h$. We already proved the result when $A$ has three elements, so we may now assume it has a fourth, say $\{ 0,h,\ell,b\}\subset A$. Let $\mathcal B=\{ 0,h,\ell\}\subset \mathbb Z/b\mathbb Z$. Since $\mathcal B$ is not contained in any proper subgroup of $\mathbb Z/b\mathbb Z$ (as $(h,b)=1$), Kneser's theorem implies that $|k\mathcal B|\geq 2k+1$. For $N_0-N_{a,A}\leq k\leq \tfrac{b-1}2$, let $S:=2k-b+N_{a,A}+1$ so that there are $ b-2k$ elements in $\{ Sh, (S+1)h,\dots, N_{a,A}h\}$. By the pigeonhole principle, $sh\in k\mathcal B$ for some $s, S\leq s\leq N_{a,A}$ and therefore $sh+tb=a_1+\dots+a_k$ where each $a_i\in A$, for some integer $t$. Now $t\geq 0$ else we can replace $sh$ by $a_1+\dots+a_k$ contradicting the definition of $n_{a,A}$. On the other hand, $tb<sh+tb=a_1+\dots+a_k\leq k(b-1)$ and so $t\leq k$. Therefore \begin{align*} n_{a,A}+kb&= (N_{a,A}-s)h +(a_1+\dots+a_k) +(k-t)b \in (N_{a,A}-s+2k-t)A\\ & \subset (N_{a,A}-S+2k-t)A = (b-1-t)A\subset N_0A. \end{align*} We have filled in the range \eqref{eq: missingrange} for all $j\leq \tfrac{b-1}2$, which gives the whole of \eqref{eq: missingrange} if $N_{b-a,b-A}\leq [\tfrac b2]$. Therefore we may now assume that $N_{b-a,b-A}> [\tfrac b2]$. Since $N_{b-a,b-A}> [\tfrac b2]$ we may now rerun the argument above and obtain that \[ n_{b-a,b-A}+kb\in N_0(b-A) \text{ for all } k\leq \frac{b-1}2, \] and therefore if $n_{a,A}+jb\not\in \mathcal E(b-A)$ then \[ n_{a,A}+jb\in N_0A \text{ for all } j\geq \frac{b-1}2 , \] since \[ N_0-\frac{b-1}2-\frac {n_{a,A} +n_{b-a,b-A}}b \leq b-\frac{b-1}2-1=\frac{b-1}2. \qedhere \] \end{proof} \section{Higher dimensional postage stamp problem}\label{high} Let $A = \{a_1 , \ldots , a_k\} \subset \mathbb{Z}^n$ be a finite set of vectors with $k \geq n+2$. After translating $A$, we assume that $0\in A$ so that $$0\in A \subset 2A \subset \cdots.$$ We are interested in what elements are in $NA$. Assume that \[ \Lambda_A := \langle A \rangle_{\mathbb{Z}} = \mathbb{Z}^n. \] It is evident from the definitions that \[ NA \subset NH(A)\cap \mathcal P (A) = (NH(A)\cap \mathbb Z^n) \setminus \mathcal E (A) \] Let $b\in A$ and suppose that $x\in NA$ so that $x=\sum_{a \in A} c_a a$ where the $c_a$ are non-negative integers that sum to $N$. Therefore $Nb-x=Nb-\sum_{a \in A} c_a a=\sum_{a \in A} c_a(b-a)\in N(b-A)\subset \mathcal P (b-A)$. This implies that $Nb-x\not\in \mathcal E (b-A)$, and so $x\not\in Nb- \mathcal E (b-A)$. Therefore \[ NA \subset (NH(A)\cap \mathbb Z^n) \setminus \mathcal E_N(A) \] where \[ \mathcal E_N(A) :=NH(A)\cap\left( \mathcal E(A)\cup \bigcup_{a\in A} (aN-\mathcal E(a-A) ) \right). \] In Theorem \ref{Thm2} we will show that this is an equality for large $N$. We use two classical lemmas to prove this, and include their short proofs. \subsection{Two classical lemmas} \begin{lemma} [Carath\'eodory's theorem] \label{Lem:Cat} Assume that $0\in A$ and $A-A$ spans $\mathbb R^n$. If $v\in NH(A)$ then there exists a subset $B\subset A$ which contains $n+1$ elements, such that $B-B$ is a spanning set for $\mathbb R^n$, for which $v\in NH(B)$. \end{lemma} Note that the condition $B-B$ spans $\mathbb{R}^n$ is equivalent to the condition that $B$ is not contained in any hyperplane. 
In two dimensions, Lemma~\ref{Lem:Cat} asserts that each point of a polygon lies in a triangle (which depends on that point) formed by 3 of the vertices. \begin{proof} Since $v \in NH(A)$ we can write $$v=\sum_{a\in A} c_aa\in NH(A), \text{ with } 0 \leq \sum_{a \in A} c_a \leq N,$$ where each $c_a \geq 0$. We select the representation that minimizes $\# B$ where $$B=\{ a: c_a>0\}.$$ Select any $b_0\in B$. We now show that the vectors $b-b_0,\ b\in B, b\ne b_0$ are linearly independent over $\mathbb R$. If not we can write $$\sum_{b\in B\setminus \{ b_0\} } e_b(b-b_0)=0,$$ where the $e_b$ are not all $0$. Let $e_{b_0}=-\sum_b e_b$ so that $\sum_{b\in B} e_bb=0$ and $\sum_{b\in B} e_b=0$, and at least one $e_b$ is positive. Now let $$m=\min_{b:\ e_b>0} c_b/e_b,$$ say attained at $b=\beta\in B$, so that $c_\beta=me_\beta$. Then $v=\sum_{b\in B} (c_b-me_b)b$ where each $c_b-me_b\geq 0$ with $\sum_{b\in B} (c_b-me_b)=\sum_{b\in B} c_b-m\sum_{b\in B} e_b=\sum_{b\in B} c_b\in [0,N]$. However the coefficient $c_\beta-me_\beta=0$ and this contradicts the minimality of $\#B$. Since the vectors $b-b_0,\ b\in B, b\ne b_0$ are linearly independent, we can add new elements of $A$ to the set $B$ until we have $n+1$ elements, and then we obtain the result claimed. \end{proof} For $u=(u_1,\dots,u_n), v=(v_1,\dots,v_n)\in \mathbb Z_{\geq 0}^n$, we write $u\leq v$ if $u_i\leq v_i$ for each $i=1,\ldots,n$. The following is a classical lemma in additive combinatorics:\footnote{Formerly known as ``additive number theory''.} \begin{lemma}[Mann's lemma] \label{lem: Mann} Let $S\subset \mathbb Z_{\geq 0}^n$. There is a finite subset $T\subset S$ such that for all $s\in S$ there exists $t\in T$ for which $t\leq s$. \end{lemma} \begin{proof} We proceed by induction on $n\geq 1$. For convenience we will write $T\leq S$ if for all $s\in S$ there exists $t\in T$ for which $t\leq s$. For $n=1$ let $T=\{ t\}$ where $t$ is the smallest integer in $S$. For $n>1$, select any element $(s_1,\dots, s_n)\in S$. Define $S_{j,r}:=\{ (u_1,\dots,u_n)\in S:\ u_j=r\}$ for each $j=1,\cdots,n$ and $0\leq r<s_j$. Let $\phi_j((u_1,\dots,u_n))=(u_1,\cdots,u_{j-1},u_{j+1},\cdots,u_n)$. The set $\phi_j(S_{j,r}) \subset \mathbb Z_{\geq 0}^{n-1}$ and so, by the induction hypothesis, there exists a finite subset $T_{j,r}\subset S_{j,r}$ such that $\phi_j(T_{j,r})\leq \phi_j(S_{j,r})$, which implies that $T_{j,r} \leq S_{j,r}$ as their $j$th co-ordinates are the same. Now let \[ T = \{ (s_1,\dots, s_n)\} \cup \bigcup_{j=1}^n \bigcup_{r=0}^{s_j-1} T_{j,r}, \] which is a finite union of finite sets, and so finite. If $s\in S$ then either $(s_1,\dots, s_n)\leq s$, or $s\in S_{j,r}$ for some $j, 1\leq j\leq n$, and some $r, 0\leq r<s_j$. Hence $T\leq S$. \end{proof} \begin{lemma}[Mann's lemma, revisited] \label{lem: Mann2} Let $S\subset \mathbb Z_{\geq 0}^n$ with the property that if $s\in S$ then $s+\mathbb Z_{\geq 0}^n\subset S$. Then $E:=\mathbb Z_{\geq 0}^n\setminus S$ is a finite union of sets of the form: For some $I\subset \{ 1,\dots, n\}$ \[ \{ (x_1,\dots,x_n): x_i\in \mathbb Z_{\geq 0} \text{ for each } i\in I\} \text{ with } x_j \text{ fixed if } j\not\in I. \] \end{lemma} \begin{proof} By induction on $n\geq 1$. In 1-dimension, $S$ is either empty so that $E=\mathbb Z_{\geq 0}$, or $S$ has some minimum element $s$, in which case $E$ is the finite set of elements $0,1,\dots,s-1$. If $n>1$ then in $n$-dimensions either $S$ is empty so that $E=\mathbb Z_{\geq 0}^n$ or $S$ contains some element $(s_1,\dots,s_n)$.
Therefore if $(x_1,\dots,x_n)\in E$ there must exist some $k$ with $x_k\in \{ 0,1,\dots,s_k-1\}$. For each such $k,x_k$ we apply the result to $S_{x_k}:=\{ (u_1,\dots,u_n)\in S:\ u_k=x_k\}$, which is $n-1$ dimensional. \end{proof} \subsection{The proof of Theorem \ref{Thm2} } For any $v\in \mathcal P(A)$ define \[ \mu_A(v) := \min \left\{ \sum_{a\in A} n_a : v= \sum_{a\in A} n_aa, \text{ each } n_a \in \mathbb N \right\} , \] and $\mu_A(V):=\max_{v\in V} \mu_A(v)$ for any $V \subset \mathcal P(A)$. By definition, $V\subset NA$ if and only if $N\geq \mu_A(V)$. The heart of the proof of Theorem \ref{Thm2} is contained in the following result. \begin{prop} \label{Prop3} Let $0\in B\subset A \subset \mathbb{Z}^n$ where $\Lambda_A=\mathbb Z^n$, and $B^*=B\setminus \{ 0\}$ contains exactly $n$ elements, which span $\mathbb R^n$ (as a vector space over $\mathbb R$). There exists a finite subset $A^+\subset \mathcal P(A) $ such that if $v\in \mathcal P(A) $ then there is some $w=w(v)\in A^+$ for which $v-w\in \mathcal P(B) $. (That is, $\mathcal P(A)=A^++\mathcal P(B)$.) Let $N_{A,B}=\mu_A(A^+)$ so that $A^+\subset N_{A,B}A$. If $N\geq N_{A,B}$ and $v\in (N-N_{A,B})H(B)\cap \mathbb Z^n$ but $v\not\in \mathcal E(A) $ then $v\in NA$. \end{prop} \begin{proof} The fundamental domain for the lattice $\Lambda_B:=\langle B \rangle_{\mathbb{Z}}$ is \[ \mathbb{R}^n / \Lambda_B \cong \mathcal F(B):=\left\{ \sum_{b\in B^*} c_bb: \text{ Each } c_b\in [0,1)\right\} . \] Since $\mathcal F(B)$ is bounded, we see that $$L:= \mathcal F(B)\cap \mathbb{Z}^n$$ is finite. The sets $\ell + \Lambda_B$ partition $\mathbb{Z}^n$ as $\ell$ varies over $\ell\in L$. For each $\ell\in L$ we define \[ A_\ell = (\ell + \Lambda_B) \cap \mathcal P(A) , \] which partition $\mathcal P(A)$ into disjoint sets, so that $\mathcal P(A) =\bigcup _{\ell \in L} A_\ell$. Define $S_\ell\subset \mathbb N^n$ by \[ A_\ell :=\left\{ \ell+\sum_{b \in B^*} c_bb: (c_1,\ldots,c_n)\in S_\ell\right\} \subset C_B. \] By Mann's lemma (Lemma \ref{lem: Mann}), there is a finite subset $T_\ell\subset S_\ell$ such that for each $s \in S_\ell$ there is a $t \in T_\ell$ satisfying $t \leq s$. We may assume that $T_\ell$ is minimal, and define $$ A_\ell^+ =\left\{ \ell+\sum_{b\in B^*} c_bb: (c_1,\ldots,c_n)\in T_\ell\right\} \subset A_\ell. $$ By definition, for any $v\in A_\ell$ there exists $w\in A_\ell^+$ such that $v-w\in \mathcal P(B) $ (for we write $v=\ell +s\cdot B$ and let $w=\ell+t\cdot B$ where $t\leq s$, as above). That is, $A_\ell=A_\ell^++\mathcal P(B)$. Let $A^+=\cup_{\ell \in L} A_\ell^+$ which is a finite union of finite sets, and so is finite, and $A^+\subset \mathcal P(A) $. Moreover $\mathcal P(A) =\bigcup _{\ell \in L} A_\ell=\bigcup _{\ell \in L} A_\ell^++\mathcal P(B) =A^++\mathcal P(B)$ as claimed. Now suppose that $v\in (N-N_{A,B})H(B)\subset C_B\subset C_A$. Since the vectors in $B$ are linearly independent there is a unique representation $v=\sum_b v_bb$ as a linear combination of the elements of $B$, and has each $v_b\geq 0$ with $\sum_b v_b\leq N-N_{A,B}$. Also suppose $v\in \mathbb Z^n$ but $v\not\in \mathcal E(A) $ so that $v\in \mathcal P(A)$, as $v\in C_A\cap \mathbb Z^n$. Therefore there exists a unique $\ell\in L$ for which $v\in A_\ell$, and $w=w(v)=\sum_b w_bb\in A_\ell^+$ for which each $0\leq w_b\leq v_b$. Therefore $v-w=\sum_b (v_b-w_b)b\in UB$ where $U:=\sum_b (v_b-w_b)\leq \sum_b v_b\leq N-N_{A,B}$ and so $v-w\in (N-N_{A,B})B$. 
By definition, $w \in N_{A,B}A$, and so \[ v=(v-w)+w \in (N-N_{A,B})B+N_{A,B}A \subset (N-N_{A,B})A +N_{A,B}A =NA.\qedhere \] \end{proof} \begin{proof}[Proof of Theorem \ref{Thm2}] For every subset $B\subset A$ which contains $n+1$ elements, such that $B-B$ is a spanning set for $\mathbb R^n$, define $N_{A,B}^*:=N_{A,B}+\sum_{b\in B,\ b\ne 0} N_{b-A,b-B}$, and let $N_A$ be the maximum of these $N_{A,B}^*$. If $N\geq N_{A}$ and $v\in NH(A)$ then $v\in NH(B)$ for some such set $B$, by Lemma \ref{Lem:Cat}. If we also have $v\in \mathbb Z^n$ but \[ v\not\in \mathcal E(A)\cup \bigcup_{b\in B, b\ne 0} (Nb - \mathcal E(b-A)) \] then we can write $v=\sum_{b\in B} c_b b$ for real $c_b\geq 0$ with \[ \sum_{b\in B} c_b=N\geq N_{A,B}+\sum_{b\in B,\ b\ne 0} N_{b-A,b-B}. \] Therefore $\bullet$\ Either $c_0\geq N_{A,B}$ in which case \[ v = \sum_{b\in B, b\ne 0} c_b b \in (N-c_0)H(B) \subset (N-N_{A,B}) H(B) \] as well as $v\in \mathbb Z^n \setminus \mathcal E(A)=\mathcal P(A)$, and so $v\in NA$ by Proposition \ref{Prop3}; $\bullet$\ Or there exists $\beta\in B,\ \beta\ne 0$ for which $c_\beta\geq N_{\beta-A,\beta-B}$ so that \[ \beta N-v = \sum_{b\in B} c_b (\beta-b) \in (N-c_{\beta}) H(\beta-B)\subset (N-N_{\beta-A,\beta-B}) H(\beta-B). \] Now $v,\beta \in \mathbb Z^n$ and so $\beta N-v\in \mathbb Z^n$. Also $v\not\in \beta N - \mathcal E(\beta-A)$ by hypothesis, and so $\beta N -v\not\in \mathcal E(\beta-A)$. Therefore $\beta N -v\in N(\beta-A)$ by Proposition \ref{Prop3}, giving that $v\in NA$. \end{proof} \section{The structure and size of the exceptional set} \label{sec:SizeSet} \begin{prop} \label{Prop5} Let $0\in B\subset A \subset \mathbb{Z}^n$ where $\Lambda_A=\mathbb Z^n$, and $B^*=B\setminus \{ 0\}$ contains exactly $n$ elements, which span $\mathbb R^n$, so that $C_B=\{ \sum_{b\in B^*} x_bb:\text{ Each } x_b\geq 0\}$. There exist $r_b\geq 0$ such that $\{ \sum_{b\in B^*} x_bb:\text{ Each } x_b\geq r_b\}\cap \mathbb{Z}^n \subset \mathcal P(A)$. \end{prop} We deduce that if $x:=\sum_{b\in B^*} x_bb\in (C_B\cap \mathbb{Z}^n)\cap \mathcal E(A)$ then $0\leq x_b<r_b$ for some $b$. In other words $x$ is at a bounded distance from the boundary generated by $B\setminus \{ b\}$. (Theorem 2 of \cite{ST} gives a related result but is difficult to interpret in the language used here.) \begin{proof} We will use the notation of Proposition \ref{Prop3}. The elements of $B^*$ are linearly independent so that $\beta:=\sum_{b\in B} b$ lies in the interior of $C_B$. Therefore if the integer $M$ is sufficiently large then $\gamma:=\beta +\tfrac 1M \sum_{a\in A} a$ also lies in the interior of $C_B$. Now as $A$ generates $\mathbb Z^n$ as a vector space over $\mathbb Z$, we know that for each $\ell\in L$ there exist integers $c_{\ell, a}$ such that $\ell= \sum_{a\in A} c_{\ell, a}a$. Let $c\geq 0$ be an integer $\geq \max_{\ell\in L, a\in A} (-c_{\ell, a})$. The set $L'=cM \gamma + L$ of $\mathbb Z^n$-points is a translate of $L$ that can be represented as \[ cM \gamma + \sum_{a\in A} c_{\ell, a}a = cM \beta + \sum_{a\in A} (c+c_{\ell, a})a \in \mathcal P(A) \text{ for each } \ell \in L. \] The translation is by $cM \gamma\in C_B$ so $L'=cM \gamma + L\subset C_B$; moreover $L'$ gives a complete set of representatives of $\mathbb{R}^n / \Lambda_B$ and so every lattice point in $L'+\mathcal P(B)$ belongs to $\mathcal P(A)$. We can re-phrase this as \[ (cM \gamma+ C_B) \cap \mathbb{Z}^n \subset \mathcal P(A). 
\] Therefore if $cM \gamma = \sum_{b\in B^*} r_bb$ and $x:=\sum_{b\in B^*} x_bb\in \mathbb{Z}^n$, then $x\in \mathcal P(A)$ if each $x_b\geq r_b$. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm3}] We again use Lemma~\ref{Lem:Cat} to focus on sets $B\subset A$ which contain $n+1$ elements, such that $B-B$ is a spanning set for $\mathbb R^n$. We translate $B$ so that $0\in B$. As in the proof of Proposition \ref{Prop3}, we fix $\ell \in L$ (which is a finite set). Proposition \ref{Prop5} shows that $S_\ell$ is non-empty. Lemma \ref{lem: Mann2} yields the structure of $ \mathbb Z_{\geq 0}^n\setminus S_\ell$, which is not all of $ \mathbb Z_{\geq 0}^n$ as $S_\ell$ contains an element. This implies that the structure of $(\ell + \Lambda_B) \cap \mathcal E(A)$ is as claimed in Theorem \ref{Thm3}. The result follows as $\mathcal E(A)$ is a finite union of such sets. \end{proof}
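The structure described by Theorem \ref{Thm3} and Proposition \ref{Prop5} is easy to observe numerically in the two-dimensional example \eqref{ex1}. The following short brute-force script (ours, purely illustrative and not part of the proofs; the box size and variable names are our own choices) computes $\mathcal E(A)$ inside a finite box, using that $C_A$ is the first quadrant for this example, and confirms the observation from section \ref{sec: basics} that every exceptional point $(r,s)$ has $0\leq r\leq 3$ or $0\leq s\leq 2$, that is, lies within a bounded distance of the boundary of $C_A$.
\begin{verbatim}
# Illustrative brute-force computation of E(A) inside a finite box for the
# two-dimensional example A = {(0,0),(2,0),(0,3),(1,1)}; here C_A is the
# first quadrant.  This is our own sanity check, not part of the proofs.
W = 40                                          # side length of the test box
gens = [(2, 0), (0, 3), (1, 1)]                 # the non-zero elements of A
reach = [[False] * (W + 1) for _ in range(W + 1)]
reach[0][0] = True
for x in range(W + 1):
    for y in range(W + 1):
        if any(x >= u and y >= v and reach[x - u][y - v] for (u, v) in gens):
            reach[x][y] = True
exceptional = {(x, y) for x in range(W + 1) for y in range(W + 1) if not reach[x][y]}

# Every exceptional point found lies near the boundary of the cone:
# first coordinate <= 3 or second coordinate <= 2, as observed earlier.
print(all(x <= 3 or y <= 2 for (x, y) in exceptional))        # True
print(sorted(p for p in exceptional if p[0] <= 3 and p[1] <= 4))  # points near the origin
\end{verbatim}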
\section{Introduction} \noindent Mobile robots are increasingly used in complex environments aiming at autonomously realizing missions~\cite{wrs:online}. The rapid pace of development in robotics hardware and technology demands software that can sustain this growth~\cite{brugali2007software,Lee2008,Perez2008,Gamez2013563}. Increasingly, robots will be used for accomplishing tasks of everyday life by end-users with no expertise and knowledge in computer science, robotics, mathematics or logics. Providing techniques that support robotic software development is a major software-engineering challenge~\cite{brugali2007software,4799437,Gotz:2018:RIW:3149485.3149523,luckcuck2018formal,Maoz:2018:SEC:3196558.3196561,MaozFSE}. The mission describes the high-level tasks the robotic software must accomplish~\cite{lignos2015provably}. Among the different ways of describing missions that were proposed in the literature~\cite{Mindstorms,Choregraph,DSLInRobotics,arkin2006missionlab,Teambots,maoz2011aspectltl,Ruscio2016}, in this work, we consider declarative specifications~\cite{Broy:1991:DSD:952786.952788}. These describe the final outcome the software should achieve---rather than describing how to achieve it---and are prominently used in the robotics domain~\cite{menghi2018multi,ulusoy2011optimal,fainekos2009temporal,guo2013revising,wolff2013automaton,kress2011robot,doi:10.1177/0278364914546174,DBLP:journals/corr/MaozR16,maoz2011aspectltl,maozsynthesis,MaozFSE}. Precisely specifying missions and transforming them into a form useful for automatic processing are among the main challenges in engineering robotics software~\cite{situa,hinchey2005requirements,Kramer2007DevelopmentEF,6907489m,Maoz:2018:SEC:3196558.3196561}. On the one hand, missions should be defined with a notation that is high-level and user-friendly~\cite{Bozhinoski2015,ding2011automatic,lignos2015provably}. On the other hand, to enable automatic processing, the notation should be unambiguous and provide a formal and precise description of what robots should do in terms of movements and actions~\cite{lee1997graphical,smith2001events,srinivas2013graphical}. Typically, when engineering robotics software, the missions are first expressed using natural-language requirements. These are then specified using domain-specific languages, many of which have been proposed over the last decades~\cite{Mindstorms,Choregraph,DSLInRobotics,raman2013sorry}. These languages are often integrated with development environments that are used to generate code that can be executed within simulators or real robots~\cite{arkin2006missionlab,Teambots,maoz2011aspectltl}. However, these languages are typically bound to specific types of robots and support a limited number and type of missions. Other works, especially coming from the robotics domain, advocate to formally specify missions in temporal logics~\cite{maozsynthesis,doi:10.1177/0278364914546174,finucane2010ltlmop,menghi2018multi}. Unfortunately, defining temporal logic formulae is complicated. As such, the definition of mission specifications is laborious and error-prone, as widely recognized in the software-engineering and robotics communities~(e.g.,\cite{6016586,endo2004usability,Maoz:2018:SEC:3196558.3196561,wei2016extended}). Conceptually, defining a robotic mission entails two problems. First, ambiguities in mission requirements that prevent precise and unambiguous specifications must be resolved~\cite{Lignos2015,raman2013sorry,wei2016extended}. 
Consider the very simple mission requirement ``the robot shall visit the kitchen and the office.'' This can be interpreted as ``visit the kitchen'' and also that at some point the robot should ``visit the office'' or visit ``the kitchen and the office in order.'' This highlights the ambiguity in natural language requirements formulation, and common mistakes may be introduced when diverse interpretations are given~\cite{srinivas2013graphical,shah2015resolving,kiyavitskaya2008requirements,ringert2014requirements}. Second, creating specifications that correctly capture requirements is hard and error prone~\cite{6016586,endo2004usability,Maoz:2018:SEC:3196558.3196561,wei2016extended}. Assume that the correct intended behavior requires that ``the kitchen and the office are visited in order,'' which is a common mission specification problem~\cite{kress2009temporal,yoo2016online}. When transforming this requirement into a precise mission specification, an expert might come up with the following formula in temporal logic: \begin{center} $\phi_1=\LTLf \big((r\ in\ l_1) \wedge \LTLf (r\ in\ l_2 )\big)$, \end{center} \noindent where $r\ in\ l_1$ and $r\ in\ l_2$ signify that robot $r$ is in the kitchen and office, respectively, and $\LTLf$ denotes \emph{finally}. Now, recall that the actual requirement is that the robot reaches the kitchen \emph{before} the office. Unfortunately, the logical formula still admits that the robot reaches the office before entering the kitchen, which may be an unintended behavior. Mitigating this problem requires defining additional behavioral constraints. A correct formula, among others, is the following: \begin{center} $\phi_2=\phi_1\wedge \big(\big( \neg (r\ in\ l_2) \big)\ \LTLu\ (r\ in\ l_1) \big)$, \end{center} \noindent where $\LTLu$ stands for \emph{until}. The additional constraint requires the office to not be visited before the kitchen, recalling a specification pattern for temporal logics known as the \emph{absence pattern}~\cite{dwyer1999patterns}. Rather than conceiving such specifications recurrently in an ad hoc way with the risk of introducing mistakes, engineers could re-use validated solutions to existing mission requirements. Specification patterns are a popular solution to the specification problem. While precise behavioral specifications in logical languages enable reasoning about behavioral properties~\cite{EMERSON1990995,5238617}, specification is hard and error prone~\cite{Holzmann2002,Autili2007}. The problem is exacerbated, since practitioners are often unfamiliar with the intricate syntax and semantics of logical languages~\cite{dwyer1999patterns}. For instance, Dwyer et al.~\cite{dwyer1999patterns} introduced patterns for safety properties, later extended by Grunske~\cite{grunske2008specification} and Konrad et al.~\cite{konrad2005real} to address real-time and probabilistic quality properties. Autili et al.~\cite{autili2015aligning} consolidated and organized these patterns into a comprehensive catalog. Bianculli et al.~\cite{bianculli2012specification} applied specification patterns to the domain of Web services. All these patterns provide template solutions that can be used to specify the respective properties. However, none of these pattern catalogs focuses on the robotic software domain to solve the mission specification problem. We propose a pattern catalog and supporting tooling that facilitates engineering missions for mobile robots, which implements the original, high level idea that we had recently presented~\cite{Menghi:Idea}. 
We focus on robot movement as one of the major aspects considered in the robotics domain~\cite{brooks1991intelligence,brugali2005software,brugali2007stable}, as well as on how robots perform actions as they move within their environment. For each pattern we provide usage intent, known uses, relationships to other patterns, and---most importantly---a template mission specification in temporal logic. The latter relies on LTL and CTL as the most widely used formal specification languages in robotics~\cite{menghi2018multi,ulusoy2011optimal,fainekos2009temporal,guo2013revising,wolff2013automaton,kress2011robot,doi:10.1177/0278364914546174,DBLP:journals/corr/MaozR16,maoz2011aspectltl,maozsynthesis,MaozFSE}. The catalog has been produced by analyzing 245 natural-language mission requirements systematically retrieved from the robotics literature. From these requirements we identified recurrent mission specification problems and conceived solutions were organized as patterns in a catalog. Our patterns provide a formally defined vocabulary that supports robotics developers in defining mission requirements. Relying on the usage of the pattern catalog as a common vocabulary allows mitigating ambiguous natural language formulations~\cite{endo2004usability}. Our patterns also provide validated mission specifications for recurrent mission requirements, facilitating the creation of correct mission specifications~\cite{DBLP:journals/corr/MaozR16}. We implemented the tool PsAlM\xspace (Pattern bAsed Mission specifier)~\cite{PSALM} to further support developers in rigorous mission design. PsAlM\xspace allows (i) specifying a mission requirement through a structured English grammar, which uses patterns as basic building blocks and operators that allow composing these patterns into complex missions, and (ii) automatically generating specifications from mission requirements. PsAlM\xspace\ is robot-agnostic and integrated with: Spectra~\cite{Spectra} (a robot development environment), a planner~\cite{doi:10.1177/0278364914546174}, NuSMV~\cite{cimatti1999nusmv} (a model checker), and Simbad~\cite{hugues2006simbad} (a simulator for education and research). The pattern catalog and the PsAlM\xspace tool are available in an online appendix~\cite{paperstuff}. We evaluated the benefits obtained by the usage of our pattern support in rigorous and systematic mission design. We collected $441$ mission requirements in natural language: $436$ obtained from robotic development environments used by practitioners (i.e., Spectra~\cite{Spectra} and LTLMoP~\cite{finucane2010ltlmop,wei2016extended}), and five defined in collaboration with two well-known robotics companies developing commercial, human-size service robots (BOSCH\ and PAL Robotics). We show that most of the mission requirements were ambiguous, expressible using the proposed patterns, and that the usage of the patterns reduces ambiguities. We then evaluated the coverage of mission specifications. We collected $1229$ LTL and $22$ CTL mission specifications, from robotic development environments used by practitioners (i.e., Spectra~\cite{Spectra} and LTLMoP~\cite{finucane2010ltlmop,wei2016extended}) and research publications (i.e.,~\cite{ruchkin2018ipl}) and show that almost all the specifications can be obtained using the proposed patterns ($1154$ over $1251$). We also generated the specifications for the five mission requirements defined in collaboration with the two robotic companies and fed them into an existing planner. 
The produced plans were correctly executed by real robots, showing the benefits of the pattern support in real scenarios. To ensure the correctness of the proposed patterns we manually inspected their template mission specifications. We additionally tested the patterns' correctness on a set of $12$ randomly generated models representing buildings where the robot is deployed. We considered ten mission requirements (each obtained by combining three patterns), converted the mission requirements into LTL mission specifications and used those to generate robots' plans. We used the Simbad~\cite{hugues2006simbad} simulator to verify that the plans satisfied the intended mission requirement. We subsequently generated both LTL and CTL specifications from the considered mission requirements. We verified that the same results are obtained when they are checked on the randomly generated models, confirming the correspondence between the CTL and LTL specifications. \section{Background} \label{sec:background} \noindent In this section, we present the terminology used in the remainder of the paper and introduce the temporal logics LTL and CTL we used for defining the patterns' template solutions. Recall that for communication and further refinement, the requirements of a software system are typically expressed in natural language or informal models. Refining these requirements into more formal representations avoids ambiguity, allowing automated processing and analysis. Such practices have also emerged in the robotics engineering domain. \noindent $\bullet$ \emph{Mission Requirement}: a description in a natural language or in a domain-specific language of the mission (also called ``task'') the robots must perform~\cite{4209779,Lignos2015,raman2013sorry,menghi2018multi}.\\ $\bullet$ \emph{Mission Specification}: a formulation of the mission in a logical language with a precise semantics~\cite{ulusoy2011optimal,fainekos2009temporal,guo2013revising,doi:10.1177/0278364914546174,DBLP:journals/corr/MaozR16,maozsynthesis}.\\ $\bullet$ \emph{Mission Specification Problem}: the problem of generating a mission specification from a mission requirement.\\ $\bullet$ \emph{Mission Specification Pattern}: a mapping from a recurrent mission-specification problem to a template solution and a description of the usage intent, known uses, and relationships to other patterns.\\ $\bullet$ \emph{Mission Specification Pattern Catalog}: a collection of mission specification patterns organized in a hierarchy that aids browsing and selecting patterns, in order to support decision making during mission specification. We consider LTL (Linear Temporal Logic)~\cite{pnueli1977temporal} and CTL (Computation Tree Logic)~\cite{ben1983temporal}, since they are commonly used to express mission specifications in the robotic domain and are utilized extensively by the community (e.g.,~\cite{menghi2018multi,ulusoy2011optimal,fainekos2009temporal,guo2013revising,wolff2013automaton,kress2011robot,doi:10.1177/0278364914546174,DBLP:journals/corr/MaozR16,maoz2011aspectltl,maozsynthesis,MaozFSE}). A temporal logic specification can be used for several purposes, such as (i) producing plans through the use of planners, (ii) analysing mission satisfaction through the use of model checkers, and (iii) designing a robotic application. We now briefly recall the LTL and CTL syntax and semantics.
Let $\pi$ be a set of atomic propositions, LTL's syntax is the following: \begin{center} (LTL)\hspace{0.1cm} $\phi ::= \tau ~|~ \neg \phi ~|~ \phi \vee \phi ~|~ \LTLx \phi ~|~ \phi\ \LTLu\ \phi$ \text{where $\tau \in \pi$}. \label{syntaxltl} \end{center} The semantics of LTL is defined over an infinite sequence of truth assignments to the propositions $\pi$. The formula $\LTLx \phi$ expresses that $ \phi$ is true in the next position in a sequence, and the formula $\phi_1\ \LTLu\ \phi_2 $ expresses the property that $\phi_1$ is true until $\phi_2$ holds. \noindent CTL's syntax is the following: \begin{center} (CTL)\hspace{0.1cm} $\phi \coloneqq \tau \mid \neg \phi \mid \phi \vee \phi \mid \exists \Phi \mid \forall \Phi $, where $\Phi \coloneqq \LTLx \phi \mid \phi \LTLu \phi$ and $\tau \in \pi$. \label{syntaxctl} \end{center} CTL allows the specification of properties that predicate on a branching sequence of assignments. Specifically, when a position of a sequence has several successors, CTL enables the specification of a property that must hold for all or one of the paths that start from that position. For this reason, CTL includes two types of formulae: \emph{state} formulae that must hold in one position of the sequence and \emph{path} formulae that predicate on paths that start from a position. The operator $\forall$ (resp. $\exists$) asserts that $\phi$ must hold on all paths (resp. on one path) starting from the current position, while $\LTLx$ and $\LTLu$ are defined as for LTL. \section{Methodology} \label{sec:methodology} \noindent We derived our pattern catalog in three main steps. \textbf{Collection of Mission Requirements.} We collected mission requirements from scientific papers in the field of robotics. We additionally considered the software engineering literature, but noted a general absence of robotic mission specifications. We chose major venues based on consultation with domain experts and by considering their impact factor. Specifically, we analyzed mission specifications published in the four major~\cite{scholarlist} robotics venues over the last five years, in line with similar studies for pattern identification~\cite{dwyer1999patterns,konrad2005real,grunske2008specification}. We analyzed all papers published within a venue with two inclusion criteria (considered in order): (i) the paper title implies some notion of robotic movement-related concept, (ii) the paper contains at least one formulation of a mission requirement involving a robot that concerns movement. When the paper contained more than one mission requirement, each was considered separately. \begin{table}[t] \centering \scriptsize \caption{Papers and (requirements) analyzed per venue and year} \label{fig:paperNumber} \begin{tabularx}{\linewidth}{ p{3.19cm} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} } \toprule \textsf{Robotics Venue } & \rotatebox{90}{\textsf{2017}} & \rotatebox{90}{\textsf{2016}} & \rotatebox{90}{\textsf{2015}} & \rotatebox{90}{\textsf{2014}} & \rotatebox{90}{\textsf{2013}} & \rotatebox{90}{\textsf{Total}} \\ \midrule Intl. Conf. Robotics\,\&\,Autom. & 9(14) & 16\,(11) & 17\,(18) & 27\,(22) & 16\,(15) & 85\,(80) \\ Intl. J. of Robotics Research & 4(8) & 13\,(12) & 12\,(11) & 13\,(8) & 17\,(12) & 59 \,(51) \\ Trans. on Robotics & 2(6) & 12\,(9) & 5\,(1) & 8\,(2) & 4\,(2) & 31 \,(20) \\ Intl. Conf. on Int. Robots\,\&\,Sys. 
& 10(23) & 55\,(26) & 13\,(8) & 20\,(16) & 33\,(21) & 131\,(94) \\ \bottomrule \end{tabularx} \vspace{-0.5cm} \end{table} \begin{figure*} \scriptsize \begin{tikzpicture}[sibling distance=7em, every node/.style = {shape=rectangle, rounded corners, draw, align=center, top color=white, bottom color=white}, level 1/.style={sibling distance=14em, level distance=0.8cm}, level 2/.style={sibling distance=15em}, level 3/.style={sibling distance=4.8em}, level 4/.style={sibling distance=10em}]] \node (A) []{\small Robotic Missions Specification Patterns}; \node (B) [right=0.7cm of A] {\small Avoidance/\\ \small Invariant}; \node (C) [above left=0.2cm and 1cm of B]{\small Conditional/Limited}; \node[above right=0.4cm and 0.9cm of C,top color=lightgray, bottom color=lightgray, anchor=north] (D) {\small Past \\ \small avoidance}; \node[above=0.8cm of C,top color=lightgray, bottom color=lightgray, anchor=north] (E) {\small Global\\ \small avoidance}; \node[above left=0.4cm and 0.9cm of C,top color=lightgray, bottom color=lightgray, anchor=north] (F) {\small Future\\ \small avoidance}; \node (G) [right=0.9cm of B] {\small Restricted}; \node[below=0.2cm of G, top color=lightgray, bottom color=lightgray, anchor=north] (H) {\small Lower\\ \small Restricted\\ \small Avoidance} ; \node[above=1.1cm of G,top color=lightgray, bottom color=lightgray, anchor=north] (I) {\small Exact\\ \small Restricted\\ \small Avoidance} ; \node[above right=0.3cm and 1cm of G,top color=lightgray, bottom color=lightgray, anchor=north] (L) {\small Upper\\ \small Restricted\\ \small Avoidance }; \node (N) [ left=1.7cm of A] {\small Trigger}; \node[above right=0.1cm and 0.5cm of N,top color=lightgray, bottom color=lightgray] (O) {\small Wait }; \node (P) [above=0.2cm of N]{\small Reaction }; \node[above right=1cm and 0.9cm of P,top color=lightgray, bottom color=lightgray, anchor=north] (Q) {\small Instant.\\ \small Reaction }; \node[above=1cm of P,top color=lightgray, bottom color=lightgray, anchor=north] (R) {\small Delayed\\ \small Reaction }; \node[above left=1cm and 0.9cm of P,top color=lightgray, bottom color=lightgray, anchor=north,dashed] (FR) {\small Fast\\ \small Reaction }; \node (REACT) [left=0.7cm of N,dashed]{\small Bind }; \node[below left=0.2cm and 0.5cm of REACT,top color=lightgray, bottom color=lightgray, anchor=north,dashed] (BR) {\small Bound\\ \small Reaction }; \node[below right=0.2cm and 0.5cm of REACT,top color=lightgray, bottom color=lightgray, anchor=north,dashed] (BD) {\small Bound\\ \small Delay }; \node (S)[below=0.2cm of A] {\small Core Movement Patterns}; \node[below left=0.1cm and 2cm of S] (T){\small Coverage}; \node[below left=0.5cm and 2.5cm of T,top color=lightgray, bottom color=lightgray] (U) {\small Visit}; \node[below left=0.5cm and 0.5cm of T,top color=lightgray, bottom color=lightgray] (V) {\small Sequenced \\ \small Visit}; \node[below right=0.5cm and -1.2cm of T,top color=lightgray, bottom color=lightgray] (Z) {\small Ordered \\ \small Visit}; \node[below right=0.5cm and 0.5cm of T,top color=lightgray, bottom color=lightgray] (ZA) {\small Strict\\ \small Ordered\\ \small Visit}; \node[below right=0.5cm and 2.5cm of T,top color=lightgray, bottom color=lightgray] (ZB) {\small Fair\\ \small Visit}; \node[below right=0.1cm and 2cm of S] (ZC) {\small Surveillance}; \node[below left=0.5cm and 2cm of ZC,top color=lightgray, bottom color=lightgray] (ZD) {\small Patrolling}; \node[below left=0.5cm and 0cm of ZC,top color=lightgray, bottom color=lightgray] (ZE) {\small Sequenced\\ \small Patrolling}; 
\node[below right=0.5cm and -1.2cm of ZC,top color=lightgray, bottom color=lightgray] (ZF) {\small Ordered\\ \small Patrolling}; \node[below right=0.5cm and 0.5cm of ZC,top color=lightgray, bottom color=lightgray] (ZG) {\small Strict\\ \small Ordered \\ \small Patrolling}; \node[below right=0.5cm and 2.5cm of ZC,top color=lightgray, bottom color=lightgray,sibling distance=4em] (ZH) {\small Fair\\ \small Patrolling}; \path (T) edge (U) edge (V) edge (Z) edge (ZA) edge (ZB); \path (ZC) edge (ZD) edge (ZE) edge (ZF) edge (ZG) edge (ZH); \path (A) edge (B) edge (N) edge (S); \path (S) edge (T) edge (ZC); \path (B) edge (C) edge (G); \path (C) edge (D) edge (E) edge (F); \path (G) edge (H) edge (I) edge (L); \path (N) edge (O) edge (P) edge (REACT); \path (REACT) edge (BR) edge (BD); \path (P) edge (Q) edge (R) edge (FR); \end{tikzpicture} \caption{Mission specification pattern catalog. Filled nodes: patterns, non-filled nodes: categories. } \label{fig:specificationPatternSystem} \end{figure*} Altogether we obtained 306\ papers, from which, by applying our inclusion criteria, we obtained 245\ mission requirements. \tabref{fig:paperNumber} shows the venues included in our analysis, together with the number of scientific publications and mission requirements obtained per year. The considered software engineering venues (ICSE, FSE, and ASE) are not present, since they did not contain any paper matching the inclusion criteria. \textbf{Identification of Mission Specification Problems.} We identified these problems as follows. {\scshape{(Step.1)}} We divided the collected mission requirements among two of the authors, who labeled them with keywords that characterize the mission specification problems they describe. For example, the mission requirement ``The robot has to autonomously patrol the site and measure the state of valve levers and dial gauges at four checkpoints in order to decide if some machines need to be shut down'' (occurring in Schillinger et al.~\cite{schillinger2016human}) was associated with the keywords ``patrol,'' since the robot has to patrol the site, and ``instantaneous reaction,'' since when a valve is reached its level must be checked. {\scshape{(Step.2)}} We created a graph structure representing semantic relations between keywords. Each keyword is associated with a node of the graph structure. Two nodes were connected if their keywords identify two similar mission specification problems. For example, the keywords ``visit'' and ``reach'' are related since in both cases the robot has to visit/reach a location. {\scshape{(Step.3)}} Since our interest was not a mere classification of actions and movements that are executed by a robot, but rather detecting mission specification problems that concern how actions and movements are executed by a robot over time, nodes containing keywords that only refer to actions were removed (e.g., balance). {\scshape{(Step.4)}} Nodes that were connected through edges and contained keywords that refer to the same mission specification problem, e.g., visit and reach, were merged. {\scshape{(Step.5)}} We organized the mission specification problems into a catalog represented through a tree structure that facilitates browsing among mission specification problems. The material produced in these steps can be found in our online appendix~\cite{paperstuff}. \textbf{Pattern Formulation.} We formulated patterns by following established practices in the literature~\cite{dwyer1999patterns,grunske2008specification,autili2015aligning}.
A pattern is characterized by (i) a name; (ii) a statement that captures the pattern intent (i.e., the mission requirement); (iii) a template instance of the mission specification in LTL and CTL; (iv) variations describing possible minor changes that can be applied to the pattern; (v) examples of known uses; (vi) relationships of the pattern to others and; (vii) occurrences of the pattern in literature. For each LTL pattern we also designed a B\"uchi Automaton (BA) that unambiguously describes the behaviors of the system allowed by the mission specification. The mission specification was designed by consulting specifications encoding requirements already present in the papers surveyed, by crosschecking them, and consulting specification patterns already proposed in the software-engineering literature~\cite{autili2015aligning}. If the proposed specification was related (or corresponded) with one of an already existing pattern, we indicated this in the relationships of the pattern to others, meaning that the pattern presented in literature is useful also to solve the identified mission specification problem. \section{Mission Specification Patterns} \label{sec:patterns} \noindent In this section, we present our catalog of mission specification patterns and briefly present one of them (\secref{sec:patterncatalog}). We also present PsAlM\xspace, a tool that supports developers in systematic mission design. PsAlM\xspace\ supports the description of mission requirements through the proposed patterns and the automatic generation of mission specifications (\secref{sec:PsAlMISt}). \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \newcommand{4.8cm}{4.8cm} \begin{table*} \caption{Core movement patterns} \label{tab:coremovementpatterns} \scriptsize \begin{tabular}{ p{0.3cm} p{4cm} p{8cm} p{4.8cm} } \toprule & \textbf{Description} & \textbf{Example} & \textbf{Formula} ($l_1, l_2, \ldots$ are location propositions) \\ \midrule \multirow{1}{*}[-0.4ex]{\rotatebox[origin=c]{90}{\emph{Visit}}} & Visit a set of locations in an unspecified order. & Locations $l_1$, $l_2$, and $l_3$ must be visited. $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow (l_{\#})^\omega$ is an example trace that satisfies the mission requirement. & {\parbox[t]{4.8cm}{$\overset{n}{\underset{i=1}{\bigwedge}} \LTLf (l_i)$}} \\ \midrule \raisebox{-0.7\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1.5cm}{\emph{Sequenced\\ Visit}}}} & Visit a set of locations in sequence, one after the other. & Locations $l_1$, $l_2$, $l_3$ must be covered following this sequence. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow (l_{\# \setminus 3})^\omega$ violates the mission since $l_3$ does not follow $l_2$. The trace $l_1 \rightarrow l_3 \rightarrow l_1 \rightarrow l_2 \rightarrow l_4 \rightarrow l_3 \rightarrow (l_{\#})^\omega$ satisfies the mission requirement. & {\parbox[t]{4.8cm}{$\LTLf (l_1 \wedge \LTLf(l_2 \wedge \ldots \LTLf(l_n)))$}} \\ \midrule \raisebox{-1.\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1.5cm}{\emph{Ordered\\ Visit}}}} & Sequence visit does not forbid to visit a successor location before its predecessor, but only that after the predecessor is visited the successor is also visited. Ordered visit forbids a successor to be visited before its predecessor. & Locations $l_1$, $l_2$, $l_3$ must covered following this order. 
The trace $l_1 \rightarrow l_3 \rightarrow l_1 \rightarrow l_2 \rightarrow l_3 \rightarrow (l_{\#})^\omega$ does not satisfy the mission requirement since $l_3$ precedes $l_2$. The trace $l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow l_2 \rightarrow l_4 \rightarrow l_3 \rightarrow (l_{\#})^\omega$ satisfies the mission requirement. & {\parbox[t]{4.8cm}{$\LTLf (l_1 \wedge \LTLf(l_2 \wedge \ldots \LTLf(l_n))) $ \newline $\overset{n-1}{\underset{i=1}{\bigwedge}} (\neg l_{i+1}) \LTLu l_i$}} \\ \midrule \raisebox{-2\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1.5cm}{\emph{Strict Ordered\\ Visit}}}} & The ordered visit pattern does not prevent a predecessor location from being visited multiple times before its successor. Strict ordered visit forbids this behavior. & Locations $l_1$, $l_2$, $l_3$ must be covered following the strict order $l_1$, $l_2$, $l_3$. The trace $l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow l_2 \rightarrow l_4 \rightarrow l_3 \rightarrow (l_{\#})^\omega$ does not satisfy the mission requirement since $l_1$ occurs twice before $l_2$. The trace $l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow l_4 \rightarrow l_3 \rightarrow (l_{\#})^\omega$ satisfies the mission requirement. & {\parbox[t]{4.8cm}{ $\LTLf (l_1 \wedge \LTLf(l_2 \wedge \ldots \LTLf(l_n))) $\\ $\overset{n-1}{\underset{i=1}{\bigwedge}} (\neg l_{i+1}) \LTLu l_i$\\ $\overset{n-1}{\underset{i=1}{\bigwedge}} (\neg l_{i}) \LTLu (l_i \wedge \LTLx (\neg l_{i} \LTLu (l_{i+1})))$ }} \\ \midrule \raisebox{-1.\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1.5cm}{\emph{Fair\\ Visit}}}} & The difference among the number of times locations within a set are visited is at most one. & Locations $l_1$, $l_2$, $l_3$ must be covered in a fair way. The trace $l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow (l_{\# \setminus \{1,2,3\}})^\omega$ does not perform a fair visit since it visits $l_1$ three times while $l_2$ and $l_3$ are visited once. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow l_2 \rightarrow l_4 \rightarrow (l_{\# \setminus\{1,2,3\}})^\omega$ performs a fair visit since it visits locations $l_1$, $l_2$, and $l_3$ twice. & {\parbox[t]{4.8cm}{$\overset{n}{\underset{i=1}{\bigwedge}} \LTLf (l_i) $\\ $\overset{n}{\underset{i=1}{\bigwedge}} \LTLg (l_{i} \rightarrow \LTLx ((\neg l_i) \LTLw l_{(i+1)\%n}))$}} \\ \midrule \raisebox{-0.65\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1.5cm}{\emph{Patrolling}}}} & Keep visiting a set of locations, but not in a particular order. & Locations $l_1$, $l_2$, $l_3$ must be surveilled. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow (l_2 \rightarrow l_3 \rightarrow l_1)^\omega$ ensures that the mission requirement is satisfied. The trace $l_1 \rightarrow l_2 \rightarrow l_3 \rightarrow (l_1 \rightarrow l_3)^\omega$ represents a violation, since $l_2$ is not surveilled. & {\parbox[t]{4.8cm}{$ \overset{n}{\underset{i=1}{\bigwedge}} \LTLg \LTLf (l_i)$}} \\ \midrule \raisebox{-1\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1.5cm}{\emph{Sequenced\\ Patrolling}}}} & Keep visiting a set of locations in sequence, one after the other. & Locations $l_1$, $l_2$, $l_3$ must be patrolled in sequence. 
The trace $l_1 \rightarrow l_4 \rightarrow l_3\rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow (l_1 \rightarrow l_2 \rightarrow l_3)^\omega$ satisfies the mission requirement since globally any $l_1$ will be followed by $l_2$ and $l_2$ by $l_3$. The trace $l_1 \rightarrow l_4 \rightarrow l_3\rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow ( l_1 \rightarrow l_3)^\omega$ violates the mission requirement since after visiting $l_1$, the robot does not visit $l_2$. & {\parbox[t]{4.8cm}{$\LTLg (\LTLf (l_1 \wedge \LTLf(l_2 \wedge \ldots \LTLf(l_n))))$}} \\ \midrule \raisebox{-1\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1.5cm}{\emph{Ordered\\ Patrolling}}}} & Sequenced patrolling does not forbid visiting a successor location before its predecessor. Ordered patrolling ensures that (after a successor is visited) the successor is not visited (again) before its predecessor. & Locations $l_1$, $l_2$, and $l_3$ must be patrolled following the order $l_1$, $l_2$, and $l_3$. The trace $l_1\rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_3)^\omega$ violates the mission requirement since $l_3$ precedes $l_2$. The trace $l_1 \rightarrow l_1 \rightarrow l_2 \rightarrow l_4 \rightarrow l_4 \rightarrow l_3 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_3)^\omega$ satisfies the mission requirement. & {\parbox[t]{4.8cm}{$\LTLg (\LTLf (l_1 \wedge \LTLf(l_2 \wedge \ldots \LTLf(l_n)))) $\\ $\overset{n-1}{\underset{i=1}{\bigwedge}} (\neg l_{i+1}) \LTLu l_i$\\ $\overset{n}{\underset{i=1}{\bigwedge}} \LTLg (l_{(i+1)\%n} \rightarrow \LTLx ( (\neg l_{(i+1)\%n}) \LTLu l_{i}))$ }}\\ \midrule \raisebox{-2.6\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1.5cm}{\emph{Strict Ordered\\ Patrolling}}}} & The ordered patrolling pattern does not prevent a predecessor location from being visited multiple times before its successor. Strict Ordered Patrolling ensures that, after a predecessor is visited, it is not visited again before its successor. & Locations $l_1$, $l_2$, $l_3$ must be patrolled following the strict order $l_1$, $l_2$, and $l_3$. The trace $l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow l_2 \rightarrow l_4 \rightarrow l_3 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_3)^\omega$ violates the mission requirement since $l_1$ is visited twice before $l_2$. The trace $l_1\rightarrow l_4 \rightarrow l_2 \rightarrow l_4 \rightarrow l_3 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_3)^\omega$ satisfies the mission requirement. & {\parbox[t]{4.8cm}{$\LTLg (\LTLf (l_1 \wedge \LTLf(l_2 \wedge \ldots \LTLf(l_n)))) $ \\ $\overset{n-1}{\underset{i=1}{\bigwedge}} (\neg l_{i+1}) \LTLu l_i$\\ $\overset{n}{\underset{i=1}{\bigwedge}} \LTLg (l_{(i+1)\%n} \rightarrow \LTLx ( (\neg l_{(i+1)\%n}) \LTLu l_{i}))$ \\ $\overset{n-1}{\underset{i=1}{\bigwedge}} \LTLg ((l_{i}) \rightarrow \LTLx (\neg l_{i} \LTLu (l_{(i+1)\%n})))$ }} \\ \midrule \raisebox{-1\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1.5cm}{\emph{Fair\\ Patrolling}}}} & Keep visiting a set of locations and ensure that the difference among the number of times locations within a set are visited is at most one. & Locations $l_1$, $l_2$, and $l_3$ must be patrolled fairly. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_1 \rightarrow l_3)^\omega$ violates the mission requirement since the robot patrols $l_1$ more than $l_2$ and $l_3$. 
The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_4 \rightarrow l_2 \rightarrow l_4 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_3)^\omega$ satisfies the mission requirement since locations $l_1$, $l_2$, and $l_3$ are patrolled fairly. & {\parbox[t]{4.8cm}{ $\overset{n}{\underset{i=1}{\bigwedge}} \LTLg( \LTLf (l_i))$ \\ $\overset{n}{\underset{i=1}{\bigwedge}} \LTLg (l_{i} \rightarrow \LTLx ((\neg l_i) \LTLw l_{(i+1)\%n}))$}} \\ \bottomrule \end{tabular} \end{table*} \newcommand{5.8cm}{5.8cm} \begin{table*} \caption{Avoidance and Trigger patterns.} \label{tab:avoidanceAndTrigger} \scriptsize \begin{tabular}{ p{0.6cm} p{2.5cm} p{7.8cm} p{5.8cm} } \toprule & \textbf{Description} & \textbf{Example} & \textbf{Formula} \\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Past\\ avoidance}} & A condition has been fulfilled in the past. & If the robot enters location $l_1$, then it should not have visited location $l_2$ before. The trace $l_3 \rightarrow l_4 \rightarrow l_1 \rightarrow l_2 \rightarrow l_4 \rightarrow l_3 \rightarrow ( l_2 \rightarrow l_3)^\omega$ satisfies the mission requirement since location $l_2$ is not entered before location $l_1$. & {\parbox[t]{5.8cm}{$(\neg (l_1)) \LTLu p$, where $l_1 \in L$ and $p \in M$}} \\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Global\\ avoidance}} & An avoidance condition globally holds throughout the mission. & The robot should avoid entering location $l_1$. The trace $l_3 \rightarrow l_4 \rightarrow l_3 \rightarrow l_2 \rightarrow l_4 \rightarrow l_3 \rightarrow ( l_3 \rightarrow l_2 \rightarrow l_3)^\omega$ satisfies the mission requirement since the robot never enters $l_1$. & {\parbox[t]{5.8cm}{$\LTLg(\neg (l_1))$, where $l_1 \in L$}} \\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Future\\ avoidance}} & After the occurrence of an event, avoidance has to be fulfilled. & If the robot enters $l_1$, then it should avoid entering $l_2$ in the future. The trace $l_3 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow ( l_3 \rightarrow l_2 \rightarrow l_3)^\omega$ does not satisfy the mission requirement since $l_2$ is entered after $l_1$. & {\parbox[t]{5.8cm}{$\LTLg( (c) \rightarrow( \LTLg (\neg (l_1))))$, where $c \in M$ and $l_1 \in PL$}} \\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Upper Rest.\\ Avoidance}} & A restriction on the maximum number of occurrences is desired. & A robot has to visit $l_1$ at most $3$ times. The trace $l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow ( l_3)^\omega$ violates the mission requirement since $l_1$ is visited four times. The trace $l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_2 \rightarrow l_4 \rightarrow ( l_3)^\omega$ satisfies the mission requirement. & {\parbox[t]{5.8cm}{$\neg \LTLf (\underbrace{l_1 \wedge \LTLx (\LTLf(l_1 \wedge \ldots \LTLx (\LTLf(l_1)}_\text{n})))) $, where $l_1 \in L$ }} \\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Lower Rest.\\ Avoidance}} & A restriction on the minimum number of occurrences is desired. & A robot has to enter location $l_1$ at least $3$ times. The trace $l_4 \rightarrow l_3 \rightarrow l_2 \rightarrow l_2\rightarrow l_4 \rightarrow ( l_3)^\omega$ violates the mission requirement since location $l_1$ is never entered. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow ( l_3)^\omega$ satisfies the mission requirement. 
& {\parbox[t]{5.8cm}{$\LTLf (\underbrace{l_1 \wedge \LTLx (\LTLf(l_1 \wedge \ldots \LTLx (\LTLf(l_1)}_\text{n})))) $, where $l_1 \in L$}}\\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Exact Rest.\\ Avoidance}} & The number of occurrences desired is an exact number. & A robot must enter location $l_1$ exactly $3$ times. The trace $l_4 \rightarrow l_3 \rightarrow l_2 \rightarrow l_2 \rightarrow l_4 \rightarrow ( l_3)^\omega$ violates the mission requirement. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow ( l_3)^\omega$ satisfies the mission requirement since location $l_1$ is entered exactly $3$ times. & {\parbox[t]{5.8cm}{$\underbrace{(\neg (l_1)) \LTLu (l_1 \wedge (\LTLx ((\neg l_1) \LTLu (l_1 \ldots \wedge (\LTLx ((\neg l_1) \LTLu (l_1}_\text{n}$ $ \wedge (\LTLx (\LTLg (\neg l_1))))))))))$, where $l_1 \in L$}} \\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Inst.\\ Reaction}} & The occurrence of a stimulus instantaneously triggers a counteraction. & When location $l_2$ is reached action $a$ must be executed. The trace $l_1 \rightarrow l_3 \rightarrow \{ l_2,a \} \rightarrow \{ l_2,a \} \rightarrow l_4 \rightarrow (l_3)^\omega$ satisfies the mission requirement since when location $l_2$ is entered action $a$ is performed. The trace $l_1 \rightarrow l_3 \rightarrow l_2 \rightarrow \{l_1,a\} \rightarrow l_4 \rightarrow (l_3)^\omega$ does not satisfy the mission requirement since when $l_2$ is reached $a$ is not executed. & $\LTLg( p_1 \rightarrow p_2)$, where $p_1 \in M$ and $p_2 \in PL \cup PA$ \\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Delayed\\ Reaction}} & The occurrence of a stimulus triggers a counteraction some time later. & When $c$ occurs the robot must start moving toward location $l_1$, and $l_1$ is subsequently finally reached. The trace $l_1 \rightarrow l_3 \rightarrow \{l_2,c\} \rightarrow l_1 \rightarrow l_4 \rightarrow (l_3)^\omega$ satisfies the mission requirement, since after $c$ occurs the robot starts moving toward location $l_1$, and location $l_1$ is finally reached. The trace $l_1 \rightarrow l_1 \rightarrow \{ l_2, c\} \rightarrow l_3 \rightarrow (l_3)^\omega$ does not satisfy the mission requirement since $c$ occurs when the robot is in $l_2$, and $l_1$ is not finally reached. & {\parbox[t]{5.8cm}{$\LTLg( p_1 \rightarrow \LTLf (p_2))$, where $p_1 \in M$ and $p_2 \in PL \cup PA$}}\\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Prompt\\ Reaction}} & The occurrence of a stimulus triggers a counteraction promptly, i.e., in the next time instant. & If $c$ occurs $l_1$ is reached in the next time instant. The trace $l_1 \rightarrow l_3 \rightarrow \{l_2,c\} \rightarrow l_1 \rightarrow l_4 \rightarrow (l_3)^\omega$ satisfies the mission requirement, since after $c$ occurs $l_1$ is reached within the next time instant. The trace $l_1 \rightarrow$ $l_3 \rightarrow \{l_2,c\} \rightarrow l_4 \rightarrow l_1 \rightarrow (l_3)^\omega$ does not satisfy the mission requirement. & {\parbox[t]{5.8cm}{$\LTLg( p_1 \rightarrow \LTLx (p_2))$, where $p_1 \in M$ and $p_2 \in PL \cup PA$}}\\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Bound\\ Reaction}} & A counteraction must be performed every time and only when a specific location is entered. & Action $a_1$ is bound to location $l_1$. The trace $l_1 \rightarrow l_3 \rightarrow \{l_2,c\} \rightarrow \{l_1,a_1\} \rightarrow l_4 \rightarrow \{l_1,a_1\} \rightarrow (l_3)^\omega$ satisfies the mission requirement. 
The trace $l_1 \rightarrow l_3 \rightarrow \{l_2,c\} \rightarrow \{l_1,a_1\} \rightarrow \{l_4,a_1\} \rightarrow \{l_1,a_1\} \rightarrow (l_3)^\omega$ does not satisfy the mission requirement since $a_1$ is executed in location $l_4$. & {\parbox[t]{5.8cm}{$\LTLg( p_1 \leftrightarrow p_2)$, where $p_1 \in M$ and $p_2 \in PL \cup PA$}} \\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Bound\\ Delay}} & A counteraction must be performed, in the next time instant, every time and only when a specific location is entered. & Action $a_1$ is bound to location $l_1$. The trace $l_1 \rightarrow l_3 \rightarrow \{l_2,c\} \rightarrow \{l_1\} \rightarrow \{l_4,a_1\} \rightarrow \{l_1\} \rightarrow \{l_4,a_1\} \rightarrow (l_3)^\omega$ satisfies the mission requirement. The trace $l_1 \rightarrow l_3 \rightarrow \{l_2,c\} \rightarrow \{l_1\} \rightarrow \{l_4,a_1\} \rightarrow \{l_1,a_1\} \rightarrow \{l_4\} \rightarrow (l_3)^\omega$ does not satisfy the mission requirement. & {\parbox[t]{5.8cm}{$\LTLg( p_1 \leftrightarrow \LTLx (p_2))$, where $p_1 \in M$ and $p_2 \in PL \cup PA$}} \\ \midrule \hspace{-0.2cm}\parbox[t]{0.6cm}{\emph{Wait}} & Inaction is desired until a stimulus occurs. & The robot remains in location $l_1$ until condition $c$ is satisfied. The trace $l_1 \rightarrow l_3 \rightarrow \{ l_2,c\} \rightarrow l_1 \rightarrow l_4 \rightarrow (l_3)^\omega$ violates the mission requirement since the robot leaves $l_1$ before condition $c$ is satisfied. The trace $l_1 \rightarrow \{l_1,c\} \rightarrow l_2 \rightarrow l_1 \rightarrow l_4 \rightarrow (l_3)^\omega$ satisfies the mission requirement. & $(l_1) \LTLu (p) $, where $l_1 \in L$ and $p \in PE \cup PA$ \\ \bottomrule \end{tabular} \vspace{-0.5cm} \end{table*} \subsection{Mission Specification Pattern Catalog} \label{sec:patterncatalog} \noindent Our catalog of robotic mission specification patterns comprises $22$ patterns organized into a pattern tree as illustrated in \figref{fig:specificationPatternSystem}. Leaves of the tree represent mission specification patterns. Intermediate nodes facilitate browsing within the hierarchy and aid pattern selection and decision making. Patterns identified by following the procedure described in Sec.~\ref{sec:methodology} are graphically indicated with a solid border. Due to space limits, we provide a high-level description of all patterns identified, examples of application, and the corresponding LTL mission specifications. The interested reader may refer to our online appendix~\cite{paperstuff}, which contains additional examples, occurrences of patterns in the literature, relations among the patterns, and additional CTL mission specifications. \pattern{Strict Ordered Patrolling}{ A robot must patrol a set of locations following a strict sequence ordering. Such locations can be, e.g., areas in a building to be surveyed. }{ The following formula encodes the mission in LTL for $n$ locations and a robot $r$ (\% is the modulo arithmetic operator): \begin{center} $\overset{n}{\underset{i=1}{\bigwedge}} \LTLg (\LTLf (l_1 \wedge \LTLf (l_2 \wedge ... \LTLf (l_n)))) \overset{n-1}{\underset{i=1}{\bigwedge}} ((\neg l_{i+1})\ U\ l_i) \overset{n}{\underset{i=1}{\bigwedge}} \LTLg (l_{(i+1)\%n} \rightarrow \LTLx((\neg l_{(i+1)\%n})\ U\ l_i)) $\\ \end{center} Example with two locations. 
\begin{center} $\LTLg (\LTLf (l_1 \wedge \LTLf (l_2))) \wedge ((\neg l_2)\ U\ l_1) \wedge \LTLg (l_2 \rightarrow \LTLx((\neg l_2)\ U\ l_1)) \wedge \LTLg(l_1 \rightarrow \LTLx((\neg l_1)\ U\ l_2))$\\ \end{center} where $l_1$ and $l_2$ are expressions that indicate that a robot $r$ is in locations $l_1$ and $l_2$, respectively. } { A developer may want to allow traces in which sequences of \emph{consecutive} $l_1$ ($l_2$) occur, that is, strict ordering is applied only to non-consecutive occurrences of $l_1$ ($l_2$). In this case, traces of the form $l_1 \rightarrow (l_1 \rightarrow l_1 \rightarrow l_3 \rightarrow l_2)^\omega$ are admitted, while traces of the form $l_1 \rightarrow (l_1 \rightarrow l_3 \rightarrow l_1 \rightarrow l_2)^\omega$ are not admitted. This variation can be encoded using the following specification: \begin{center} $\LTLg (\LTLf (l_1 \wedge \LTLf (l_2))) \wedge ((\neg l_2)\ U\ l_1) \wedge \LTLg ((l_2 \wedge \LTLx(\neg l_2)) \rightarrow \LTLx((\neg l_2)\ U\ l_1)) \wedge \LTLg ((l_1 \wedge \LTLx(\neg l_1)) \rightarrow \LTLx((\neg l_1)\ U\ l_2))$ \end{center} This specification allows for sequences of consecutive $l_1$ ($l_2$) since the left-hand side of the implication $l_1 \wedge \LTLx(\neg l_1)$ ($l_2 \wedge \LTLx(\neg l_2)$) is only triggered when $l_1$ ($l_2$) is exited. }{ A common usage example of the Strict Ordered Patrolling pattern is a scenario where a robot is performing surveillance in a building during night hours. Strict Ordered Patrolling and Avoidance often go together. Avoidance patterns are used to force robots to avoid obstacles as they guard a location. Triggers can also be used in combination with the Strict Ordered Patrolling pattern to specify conditions upon which Patrolling should start or stop. }{ The Strict Ordered Patrolling pattern is a specialisation of the Ordered Patrolling pattern, forcing the strict ordering. }{ Smith et al.~\cite{smith2011optimal} proposed a mission specification forcing a robot to not visit a location twice in a row before a target location is reached. }{strictorderedpatrolling.png}{width=0.7\textwidth} \textbf{Preliminaries.} To aid comprehension of behavior and facilitate precise pattern definitions, we introduce the following notation. Given a finite set of locations $L=\{ l_1, l_2, \ldots, l_n\}$ and robots $R=\{ r_1, r_2, \ldots, r_n\}$, $PL=\{ r_x\ in\ l_y \mid r_x \in R \text{ and } l_y \in L\}$ is a set of location propositions, each indicating that a robot $r_x$ is in a specific location $l_y$ of the environment. Given a finite set of conditions of the environment $C=\{ c_1, c_2, \ldots, c_m \}$, we indicate as $PE=\{s_1, s_2, \ldots, s_m \}$ a set of propositions such that $s_i \in PE$ is true if and only if condition $c_i$ holds. Given a finite set of actions that the robots can perform $A=\{ a_1, a_2, \ldots, a_m \}$, we indicate as $PA=\{ r_x\ exec\ a_y \mid r_x \in R \text{ and } a_y \in A \}$ a set of propositions such that $r_x\ exec\ a_y$ is true if and only if action $a_y$ is performed by robot $r_x$. We define the set of propositions $M$ of a robotic application as $PL \cup PE \cup PA$. A trace is an infinite sequence $M_x \rightarrow M_y \rightarrow M_z \ldots$, where $M_x,M_y,M_z \subseteq M$, indicating that $M_z$ holds after $M_y$, and $M_y$ holds after $M_x$. 
For example, $\{r_1\ in\ l_1 \} \rightarrow \{r_1\ in\ l_2, c_1 \} \rightarrow \{ c_2, r_2\ exec\ a_1 \} \ldots$ is a trace where the element in position $1$ of the trace indicates that the robot $r_1$ is in location $l_1$, then the element in position $2$ indicates that the robot $r_1$ is in location $l_2$ and condition $c_1$ holds (indicating, for example, that an obstacle is detected), and then the element in position $3$ indicates that condition $c_2$ holds and robot $r_2$ is executing action $a_1$. In the following, with a slight abuse of notation, when a set is a singleton we will omit brackets. We use the notation $(M_x \rightarrow \ldots \rightarrow M_y )^\omega$, where $M_x,\ldots ,M_y \subseteq M$, to indicate a sequence $M_x \rightarrow \ldots \rightarrow M_y $ that repeats infinitely often. We use the notation $l_{\#}$ to indicate any location, e.g., $r_1\ in\ l_1\rightarrow r_1\ in\ l_{\#} \rightarrow r_1\ in\ l_2$ indicates that a robot $r_1$ visits location $l_1$, afterwards any location, and then location $l_2$. We use the notation $l_{\# \setminus K}$, where $K \subset M$, to indicate any possible location not in $K$, e.g., $r_1\ in\ l_1\rightarrow r_1\ in\ l_{\# \setminus \{l_3\}} \rightarrow r_1\ in\ l_2$ indicates that $r_1$ visits $l_1$, then any location except $l_3$ is visited, and finally $l_2$. \textbf{Patterns.} Patterns are organized into three main groups -- core movement (Table~\ref{tab:coremovementpatterns}), triggers (Table~\ref{tab:avoidanceAndTrigger}), and avoidance (Table~\ref{tab:avoidanceAndTrigger}), explained in the following. For simplicity, in Tables~\ref{tab:coremovementpatterns} and~\ref{tab:avoidanceAndTrigger}, we assume that a single robot is considered during the mission specification and we use the notation $l_x$ as a shortcut for $r_1\ in\ l_x$. The examples assume that the environment consists of four locations, namely $l_1$, $l_2$, $l_3$, and $l_4$. \emph{Core movement patterns}. How robots should move within an environment can be divided into two major categories: location coverage and location surveillance. Coverage patterns require a robot to reach a set of locations of the environment. Surveillance patterns require a robot to \emph{keep} reaching a set of locations of the environment. \emph{Avoidance patterns.} Robot movements may be constrained in order to avoid the occurrence of some behavior (Table~\ref{tab:avoidanceAndTrigger}). Avoidance may depend on a condition or impose a bound on the number of occurrences of some event. \emph{Conditional avoidance} generally holds globally (i.e., for the entire behavior) and applies when avoidance of locations or obstacles is sought that depends on some condition. For example, a cleaning robot may avoid visiting locations that have already been cleaned. In the \emph{restricted avoidance} case, avoidance does not hold globally but accounts for a number of occurrences of an avoidance case. Depending on whether the number of occurrences is a maximum, a minimum, or an exact number, \emph{upper}, \emph{lower}, or \emph{exact} restricted avoidance is obtained. For example, a cleaning robot may avoid cleaning a room more than three times. \emph{Trigger patterns.} A robot's reactive behavior based on stimuli, or its inaction until a stimulus occurs, is expressed through the trigger patterns in Table~\ref{tab:avoidanceAndTrigger}. As an example, the definition of the \emph{Strict Ordered Patrolling} mission specification pattern is presented in Fig.~\ref{fig:Strict Ordered Patrolling}. 
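To make the parametric templates in Table~\ref{tab:coremovementpatterns} concrete, the following Python sketch (ours, for illustration only; it is not part of PsAlM\xspace) instantiates the Visit, Sequenced Visit, Patrolling, and Ordered Patrolling formulas for a list of location propositions, spelling the operators textually (F, G, X, U) as commonly accepted by LTL tools.
\begin{verbatim}
# Illustrative sketch (not part of PsAlM): instantiate some core movement
# patterns of Table 1 as textual LTL formulas over n location propositions.

def visit(locs):
    # /\_i F(l_i)
    return " & ".join(f"F({l})" for l in locs)

def sequenced_visit(locs):
    # F(l_1 & F(l_2 & ... F(l_n)))
    formula = locs[-1]
    for l in reversed(locs[:-1]):
        formula = f"{l} & F({formula})"
    return f"F({formula})"

def patrolling(locs):
    # /\_i G(F(l_i))
    return " & ".join(f"G(F({l}))" for l in locs)

def ordered_patrolling(locs):
    # Recurrent sequenced visit, no successor before its predecessor,
    # and no successor revisited before its predecessor (Table 1).
    n = len(locs)
    core = f"G({sequenced_visit(locs)})"
    first_pass = [f"(!{locs[i + 1]} U {locs[i]})" for i in range(n - 1)]
    reorder = [f"G({locs[(i + 1) % n]} -> X(!{locs[(i + 1) % n]} U {locs[i]}))"
               for i in range(n)]
    return " & ".join([core] + first_pass + reorder)

print(ordered_patrolling(["l1", "l2", "l3"]))
\end{verbatim}
For two locations this yields the same four conjuncts as the Strict Ordered Patrolling instance shown above, since for $n=2$ the additional strictness constraint coincides with one of the ordering constraints.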
The patterns in detail are available in our online appendix~\cite{paperstuff}. \subsection{Specification Pattern Tool Support} \label{sec:PsAlMISt} \noindent We used the proposed pattern catalog to express robotic missions requirements and to automatically generate their mission specifications. To support developers in mission design we implemented the tool PsAlM\xspace ~\cite{PSALM}, which allows creating complex mission requirements by composing patterns with simple operators. PsAlM\xspace transforms mission requirements (i.e., composed patterns) into mission specifications in LTL or CTL. \Figref{fig:psaimist} illustrates the components of PsAlM\xspace. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{PsAlMIStStrubure.pdf} \caption{Main components of the PsAlM\xspace tool} \label{fig:psaimist} \end{figure} PsAlM\xspace provides a GUI \circled{\scriptsize{1}} that allows the definition of robotic missions requirements through a structured English grammar, which uses patterns as basic building blocks and AND and OR logic operators to compose these patterns. The structured English grammar and the PsAlM\xspace tool are provided in our online appendix~\cite{paperstuff}. The \texttt{SE2PT} component extracts from a mission requirement the set of patterns that are composed through the AND and OR operators \circled{\scriptsize{2}}. The \texttt{PT2LTL} \circled{\scriptsize{3}} and \texttt{PT2CTL} \circled{\scriptsize{4}} components automatically generate LTL and CTL specifications from these patterns. The produced LTL specifications can be used in different ways -- three possible usages are presented in \figref{fig:psaimist}. The LTL formulae are (i) fed into an existing planner and used to generate plans that satisfy the mission specification \circled{\scriptsize{5}}; (ii) converted into Deterministic B\"uchi automata used as input to the widely used Spectra~\cite{Spectra} robotic application modeling tool \circled{\scriptsize{6}}; and (iii) converted into the NuSMV~\cite{cimatti1999nusmv} input language to be used as input for model checking \circled{\scriptsize{7}}. The plans produced using the planner are (i) used as inputs by the Simbad~\cite{hugues2006simbad} simulation package \circled{\scriptsize{10}}, which is an autonomous robot simulation package for education and research; and (ii) performed by actual real robots \circled{\scriptsize{9}}, as also illustrated in the following section. Produced CTL specifications are also converted into the NuSMV~\cite{cimatti1999nusmv} input language to be used as input for model checking \circled{\scriptsize{7}}. \section{Evaluation} \label{sec:results} \noindent Our evaluation addressed the following two questions. \textbf{RQ1:} How \emph{effective} is the pattern catalog in capturing mission requirements and producing mission specifications? \textbf{RQ2:} Are the proposed mission specifications \emph{correct}? \begin{table*}[t] \scriptsize \caption{Mission specification patterns for \emph{Exp1}. Labels SC1, SC2, \ldots SC5 identify the considered scenarios.} \begin{tabular}{ c p{14cm} p{2.4cm} } \toprule \multicolumn{1}{ c }{\textbf{SC}} & \multicolumn{1}{ c }{\textbf{Description}} & \multicolumn{1}{ c }{\textbf{Patterns}} \\ \midrule SC1 & A robot is deployed within a supermarket and reports about the absence of sold items within a set of locations (i.e. $l1$, $l2$, $l3$, and $l4$). 
Furthermore, if in location $l4$ (where water supplies are present) a human is detected, it has to perform a collaborative grasping action and help the human in placing new water supplies. & Ordered Patrolling,\newline Instantaneous Reaction \\ \midrule SC2 & Three robots are deployed within a hospital environment: a mobile platform (Summit~\cite{Summit}), a manipulator (PA10~\cite{PA10}) and a mobile manipulator (Tiago~\cite{Tiago}), identified in the following as MP, M and MM, respectively. The robot M is deployed in the hospital storage; when items (e.g., towels) are needed by a nurse or doctors, M has to load them onto the MP. MP should reach the location where the nurse is located. If the item is heavy (e.g., heavy medical equipment), MM should reach the location where the nurse is to help unload the equipment. When MP and MM are not required for shipping items, they patrol a set of locations to avoid unauthorized people entering restricted areas of the hospital (e.g., radiotherapy rooms). & Patrolling,\newline Instantaneous Reaction,\newline Ordered Visit,\newline Wait\\ \midrule SC3 & A robot is deployed within a university building to deliver coffee to employees. The robot reaches the coffee machine, uses the coffee machine to prepare the coffee and delivers it to the employee. & Strict Ordered Visit,\newline Instantaneous Reaction\\ \midrule SC4 & A robot is deployed within a shop to check for the presence of intruders during night time. It has to iteratively check for intruders and report on their presence. & Patrolling,\newline Instantaneous Reaction\\ \midrule SC5 & A robot is deployed within a company to notify employees in the presence of a fire alarm. If a fire is detected, the robot is sent to different areas of the company to ask employees to leave the building. & Visit,\newline Instantaneous Reaction\\ \bottomrule \end{tabular} \label{fig:exp1} \end{table*} \textbf{Coverage of Real-World Missions (RQ1).} \noindent We investigated (i) how the pattern catalog supports the specification of mission requirements and (ii) how the pattern catalog reduces ambiguities in mission requirements. \emph{Exp1.} We checked how the pattern catalog supports the formulation of mission requirements (and the generation of mission specifications) in real-world robotic scenarios. To this end, we defined five scenarios (\tabref{fig:exp1}) in collaboration with our industrial partners (BOSCH\ and PAL Robotics). \begin{table}[t] \centering \caption{Results of experiment Exp2. 
Lines contain the total number of mission requirements (MR), the number of not expressible (NE) and ambiguous (A) mission requirements and the number of cases that lead to a consensus (C) and no consensus (NC).} \scriptsize \begin{tabular}{ l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} } \toprule & \multicolumn{11}{ c }{Spectra Robotic Application} & & \\ \midrule & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \multicolumn{1}{c }{11} & \multicolumn{1}{ c }{MP} & \multicolumn{1}{ c }{\textbf{Total}}\\ \midrule MR & 29 & 2 & 22 & 5 & 1 & 159 & 4 & 32 & 47 & 53 & \multicolumn{1}{c }{74} & \multicolumn{1}{ c }{8} & \multicolumn{1}{ c }{436} \\ NE & 3 & 0 & 0 & 0 & 0 & 47 & 0 & 0 & 7 & 1 & \multicolumn{1}{c }{8} & \multicolumn{1}{ c }{0} & \multicolumn{1}{ c }{66}\\ A & 3 & 0 & 2 & 1 & 0 & 35 & 0 & 10 & 12 & 32 & \multicolumn{1}{c }{7} & \multicolumn{1}{ c }{0} & \multicolumn{1}{ c }{102}\\ C & 13 & 0 & 11 & 2 & 1 & 29 & 4 & 8 & 11 & 8 & \multicolumn{1}{c }{20} & \multicolumn{1}{ c }{5} & \multicolumn{1}{ c }{112}\\ NC & 10 & 2 & 9 & 2 & 0 & 48 & 0 & 14 & 17 & 12 & \multicolumn{1}{c }{39} & \multicolumn{1}{ c }{3} & \multicolumn{1}{ c }{156}\\ \bottomrule \end{tabular} \label{tab:exp2missionrequirements} \end{table} The pattern catalog supported the creation of mission requirements using the patterns listed in \tabref{fig:exp1} for the different scenarios. In all the scenarios, PsAlM\xspace allowed the automatic creation of LTL mission specifications from the mission requirements without any human intervention. The mission specifications were then executed by the robots by relying on existing planners (see \figref{fig:psaimist}). Videos of the robots performing the described missions are available in our dedicated website~\cite{paperstuff}. The pattern catalog effectively supports the creation of mission requirements and specifications in realistic, industry-sourced scenarios. \emph{Exp2.} We collected mission requirements in natural language from available requirements produced from Spectra~\cite{Spectra} and LTLMoP~\cite{finucane2010ltlmop,wei2016extended}. Spectra is a tool that supports the design of the robotic applications. LTLMoP is a software package designed to assist in the development, implementation, and testing of high-level robot controllers. We checked how the pattern catalog may have supported developers in the definition of the mission requirements. In the case of Spectra, we used the Spectra files to extract mission requirements for robotic systems. In total, 11 robotic applications were considered. Note that mission requirements are realistic since they were finally executed with real robots~\cite{Examples}. We automatically extracted $428$\ mission requirements from the Spectra file. The number of mission requirements (MR) per robotic application is reported in \tabref{tab:exp2missionrequirements}. In the case of LTLMoP, $8$ requirements were extracted from the corresponding research papers~\cite{finucane2010ltlmop,wei2016extended} (\tabref{tab:exp2missionrequirements} MP column). \begin{table} \caption{Results of experiment Exp2. 
Number of occurrences of each pattern in the considered mission requirements.} \scriptsize \begin{tabular}{ l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} l@{\hspace{1.5mm}} } \toprule Pattern & Occ & Pattern & Occ & Pattern & Occ & Pattern & Occ & Pattern & Occ \\ \midrule Visit & 25 & SeqVisit & 1 & OrdVisit & 1 & InstReact & 127 & GlobAvoid & 25\\ PastAvoid & 60 & DelReact & 50 & Wait & 3 & FutAvoid & 48 & SeqVisit & 1 \\ StrictOrdPat & 1 & OrdVisit & 1 & ExactRest & 1 && \\ \bottomrule \end{tabular} \label{tab:exp2} \end{table} Each mission requirement was independently analyzed by two of the authors. The authors checked whether it was possible to express the mission requirement using the mission specification patterns. If one of the authors stated that the requirement was not expressible, the requirement was marked as not expressible (NE). The number of not expressible mission requirements is reported in the row labeled NE of \tabref{tab:exp2missionrequirements}. If at least one of the authors found the mission requirement ambiguous, it was marked with the flag $A$. Otherwise, the mission requirement was labeled with the mission specification patterns needed to express it. Then, the mission specification patterns used by the two authors to express the mission requirement were compared: if both used the same patterns, a consensus was reached. The number of mission requirements that lead to consensus (resp. no consensus) is indicated in the row labeled C (resp. NC) of \tabref{tab:exp2missionrequirements}. The number of occurrences of each pattern is indicated in Table~\ref{tab:exp2}. The results show that most of the mission requirements ($370$ over $436$) were expressible using the pattern catalog, which is a reasonable coverage for pattern catalog usage. The $66$ mission requirements that were not covered suggested the introduction of new patterns, identified in Fig.~\ref{fig:specificationPatternSystem} with a dashed border. This also shows that the pattern catalog is effective in real-case scenarios. In $102$ cases the mission requirements were ambiguous, meaning that different interpretations can be given to the proposed mission requirement. In these cases, alternative combinations of patterns were proposed by the authors to express the mission requirement. Each of these alternatives represents a possible way of expressing it in a non-ambiguous manner. In $156$ cases, while the authors judged that the requirement was not ambiguous, different pattern combinations were proposed. The combinations of patterns encode possible ways of expressing the mission requirement in a non-ambiguous manner. \emph{Exp3.} We analyzed the mission specifications contained in the Spectra examples collected in Exp2. We collected $1216$\ distinct LTL mission specifications and we analyzed each of these specifications\footnote{This number differs from the one of Exp2, since some specifications were not related to a mission requirement in the form of natural language.}. We verified whether it is possible to obtain the mission specifications starting from the proposed patterns, by performing the following steps. {\scshape{(Step.1)}} For each property we automatically checked whether it was an instance of a mission specification pattern or a simple combination of mission specification patterns. 
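One simple way to realize such a check -- shown here only as an illustrative sketch, not as the actual implementation used in the study -- is to normalize each formula string and match it against regular-expression templates derived from the pattern formulas; all names and templates below are our own placeholders.
\begin{verbatim}
import re

# Illustrative sketch only: approximate the Step 1 check by abstracting
# proposition names to `p' and matching against a few pattern templates.

OPS = {"G", "F", "X", "U", "W", "R"}

TEMPLATES = {
    "Visit":                  r"F\(p\)(&F\(p\))*",
    "Patrolling":             r"G\(F\(p\)\)(&G\(F\(p\)\))*",
    "Global avoidance":       r"G\(!p\)",
    "Instantaneous reaction": r"G\(p->p\)",
}

def normalize(formula):
    """Drop whitespace and replace every non-operator identifier by `p'."""
    f = re.sub(r"\s+", "", formula)
    return re.sub(r"[A-Za-z_][A-Za-z_0-9]*",
                  lambda m: m.group(0) if m.group(0) in OPS else "p", f)

def match_pattern(formula):
    """Return the name of the first template the formula matches, if any."""
    norm = normalize(formula)
    for name, template in TEMPLATES.items():
        if re.fullmatch(template, norm):
            return name
    return None

print(match_pattern("G(F(l1)) & G(F(l2))"))       # Patrolling
print(match_pattern("G(robot_in_l2 -> grasp)"))   # Instantaneous reaction
\end{verbatim}
A production implementation would instead parse the formulas and check pattern instances modulo semantic equivalence (e.g., with an automata-based equivalence check), but the sketch conveys the idea behind Step 1.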
Results are shown in Table~\ref{tab:exp3}. Among the $1216$\ mission specifications, $424$\ were obtainable from the proposed patterns. {\scshape{(Step.2)}} We considered the properties that did not match any of the proposed patterns. $127$ of these properties are simple statements on the initial state of the system (no temporal operator is used), and thus did not match any of the proposed patterns. $442$ formulae concern variations of the trigger patterns, which we have added to the pattern catalog. $224$ formulae still did not match any of the proposed patterns. After analysis, $155$ among them were expressed using past temporal operators, which are not used in the mission specifications proposed in this work. In Step 3 we checked whether these specifications might be reformulated without the past operators. The remaining $69$ properties, although they could be rewritten using the proposed patterns, are written as complex LTL formulae and thus do not match any of our patterns or combinations of them. {\scshape{(Step.3)}} We considered the $155$ properties expressed using past temporal operators and we designed mission specifications for them. We found that $129$ of the proposed LTL formulae match one of the proposed patterns, while $26$ are complex LTL formulae that do not match any of the patterns. Thus, the final coverage of the proposed pattern catalog is $92\%$. We then analyzed $13$ mission specifications expressed in the form of LTL properties considered in~\cite{ruchkin2018ipl} and $22$ PCTL properties considered in~\cite{ruchkin2018ipl}, transformed into CTL by replacing the probabilistic operator ($\mathcal{P}$) with the universal quantifier ($\forall$). Given the small number of mission specifications, we manually checked for the presence of patterns in the formulae (Step 1 in Table~\ref{tab:exp3}). The results show that the pattern system was able to generate almost all mission specifications ($1154$ over $1251$). \begin{table} \centering \scriptsize \caption{Results of experiment Exp3. 
Pattern occurrence in the considered mission specifications.} \begin{tabular}{ c c c c c } \toprule & & \multicolumn{2}{c }{LTL} & CTL \\ \midrule & \multicolumn{1}{c }{Pattern} & Spectra & \cite{ruchkin2018ipl} & \cite{ruchkin2018ipl} \\ \midrule \multirow{5}{*}{\raisebox{\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{2cm}{Step 1}}}}& \multicolumn{1}{c }{Instantaneous reaction} & 318 & 0 & 0\\ & \multicolumn{1}{c }{Visit} & 52 & 0 & 0\\ & \multicolumn{1}{c }{Patrolling} & 0 & 1 & 0\\ & \multicolumn{1}{c }{Strict Ordered Visit} & 0 & 9 &18 \\ & \multicolumn{1}{c }{Wait} & 0 & 1 & 2\\ & \multicolumn{1}{c }{Avoidance/Invariant} & 21 & 0 & 0 \\ & \multicolumn{1}{c }{Visit and Instantaneous reaction} & 18 & 0 & 0\\ & \multicolumn{1}{c}{Strict Ordered Visit and Global Avoidance} & 0 & 0 & 1\\ & \multicolumn{1}{c}{Reaction chain (chain of instantaneous reactions)} & 15 & 0 & 0 \\ & \multicolumn{1}{c}{Non matching} & 792 & 1 & 1 \\ \midrule \multirow{6}{*}{\raisebox{\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1cm}{ Step 2}}}} & \multicolumn{1}{c }{Init} & 127 & - & -\\ & \multicolumn{1}{c }{Fast reaction} & 379 & - & -\\ & \multicolumn{1}{c }{Binded reaction} & 36 & - & -\\ & \multicolumn{1}{c }{Binded delay} & 27 & - & -\\ & \multicolumn{1}{c }{Non matching for past} & 155 & - & -\\ & \multicolumn{1}{c }{Actual non matching} & 69 & - & - \\ \midrule \multirow{3}{*}{\raisebox{\normalbaselineskip}[0pt][0pt]{\rotatebox[origin=c]{90}{\parbox{1cm}{Step 3}}}} & \multicolumn{1}{c}{Fast reaction} & 103 & - & - \\ & \multicolumn{1}{c }{Binded delay} & 26 & - & - \\ & \multicolumn{1}{c }{Actual non matching} & 26 & - & -\\ \bottomrule \end{tabular} \label{tab:exp3} \end{table} \begin{table} \centering \scriptsize \caption{Results of experiments \emph{Exp4}, \emph{Exp5} and \emph{Exp6}. For \emph{Exp4} columns contain the number of times a plan is found ($\top$) and not found ($\bot$). For \emph{Exp5} and \emph{Exp6} columns contain the number of times the mission requirement is satisfied ($\top$) and violated ($\bot$).} \label{tab:expRQ2} \begin{tabular}{ c c c c c c c } \toprule & \multicolumn{2}{c }{Exp4} & \multicolumn{2}{c }{Exp5} & \multicolumn{2}{c }{Exp6} \\ \midrule \multicolumn{1}{ c }{\textbf{Mission Requirement}} & $\top$ & $\bot$ & $\top$ & $\bot$ & $\top$ & $\bot$ \\ \midrule \multicolumn{1}{ c }{OrdPatrol,UpperRestAvoid,Wait} & 2 & 10 & 1 & 11 & 1 & 11 \\ \multicolumn{1}{ c }{FairVisit,ExactRestAvoid$^\ast$,DelReact} & 5 & 7 & 0 & 12 & 4 & 8 \\ \multicolumn{1}{ c }{StrOrdVisit,GlobalAvoid,InstReact} & 3 & 9 & 1 & 11 & 1 & 11\\ \multicolumn{1}{ c }{SeqVisit,FutAvoid,BindDel$^\ast$} & 1 & 11 & 0 & 12 & 2 & 10\\ \multicolumn{1}{ c }{OrdVisit,PastAvoid,InstReact} & 3 & 9 & 1 & 11 & 1 & 11\\ \multicolumn{1}{ c }{Visit,LowRestAvoid,BindReact} & 3 & 9 & 1 & 11 & 1 & 11\\ \multicolumn{1}{ c }{StrictOrdPatrol,FutAvoid,Wait} & 1 & 11 & 1 & 11 & 1 & 11\\ \multicolumn{1}{ c }{Patrol,LowRestAvoid,InstReact}& 3 & 9 & 1 & 11 & 1 & 11\\ \multicolumn{1}{ c }{FairPatrol,ExactRestAvoid$^\ast$,DelReact} & 3 & 9 & 0 & 12 & 4 & 8\\ \multicolumn{1}{ c }{SeqPatrol,UpperRestAvoid,FastReact$^\ast$} & 1 & 11 & 0 & 12 & 2 & 10\\ \bottomrule \end{tabular} \end{table} \textbf{Summary.} The pattern catalog is effective in supporting developers in defining mission requirements and in generating mission specifications. Exp1 and Exp2 show that the pattern catalog effectively supports the definition of mission requirements. 
Exp2 also shows that the pattern catalog helps in reducing ambiguities in available mission requirements. Exp1 and Exp3 show that the pattern catalog effectively supports the generation of mission specifications. Exp1 shows how the pattern catalog can be used to generate precise, unambiguous, and formal mission specifications in industry-sourced scenarios. \textbf{Correctness of the Patterns (RQ2).} \noindent To verify the mission specifications (LTL and CTL formulas) we manually reviewed them and performed random testing to confirm that the specifications do not permit undesired system behaviors that were not detected during the manual check. \emph{Manual check.} We manually inspected instances of the patterns obtained by fixing the number of locations to be visited, the conditions to be considered, etc. For LTL formulae we used SPOT~\cite{spot} to generate B\"uchi automata (BA) encoding the traces of the system allowed and forbidden by the specification. We manually inspected the BA of all the proposed patterns. \emph{Random testing.} We performed testing by exploiting a set of randomly generated models, a widespread technique for evaluating artifacts in the software engineering community~\cite{tabakov2005experimental,de2006antichains,tabakov2007model,saadatpanah2012comparing,famelis2012partial,FM2016,fase2018}, also used in the robotics community~\cite{menghi2018multi,DBLP:conf/icse/MenghiGPT18,best2016multi,takacs2009multi,stentz1996map}. We generated $12$ scenarios representing the structure of buildings containing $16$ locations, where a robot is deployed. Each building was generated by allocating $12$ traversable locations and $4$ locations that cannot be crossed on a $4 \times 4$ matrix. Identifiers $l_0, l_1, \ldots, l_{11}$ are randomly assigned to the traversable locations. In $6$ of the $12$ scenarios the robot can move among adjacent cells that are traversable, while it cannot enter non-traversable locations. In the other $6$ scenarios the robot can move among the adjacent cells by respecting the following rules: (i) it can move from a traversable cell with coordinate $[i,j]$ to a traversable cell with coordinate $[i,j+1]$ and $[i+1,j]$; (ii) it can move from a traversable cell with coordinate $[i,j]$ to another with coordinate $[h,k]$, where $i$ (resp. $h$) is the maximum (resp. minimum) row index of a cell that corresponds to a traversable location and $j$ (resp. $k$) is the maximum (resp. minimum) column index of the traversable locations at row $i$ (resp. $h$). Conditions and actions are modeled by considering whether a box is present in a location ($cond$ in the following) and the capability of the robot to change its color ($act$ in the following). We randomly selected $4$ traversable locations in which $cond$ is true and $4$ locations in which $act$ can be performed. For each scenario we considered different mission requirements, each obtained by randomly combining a core movement, a trigger, and an avoidance pattern, and by ensuring that each pattern is used in at least one mission requirement. In total we generated $10$ mission requirements (Table~\ref{tab:expRQ2}). Core movement patterns are parametrized with locations $l_1, l_2$. The upper, exact, and lower restricted avoidance patterns are parametrized by forcing the robot to visit location $l_3$ at most, exactly, and at least $2$ times, respectively. 
The global avoidance pattern forces the robot to not visit $l_3$, while the future and past avoidance force the robot to not visit $l_3$ after and before condition $cond$ is satisfied, i.e., a room that contains a box is visited. The wait pattern forces the robot to wait in location $l_4$ if a box is not present. The other trigger patterns are parametrized with the action $act$ that must be executed by the robot in relation with the occurrence of condition $cond$. We subsequently performed the following experiments. \emph{Exp4.} We generated the LTL specifications of the considered mission requirements. We (i) negated the LTL specification; (ii) encoded the specification and the model of the scenario in NuSMV~\cite{cimatti1999nusmv}; (iii) used NuSMV to check whether the models contained a path that satisfied the mission specification (violates its negation). If a plan was present we used Simbad~\cite{hugues2006simbad} to simulate the robot executing the plan. We verified whether the results were correct: when we expected a plan to not be present in the given model, NuSMV was not able to compute it, and, when a plan was expected to be present it was computed by NuSMV. We also checked the correctness of the generated plans using the Simbad simulator. Results confirm the correctness of the LTL mission specifications. The column labeled with the $\top$ (resp. $\bot$) symbol of Table~\ref{tab:expRQ2} contains the number of cases in which a plan was (resp. was not) present. \emph{Exp5.} We generated LTL and CTL specifications for the considered mission requirements. We (i) encoded the LTL and CTL specifications and the model of the scenario in NuSMV~\cite{cimatti1999nusmv}; (iii) we used NuSMV to check whether the verification of the specifications returned the same results. Table~\ref{tab:expRQ2} contains the number of cases in which the mission requirement was satisfied ($\top$) and not satisfied ($\bot$). Mission requirements were generally not satisfied, since for being satisfied they have to hold on all the paths of the models. NuSMV always returned the same results for LTL and CTL specifications confirming the correctness of CTL specifications. \emph{Exp6.} We investigated why in several cases the mission requirement was always not satisfied. In these cases we relaxed the mission requirements, by removing the patterns marked with the $^\ast$ symbol in Table~\ref{tab:expRQ2}. We executed the same steps of \emph{Exp4}. Table~\ref{tab:expRQ2} confirmed that by relaxing the mission requirements there were cases in which the mission requirement was actually satisfied. This is a further confirmation that the mission specifications are correct. \section{Discussion and Related Work} \label{sec:discussion} \noindent We discuss the proposed patterns and present related work. \emph{Methodology.} The number of mission requirements analyzed is in line with other approaches in the field~\cite{dwyer1999patterns,grunske2008specification,konrad2005real,autili2015aligning,bianculli2012specification}. These requirements usually come from exemplar scenarios used to provide evaluation about effectiveness of research-intensive works. As such, we believe that the scope of the pattern system is quite wide. Our study is certainly not exhaustive, as (i) formal specification in robotic application spreads, and (ii) the types of mission specifications change over time. As shown in the evaluation, patterns will grow over time as specifications that do not belong to the catalog are provided. 
\emph{Patterns.} While the presented patterns are mainly conceived to address the needs of robotic mission specification, they are more generic and can be applied whenever the need is to specify some ``ordering'' among events or action executions. Rather than predicating only on robots reaching a set of locations, coverage and surveillance patterns may also include propositions that refer to generic events. In this sense, the proposed patterns can be considered as an extension of the property specification patterns~\cite{OrderSpecificationPatterns,dwyer1999patterns} that explicitly address different orderings among the occurrences of a set of events. While in this paper we proposed a direct encoding in LTL and CTL, the patterns may also be expressed in terms of standard property specification patterns. The instantaneous reaction pattern may be obtained from the response pattern scoped with the global operator. The precedence chain and response chain patterns~\cite{OrderSpecificationPatterns,dwyer1999patterns} (which capture the 2 cause-1 effect and 1 cause-2 effects chains) can be composed with the precedence and response patterns to specify different orderings among a set of events. \emph{Evaluation.} The Spectra tool only supports specifications captured by the GR(1) LTL fragment, used to describe three types of guarantees: initial, safety, and liveness. Initial guarantees constrain the initial states of the environment. Safety guarantees start with the temporal operator $\LTLg$ and constrain the current and the next state. Liveness guarantees start with the temporal operators $\LTLg \LTLf$ and cannot include the $\LTLx$ operator. These constraints justify the prevalence of patterns presented in Tables~\ref{tab:exp2},~\ref{tab:exp3}, and~\ref{tab:expRQ2}. While the proposed patterns can be expressed using deterministic B\"uchi automata (DBA), which can be translated into GR(1) formulae~\cite{maozsynthesis}, a manual encoding of the proposed patterns in GR(1) is complex and error prone. This is confirmed by a recent analysis of which standard property specification patterns can be expressed in GR(1), together with an automatic procedure that maps these patterns onto formulae in the GR(1) fragment~\cite{maozsynthesis}. All of the patterns proposed in this work are expressible using GR(1) formulae, and the automatic procedure presented in~\cite{maozsynthesis} can be integrated into PsAlM\xspace\ to generate Spectra formulae. \textbf{Related work.} Temporal logic specification patterns are a well-known solution to support developers in requirement specification~\cite{dwyer1999patterns,konrad2005real,grunske2008specification,autili2015aligning,Paun99,Remenska2014,Castillos2013}. The use of property specification patterns in specific domains has been investigated in the literature, including service-based applications~\cite{bianculli2012specification}, safety~\cite{Bitsch2001}, and security~\cite{Spanoudakis2007}. However, to the best of our knowledge, no work has considered mission patterns for robotic applications. Domain Specific Languages (DSLs)~\cite{Schmidt2006,Ciccozzi4496,Ruscio2016,Bozhinoski2015,Adam2014} have been proposed for various purposes, including the production and analysis of behaviour descriptions, property verification, and planning. However, features incorporated within DSLs are usually arbitrarily chosen by relying on the domain-specific experience of robotic engineers. 
Instead, the specification patterns presented in this paper are collected from missions encountered in the scientific literature, evaluated in industrial use cases, and aim at supporting a wide range of robotic needs. We believe that the presented patterns constitute basic building blocks that can be reused within existing and new robotic DSLs. Moreover, support for developers in solving the mission specification problem is also provided in the literature by graphical tools that simplify the specification of LTL formulae~\cite{lee1997graphical,smith2001events,srinivas2013graphical}. Our work is complementary to those approaches; graphical logic mission specifications can also be integrated within PsAlM\xspace . \section{Conclusion} \label{sec:conclusion} \noindent We proposed a pattern catalog for mission specification of mobile robots. We identified patterns by analyzing mission requirements that have been systematically collected from scientific publications. We presented PsAlM\xspace , a tool that uses the proposed patterns to support developers in designing complex missions. We evaluated (i) the support provided by the catalog in the definition of real-world missions and (ii) the correctness of the mission specifications contained in our pattern catalog. Future extensions of our mission specification pattern catalog will also consider time, space, and probability. We will also investigate the use of spatial logics~\cite{aiello2007handbook,papadimitriou1996topological,bivand2013spatial,erwtimatiko,cardelli2002spatial} to express more complex spatial robotic behaviours, and we will perform user studies. \balance \bibliographystyle{IEEEtran}
\section{Introduction} Cosmologists assume that the Universe can be described as a manifold. Mathematicians characterize manifolds in terms of their geometry and topology. Thus, two fundamental questions regarding our understanding of the Universe concern its geometry and topology. An important difference between these two attributes is that while geometry is a local characteristic that gives the intrinsic curvature of a manifold, topology is a global feature that characterizes its shape and size. Within the framework of standard cosmology, the Universe is described by a space-time manifold $\mathcal{M}_4 = \mathbb{R} \times M$ with a locally homogeneous and isotropic Robertson-Walker (RW) metric \begin{equation} \label{RWmetric} ds^2 = -dt^2 + a^2 (t) \left [ d \chi^2 + f^2(\chi) (d\theta^2 + \sin^2 \theta d\phi^2) \right ] \;, \end{equation} where $f(\chi)=(\chi\,$, $\sin\chi$, or $\sinh\chi)$, depending on the sign of the constant spatial curvature ($k=0,1,-1$). The spatial section $M$ is usually taken to be one of the simply connected spaces, namely, Euclidean $\mathbb{R}^3$, spherical $\mathbb{S}^3$, or hyperbolic $\mathbb{H}^3$. However, this is an assumption that has led to a common misconception that the curvature $k$ of $M$ is all one needs to decide whether the spatial section is finite or not. In a spatially homogeneous and isotropic Universe, for instance, the geometry, and therefore the corresponding curvature of the spatial sections $M$, is determined by the total matter-energy density $\Omega_{\mathrm{tot}}$. This means that the geometry or the curvature of $M$ is observable, i.e. for $\Omega_{\mathrm{tot}} < 1$ the spatial section is negatively curved ($k=-1$), for $\Omega_{\mathrm{tot}} = 1$ it is flat ($k=0$), while for $\Omega_{\mathrm{tot}} > 1$ $M$ is positively curved ($k=1$). In consequence, a key point in the search for the (spatial) geometry of the Universe is to use observations to constrain the density $\Omega_{\mathrm{tot}}$. In the context of the standard $\Lambda$CDM model (which we adopt in this work), this amounts to determining regions in the $\Omega_{\Lambda} - \Omega_{m}$ parametric plane that consistently account for the observations, and from which one expects to deduce the geometry of the Universe. As a matter of fact, the resulting regions in this parametric plane also give information on the dynamics of the Universe as, for example, whether an accelerated expansion is indicated by the observations, and on the possible behaviors regarding the expansion history of the Universe (eternal expansion, recollapse, bounce, etc.). However, geometry constrains, but does not dictate, the topology of the $3$-manifold $M$. Indeed, for the Euclidean geometry ($k=0$) besides $\mathbb{R}^{3}$, there are 17 classes of topologically distinct spaces $M$ that can be endowed with this geometry, while for both the spherical ($k=1$) and hyperbolic ($k=-1$) geometries there is an infinite number of topologically inequivalent manifolds with non-trivial topology that admit these geometries. Over the past few years, distinct approaches to probe a non-trivial topology of the Universe,% \footnote{In this article, in line with the usage in the literature, by topology of the Universe we mean the topology of the space-like section $M$.} using either discrete cosmic sources or cosmic microwave background radiation (CMBR), have been suggested \citep[see, e.g., the review articles of][]{Lach1995,Satrk1998,Levin2002,RG2004,BSCG}. 
An immediate observational consequence of a detectable non-trivial topology of the $3$-space $M$ is that the sky will show multiple (topological) images of either cosmic objects or specific spots of CMBR \citep{GRT2001a,GRT2001b,WLU2003,Weeks2003}. The so-called ``circles-in-the-sky" method \citep{CSS1998}, for instance, relies on multiple images of correlated circles in the CMBR maps. In a space with a detectable non-trivial topology, the sphere of last scattering intersects some of its topological images along the circles-in-the-sky, i.e., pairs of matching circles of equal radii, centered at different points on the last scattering sphere (LSS), with the same distribution (up to a phase) of temperature fluctuations, $\delta T$, along the correlated circles. Since the mapping from the last scattering surface to the night-sky sphere preserves circles \citep{CGMR05}, the correlated circles will be written in the CMBR anisotropy maps regardless of the background geometry and for any non-trivial detectable topology. As a consequence, to observationally probe a non-trivial topology, one should scrutinize the full-sky CMBR maps to extract the correlated circles, whose angular radii, matching phase, and relative position of their centers can be used to determine the topology of the Universe. Thus, a non-trivial cosmic topology is an observable and can be probed for all locally homogeneous and isotropic geometries, without any assumption concerning the cosmological density parameters. In this regard, the question as to whether one can use this observable to either determine the geometry or set constraints on the density parameters naturally arises. Regarding the geometry it is well-known that the topology of $M$ determines the sign of its curvature \citep[see, e.g.,][]{BernshteinShvartsman1980}. Thus, the topology of the spatial section of the Universe dictates its geometry. At first sight, this seems to indicate that the bounds on the density parameters $\Omega_{m}$ and $\Omega_{\Lambda}$ arising from the detection of cosmic topology should be very weak, in the sense that they would only determine whether the density parameters of the Universe take values in the regions below, above, or on the flat line $\Omega_{\mathrm{tot}} = \Omega_{\Lambda} + \Omega_{m}=1$. In this article, however, we show that, contrary to this indication, the detection of the cosmic topology through the ``circles-in-the-sky" method gives rise to very tight constraints on the density parameters. To this end, we use the Poincar\'e dodecahedral space as the observable topology of the spatial sections of the Universe to reanalyze the current SNe Ia constraints on the parametric space $\Omega_{m} - \Omega_{\Lambda}$, as provided by the so-called \emph{gold} sample of 157 SNe Ia given by \citet{rnew}. As a result, we show that the knowledge of cosmic topology provides very strong and complementary constraints on the region of the density parametric plane allowed by SNe Ia observations, drastically reducing the inherent degeneracies of current SNe Ia measurements. 
\section{SNe Ia observations and cosmic topology} The value of the total density $\Omega_{\mathrm{tot}}=1.02 \pm\, 0.02$ reported by the WMAP team \citep{WMAP-Spergel}, which favors a positively curved Universe, and the low power measured by WMAP for the CMBR quadrupole ($\ell=2$) and octopole ($\ell=3$) moments, have motivated the suggestion by \citet{Poincare} of the Poincar\'e dodecahedral space topology as a possible explanation for the anomalous power of these low multipoles. They found that the power spectrum of the Poincar\'e dodecahedral space's fits the WMAP-observed small power of the low multipoles, for $\Omega_{\mathrm{tot}}\simeq 1.013$, which clearly falls within the interval suggested by WMAP. Since then, the dodecahedral space has been examined in various studies \citep{Cornish,Roukema,Aurich1,Gundermann,Aurich2}, in which further features of the model have been carefully considered. As a result, it turns out that a Universe with the Poincar\'e dodecahedral space section accounts for the suppression of power at large scales observed by WMAP, and fits the WMAP temperature two-point correlation function for $ 1.015 \leq\Omega_{\mathrm{tot}} \leq 1.020$ \citep{Aurich1,Aurich2}, retaining the standard Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) foundation for local physics. A preliminary search failed to find the antipodal matched circles in the WMAP CMBR sky maps predicted for the Poincar\'e dodecahedral space model \citep{Cornish}. In a second search, indications for these correlated circles were found, but due to noise and foreground structure of the CMBR maps, no final conclusion has been drawn \citep{Aurich3}. We also note that the Doppler and integrated Sachs-Wolfe contributions to the circles-in-the-sky are strong enough to blur the circles, and thus the matched circles can be overlooked in the CMBR sky maps \citep{Aurich1}. Additional effects such as the Sunyaev-Zeldovich effect and the finite thickness of the LSS, as well as possible systematics in the removal of the foregrounds, can further damage the topological circle matching. On these observational grounds, in what follows, we shall assume the Poincar\'e dodecahedron model. \subsection{SNe Ia plus cosmic topology analysis} To study the consequences of the FLRW model with the Poincar\'e dodecahedral space section $\mathcal{D}$, we begin by recalling that this model predicts six pairs of antipodal matched circles on the LSS, centered in a symmetrical pattern like the faces of the dodecahedron. Clearly the distance between the centers of each pair of circles is twice the radius $r_{inj}$ of the sphere inscribable in $\mathcal{D}$. Now, a straightforward use of a Napier's rule to the right-angled spherical triangle with elements $r_{inj}$, the angular radius $\alpha$ of a matched circle, and the radius $\chi^{}_{lss}$ of the last scattering sphere (see Fig.~\ref{CinTheSky}), furnishes \begin{equation} \label{cosalpha} \cos \alpha = \frac{\tan r_{inj}}{\tan \chi^{}_{lss} }\;, \end{equation} where $r_{inj} = \pi/10$ for the dodecahedron. 
Note that $\chi^{}_{lss}$ depends only on the cosmological scenario, and for the $\Lambda$CDM model it reads (in units of the curvature radius) \begin{equation} \label{redshift-dist} \chi^{}_{lss}= \sqrt{|\Omega_k|} \int_1^{1+z_{lss}} \hspace{-4mm} \frac{dx}{\sqrt{\Omega_{m}x^3 + \Omega_kx^2 + \Omega_{\Lambda}}} \;, \end{equation} where $\Omega_k = 1-\Omega_{\mathrm{tot}}$ and $z_{lss}=1089$ \citep{WMAP-Spergel}. \begin{figure} \centering \includegraphics[width=9cm]{f1.eps} \caption{A schematic illustration of two antipodal matching circles in the sphere of last scattering. The relation between the angular radius $\alpha$ and the angular sides $r_{inj}$ and $\chi^{}_{lss}$ is given by the following Napier's rule for spherical triangles: $\sin (\pi/2 - \alpha) = \tan r_{inj}\, \tan (\pi/2 - \chi^{}_{lss})$ \citep[see, e.g.,][]{Coxeter}. } \label{CinTheSky} \end{figure} Equations~(\ref{cosalpha}) and~(\ref{redshift-dist}) give the relations between the angular radius $\alpha$ and the cosmological density parameters $\Omega_{\Lambda}$ and $\Omega_{m}$, and thus can be used to set bounds on these parameters. To quantify this, we proceed in the following way. Firstly, we take the angular radius $\alpha = 50^\circ$ estimated in \citet{Aurich1}. Secondly, we note that measurements of the radius $\alpha$ unavoidably involve observational uncertainties, and therefore, in order to set constraints on the density parameters from the detection of cosmic topology, one should take such uncertainties into account. To obtain very conservative results, we take $\delta {\alpha} \simeq 6^\circ$, the scale below which the circles are blurred \citep{Aurich1}. In our statistical analysis, we use SNe Ia data from \citet{rnew}. The total sample presented in that reference consists of 186 events distributed over the redshift interval $0.01 \lesssim z \lesssim 1.7$ and constitutes the compilation of observations made by two supernova search teams, plus 16 new events observed by the Hubble Space Telescope (HST). This total data set was initially divided into ``high-confidence'' (\emph{gold}) and ``likely but not certain'' (\emph{silver}) subsets. Here, we consider only the 157 events that constitute the so-called \emph{gold} sample. The confidence regions in the parametric space $\Omega_m - \Omega_{\Lambda}$ are determined by defining a probability distribution function ${\cal{L}} = \int{ e^{-\chi^{2}(\mathbf{p})/2}dh}$, where $\mathbf{p}$ stands for the parameters $\Omega_m$, $\Omega_{\Lambda}$, and $h$, and we have marginalized over all possible values of the Hubble parameter $h$ \citep[for some recent SNe Ia analyses see][]{CP03,NP04,APSNIa04}. The Poincar\'e dodecahedral space topology is added to the SNe Ia data as a Gaussian prior on the value of $\chi^{}_{lss}$, which can easily be obtained from Eqs.~(\ref{cosalpha})~--~(\ref{redshift-dist}). \begin{figure} \centering \includegraphics[width=9cm]{f2.eps} \caption{The 68.3\%, 95.4\%, and 99.7\% confidence regions in the density parametric plane, which arise from the SNe Ia plus dodecahedral space topology analysis. The best-fit values for the dark matter and dark energy density parameters are, respectively, $\Omega_m = 0.316^{+0.011}_{-0.009}$ and $\Omega_{\Lambda} = 0.706^{+0.010}_{-0.009}$ at a 95.4\% confidence level. The value of the total density parameter, as well as the angular radius of the circles and the corresponding uncertainties, are also displayed. 
} \label{Top+SNIa} \end{figure} Figure~\ref{Top+SNIa} shows the results of our joint SNe Ia plus cosmic topology analysis. There, we display the confidence regions (68.3\%, 95.4\%, and 99.7\%) in the parametric plane $\Omega_m - \Omega_{\Lambda}$. Compared to the conventional SNe Ia analysis, i.e. the one with no such cosmic topology assumption \citep[see, e.g., Fig.~8 of][]{rnew}, it is clear that the effect of the cosmic topology as a new cosmological observable is to considerably reduce the area corresponding to the confidence intervals in the parametric space $\Omega_m - \Omega_{\Lambda}$, as well as to break degeneracies arising from the current SNe Ia measurements. The best-fit parameters for this joint analysis are $\Omega_m = 0.316$ and $\Omega_{\Lambda} = 0.706$ with a reduced $\chi^2_{min}/\nu \simeq 1.13$ (where $\nu$ is the number of degrees of freedom). At a 95.4\% confidence level (c.l.) we found $\Omega_m = 0.316^{+0.010}_{-0.009}$ and $\Omega_{\Lambda} = 0.706 \pm 0.010$, which corresponds to $\Omega_{\mathrm{tot}} = 1.022 \pm 0.014$. Note that this value of the total energy density parameter derived from our SNe Ia plus topology statistics is in full agreement with the value reported by the WMAP team, $\Omega_{\mathrm{tot}} = 1.02 \pm 0.02$ \citep{WMAP-Spergel}, as well as with the values obtained by fitting the Poincar\'e dodecahedral power spectrum for low multipoles with the WMAP data, i.e. $ 1.015 \leq \Omega_{\mathrm{tot}} \leq 1.020$ \citep{Aurich1} and $\Omega_{\mathrm{tot}}\simeq 1.013$ \citep{Poincare}. Concerning the above analysis, it is also worth emphasizing three important aspects at this point. First, the range $1.015 \leq \Omega_{\mathrm{tot}} \leq 1.020$ in which the Poincar\'e dodecahedral space model fits the WMAP data (and also gives rise to six pairs of matching circles) has not been used as a prior of our statistical data analysis. Second, the best-fit values for both $\Omega_m$ and $\Omega_{\Lambda}$ (and, consequently, for $\Omega_{\mathrm{tot}}$) depend very weakly on the value used for the angular radius $\alpha$ of the circle. As an example, by assuming $\alpha = 11^\circ \pm 1^\circ$, as suggested in \citet{Roukema}, it is found that $\Omega_m = 0.312^{+ 0.078}_{-0.072}$, $\Omega_{\Lambda} = 0.698^{ + 0.072}_{ - 0.078}$, and $\Omega_{\mathrm{tot}}= 1.010\pm0.002$ at a 95.4\% c.l., which is very close to the values found by considering $\alpha = 50^\circ$ \citep{Aurich1} with an uncertainty of $6^\circ$. Third, the uncertainty in the value of the radius $\alpha$ alters the width of the confidence regions, without having a significant effect on the best-fit values. Finally, we also notice that, by imposing the topological prior, the estimated value for the matter density parameter is surprisingly close to those suggested by dynamical or clustering estimates \citep[see, e.g.,][]{calb,DBW1997,wm1,wm,Pope2004}. On the other hand, as shown in \citet{rnew} \citep[see also][]{CP03,NP04,APSNIa04}, the conventional SNe Ia analysis (without the above cosmic topology constraint) provides $\Omega_m \simeq 0.46$, which is $\sim 1\sigma$ off from the central value obtained by using independent methods, as, for instance, the mean relative peculiar velocity measurements for pairs of galaxies~\citep{wm1}. 
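For concreteness, the mapping from the density parameters to the expected circle radius given by Eqs.~(\ref{cosalpha}) and (\ref{redshift-dist}) can be evaluated numerically. The following Python snippet is a minimal sketch of this calculation, not part of our statistical pipeline; it is valid only for closed models ($\Omega_{\mathrm{tot}} > 1$), and the size of the integration grid is an arbitrary numerical choice.
\begin{verbatim}
# Minimal numerical sketch (not part of our statistical pipeline) of the
# two relations above: chi_lss is integrated in units of the curvature
# radius and converted into the angular radius alpha of the matched
# circles for the Poincare dodecahedral space (r_inj = pi/10).
import numpy as np

Z_LSS = 1089.0         # redshift of the last scattering surface
R_INJ = np.pi / 10.0   # injectivity radius of the dodecahedral space

def chi_lss(omega_m, omega_lambda, n_steps=200000):
    """Radius of the last scattering sphere in curvature-radius units."""
    omega_k = 1.0 - (omega_m + omega_lambda)
    x = np.linspace(1.0, 1.0 + Z_LSS, n_steps)
    f = 1.0 / np.sqrt(omega_m * x**3 + omega_k * x**2 + omega_lambda)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoid
    return np.sqrt(abs(omega_k)) * integral

def circle_radius(omega_m, omega_lambda):
    """Angular radius alpha of the matched circles, in degrees."""
    cos_alpha = np.tan(R_INJ) / np.tan(chi_lss(omega_m, omega_lambda))
    return np.degrees(np.arccos(cos_alpha))

if __name__ == "__main__":
    # Best-fit values quoted in the text.
    print("alpha = %.1f deg" % circle_radius(0.316, 0.706))
\end{verbatim}
For the best-fit parameters quoted above, this evaluation returns a value of $\alpha$ close to the $50^\circ$ estimate of \citet{Aurich1} adopted in our analysis, as expected.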
\section{Final remarks} Fundamental questions, such as whether the Universe will expand forever or eventually re-collapse and what are its shape and size, are associated with the nature of its constituents as well as with the measurements of both the local curvature and the global topology of the $3$-dimensional world. The so-called ``circles-in-the-sky" method makes it apparent that a non-trivial detectable topology of the spatial section can be probed for any locally homogeneous and isotropic Universe, with no assumption about the cosmological density parameters. In this article, we have shown that the knowledge of spatial topology of the Universe not only dictates the sign of its local curvature (and therefore its geometry), but also imposes very restrictive constraints on the density parameters associated with dark matter ($\Omega_m$) and dark energy ($\Omega_{\Lambda}$). Indeed, by combining the detection of the cosmic topology through the ``circles-in-the-sky" method with the current SNe Ia observations, we have shown that the effect of the cosmic topology as a cosmological observable is to drastically reduce the degeneracies inherent to current SNe data, providing limits on the cosmological density parameters, which cannot presently be obtained from combinations of the current cosmological data. This role of cosmic topology has previously been emphasized in the context of cosmic crystallography by \citet{Uzan}. We underline the fact that the-best fit values are not the most important outcome of our work, since the dodecahedral space model has not been confirmed as the ultimate global topology of the Universe. We emphasize that even though the precise value of the radius $\alpha$ of the circle and its uncertainty (fundamental quantities in our analysis) can be modified by more accurate analysis and future observations, the general aspects of our analysis remain essentially unchanged, since the best-fit values of the cosmological parameters depend very weakly on $\alpha$, and the value of uncertainty $\delta \alpha$ primarily alters the confidence uncertainty area in the density parametric plane $\Omega_m - \Omega_{\Lambda}$. On the other hand, regarding the possibility of using the observational results to guide the search for the circles in the sky, from a SDSS plus WMAP combination of large-scale structure, SNe Ia, and CMBR data \citep{Tegmark_et_al}, we can only place an upper bound on the angular radii of the circles for a Poincar\'e dodecahedral topology, namely $\alpha < 70^\circ$, which is consistent with value of $\alpha$ we have used in this work. Given the immense efforts expended in the quest for the local curvature of the Universe, we believe that our results reinforce the cosmological interest in the search for definitive observational evidences of a non-trivial cosmic topology. Further investigations of the other globally homogeneous spherical spaces that also fit current CMBR data are in progress and will be presented in a forthcoming article. \emph{Acknowledgements.} The authors are grateful to A.F.F. Teixeira for valuable discussions. We thank CNPq for the grants under which this work was carried out. JSA is also supported by Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado do Rio de Janeiro (FAPERJ).
\section{Introduction} Data-driven dependency parsers achieve high parsing performance for languages representing different language families. The~state-of-the-art dependency parsers are trained with supervised learning methods on large, correctly annotated treebanks, e.g. from Universal Dependencies \cite[UD,][]{nivre-etal-2020-universal}. UD is an~international initiative aimed at developing a~cross-linguistically consistent annotation schema and at building a~large multilingual collection of dependency treebanks annotated according to this schema. A~relatively small subset of UD treebanks is annotated with higher-order syntactic-semantic representations that encode various linguistic phenomena and are called Enhanced Universal Dependencies (EUD). Dependency treebanks, especially the~uniformly annotated UD treebanks, are used for multilingual system development, e.g. within multiple shared tasks on dependency parsing \cite{buchholz-marsi-2006-conll,nivre-etal-2007-conll,seddah-etal-2013-overview,seddah-etal-2014-introducing,zeman-etal-2017-conll,zeman-etal-2018-conll}. In particular, the~IWPT 2020 shared task on Parsing into Enhanced Universal Dependencies \cite{bouma-etal-2020-overview} is worth mentioning, because it is the predecessor of the~current IWPT 2021 shared task \cite{bouma-etal-2021-overview}. All these shared tasks contributed to the rapid advancement of language parsing technology, inter alia the formulation of groundbreaking parsing algorithms and their publicly available implementations \cite[e.g.][]{nivre-etal-2006-labeled,mcdonald-etal-2006-multilingual,straka-strakova-2017-tokenizing,dozat-etal-2017-stanfords,rybak-wroblewska-2018-semi,he-choi-2020-adaptation}. Dependency parsing is an~important component of various sophisticated downstream tasks, including but not limited to sentiment analysis \cite{sun-etal-2019-aspect}, relation extraction \cite{zhang-etal-2018-graph,vashishth-etal-2018-reside,guo-etal-2019-attention}, semantic role labelling \cite{wang-etal-2019-best}, or question answering \cite{AAAI1817406}. On the other hand, even though EUD parsing aims at predicting semantically informed structures, which seem to be appropriate for advanced NLP tasks, it is not yet used in solving these tasks. An~obstacle can be the~limited availability of state-of-the-art EUD parsers, e.g. the two top systems at the~IWPT 2020 EUD shared task \cite[i.e.][]{kanerva-etal-2020-turku,heinecke-2020-hybrid} are not publicly available and are therefore difficult to integrate into NLU systems without reimplementing them from scratch. To meet the~potential expectations of NLU system architects, the~source code of COMBO with the~new EUD parsing module, together with the pre-trained models developed as part of our solution submitted to this shared task, is publicly available. The~proposed solution to EUD parsing is based on (1) Stanza tokeniser \cite{qi-etal-2020-stanza}, (2) COMBO \cite{combo}, a~data-driven language-independent system for morphosyntactic prediction, i.e. part-of-speech tagging, morphological analysis, lemmatisation, dependency parsing, and EUD parsing (see Section \ref{sec:eud_predictor}), (3) an~algorithm that merges predicted labelled dependency arcs and predicted EUD arcs, and builds the~final EUD graphs (see Section \ref{sec:merge_algo}), and (4) two linguistically motivated language-independent rules that improve the~final EUD graphs (see Section \ref{sec:postprocessing}). 
The~first expansion rule adds case information sublabels to EUD modifiers, and the~second one amends enhanced arcs coming into the~function words. These two rules are integrated into the~proposed EUD parsing system. In the~official evaluation, our EUD parser ranked 4th, obtaining an~average ELAS of 83.79\% and EULAS of 85.20\%.\footnote{\url{https://universaldependencies.org/iwpt21/results_official_coarse.html}} It is worth emphasising that COMBO predicts labelled dependency trees with an~average LAS of 88.91\%, being only slightly outperformed by the~ROBERTNLP system. \section{Shared task description} The~IWPT 2021 EUD shared task consists in evaluating systems for parsing raw texts into Enhanced Universal Dependencies. The~systems are trained and evaluated on data supplied by the~organisers. \paragraph{Data} The~shared task dataset includes treebanks for 17 languages from 4 language families. The~largest group in this collection is constituted by Indo-European languages, i.e. Bulgarian, Czech, Polish, Russian, Slovak, Ukrainian (Slavic), Dutch, English, Swedish (Germanic), French, Italian (Romance), and Latvian, Lithuanian (Baltic). There are also representatives of the~Uralic (Finnic) languages, i.e. Estonian and Finnish, the~Afro-Asiatic (Semitic) languages -- Arabic, and the~Southern Dravidian languages -- Tamil. The~datasets vary in size and type of enhancements. \paragraph{Enhancement types} \label{sec:enhancemets} Various linguistic phenomena are encoded in EUD graphs: \begin{itemize} \item propagation of conjuncts in coordination constructions (see Figure \ref{fig:conjuncts}), \item null nodes encoding elided predicates in coordination constructions (see Figure \ref{fig:emptynode}), \item additional subject relations in control and raising constructions (see Figure \ref{fig:controll}), \item coreference relations in relative clause constructions (see Figure \ref{fig:relative_clause}), \item detailed case information sublabels of the~modifiers (see Figure \ref{fig:case}). \end{itemize} \begin{figure}[h!] \centering \begin{dependency}[theme = simple, label style={font=\normalsize}] \begin{deptext}[column sep=0.2cm] The \& store \& buys \& and \& sells \& cameras \& . \\ \end{deptext} \depedge{2}{1}{det} \depedge{3}{2}{nsubj} \depedge{3}{5}{conj} \depedge{3}{7}{punct} \depedge{5}{4}{cc} \depedge{3}{6}{obj} \depedge[edge style=blue, edge below]{5}{2}{\textcolor{blue}{nsubj}} \depedge[edge style=blue, edge below]{5}{6}{\textcolor{blue}{obj}} \end{dependency} \caption{The~EUD graph with the~conjoined predicate; the~conjoined verbs (\textit{buys} and \textit{sells}) share the~subject (\textit{the store}) and the~object (\textit{cameras}), and the~propagated relations are indicated with the bottom blue enhanced edges.} \label{fig:conjuncts} \end{figure} \begin{figure}[h!] \centering \begin{dependency}[theme = simple, label style={font=\normalsize}] \begin{deptext}[column sep=0.2cm] John \& orders \& tea \& and \& Timothy \& $\square$ \& coffee \\ \end{deptext} \depedge{2}{1}{nsubj} \depedge{2}{3}{obj} \depedge[edge style=dotted]{2}{5}{conj} \depedge[edge style=dotted]{5}{7}{orphan} \depedge[edge style=dotted]{5}{4}{cc} \depedge[edge style=blue, edge below]{6}{5}{\textcolor{blue}{nsubj}} \depedge[edge style=blue, edge below, edge end x offset=-6pt]{6}{7}{\textcolor{blue}{obj}} \depedge[edge style=blue, edge below]{2}{6}{\textcolor{blue}{conj}} \depedge[edge style=blue, edge below]{6}{4}{\textcolor{blue}{cc}} \end{dependency} \caption{The EUD graph with an~empty node $\square$ and the bottom blue enhanced edges. The~tree edges removed from the~EUD graph are dotted.} \label{fig:emptynode} \end{figure} \begin{figure}[h!] \centering \begin{dependency}[theme = simple, label style={font=\normalsize}] \begin{deptext}[column sep=0.4cm] John \& tried \& to \& order \& coffee \& . \\ \end{deptext} \depedge{2}{1}{nsubj} \depedge{2}{4}{xcomp} \depedge{4}{3}{mark} \depedge{4}{5}{obj} \depedge{2}{6}{punct} \depedge[edge style=blue, edge below]{4}{1}{\textcolor{blue}{nsubj}} \end{dependency} \caption{The EUD graph with the~bottom blue enhanced edge encoding subject control with the~control predicate \textit{try}.} \label{fig:controll} \end{figure} \begin{figure}[h!] \centering \begin{dependency}[theme = simple, label style={font=\normalsize}] \begin{deptext}[column sep=0.6cm] the \& house \& that \& I \& bought \\ \end{deptext} \depedge{2}{1}{det} \depedge{2}{5}{acl:relcl} \depedge{5}{4}{nsubj} \depedge[edge style=dotted]{5}{3}{obj} \depedge[edge style=blue, edge below]{2}{3}{\textcolor{blue}{ref}} \depedge[edge style=blue, edge below]{5}{2}{\textcolor{blue}{obj}} \end{dependency} \caption{The EUD graph representing a~relative clause modifying the noun \textit{house}. The~enhanced edges are marked with the~bottom blue arcs and the~tree edge removed from the~EUD graph is dotted.} \label{fig:relative_clause} \end{figure} \section{System overview} The~EUD parsing system is built of the~following components: a~data encoder boosted with a~contextual language model (see Section \ref{sec:dataencoder}), morphosyntactic predictors (see Section \ref{sec:predictor_architecture}), an~EUD predictor (see Section \ref{sec:eud_predictor}), an~algorithm merging predicted labelled dependency arcs and enhanced dependency arcs (see Section \ref{sec:merge_algo}), and a~post-processing module (see Section \ref{sec:postprocessing}). \subsection{Data encoder} \label{sec:dataencoder} The~encoder vectorises the~tokenised input data. The~input tokens are first represented as a~concatenation of a~character-based word embedding estimated during system training with a~dilated convolutional neural network \cite{dcnn:2015}, and a~BERT-based embedding estimated as follows. \noindent BERT-based language models \cite[LM,][]{devlin-etal-2019-bert,conneau-etal-2020-unsupervised} are not fine-tuned during system training. Instead, we apply the scalar mix technique based on \citet{peters-etal-2018-deep} to produce an~embedding ($h$) for a word $i$ as a~weighted sum of embeddings from all layers: \begin{equation} h_i = \gamma \sum_{j=1}^L s_j h_{ij} \end{equation} Parameters $\gamma$ and $s_j$ are learnable weights; additionally, the $s_j$ are softmax-normalised. $L$ is the number of transformer layers. At the point of using the LM, the~data is already tokenised. If the LM's internal tokeniser splits a~word into multiple subwords, the~embeddings $h$ are estimated for these subwords and averaged. The~vectors of words or averaged vectors of subwords are finally transformed with one fully connected (FC) layer. 
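For illustration, the scalar mix and the subword averaging described above can be sketched in a few lines of NumPy; this fragment is not the actual COMBO implementation, and the layer count, the embedding dimensionality, and the randomly generated inputs are placeholders.
\begin{verbatim}
# Schematic NumPy rendering of the scalar mix and subword averaging
# described above (not the actual COMBO implementation).
import numpy as np

def scalar_mix(layer_embeddings, s, gamma):
    """layer_embeddings: (L, dim) per-layer vectors of one (sub)word."""
    weights = np.exp(s) / np.exp(s).sum()    # softmax-normalised s_j
    return gamma * (weights[:, None] * layer_embeddings).sum(axis=0)

def word_embedding(subword_layer_embeddings, s, gamma):
    """Average the scalar-mixed vectors of a word's subwords."""
    mixed = [scalar_mix(e, s, gamma) for e in subword_layer_embeddings]
    return np.mean(mixed, axis=0)

if __name__ == "__main__":
    L, dim = 12, 768                         # e.g. a BERT-base model
    rng = np.random.default_rng(0)
    subwords = [rng.normal(size=(L, dim)) for _ in range(2)]  # 2 subwords
    h = word_embedding(subwords, s=np.zeros(L), gamma=1.0)
    print(h.shape)                           # (768,)
\end{verbatim}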
The~encoder with two BiLSTM layers \cite{lstm:1997,bilstm:2005} transforms the~concatenations of the~character-based word embeddings and the~transformed BERT-based embeddings into token vectors. The~BiLSTM-transformed token embeddings are used as input to morphosyntactic predictors and the~EUD parsing module. \subsection{Morphosyntactic predictors} \label{sec:predictor_architecture} The~proposed approach is based on various morphosyntactic predictions. Part-of-speech tags, morphological features, and lemmata are used in the~post-processing step to extract case information expanding enhanced sublabels of modifiers (see Section \ref{sec:postprocessing}). The~merge algorithm (see Section \ref{sec:merge_algo}), in turn, combines labelled dependency arcs with enhanced dependency arcs predicted by EUD parsing module. \subsection{EUD predictor} \label{sec:eud_predictor} The~EUD parsing module consists of an~enhanced arc classifier and an~enhanced label classifier. The arc classifier utilises two single FC layers that transform encoded token vectors into head and dependent embeddings. These embeddings are used to calculate an~adjacency matrix ($A$) of an~enhanced graph. $A$ is a~$n \times n$ matrix, where $n$ is the~number of tokens in a~sentence (plus the~\textsc{root} node). The~matrix element $A_{ij}$ corresponds to the~dot product of the~$i$-th dependent embedding and the~$j$-th head embedding. The~dot product indicates the~certainty of the~edge between two tokens. The sigmoid function, applied to each element of $A$, allows the network to predict many heads for a given dependent, i.e. EUD graphs are built. \noindent The~enhanced label classifier also applies two fully connected layers to estimate head ($e_i$) and dependent ($e_j$) embeddings (they differ from embeddings estimated in the~enhanced arc prediction). Enhanced dependency labels are predicted by a~fully connected layer with the~softmax activation function which is given the~dependent embedding concatenated with the~head embedding. \begin{equation} e_{head} = \mathit{FC}(e_i) \end{equation} \begin{equation} e_{dep} = \mathit{FC}(e_j) \end{equation} \begin{equation} \mathit{label} = \mathit{argmax}(\mathit{FC}(e_{head}, e_{dep})) \end{equation} The loss function is only propagated for those pairs ($i, j$) that belong to ground truth (i.e. arcs existing in the enhanced dependency graph). \subsection{Merge algorithm} \label{sec:merge_algo} The~predicted enhanced graphs could be used without further processing. However, their quality could definitely be improved if they exploited information from the~predicted dependency trees. Enhanced dependency graphs appear to be heavily tree-based (see the~example EUD graphs in Section \ref{sec:enhancemets}). The~EUD graphs include some additional edges, empty nodes, and extended labels of modifiers (and conjuncts in some languages), or their structure is slightly transformed. We therefore decided to merge the~predicted trees and the~predicted enhanced graphs. 
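The merging procedure, formalised as Algorithm~\ref{algo2} below, can be summarised by the following Python sketch; the representation of edges as (head, dependent, label) triples, the cycle test, and the duplicate test on head--dependent pairs are simplifying assumptions of this sketch rather than implementation details of our system.
\begin{verbatim}
# Compact Python rendering of the merging procedure formalised below.
def creates_cycle(edges, new_edge):
    """True if adding new_edge (head -> dependent) closes a cycle."""
    head, dep, _ = new_edge
    if head == dep:
        return True
    children = {}
    for h, d, _ in edges:
        children.setdefault(h, []).append(d)
    stack, seen = [dep], set()
    while stack:                      # DFS: is head reachable from dep?
        node = stack.pop()
        if node == head:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(children.get(node, []))
    return False

def merge(tree_edges, graph_edges):
    """Combine a predicted tree with a predicted enhanced graph."""
    eud = [e for e in tree_edges if e[2] != "acl:relcl"]       # step 1
    covered = {(h, d) for h, d, _ in eud}
    for e in graph_edges:                                      # step 2
        if (e[0], e[1]) not in covered and not creates_cycle(eud, e):
            eud.append(e)
            covered.add((e[0], e[1]))
    eud += [e for e in tree_edges if e[2] == "acl:relcl"]      # step 3
    return eud

if __name__ == "__main__":
    # Toy input loosely based on the relative clause example above.
    tree = [(2, 1, "det"), (2, 5, "acl:relcl"),
            (5, 4, "nsubj"), (5, 3, "obj")]
    graph = [(2, 1, "det"), (5, 4, "nsubj"), (2, 3, "ref"), (5, 2, "obj")]
    print(merge(tree, graph))   # acl:relcl and obj edges form a cycle
\end{verbatim}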
\begin{algorithm} \SetAlgoLined \SetKwInOut{Input}{Input} \Input{$\:T\coloneqq(V, E_T): \text{tree}$\\ $\:G\coloneqq(V, E_G): \text{graph}$\\ } \SetKwInOut{Output}{Output} \Output{$\:\textit{EUD}: \text{the~final EUD graph}$} $E_{\textit{EUD}} = \{\}$\; \For{$e \: \text{in} \: E_T$} { \If{$\text{label}(e) \neq \text{acl:relcl}$}{ $E_{\textit{EUD}} \coloneqq E_{\textit{EUD}} + e $\; } } \For{$e \: \text{in} \: E_G$} { \If{$e \notin E_{\textit{EUD}}$ {\bf and} $\text{has\_no\_cycle}(E_{\textit{EUD}} + e)$}{ $E_{\textit{EUD}} \coloneqq E_{\textit{EUD}} + e$\; } } \For{$e \: \text{in} \: E_T$} { \If{$\text{label}(e) = \text{acl:relcl}$}{ $E_{\textit{EUD}} \coloneqq E_{\textit{EUD}} + e $\; } } $\textit{EUD} \coloneqq (V, E_{\textit{EUD}})$\; \caption{The merge algorithm} \label{algo2} \end{algorithm} The merge algorithm (see Algorithm \ref{algo2}) successively adds the~predicted tree and graph edges to the~set of EUD edges, and then composes the~final EUD graph of these edges. It starts by selecting all tree edges except for edges with the~\textit{acl:relcl} label. The~EUD graphs representing relative clauses contain cycles (see Figure \ref{fig:relative_clause}). By refraining from adding the~\textit{acl:relcl} relations in this step, we attempt to avoid the~cycle problem thereafter. In the~second step, consecutive graph edges are added to the~EUD set, provided that they do not introduce a~cycle and that the~EUD set does not already contain an~edge between the~same pair of nodes (with the~same or a~different label), i.e. duplicate edges are eliminated. In the~last step, the~\textit{acl:relcl} relations are added to the~EUD set, which is then used to compose the~final EUD graph. We are aware that the UD relations selected in the~first merging step do not contain case information, e.g. the~\textit{obl} relation is transferred to the~EUD set, although this relation should de facto be labelled \textit{obl:because\_of}, \textit{obl:for}, or \textit{obl:outside}. However, our preliminary experiments indicated that the~predicted enhanced labels often had erroneous case extensions, which sometimes could not even be derived from the~sentence. Correcting labels with accidental case extensions would require defining a~large number of relabelling rules that would have to be adapted to a~particular language. Extending the~modifier labels rather than correcting them seems to be a~more transparent and simpler procedure. We thus define one rule that derives case information from automatically predicted morphological features and lemmata (see Rule 1 in Section \ref{sec:postprocessing}). The rule is utilised in the~post-processing step, which is the~last step of building the~EUD graphs. \subsection{Post-processing} \label{sec:postprocessing} We define two rules that improve the~automatically predicted EUD graphs. \paragraph{Rule 1} The first rule specifies case information of the~following modifiers: \textit{nmod} (nominal modifier), \textit{obl} (oblique nominal), \textit{acl} (clausal modifier of nouns), \textit{advcl} (adverbial clause modifier), and of conjuncts (\textit{conj}). The~case information (lemma) is derived from \textit{case}/\textit{mark} or \textit{cc} dependents of a~modifier or a~conjunct, respectively, and from the~modifier's morphological attribute \textit{Case}. Figure \ref{fig:case} exemplifies extending UD labels with case information.\footnote{This~sentence originates from the~English dev set. As the~case extension of the~\textit{obl} label is derived from the~structure coordinating two prepositions (i.e. \textit{on} and \textit{about}), we wonder about the correctness of selecting only \textit{about} as the case extension.} \begin{figure}[h!] \centering \begin{dependency}[theme = simple, label style={font=\normalsize}] \begin{deptext}[column sep=0.2cm] On \& or \& about \& Sep \& 23 \& , \& 1999 \& ... placed \\ \end{deptext} \depedge{4}{1}{case} \depedge{4}{3}{case} \depedge{3}{2}{cc} \depedge{1}{3}{conj\textcolor{blue}{\textbf{:or}}} \depedge{4}{5}{nummod} \depedge{4}{6}{punct} \depedge{4}{7}{nummod} \depedge{8}{4}{obl\textcolor{blue}{\textbf{:about}}} \end{dependency} \caption{The EUD graph with blue, bolded sublabels representing case information. 
The~text excerpt comes from the~sentence "On or about September 23, 1999 a request for service was placed by the above referenced counterparty.".} \label{fig:case} \end{figure} The~rule is language-independent and UD-based. However, as not all treebanks attribute case information to their modifiers or conjuncts, the rule applies only to predefined languages, e.g. the~conjunct extension is only valid in English, Italian, Dutch, and Swedish. \paragraph{Rule 2} The~second rule corrects enhanced edges coming into the~function words that are labelled \textit{mark}, \textit{punct}, \textit{root}, \textit{case}, \textit{det}, \textit{cc}, \textit{cop}, \textit{aux} and \textit{ref}. They should not be assigned other dependency relation types in EUD graphs. If a~token \textit{and} is assigned the~\textit{cc} grammatical function in a~dependency tree, and thus also in the~corresponding EUD graph (the~first merge step), it cannot be simultaneously a~subject (\textit{nsubj}), for example. If such an~erroneous \textit{nsubj} relation exists, it is removed from the~EUD graph in line with the~second rule. \begin{table*}[ht!] \centering \begin{tabular}{lll} \toprule Language & Model name & Reference\\ \midrule Arabic & bert-base-arabertv2 & \citet{antoun-etal-2020-arabert} \\ English & bert-base-cased & \citet{devlin-etal-2019-bert} \\ French & camembert-base & \citet{martin-etal-2020-camembert} \\ Finnish & bert-base-finnish-cased-v1 & \citet{DBLP:journals/corr/abs-1912-07076} \\ Polish & herbert-large-cased & \citet{mroczkowski-etal-2021-herbert} \\ \hline Others & xlm-roberta-large & \citet{conneau-etal-2020-unsupervised} \\ \bottomrule \end{tabular} \caption{\label{tab:berts} Language models used in the experiments. Names refer to Transformers library \cite{wolf-etal-2020-transformers}.} \end{table*} \section{Experimental setup} \subsection{Segmentation and preprocessing} \label{sec:segmentation} Stanza tokeniser \cite{qi-etal-2020-stanza} is used to split raw text into sentences, split sentences into tokens, and optionally to expand multi-words. We train a~new segmentation model for each language on the~training data provided in the shared task.\footnote{It is not allowed to use versions of UD other than 2.7 in the IWPT 2021 shared task (see \compacturl{https://universaldependencies.org/iwpt21/task_and_evaluation.html}). As the~publicly available Stanza models are trained on UD 2.5, we have to train new models on UD 2.7.} Whenever there are several UD treebanks for a language, we train the segmentation model on the concatenation of all training datasets available for that language. Multi-word expansion involves only two languages, i.e. Arabic and Tamil, because it does not cause substantial gains in parsing other languages. In order to collapse empty nodes, training data are preprocessed with the~official UD script.\footnote{\compacturl{https://github.com/UniversalDependencies/tools/blob/master/enhanced_collapse_empty_nodes.pl}} Dependents of the collapsed empty nodes are assigned new labels, corresponding to the~empty node label and the~dependent label joined with the~special symbol $>$. During prediction, the~collapsed labels are expanded and empty nodes are added at the~end of a~sentence, following \citet{he-choi-2020-adaptation}. This design decision is motivated by the~fact that (1) it is difficult to find a~proper position of elided tokens or phrases, especially in free word order languages, and (2) the~evaluation procedure does not take an~empty node position into account, i.e. 
appending an empty node at the end of a sentence does not downgrade the score. It is important to note that designing a~heuristic that identifies proper positions of elided elements remains an~open issue, and appending empty nodes at the~end of a~sentence is only a~makeshift solution. Input data are encoded using BERT-based language models. Depending on the language, either language-specific BERT \cite{devlin-etal-2019-bert} or multilingual XLM-R \cite{conneau-etal-2020-unsupervised} is used (see Table \ref{tab:berts}). \subsection{Morphosyntactic prediction} COMBO system \cite{combo} is used to predict part-of-speech tags, morphological features, lemmata, and dependency trees. For the~purpose of this task, we also implement a~new EUD parsing module (see Section \ref{sec:eud_predictor}) and integrate it with COMBO. Similarly to segmentation models, we train one COMBO model for a~language on all treebanks provided for this language in the~shared task data using the~default training parameters (see Table \ref{tab:hyper_train}).\footnote{All models are trained and tested on a~single NVIDIA V100 card.} \begin{table}[h!] \renewcommand\tabcolsep{5.7pt} \centering \begin{tabular}{lc} \toprule Hyperparameter & Value \\ \midrule Optimiser & Adam \\%\citeyearpar{Kingma:2014} \\ & \cite{Kingma:2014} \\ Learning rate & $0.002$\\ $\beta_1$ and $\beta_2$ & 0.9\\ Number of epochs & 400 \\ \hline BiLSTM layers & 2 \\ BiLSTM dropout rate & 0.33 \\ LSTM hidden size & 512 \\ Arc projection size & 512 \\ Label projection size & 128 \\ \bottomrule \end{tabular} \caption{COMBO training parameters (the~upper entries) and model parameters (the~bottom entries).} \label{tab:hyper_train} \end{table} \section{Results} The~shared task submissions are evaluated with two evaluation metrics: ELAS -- LAS\footnote{LAS (labelled attachment score) is the~proportion of tokens that are assigned the correct head and dependency label according to the gold standard.} on enhanced dependencies, and EULAS -- LAS on enhanced dependencies where labels are restricted to the~UD relation types, i.e. sublabels are ignored. COMBO ranks 4th, achieving 84.71\% ELAS in the~qualitative evaluation (an~average over treebanks), and 83.79\% ELAS in the~coarse evaluation (an~average over languages). In terms of EULAS, it ranks 4th achieving 86.30\% in the~qualitative evaluation, and 5th achieving 85.20\% in the~coarse evaluation. In addition to ELAS and EULAS metrics, the~systems are also compared in terms of quality of predicting labelled dependency trees measured with LAS (the~secondary evaluation measure). In the~LAS ranking, COMBO takes second place achieving 88.91\% in the~qualitative evaluation, and 87.84\% in the~coarse evaluation, being slightly overcome by the~ROBERTNLP system (89.25\% in the~qualitative evaluation, and 89.18\% in the~coarse evaluation). Table \ref{tab:results} presents the~official results of COMBO models per language \begin{table}[h!] 
\renewcommand\tabcolsep{10pt} \centering \begin{tabular}{l|l|l|l} \toprule Language & LAS & EULAS & ELAS \\ \midrule Arabic & 81.04 & 78.35 & 76.39 \\ Bulgarian & 89.52 & 87.41 & 86.67 \\ Czech & 93.30 & 90.57 & 89.08 \\ Dutch & 90.93 & 88.90 & 87.07 \\ English & 87.22 & 85.27 & 84.09 \\ Estonian & 87.53 & 85.56 & 84.02 \\ Finnish & 92.28 & 88.79 & 87.28 \\ French & 89.29 & 88.10 & 87.32 \\ Italian & 93.27 & 91.16 & 90.40 \\ Latvian & 90.25 & 86.22 & 84.57 \\ Lithuanian & 84.75 & 81.28 & 79.75 \\ Polish & 92.75 & 90.22 & 87.65 \\ Russian & 94.29 & 91.76 & 90.73 \\ Slovak & 91.72 & 88.53 & 87.04 \\ Swedish & 87.82 & 85.26 & 83.20 \\ Tamil & 56.28 & 53.49 & 52.27 \\ Ukrainian & 90.96 & 87.60 & 86.92 \\ \hline \textbf{Average} & \textbf{87.84} & \textbf{85.20} & \textbf{83.79} \\ \bottomrule \end{tabular} \caption{\label{tab:results} The official evaluation results per language.} \end{table} \begin{table}[h!] \renewcommand\tabcolsep{4pt} \centering \begin{tabular}{l|l|l|l|l} \toprule \multirow{2}{*}{Language} & \multicolumn{2}{c}{Before} & \multicolumn{2}{c}{After} \\ & EULAS & ELAS & EULAS & ELAS \\ \midrule Arabic & 77.46 & 57.32 & 77.89 & 76.40 \\ Bulgarian & 89.50 & 78.97 & 90.29 & 89.30 \\ Czech & 89.93 & 74.96 & 91.28 & 89.91 \\ Dutch & 87.96 & 76.22 & 88.94 & 87.64 \\ English & 85.13 & 74.40 & 85.49 & 84.30 \\ Estonian & 86.27 & 68.73 & 86.92 & 85.45 \\ Finnish & 86.98 & 72.08 & 87.92 & 86.44 \\ French & 90.48 & 89.99 & 91.10 & 90.62 \\ Italian & 89.84 & 75.47 & 91.10 & 90.31 \\ Latvian & 85.65 & 73.72 & 86.44 & 84.88 \\ Lithuanian & 82.37 & 63.56 & 83.41 & 82.32 \\ Polish & 90.08 & 77.97 & 90.64 & 87.64 \\ Russian & 90.43 & 75.93 & 91.03 & 90.10 \\ Slovak & 87.89 & 71.71 & 89.39 & 87.90 \\ Swedish & 85.62 & 73.59 & 86.09 & 84.07 \\ Tamil & 54.35 & 40.48 & 54.84 & 53.38 \\ Ukrainian & 88.30 & 73.51 & 89.13 & 88.52 \\ \bottomrule \end{tabular} \caption{\label{tab:postprocessing} Impact of the post-processing step.} \end{table} \paragraph{Post-processing impact} We measure the impact of the post-processing step (i.e. extending graph labels with case information and correcting edges coming into the~function words) on the development data per language (see Table \ref{tab:postprocessing}). Following the~training approach, we concatenate the datasets if a language has multiple treebanks. The~second rule modifies the~graph structure. However, as the~resulting EULAS gains are almost negligible, using this rule seems questionable. The~first rule, in turn, does not modify the~structure of EUD graphs, but only their edge labels, and its impact on improving ELAS scores is significant. 
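To make Rule 1 (Section~\ref{sec:postprocessing}) concrete, the following simplified Python sketch extends a modifier or conjunct label with case information; the token representation, the chosen fall-back to the morphological \textit{Case} feature, and the example values are assumptions of the sketch and do not reproduce the language-specific behaviour of our system.
\begin{verbatim}
# Simplified sketch of Rule 1: selected modifier and conjunct labels are
# extended with the lemma of a case/mark (or cc) dependent, with the
# morphological Case feature as an assumed fall-back.
CASE_DEPS = {"nmod": ("case", "mark"), "obl": ("case", "mark"),
             "acl": ("case", "mark"), "advcl": ("case", "mark"),
             "conj": ("cc",)}

def extend_label(token, dependents):
    """token: dict with 'label' and 'feats'; dependents: child dicts."""
    base = token["label"]
    if base not in CASE_DEPS:
        return base
    for child in dependents:
        if child["label"] in CASE_DEPS[base]:
            return base + ":" + child["lemma"].lower()
    case = token["feats"].get("Case")        # morphological fall-back
    if case:
        return base + ":" + case.lower()
    return base

if __name__ == "__main__":
    modifier = {"label": "obl", "feats": {}}
    children = [{"label": "case", "lemma": "about"}]
    print(extend_label(modifier, children))  # obl:about
\end{verbatim}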
\begin{table} \renewcommand\tabcolsep{6pt} \centering \begin{tabular}{l|l|l|l|l} \toprule \multirow{2}{*}{Language} & \multicolumn{2}{c}{Sentences} & \multicolumn{2}{c}{Tokens}\\ & TGIF & Stanza & TGIF & Stanza\\ \midrule Arabic & 96.87 & 79.92 & 99.99 & 99.97\\ Dutch & 94.32 & 83.82 & 99.90 & 99.89 \\ Lithuanian & 96.22 & 87.74 & 99.99 & 99.81\\ Swedish & 99.03 & 93.64 & 99.86 & 99.44 \\ \bottomrule \end{tabular} \caption{\label{tab:segmentation_compar} The~quality of TGIF and Stanza segmentation in the selected languages.} \end{table} \begin{table*} \renewcommand\tabcolsep{10pt} \centering \begin{tabular}{l|l|l|l} \toprule Language & LAS & EULAS & ELAS \\ \midrule Arabic & 81.04 (+4.51) & 78.35 (+4.3) & 76.39 (+4.24) \\ Dutch & 90.93 (+1.52) & 88.90 (+1.61) & 87.07 (+1.59) \\ Lithuanian & 84.75 (+1.35) & 81.28 (+1.34) & 79.75 (+1.31) \\ Swedish & 87.82 (+1.24) & 85.26 (+1.21) & 83.20 (+1.17) \\ \bottomrule \end{tabular} \caption{\label{tab:goldtoken}Performance gain in predicting UD trees and EUD graphs of gold-standard tokenised test sentences from the~languages with the worst segmentation quality. The~values in brackets show the improvement over the~baseline (i.e. Stanza tokenisation).} \end{table*} \paragraph{Segmentation drawback} The~official evaluation results show significant discrepancies in the~quality of tokenisation and sentence segmentation. The~largest differences in sentence segmentation quality between TGIF, the~winner of the~shared task, and Stanza, which is used in our approach, are shown in Table \ref{tab:segmentation_compar}. For example, there is a~loss of more than 15 percentage points in sentence segmentation of the~Arabic texts. We therefore decide to investigate the~impact of the~quality of sentence segmentation and tokenisation on the~final results. For this purpose, we conduct an~additional experiment consisting in predicting EUD graphs on the~test data with gold-standard tokenisation and sentence segmentation. The~results of this experiment show a~gain of around 1.5 pp for all tested languages except Arabic, where the~gain exceeds 4 pp (see Table \ref{tab:goldtoken}). \section{Conclusion} We presented the~COMBO-based solution to EUD parsing that took part in the~IWPT 2021 EUD shared task. The~proposed approach is hybrid, i.e. based on machine learning and rule-based algorithms. First, UD trees and EUD graphs (and also morphosyntactic features of tokens, i.e. parts of speech, morphological features, and lemmata) are automatically predicted with the~data-driven COMBO system. Then, the~predicted structures are combined into the~EUD graphs using the~developed rule-based merge algorithm. Finally, the~labels of modifiers and conjuncts in the~merged EUD graphs are extended with case information using an~expansion rule. The~proposed solution is simple and language-independent. We recognise that we could still improve the~results, e.g. by defining language-specific correction rules. However, our objective was to build an~easy-to-use system for predicting EUD graphs that is publicly available and can be efficiently used to solve sophisticated NLU tasks. \section*{Acknowledgments} The research presented in this paper was funded by the~Polish Ministry of Education and Science as part of the investment in the CLARIN-PL research infrastructure. The~computing was performed at Pozna\'{n} Supercomputing and Networking Center. \bibliographystyle{acl_natbib}
\section{ Introduction } \label{sec:intro} In the standard reionization history of the Universe, the metagalactic ionizing background evolved relatively slowly except for the reionization of hydrogen ($z \gtrsim 6$) and of helium (fully-ionized at $z \sim 3$). During these phase transitions, the intergalactic medium (IGM) became mostly transparent to the relevant ionizing photons, allowing the high-energy radiation field to grow rapidly. Since \ion{He}{2} has an ionization potential of 54.4~eV, bright quasars with hard spectra are required for its ionization. As such, the distribution and intrinsic character of quasars, in addition to the properties of the IGM, determine the radiation background at these high energies. These quasars are quite rare, implying a strongly fluctuating background even after reionization \citep{Fard98, Bolt06, Meik07, Furl08b}. Direct evidence for these source-induced variations has been seen in the ``transverse proximity effect" of the hardness ratio through comparisons of the \ion{H}{1} and \ion{He}{2} Lyman-$\alpha$ (Ly$\alpha$) forests with surveys for nearby quasars \citep{Jako03, Wors06, Wors07}. These variations are exaggerated by the strong attenuation from the IGM \citep{Haar96, Fauc08, Furl08}. Furthermore, radiative transfer through the clumpy IGM can induce additional fluctuations \citep{Mase05, Titt07}. During reionization, fluctuations in the background are even greater, because some regions receive strong ionizing radiation while others remain singly-ionized with no local illumination. Recent observations indicate that helium reionization occurs at $z \sim 3$. The strongest evidence comes from far-ultraviolet spectra of the \ion{He}{2} Ly$\alpha$ forest along the lines of sight to bright quasars at $z \sim 3$. These observations of the \ion{He}{2} Ly$\alpha$ transition ($\lambda_{\rm{rest}} = 304$~\AA) are difficult, because bright quasars with sufficient far-UV flux and no intervening Lyman-limit systems are required. To date, six such lines of sight have yielded opacity measurements: PKS 1935-692 \citep{Tytl95, Ande99}, HS 1700+64 \citep{Davi96, Fech06}, HE 2347-4342 \citep{Reim97, Kris01, Smet02, Shul04, Zhen04}, SDSS J2346-0016 \citep{Zhen04a, Zhen08}, Q0302-003 \citep{Jako94, Hoga97, Heap00, Jako03} and HS 1157-3143 \citep{Reim05}. The effective helium optical depth from these studies decreases rapidly at $z \approx 2.8$, then declines slowly to lower redshifts. The opacities at higher redshifts exhibit a patchy structure with alternating high and low absorption, which may indicate an inhomogeneous radiation background. More promising sightlines have been detected \citep{Syph09}, and the Cosmic Origins Spectrograph on the \textit{Hubble Space Telescope} should add to the current pool of data. Several indirect methods attempt to probe the impact of helium reionization on the IGM. One expected effect of helium reionization is an increase in the IGM temperature \citep{Hui97, Gles05, Furl08a, McQu09}. \citet{Scha00} detected a sudden temperature increase at $z \sim 3.3$ by examining the thermal broadening of \ion{H}{1} Ly$\alpha$ forest lines (see also \citealt{Scha99, Theu02a}). Around the same time, the IGM temperature-density relation appears to become nearly isothermal, another indication of recent helium reionization \citep{Scha00, Rico00}. However, not all studies agree \citep{McDo01}, and temperature measurements via the Ly$\alpha$ forest flux power spectrum show no evidence for any sudden change \citep{Zald01, Viel04, McDo06}. 
Furthermore, this temperature increase should decrease the recombination rate of hydrogen, decreasing the \ion{H}{1} opacity \citep{Theu02}. Three studies with differing methods have measured a narrow dip at $z \sim 3.2$ in the \ion{H}{1} effective optical depth \citep{Bern03, Fauc08a, Dall08}. While initially attributed to helium reionization \citep{Theu02}, more recent studies find that reproducing this feature with helium reionization is extremely difficult \citep{Bolt09a, Bolt09, McQu09}. The (average) metagalactic ionizing background should also harden as helium is reionized, because the IGM would become increasingly transparent to high-energy photons. Observations of the \ion{He}{2}/\ion{H}{1} ratio are qualitatively consistent with reionization occurring at $z \sim 3$ in at least one line of sight \citep{Heap00}. \citet{Song98, Song05} found a break in the ratio of \ion{C}{4} to \ion{Si}{4} at $z \sim 3$. Modeling of the ionizing background from optically thin and optically thick metal line systems also shows a significant hardening at $z \sim 3$ \citep{Vlad03, Agaf05, Agaf07}, but other data of comparable quality show no evidence for rapid evolution \citep{Kim02, Agui04}. However, this approach is made more difficult by the large fluctuations in the \ion{He}{2}/\ion{H}{1} ratio even after helium reionization is complete \citep{Shul04}. In this paper, we focus on interpreting the \ion{He}{2} Ly$\alpha$ forest and the significance of the jump in the opacity at $z \approx 2.8$. After averaging the effective optical depth over all sightlines, we calculate the expected photoionization rate given some simple assumptions. In particular, we investigate the impact of a fluctuating radiation background, comparing it to the common uniform assumption. We interpret our results in terms of an evolving attenuation length for helium-ionizing photons $R_0$ as well as state-of-the-art models of inhomogeneous reionization. We use a semi-analytic model, outlined in \S\ref{sec:method}, to infer the helium photoionization rate from the \ion{He}{2} Ly$\alpha$ forest. The helium opacity measurements in the redshift range $2.0 \lesssim z \lesssim 3.2$, which serve as the foundation for our calculations, are compiled from the literature in \S\ref{sec:data}. First, we assume a post-reionization universe over the entire redshift span. In this regime, we find the photoionization rate and attenuation length in \S\ref{sec:results}, given the average measured opacity. Motivated by these results, we examine some fiducial reionization histories in \S\ref{sec:toy}. We conclude in \S\ref{sec:disc}. In our calculations, we assume a cosmology with $\Omega_m = 0.26, \Omega_{\Lambda} = 0.74, \Omega_b = 0.044, H_0 = h$(100~km s$^{-1}$ Mpc$^{-1})$ (with $h$ = 0.74), $n = 0.95$, and $\sigma_8 = 0.8$, consistent with the most recent measurements \citep{Dunk09, Koma09}. \section{ Method } \label{sec:method} The helium Ly$\alpha$ forest observed in the spectra of quasars originates from singly-ionized helium gas in the IGM. Quantitative measurement of this absorption is typically quoted as the transmitted flux ratio $F$, defined as the ratio of observed and intrinsic fluxes, or the related effective optical depth \begin{equation} \label{eq:taueff} \tau_{\rm{eff}} \equiv -\ln\langle F \rangle. \end{equation} We have $F = e^{-\tau_{\rm{eff}}} = \langle e^{-\tau} \rangle > e^{-\langle \tau \rangle}$, thus $\tau_{\rm{eff}} < \langle \tau \rangle$. 
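As a simple numerical illustration of this inequality, one may draw mock pixel optical depths from an arbitrary distribution and compare the two averages; the lognormal distribution used below is purely illustrative.
\begin{verbatim}
# Numerical illustration of the inequality above: tau_eff is computed
# from the mean transmitted flux and is always smaller than <tau>.
# The lognormal optical-depth distribution is purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
tau = rng.lognormal(mean=0.5, sigma=1.0, size=10**6)  # mock pixel depths

tau_eff = -np.log(np.mean(np.exp(-tau)))
print("tau_eff = %.3f" % tau_eff)
print("<tau>   = %.3f" % tau.mean())                  # tau_eff < <tau>
\end{verbatim}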
From the current opacity measurements, we wish to infer the \ion{He}{2} photoionization rate. This connection depends on the details of the IGM, including the temperature, density distribution, and ionized helium fraction. \subsection{ Fluctuating Gunn-Peterson approximation } \label{sec:FGPA} The Gunn-Peterson (\citeyear{Gunn65}) optical depth for \ion{He}{2} Ly$\alpha$ photons is \begin{equation} \label{eq:GP} \tau_{GP} = \frac{\pi e^2}{m_ec}f_{\alpha}\lambda_{\alpha}H^{-1}(z)n_{\rm{HeII}}. \end{equation} Here, the oscillator strength $f_{\alpha} = 0.416$, $\lambda_{\alpha} = 304~\mbox{\AA}$, and $n_{\rm{HeII}}$ is the density of singly-ionized helium in the IGM. For simplicity, we approximate the Hubble constant as $H(z) \approx H_0\Omega_m^{1/2}(1 + z)^{3/2}$. Since the Ly$\alpha$ forest probes the low-density, ionized IGM, most of the hydrogen (mass fraction $X = 0.76$) and helium ($Y = 0.24$) are in the form of \ion{H}{2} and \ion{He}{3}, respectively, after reionization. Under these assumptions, photoionization equilibrium requires \begin{equation} \label{eq:balance} \Gamma n_{\rm{HeII}} = n_{\rm{He}}n_e\alpha_A, \end{equation} where $\Gamma$ is the \ion{He}{2} photoionization rate and the case-A recombination coefficient is $\alpha_A = 3.54\times10^{-12}(T/10^4~\rm{K})^{-0.7}$~cm$^3$~s$^{-1}$ according to \citet{Stor95}. For a clumpy universe, most photons emitted by recombinations are produced and subsequently reabsorbed in dense, mostly neutral systems. These ionizing photons, therefore, cannot escape to the low density regions of interest for the forest, so we use case-A \citep{Mira03}. The Ly$\alpha$ forest, and therefore the optical depth, trace the local overdensity $\Delta$ of the IGM, where $\Delta \equiv \rho/\bar{\rho}$ and $\bar{\rho}$ is the mean mass density. Since $n_e \propto n_{\rm{He}}$ in a highly ionized IGM, equation~(\ref{eq:balance}) implies that $n_{\rm{HeII}} \propto n^2_{\rm{He}} \propto \Delta^2$. The optical depth is proportional to $n_{\rm{HeII}}$ (see eq.~\ref{eq:GP}), which introduces a $\Delta^2$ factor. Additionally, the temperature of the IGM, which affects the recombination rate, is typically described by a power law of the form $T = T_0\Delta^{\gamma-1}$ \citep{Hui97}, where $T_0$ and $\gamma$ are taken as constants.\footnote{ In actuality, during and after helium reionization both $T_0$ and $\gamma$ likely become redshift- and density-dependent. We will consider such effects in \S\ref{sec:semi} and \S\ref{sec:T}.} Including the above equations and cosmological factors, \begin{eqnarray} \tau_{\rm{GP}} & \simeq & \frac{19}{\Gamma_{-14}}\left(\frac{T_0}{10^4~\rm{K}}\right)^{-0.7}\left(\frac{\Omega_bh^2}{0.0241}\right)^2\left(\frac{\Omega_mh^2}{0.142}\right)^{-1/2} \nonumber \\ & & \times \left(\frac{1+z}{4}\right)^{9/2}\Delta^{2-0.7(\gamma-1)}, \label{eq:tau} \end{eqnarray} where $\Gamma = 10^{-14}\Gamma_{-14}$~s$^{-1}$. The fluctuating Gunn-Peterson approximation (FGPA) (e.g., \citealt{Wein99}), equation~(\ref{eq:tau}), relates the effective optical depth, or the continuum normalized flux, to the local overdensity $\Delta$ and the photoionization rate $\Gamma_{-14}$. This approximation shows the relationship between the opacity and the IGM, but it ignores the effects of peculiar velocities and thermal broadening on the Ly$\alpha$ lines. 
In practice, when comparing to observations, an overall proportionality constant, $\kappa$, is introduced to the right hand side of equation~(\ref{eq:tau}) to compensate for these factors, as described in \S\ref{sec:UVbg}. This normalization also incorporates the uncertainties in $T_0, \Omega_b, \Omega_m$, and $h$. \subsection{ A semi-analytic model for Ly$\alpha$ absorption } \label{sec:semi} To fully describe helium reionization and compute the detailed features of the Ly$\alpha$ forest, complex hydrodynamical simulations of the IGM, including radiative transfer effects and an inhomogeneous background, are required. Recent simulations \citep{Soka02, Pasc07, McQu09} have made great advances in incorporating the relevant physics and in increasing their scale. The simulations remain computationally intensive, and they cannot simultaneously resolve the $\sim 100$~Mpc scales required to adequately study inhomogeneous helium reionization and the much smaller scales required to self-consistently study the Ly$\alpha$ forest, necessitating some sort of semi-analytic prescription to describe baryonic matter on small scales. On the other hand, the semi-analytic approach taken here, including fluctuations in the ionizing background, should broadly reproduce the observed optical depth, especially considering the uncertainties in the measurements and the limited availability of suitable quasar lines of sight. In outline, the model has four basic inputs: the IGM density distribution $p(\Delta)$, the temperature-density relation $T(\Delta)$, the radiation background distribution $f(J)$, and the mean helium ionized fraction $\bar{x}_{\rm{HeIII}}$. \citet{Mira00} suggest the volume-weighted density distribution function \begin{equation} p(\Delta) = A\Delta^{-\beta}\exp\left[-\frac{\left(\Delta^{-2/3} - C_0\right)^2}{2\left(2\delta_0/3\right)^2}\right], \end{equation} where $\delta_0 = 7.61/(1 + z)$ and $\beta$ (for a few redshifts) are given in Table~1 of their paper. Intermediate $\beta$ values were found using polynomial interpolation. The remaining constants, $A$ and $C_0$, were calculated by normalizing the total volume and mass to unity at each redshift. The distribution matches cosmological simulations reasonably well for the redshifts of interest,\footnote{More recent simulations by \citet{Pawl09} and \citet{Bolt09b} basically agree with the above $p(\Delta)$ for low densities, which the Ly$\alpha$ forest primarily probes.} i.e. $z = 2 - 4$. Although this form does not incorporate all the physics of reionization and was not generated with the current cosmological parameter values, the overall behavior should be sufficient for the purposes of our model. A current topic of discussion is the thermal evolution of the IGM during and after helium reionization. The temperature-density relation should vary as a function of redshift and density; notably, helium reionization should increase the overall temperature by a factor of a few \citep{Hui97, Gles05, Furl08a, McQu09}, which may have been observed \citep{Scha00,Rico00}. As mentioned in \S\ref{sec:FGPA}, the temperature is assumed to follow a power law $T = T_0\Delta^{\gamma-1}$ in the FGPA. Generally, $T_0 \sim 1-2\times10^4$~K and $1 \leq \gamma \leq 1.6$ should broadly describe the post-reionization IGM (e.g., \citealt{Hui97}), but the exact values are a matter of debate. Unless otherwise noted, we use $T_0 = 2\times10^4$~K and $\gamma = 1$ throughout our calculations. 
The isothermal assumption also suppresses the temperature, and therefore density, dependence of the recombination coefficient $\alpha_A$. A uniform radiation background has been a common assumption in previous studies, but the sources (quasars) for these photons are rare and bright. Therefore, random variations in the quasar distribution create substantial variations in the high-frequency radiation background \citep{Meik07, Furl08, McQu09}. Furthermore, the $1/r^2$ intensity profiles of these sources induce strong small-scale fluctuations \citep{Furl09a}, which may in turn significantly affect the overall optical depth of the Ly$\alpha$ forest. For the probability distribution $f(J)$ of the angle-averaged specific intensity of the radiation background $J$, we follow the model presented in \citet{Furl08b}. In the post-reionization limit, the probability distribution can be computed exactly for a given quasar luminosity function and attenuation length, assuming that the sources are randomly distributed (following Poisson statistics). This distribution can be derived either via Markov's method \citep{Zuo92} or via the method of characteristic functions \citep{Meik03}. During reionization, the local \ion{He}{3} bubble radius, i.e. the horizon within which ionizing sources are visible, varies across the IGM and so becomes another important parameter. Due to the rarity of sources, with typically only a few visible per \ion{He}{3} region, a Monte Carlo treatment best serves this regime \citep{Furl08b}. For a given bubble of a specified size, we randomly choose the number of quasars inside each bubble according to a Poisson distribution. Each quasar is then randomly assigned a location within the bubble as well as a luminosity (via the measured luminosity function). Next, we sum the specific intensity from each quasar. After $10^6$ Monte Carlo trials, this procedure provides $f(J)$ for a given bubble size; the solution converges to the post-reionization scenario for an infinite bubble radius. The final ingredient is the size distribution of discrete ionized bubbles, which is found using the excursion set approach of \citet{Furl08} (based on the hydrogen reionization equivalent from \citealt{Furl04}). After integrating over all possible bubble sizes, $f(J)$ depends on the parameters $z, \, R_0,$ and $\bar{x}_{\rm{HeIII}}$, in addition to the specified luminosity function. In the following calculations, we scale $J$ (which is evaluated at a single frequency) to $\Gamma$ (which integrates over all frequencies) simply using the ratio of the mean photoionization rate to the mean radiation background. This is not strictly correct, because higher frequency photons have larger attenuation lengths and so more uniform backgrounds; however, it is a reasonable prescription because the ionization cross section falls rapidly with photon frequency. However, it does mean that we ignore the large range in spectral indices of the ionizing sources (see below), which modulate the shape of the local ionizing background and lead to an additional source of fluctuations in $\Gamma$ relative to $J$ that we do not model. The largest problem occurs during helium reionization, when the highest energy photons can travel \emph{between} \ion{He}{3} regions (a process we ignore); however, they have small ionization cross sections and so do not significantly change our results, except very near the end of that process \citep{Furl08b}. 
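A stripped-down version of this Monte Carlo is sketched below; the uniform placement of sources within the bubble, the single-valued luminosity sampler, the unit conventions, and the exponential attenuation inside the bubble are placeholder assumptions, whereas the actual calculation follows \citet{Furl08b}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_J(R_bubble, n_bar, draw_L, R0, n_trials=10**4):
    # Monte Carlo draw of the intensity J at the centre of a HeIII bubble of
    # radius R_bubble [Mpc], for a mean source density n_bar [Mpc^-3], a
    # luminosity sampler draw_L, and an attenuation length R0 [Mpc]
    J = np.zeros(n_trials)
    V = 4.0/3.0 * np.pi * R_bubble**3
    for t in range(n_trials):
        N = rng.poisson(n_bar * V)                 # quasars visible in this bubble
        if N == 0:
            continue
        r = R_bubble * rng.random(N)**(1.0/3.0)    # uniform positions in the bubble
        L = draw_L(N)                              # luminosities from the QLF
        J[t] = np.sum(L * np.exp(-r/R0) / (4.0*np.pi*r**2))
    return J     # histogram of J over trials approximates f(J) for this bubble size

# toy inputs: one source per (40 Mpc)^3 on average, a single luminosity
J = sample_J(R_bubble=30.0, n_bar=40.0**-3, draw_L=lambda n: np.ones(n), R0=35.0)
\end{verbatim}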
\begin{figure*} \plottwo {f1a.eps} {f1b.eps} \caption{ Distribution of the photoionization rate $\Gamma$ relative to its mean value in a fully-ionized IGM, $\langle \Gamma \rangle $, at redshift $z = 2.5$. \textit{Left panel:} The curves assume $R_0 = 5, 10, 50,$ and 100~Mpc, from widest to narrowest, in a post-reionization universe. \textit{Right panel:} The curves take $\bar{x}_{\rm{HeIII}} = 0.3, 0.5, 0.75, 0.9,$ and 1.0, from lowest to highest at peak, for $R_0 = 35$~Mpc.} \label{fig:f(j)} \end{figure*} For the majority of the paper, we consider the IGM to be fully-ionized, i.e. $\bar{x}_{\rm{HeIII}} = 1.0$. In this post-reionization regime, $R_0$, the attenuation length of the ionizing photons, determines the shape of $f(\Gamma)$, given the redshift and other model assumptions. We show several example distributions in the left panel of Figure~\ref{fig:f(j)}. From widest to narrowest, the curves correspond to $R_0 = 5, 10, 50,$ and 100~Mpc ($z = 2.5$). Smaller attenuation lengths yield a greater spread in $\Gamma$. Qualitatively, more sources contribute to ionizing a given patch of the IGM for higher $R_0$, making the peak photoionization rate more likely, i.e. the curve is narrower. A uniform background corresponds to $R_0 \rightarrow \infty$. Although low $\Gamma$ values are more likely for low $R_0$, the high-$\Gamma$ tail is nearly independent of $R_0$. This is because a large $\Gamma$ occurs within the ``proximity zone'' of a single quasar, making it relatively independent of contributions from much larger scales (unless $R_0$ is much smaller than the proximity zone itself). During reionization, as in \S\ref{sec:toy}, the ionized fraction \textit{and} the mean free path affect the distribution function, as shown in the right panel of Figure~\ref{fig:f(j)}. Here, we take $z = 2.5$, $R_0 = 35$~Mpc, and $\bar{x}_{\rm{HeIII}} = 0.3, 0.5, 0.75, 0.9,$ and 1.0 (from lowest to highest at the peak). A broader distribution of photoionization rates is expected, because the large spread in \ion{He}{3} bubble sizes restricts the source horizon inhomogeneously across the Universe. To estimate $\tau_{\rm{eff}}$ (or $F$), we integrate over all densities and photoionization rates: \begin{equation} \label{eq:F} F = e^{-\tau_{\rm{eff}}} = \int_0^{\infty} d\Gamma f(\Gamma) \int_0^{\infty} d\Delta e^{-\tau(\Delta | \Gamma)}p(\Delta). \end{equation} This integral is valid if $\Gamma$ and $\Delta$ are uncorrelated. Since relatively rare quasars ionize \ion{He}{2}, random fluctuations in the number of sources (as opposed to their spatial clustering) dominate the ionization morphology \citep{McQu09}, justifying our assumption of an uncorrelated IGM density and the photoionization rate. \subsection{ The UV background from quasars } \label{sec:UVbg} Since quasars ionize the \ion{He}{2} in the IGM, the UV metagalactic background can, in principle, be calculated directly from the distribution and intrinsic properties of quasars. Currently, these details, i.e. the quasar luminosity function and attenuation length, are uncertain. The following method for estimating $\Gamma$ is used as a reference for the semi-analytic model. The \ion{He}{2} ionization rate is \begin{equation} \label{eq:Gamma} \Gamma = 4\pi\int_{\nu_{\rm{HeII}}}^{\infty} \frac{J_{\nu}}{h\nu}\sigma_{\nu}d\nu, \end{equation} where $\sigma_{\nu} = 1.91\times10^{-18}(\nu/\nu_{\rm{HeII}})^{-3}$~cm$^2$ is the photoionization cross section for \ion{He}{2} and $\nu_{\rm{HeII}}$ is the photon frequency needed to fully ionize helium. 
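Before turning to $J_{\nu}$, we note that, once $f(\Gamma)$ and $p(\Delta)$ are tabulated, the average in equation~(\ref{eq:F}) reduces to a simple double quadrature; the sketch below uses placeholder Gaussian distributions purely to illustrate the numerics, not the actual model grids:
\begin{verbatim}
import numpy as np

# placeholder grids and (normalized) distributions; the real f(Gamma) and
# p(Delta) come from the model described above
Deltas = np.linspace(1e-3, 20.0, 4000); dD = Deltas[1] - Deltas[0]
Gammas = np.linspace(0.05, 5.0, 400);   dG = Gammas[1] - Gammas[0]
p_Delta = np.exp(-(Deltas - 1.0)**2/0.5); p_Delta /= p_Delta.sum()*dD
f_Gamma = np.exp(-(Gammas - 1.0)**2/0.1); f_Gamma /= f_Gamma.sum()*dG

# FGPA relation, eq. (tau), at z = 2.5 with T0 = 2e4 K and gamma = 1
tau_of = lambda D, G: (19.0/G) * 2.0**(-0.7) * (3.5/4.0)**4.5 * D**2

# eq. (F): F = int dGamma f(Gamma) int dDelta p(Delta) exp[-tau(Delta|Gamma)]
inner = np.array([np.sum(np.exp(-tau_of(Deltas, G))*p_Delta)*dD for G in Gammas])
F = np.sum(f_Gamma*inner)*dG
tau_eff = -np.log(F)
\end{verbatim}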
For the radiation background at frequency $\nu$, $J_{\nu}$, we assume a simplified form, the absorption limited case in \citet{Meik03}: \begin{equation} \label{eq:J} J_{\nu} = \frac{1}{4\pi}\epsilon_{\nu}(z)R_0(z), \end{equation} where $\epsilon_{\nu}$ is the quasar emissivity and $R_0$ is the attenuation length. We begin with the $B$-band emissivity $\epsilon_{B}$, derived from the quasar luminosity function in \citet[hereafter HRH07]{Hopk07}. To convert this to the extreme-UV (EUV) frequencies of interest, we follow a broken power-law spectral energy distribution \citep{Mada99}: \begin{equation} L(\nu) \propto \left\{ \begin{array}{ll} \nu^{-0.3} & ~~~2500 < \lambda < 4600~\mbox{\AA}\\ \nu^{-0.8} & ~~~1050 < \lambda < 2500~\mbox{\AA}\\ \nu^{-\alpha} & ~~~\lambda < 1050~\mbox{\AA}. \end{array} \right. \end{equation} The EUV spectral index $\alpha$ is a source of debate and is not the same for all quasars. \citet{Telf02} find a wide range of values for individual quasars, e.g. $\alpha$ = -0.56 for HE~2347-4342 and 5.29 for TON~34. Most quasars lie closer to the mean, but HE~2347-4342 is a \ion{He}{2} Ly$\alpha$ line of sight in \S\ref{sec:data}. Unless otherwise noted, we use the mean value $\langle \alpha \rangle \approx 1.6$ from \citet{Telf02}, ignoring the variations in $\alpha$. \citet{Zhen97} found $\langle \alpha \rangle \approx 1.8$, which serves as a comparison. The uncertainty in the spectral index $\alpha$ affects the amplitude, not the shape, of the $\Gamma$ curve derived from the QLF (see Fig.~\ref{fig:Gamma}). Since our semi-analytic calculations are normalized to a single point on the $\Gamma$ curve, this uncertainty translates into an amplitude shift in our results, i.e. changes $\kappa$. The next ingredient for $J_{\nu}$ is the attenuation length of helium-ionizing photons $R_0$. As described above, $R_0$ depends on the photon frequency; for example, high-energy photons can propagate larger distances. For simplicity, we use a single frequency-averaged attenuation length and focus only on the redshift evolution (in any case, the absolute amplitude can be subsumed into our normalization factor $\kappa$ below). For concreteness, we apply the $comoving$ form found in \citet{Bolt06}: \begin{equation} \label{eq:Ro} R_0 = 30\left(\frac{1 + z}{4}\right)^{-3}~\rm{Mpc}, \end{equation} which assumes the number of Lyman limit systems per unit redshift is proportional to $(1 + z)^{1.5}$ \citep{Stor94} and uses the normalization based on the model of \citet{Mira00}. An alternate approach, probably more appropriate during reionization itself, is to estimate the attenuation length around individual quasars, as in \citet{Furl08}. This method gives a similar value at $z=3$ but with a \emph{slower} redshift evolution, which would only strengthen our conclusions. This empirical equation for the mean free path (with the above quasar emissivity) provides the photoionization rate $\Gamma$, shown in Figure~\ref{fig:Gamma}. The photoionization rate from the \citet{Mada99} QLF (with $\alpha = 1.6$ and 1.8) is also included in the figure to illustrate the effect of uncertainties in the quasar properties. The spectral index has a greater effect on the amplitude than the differing QLFs. \begin{figure} \plotone {f2.eps} \caption{Evolution of the mean photoionization rate (in units of $10^{-14}$~s$^{-1}$) with redshift. The solid curve represents the inferred ionization rate from the HRH07 quasar luminosity function with $\alpha = 1.6$. 
The dotted (dashed) curve follows \citet{Mada99} with extreme-UV spectral index $\alpha = 1.6~(1.8)$. All take the attenuation length from eq.~(\ref{eq:Ro}). The points (with a slight redshift offset) show the reconstructed photoionization rate given the measured effective optical depth, assuming a uniform (triangles) and fluctuating (squares) radiation background. The results are fixed to the solid curve at $z = 2.45$.} \label{fig:Gamma} \end{figure} As noted in \S\ref{sec:FGPA}, comparing our FGPA treatment to the real Ly$\alpha$ forest data requires an uncertain correction to the $\tau-\Delta$ relation in equation~\ref{eq:tau}. For this purpose, we assume the above emissivity and attenuation length to be accurate. Then, the semi-analytic model is adjusted so that the predicted $\tau_{\rm{eff}}$ matches the measured value at a particular redshift. To do so, we insert a prefactor, $\kappa$, to the right hand side of equation~(\ref{eq:tau}). This factor compensates for line blending and other detailed physics ignored by the FGPA, but it also includes any uncertainties in the underlying cosmological or IGM parameters (such as $T_0$). A suitable normalization redshift should be after reionization and have data from more than one line of sight (see the right panel of Fig.~\ref{fig:expt}). Throughout this work, we take $z = 2.45$ as our fiducial point. If $\kappa$ is a constant with redshift, our choice of reference point mainly affects the overall amplitude of the photoionization rate or attenuation length, not the redshift evolution. However, since $\kappa$ depends on the IGM properties, it may change with redshift, density, and/or temperature. For example, an increase in temperature broadens the widths of the absorption lines in the Ly$\alpha$ forest, decreasing the importance of saturation but increasing the likelihood of line blending. Reionization, a drastic change to the IGM, should also affect $\kappa$. Here, we take $\kappa$ to be independent of $z$. The precise value lies somewhere between 0.1 and 0.5, depending on the specific model. \citet{Furl09b} find $\kappa = 0.3$ for the post-reionization hydrogen Ly$\alpha$ effective optical depth, which is similar to our values. In any case, we emphasize that our method \emph{cannot} be used to estimate the absolute value of the ionizing background -- for which detailed simulations are necessary -- but we hope that it can address the redshift evolution of $\Gamma$. \section{ Evolution of the \ion{He}{2} effective optical depth } \label{sec:data} Measurements of the \ion{He}{2} effective optical depth are challenging. Suitable lines of sight require a bright quasar with sufficient far-UV flux and no intervening Lyman-limit systems. Currently, only five quasar spectra have provided \ion{He}{2} opacity measurements appropriate for our analysis, displayed in the left panel of Figure~\ref{fig:expt} with the averaged values. \citet{Zhen04a} measure a lower limit on the optical depth at $z \sim 3.5$ from a sixth sightline, SDSS J2346-0016; we do not include any lower limits in any subsequent calculations, but we reference this measurement in \S\ref{sec:toy}. Due to the limited scope of the data, the observed opacities may not be representative of the IGM as a whole. \begin{figure*} \plottwo {f3a.eps} {f3b.eps} \caption{Evolution of the \ion{He}{2} effective optical depth based on the observations of the Ly$\alpha$ forest for five quasar spectra: HE 2347-4342, HS 1700+64, Q0302-003, HS 1157+314, and PKS 1935-692. 
The squares are the opacity measurements averaged over redshift bins $\Delta z = 0.1$ with suggestive uncertainties. \textit{Left panel:} Data and uncertainties as quoted in the literature are plotted. \textit{Right panel:} The average values for each line of sight are displayed, elucidating the origin of the uncertainties in the average opacities used throughout the paper. The small redshift offsets within each bin are for illustrative purposes only. } \label{fig:expt} \end{figure*} \textit{HE 2347-4342:} This quasar ($z_{\rm{em}} = 2.885$) is especially bright; therefore, this line of sight has been extensively analyzed. \citet{Zhen04} completed the most comprehensive investigation, covering the redshift range $2.0 < z < 2.9$, including Ly$\alpha$ and Ly$\beta$. \citet{Zhen04} and \citet{Shul04} ($2.0 < z < 2.9$ and Ly$\alpha$ only) utilized high-resolution spectra from the Far Ultraviolet Spectroscopic Explorer (FUSE; $R \sim 20,000$) and the Very Large Telescope (VLT; $R \sim 45,000$). An older and lower resolution study, \citet{Kris01}, covered $2.3 < z < 2.7$ with FUSE. Below redshift $z = 2.7$, the effective helium optical depth evolves smoothly. At higher redshifts, the opacity exhibits a patchy structure with very low and very high absorption, often described respectively as voids and filaments in the literature. \textit{HS 1700+64:} The Ly$\alpha$ forest of this quasar ($z_{\rm{em}} = 2.72$) has been resolved with FUSE over the redshift range $2.29 \lesssim z \lesssim 2.75$ \citep{Fech06}. An older study using the Hopkins Ultraviolet Telescope (HUT) is consistent with the newer, higher resolution results \citep{Davi96}. The helium opacity evolves smoothly and exhibits no indication of reionization. \textit{Q0302-003:} The spectrum of this quasar ($z_{\rm{em}} = 3.286$) was observed with the Space Telescope Imaging Spectrograph (STIS) aboard the Hubble Space Telescope (HST) at 1.8~\AA~resolution \citep{Heap00}. The effective optical depth generally increases with increasing redshift over $2.77 \lesssim z \lesssim 3.22$, excluding a void at $z \sim 3.05$ due to a nearby ionizing source \citep{Bajt88, Zhen95, Giro95}. The data were averaged over redshift bins of $\Delta z \simeq 0.1$. \citet{Hoga97} presented an analysis using the Goddard High Resolution Spectrograph (GHRS) which generally agrees with the later study but quoted a noticeably lower \ion{He}{2} optical depth near $z \sim 3.15$. \textit{HS 1157+314:} \citet{Reim05} obtained low resolution HST/STIS spectra of the \ion{He}{2} Ly$\alpha$ forest toward this quasar ($z_{\rm{em}} \sim 3$). Over the redshift range ($2.75 \leq z \leq 2.97$) of the study a patchy structure, similar to HE 2347-4342, is present. The given optical depth was averaged over a redshift bin of $\Delta z \simeq 0.1$. \textit{PKS 1935-692:} The HST/STIS spectrum for this quasar ($z_{\rm{em}} = 3.18$) was analyzed by \citet{Ande99}. Only one optical depth was quoted, but the spectrum exhibited the usual fluctuations. To merge these data sets, we initially binned them in redshift intervals of 0.1, starting with $z = 2.0$. Each bin was assigned the median redshift value, e.g. $z = 2.35$ for $2.3 \leq z < 2.4$. To objectively combine the data, the transmission flux ratios for each data set were averaged (weighted by redshift coverage) in the redshift bins. Then, the values for each quasar were averaged, since multiple data sets may cover the same line of sight. Finally, if more than one line of sight contributes to a bin, the fluxes are averaged once again. 
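Schematically, this merging procedure can be summarized as follows (the data structures, weighting, and example values are illustrative only; the actual bookkeeping of the published measurements involves further case-by-case choices):
\begin{verbatim}
import numpy as np

def merge_sightlines(measurements, z_edges):
    # measurements[quasar][dataset] = list of (z_lo, z_hi, F) segments
    F_bins = []
    for zl_bin, zh_bin in zip(z_edges[:-1], z_edges[1:]):
        per_qso = []
        for datasets in measurements.values():
            per_set = []
            for segments in datasets.values():
                vals, wts = [], []
                for zl, zh, F in segments:
                    overlap = min(zh, zh_bin) - max(zl, zl_bin)
                    if overlap > 0:              # weight by redshift coverage
                        vals.append(F); wts.append(overlap)
                if vals:
                    per_set.append(np.average(vals, weights=wts))
            if per_set:                          # average data sets per sightline
                per_qso.append(np.mean(per_set))
        F_bins.append(np.mean(per_qso) if per_qso else np.nan)  # average sightlines
    return np.array(F_bins)

z_edges = np.arange(2.0, 3.21, 0.1)
F = merge_sightlines({"HE2347": {"FUSE": [(2.0, 2.9, 0.7)]}}, z_edges)  # toy input
\end{verbatim}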
The process translates the left panel to the right panel of Figure~\ref{fig:expt}. The uncertainties along each line of sight in the right panel are simply the errors from each separate point in the literature, added in quadrature, without regard to systematic errors; our averaged values then take errors equal to the range spanned by these separate lines of sight. For the remainder of the paper, the error bars result from assuming $F \pm dF$ for each redshift bin, where $F$ are the squares and $\pm dF$ are the upper/lower error bars on the squares in the figure. The small number of well-studied lines of sight limits the number of truly quantitative statements that can be made. \section{ Results } \label{sec:results} \subsection{ Mean \ion{He}{2} photoionization rate } \label{sec:Gamma} We now apply our semi-analytic model to the observed \ion{He}{2} opacity found in the right panel of Figure~\ref{fig:expt}. For each redshift bin, the photoionization rate is calculated by iteratively solving equation~(\ref{eq:F}), i.e. varying $\langle \Gamma \rangle$ (the mean photoionization rate for a fully-ionized IGM) until $F$ matches the measurements. As noted in \S\ref{sec:semi}, a mean free path (eq.~\ref{eq:Ro}) is needed to determine $f(\Gamma)$ for the fluctuating background, but the uniform case has no such requirement except insofar as it affects $\langle \Gamma \rangle$. First, to provide some intuition, we fix $\kappa$ to be the same for both cases (here, $\kappa = 0.291$) and plot the resulting effective optical depth in Figure~\ref{fig:kappa}, given the emissivity and mean free path from \S\ref{sec:UVbg}. The figure shows the redshift evolution of this optical depth for uniform (solid) and fluctuating (dashed) radiation backgrounds. Interestingly, $\tau_{\rm eff}$ is significantly smaller for a uniform background, especially at higher redshifts (probably due to the shrinking attenuation length assumed in the fluctuating model). This is because most points in the IGM have $\Gamma < \langle \Gamma \rangle$ for the fluctuating background, so most of the IGM has a \emph{higher} opacity; the ``proximity zones'' around each quasar are not sufficient to compensate for this effect. The measured opacities are included in the figure for reference, showing that the shapes of both (normalized) models are consistent with observations for $z < 2.8$ but deviate significantly at higher redshifts. In practice, $\tau_{\rm eff}$ is the measured quantity, from which we try to infer $\Gamma$. We find that, for the fiducial attenuation lengths, including a realistic fluctuating background increases the required $\langle \Gamma \rangle$ by about a factor of two -- a nontrivial effect that is important for reconciling quasar observations with the forest. The magnitude of the required adjustment is comparable to that found by \citet{Bolt06}, who used numerical simulations. However, as we have emphasized above, we cannot use our model to estimate the absolute value of the ionizing background because the FGPA does not fully describe the Ly$\alpha$ forest; instead we need a renormalization factor $\kappa$. For the remainder of this paper, we therefore fix the photoionization rate at the $z = 2.45$ HRH07 value (together with our fiducial $R_0$), as described in \S\ref{sec:UVbg}. This procedure gives $\kappa = 0.457$ and 0.291 for the uniform and fluctuating background, respectively. 
We strongly caution the reader that the remainder of our quoted results will therefore mask the overall amplitude disparity between these two cases, and we will focus on the redshift evolution of $\Gamma$ instead. \begin{figure} \plotone{f4.eps} \caption{The \ion{He}{2} effective optical depth, assuming the HRH07 QLF and eq.~(\ref{eq:Ro}), as a function of redshift. The normalization for all curves is constant, $\kappa = 0.291$. The solid (dotted) curve is based on a uniform radiation background, with $\gamma = 1.0~(1.6)$. The dashed curve represents a fluctuating background with an isothermal temperature-density relation. The average measured opacities are shown for reference.} \label{fig:kappa} \end{figure} Figure~\ref{fig:Gamma} displays the mean photoionization rate for a uniform (triangles) and fluctuating (squares) UV background (normalized to the fiducial point). The curves represent the photoionization rate inferred from various quasar luminosity functions, as described in \S\ref{sec:UVbg}. Since the $z = 2.45$ reference point was chosen arbitrarily, the overall amplitude should not be considered reliable, which is emphasized by the spread in the QLF curves. The difference between the fluctuating and uniform UV background is small and certainly within the uncertainties. The effect of including fluctuations on the photoionization rate is not straightforward. Generally, at lower $z$, the fluctuating $\Gamma$ is smaller than the uniform result, and the opposite is true for higher redshift. The redshift of the crossover between the two behaviors depends on the amplitude of the measured transmission ratio; a higher $F$ decreases this crossover redshift. We therefore attribute this effect to changing the characteristic overdensity of regions with high transmission, which is larger at lower redshifts because of the Universe's expansion. In this regime, where the underlying density field itself has a relatively broad distribution, the fluctuating ionizing field makes less of a difference to the required $\langle \Gamma \rangle$. Remember, however, that these relatively small changes are always swamped by the differing $\kappa$'s, and a fluctuating background always requires a larger $\langle \Gamma \rangle$ than the uniform case. For $z < 2.7$, the normalized points lie near the HRH07 curve. The averaged opacity at $z = 2.25$, which is significantly lower, relies on a single line of sight, HE 2347-4342, and differs significantly from the trend seen in Figure~\ref{fig:expt}. As expected, the inferred $\Gamma$ fluctuates considerably over this redshift range. In part, these variations are due to the limited amount of data, both in the number and redshift coverage of usable quasar sightlines. But the UV background fluctuates considerably, especially during reionization (see the $f(\Gamma)$ discussion in \S\ref{sec:semi}). For $z > 2.8$, the calculated photoionization rate consistently undershoots the model prediction, possibly indicating the end of helium reionization around that time: there is much more \ion{He}{2} than can be accommodated by a smoothly varying emissivity or attenuation length. \subsection{ Evolution of the attenuation length } \label{sec:R} Because the measured quasar emissivity evolves smoothly with redshift, the most natural interpretation of this discontinuity is in terms of the attenuation length, which intuitively evolves rapidly at the end of reionization when \ion{He}{3} regions merge together and sharply increase the horizon to which ionizing sources are visible. 
Following the prescription for the UV background in \S\ref{sec:UVbg}, we calculate the mean free path $R_0$, given the HRH07 QLF and the $\Gamma_{-14}$ from the previous section. This procedure amounts to varying the solid curve, via $R_0$, to match the points in Figure~\ref{fig:Gamma}. The redshift evolution of the attenuation length for uniform (triangles) and fluctuating (squares) radiation backgrounds is plotted in Figure~\ref{fig:R_0}, with equation~(\ref{eq:Ro}) as a reference. The normalization remains the same as the previous section, i.e. $z = 2.45$ is the fiducial point. \begin{figure} \plotone{f5.eps} \caption{ Evolution of the helium-ionizing attenuation length with redshift. The calculated mean free path generally increases with a discontinuity around $z = 2.8$. The fluctuating (squares) background appears to smooth the evolution as compared to the uniform (triangles) background. The results are matched to the \citet{Bolt06} attenuation length (solid curve) at $z = 2.45$. The estimated uncertainties are shown only for the uniform case but are similar for both. } \label{fig:R_0} \end{figure} Similarly to the inferred photoionization rate, the points vary about the reference curve for $z < 2.7$ and depart from it for $z > 2.8$. The uncertainties, which are shown only for the uniform UV background (but are comparable for the other case), are again quite large. Incorporating fluctuations reduces the severity of, but does not eliminate, the jump in the evolution of the attenuation length. Once again the results lie consistently below the curve for higher redshifts. From the viewpoint of $\Gamma$ or $R_0$, there appears to be a systematic change in behavior above $z \approx 2.8$. The marked decrease in $R_0$ that is required, by at least a factor of two from the fiducial model, indicates an important change in the state of the IGM. However, as we have described above, a single attenuation length is no longer appropriate during reionization, so in \S\ref{sec:toy} we will turn to models of inhomogeneous reionization. \subsection{The IGM Temperature-Density Relation} \label{sec:T} As discussed in \S\ref{sec:semi}, the temperature-density relation of the IGM is a complicated question and an important component of the semi-analytic model. For the majority of the paper, we assume an isothermal model, i.e. $\gamma = 1$ in $T = T_0\Delta^{\gamma - 1}$. In reality, the temperature may depend on the density of the IGM. A further complication arises during (and shortly after) helium reionization when the IGM is inhomogeneously reheated and subsequently relaxes to a power law \citep{Gles05, Furl08a, McQu09}. To partially address the former issue, we repeat the calculation of $\Gamma$ for a homogeneous radiation background, but now with $\gamma = 1.6$, shown in Figure~\ref{fig:T}. The difference between the two cases is $\lesssim 30\%$. Overall, the steeper temperature-density relation slightly smoothes the jump in $\Gamma_{-14}$ near redshift $z \approx 2.8$ but is insufficient to fully explain the observed discontinuity. As expected, the normalization differs between the two equations of state: $\kappa_{\gamma = 1.0} = 0.457$ and $\kappa_{\gamma = 1.6} = 0.277$ (see Fig.~\ref{fig:kappa}). In other words, a model with a higher $\gamma$ requires a higher $\langle \Gamma \rangle$ to achieve the same optical depth. This is because a steeper temperature-density relation makes the low-density IGM, which dominates the transmission, colder and hence \emph{more} neutral. 
Overall, then, the temperature-density relation and fluctuating ionizing background lead to a systematic uncertainty of nearly a factor of four in the mean photoionization rate inferred from the \ion{He}{2} forest. \begin{figure} \plotone{f6.eps} \caption{Comparison of the inferred photoionization rate (in units of $10^{-14}$~s$^{-1}$) for two temperature-density relations, given a homogeneous radiation background. The triangular points are derived using an isothermal model. The square points assume a steeper temperature-density relation, $\gamma = 1.6$. The points have a small redshift offset for illustrative purposes. The photoionization rate computed from the HRH07 quasar luminosity function is plotted for reference. Both models are normalized to the observations at $z=2.45$. } \label{fig:T} \end{figure} \section{ Models for \ion{He}{2} reionization } \label{sec:toy} We have seen that the observations appear to require a genuine discontinuity in the properties of the IGM at $z \approx 2.8$, although large statistical errors stemming from the small number of lines of sight prevent any strong conclusions. This change is often attributed to reionization; here we investigate this claim quantitatively with several ``toy" models for the evolution of the helium ionized fraction $\bar{x}_{\rm{HeIII}}$. This fraction determines $f(\Gamma)$ as described in \S\ref{sec:semi}. Assuming the HRH07 QLF and the attenuation length given by equation~(\ref{eq:Ro}),\footnote{Again, we note that during helium reionization the attenuation length should be evaluated with reference to individual quasars; in that case, it does not evolve strongly with redshift \citep{Furl08}. This consideration will only strengthen our conclusions.} we calculate the effective optical depth via equation~(\ref{eq:F}). The fiducial point for normalizing $\kappa$ remains at $z = 2.45$. Figure~\ref{fig:toys} shows the effective optical depth $\tau_{\rm{eff}}$ and ionized fraction $\bar{x}_{\rm{HeIII}}$ as a function of redshift for five reionization models. Each scenario is characterized by the redshift, $z_{\rm{He}}$, at which $\bar{x}_{\rm{HeIII}}$ reaches 1.0. From left to right in the figure, the curves correspond to $z_{\rm{He}} = 2.4, 2.5, 2.7, 3.1$ and $z_{\rm{He}} > 3.8$, i.e. post-reionization for the entire redshift range in question. The rate of ionization varies slightly between the models.\footnote{The unevenness in the curves arises because generating our Monte Carlo distributions is relatively expensive computationally, so we only generated a limited number at $\bar{x}_{\rm{HeIII}} = (0.3, 0.5, 0.75, 0.9,$ and 1.0) for each redshift.} The measured opacities are plotted for reference, including the recently discovered SDSS J2346-0016 at $z=3.45$ \citep{Zhen04a,Zhen08}. Note that we do not estimate any cosmic variance uncertainty. \begin{figure} \plotone{f7.eps} \caption{ The effective helium optical depth and \ion{He}{3} fraction for five toy reionization models. From left to right, the curves correspond to helium fully-ionized by $z_{\rm{He}} = 2.5, 2.7, 2.9, 3.1$ and post-reionization (or $z_{\rm{He}} > 3.8$). The \ion{He}{3} fraction evolution is varied slightly between models. The measured opacities are included for reference, including the lower limit at $z = 3.45$ from SDSS J2346-0016 \citep{Zhen04a}.} \label{fig:toys} \end{figure} The effective optical depth evolves smoothly in the post-reionization regime, which seems compatible with the data below $z \approx 2.8$. 
The $z_{\rm{He}} = 2.7$ model seems to fit the data best, but a range of values for the reionization redshift would be consistent with the existing data. Note that models with $z_{\rm He} > 3$ seem to evolve too smoothly; however, none of the curves displays enough of a discontinuity to match the data completely. Obviously, more lines of sight at $z \ga 3$ are needed to reduce the impact of the wide cosmic variance. \section{ Discussion } \label{sec:disc} We have applied a semi-analytic model to the interpretation of the \ion{He}{2} Ly$\alpha$ forest, one of the few direct observational probes of the epoch of helium reionization. Using simple assumptions about the IGM, the ionization background, and our empirical knowledge of quasars, we have inferred the evolution of the helium photoionization rate and the attenuation length from the \ion{He}{2} effective optical depth. We averaged the opacity measurements over five sightlines, which show an overall decrease in $\tau_{\rm eff}$ with decreasing redshift and a sharp jump at $z \approx 2.8$, as well as alternating low and high absorption at higher redshifts. After proper normalization, our model provides good agreement with the lower-redshift data, but -- assuming smooth evolution in the quasar emissivity and attenuation length -- consistently overpredicts $\langle \Gamma \rangle$ above $z \approx 2.8$. Although the uncertainties are large, these results suggest a rapid change in the IGM around that time. Our semi-analytic model is based on the quasar luminosity function and the helium-ionizing photon attenuation length, which are determined empirically. The uncertainty in these quantities significantly affects our results. In particular, the plausible range of the mean EUV spectral index, $1.6 \lesssim \alpha \lesssim 1.8$, shifts the amplitude of our results by a factor of about two. Our model takes only the mean value and does not account for the variation in $\alpha$ from different quasars. Furthermore, our treatment of the attenuation length ignores any frequency dependence. However, these factors likely only affect the amplitude, not the evolution, of the inferred photoionization rate, especially the jump at $z \approx 2.8$. A steeper redshift evolution for the attenuation length would decrease the severity of the feature around $z \approx 2.8$, but a simple power law cannot eliminate it, given our method. In calculating $\Gamma$ and $R_0$, we compared uniform and fluctuating backgrounds. Although helium reionization is thought to be inhomogeneous (see \citealt{Furl08}), the assumption of a uniform background has been common. We found that the uniform case produces an effective optical depth approximately a factor of two smaller for a fixed $\langle \Gamma \rangle$. Thus, properly incorporating the fluctuating background is crucial for interpreting the \ion{He}{2} forest in terms of the ionizing sources. Furthermore, we find that the inclusion of background variations slightly smoothes, but does not remove, the jump in the attenuation length at $z \approx 2.8$. A clear change in the IGM does appear to occur around this redshift. The discontinuous behavior in $\Gamma$ and $R_0$ led us to include helium reionization in our model through the distribution $f(\Gamma)$. During reionization, the ionized helium fraction determines this distribution, and we studied several toy models for the redshift evolution of $\bar{x}_{\rm{HeIII}}$. 
These models suggest $z_{\rm{He}} \approx 2.7$ as the best fit to the data, but the statistical uncertainties in $\tau_{\rm{eff}}$ are large. We do not account for cosmic variance and only consider the mean effective optical depth. Our method also makes assumptions that are not valid during reionization, e.g. a power-law temperature-density relation, but this does not appear to affect the discontinuity significantly. In fact, the most important caveat to our model is the use of the fluctuating Gunn-Peterson approximation, which is a simplified treatment of the Ly$\alpha$ absorption. The approach ignores the wings of absorption lines, peculiar velocities, and line blending; overall, these effects require us to add an unknown renormalization factor (of order $\sim 0.3$--$0.5$) when translating from $\Gamma$ to optical depth, which compromises attempts to measure the absolute value of $\langle \Gamma \rangle$. One danger is the possible redshift evolution of this factor: we have assumed that it does not evolve, but in reality the line structure and temperature of the forest do evolve, especially at the end of reionization. More detailed numerical simulations that incorporate both the baryonic physics of the Ly$\alpha$ forest and the large-scale inhomogeneities of helium reionization are required to explore this fully. Interestingly, if our interpretation is correct, then it appears that helium reionization completes at $z_{\rm He} \la 3$. This places it several hundred million years \emph{after} the epoch suggested by indirect probes of the \ion{H}{1} Ly$\alpha$ forest. Specifically, some measurements of the temperature evolution of the forest show a sharp jump at $z \approx 3.2$ and a shift toward isothermality \citep{Scha00, Rico00}, which has been interpreted as evidence for helium reionization \citep{Furl08a, McQu09}. It is not clear whether this time lag can be made consistent; it probably depends on the details of the line selection in the observations (see the discussion in \citealt{Furl08a}). Moreover, late helium reionization would present further difficulties for an explanation of the $z \approx 3.2$ feature in the \ion{H}{1} Ly$\alpha$ forest opacity in terms of helium reionization (see also \citealt{Bolt09}). Another indirect constraint is consistent, however, with our picture: reconstruction of the ionizing background from optically thin metal systems finds an effective optical depth in \ion{He}{2} Ly$\alpha$ photons slightly higher than the direct measurements, but with a similar redshift evolution \citep{Agaf05, Agaf07}. The most significant limitation in the data is the relatively small number of lines of sight, producing large variations in the measured transmission, especially at $z \ga 3$ where the cosmic variance is large. Fortunately, a number of new lines of sight have been found \citep{Zhen08,Syph09}, and the recent installation of the Cosmic Origins Spectrograph (COS) on the \emph{Hubble Space Telescope} adds a new instrument to our arsenal. Although the nominal wavelength range of COS limits it to $z \ga 2.8$, this is precisely the most interesting range for studying reionization. Our models show that $\tau_{\rm eff} \la 5$ so long as $x_{\rm HeIII} \ga 0.3$ (see Fig.~\ref{fig:toys}), so there should be a relatively wide redshift range with measurable transmission -- especially when considering the wide variations in the ionizing background expected during and after helium reionization (see Fig.~\ref{fig:f(j)} and \citealt{Furl08b}). 
Finally, these prospects point out one important difference between \ion{He}{3} and \ion{H}{1} reionization: the near-uniformity of the ionizing background at the end of \ion{H}{1} reionization means that very little residual transmission can be expected at $z \ga 6$ for that event, making the Ly$\alpha$ forest relatively useless for studying reionization. In contrast, the large variance intrinsic to the \ion{He}{2}-ionizing background produces much stronger fluctuations and makes the epoch of reionization itself accessible with the \ion{He}{2} Ly$\alpha$ forest. Another interesting difference between helium and hydrogen is the effect of including fluctuations on the photoionization rate inferred from the Ly$\alpha$ forest. We find that assuming a uniform ionizing background \emph{underestimates} $\Gamma$ by up to a factor of two, while during hydrogen reionization the effect is much smaller -- only a few percent \citep{Bolt07, Mesi09}. During and after helium reionization, the fluctuations are much more pronounced than the hydrogen equivalent, leading to a much broader $f(J)$, so that more of the Universe lies significantly below the mean. For hydrogen reionization, the distributions are much narrower, favoring $\Gamma$ near the mean. In addition, after hydrogen reionization the density distribution is much wider than $f(\Gamma)$, so that the latter provides only a small perturbation; the opposite is true in our case. Our general approach is very similar to \citet{Fan02} and \citet{Fan06}, who also interpreted the \ion{H}{1} Ly$\alpha$ forest data at $z \sim 6$ using a uniform ionizing background and the same IGM density model as we have (although in their case that model required extrapolation to the relevant redshifts). They also found a discontinuity in the optical depth (at $z \sim 6.1$), which is often taken as evidence for \ion{H}{1} reionization. But during this earlier epoch, that inference is less clear because of the near saturation of the forest and the unknown attenuation length (whose evolution really determines the overall ionizing background, but which may evolve rapidly even after reionization; \citealt{Furl09}). Nevertheless, we hope that understanding this discontinuity in the \ion{He}{2} forest properties will shed light on the problem of hydrogen reionization. We thank J.~S.~Bolton, J.~M.~Shull, and J.~Tumlinson for sharing their data in electronic form. This research was partially supported by the NSF through grant AST-0607470 and by the David and Lucile Packard Foundation. \bibliographystyle{apj}
\section{Introduction}\label{sec1} The numerical solution of Hamiltonian problems has been the subject of many investigations in the last decades, due to the fact that Hamiltonian problems are not structurally stable against generic perturbations, like those induced by a general-purpose numerical method used to approximate their solutions. The main features of a canonical Hamiltonian system are surely the symplecticness of the map and the conservation of energy, which cannot be simultaneously maintained by any given Runge-Kutta or B-series method \cite{CFM2006}. Consequently, numerical methods have been devised in order to either define a symplectic discrete map, giving rise to the class of {\em symplectic methods} (see, e.g., the monographs \cite{SSC1994,LeRe2004,GNI2006,BlCa2016}, and references therein, the review paper \cite{SS2016}, and related approaches \cite{But21}), or to conserve the energy, resulting in the class of {\em energy-conserving methods} (see, e.g., \cite{LQR1999,IaPa2007,IaPa2008,IaTr2009,CMcLMcLOQW2009,BIT2009,BIT2010,H2010,BIS2010,LW2016} and the monograph \cite{LIMBook2016}). In particular, {\em Hamiltonian Boundary Value Methods (HBVMs)} \cite{BIT2009,BIT2010,BIS2010,LIMBook2016,BIT2012} (as well as their generalizations, see, e.g. \cite{BCMR2012,BI2012,BIT2012_2,BS2014,ABI2015,BGIW2018,BMR2019,BIZ2020,ABI2022}) form a special class of Runge-Kutta methods with a (generally) rank-deficient coefficient matrix. HBVMs, in turn, can also be viewed as the outcome obtained after a local projection of the vector field onto a finite-dimensional function space: in particular, the set of polynomials of a given degree. For this purpose, the Legendre orthonormal polynomial basis has been considered so far \cite{BIT2012_1} (see also \cite{ABI2019_0}), and the use of the resulting methods as spectral methods in time has also been investigated, both theoretically and numerically \cite{BMR2019,BIMR2019,BGZ2019,ABI2020,ABIM2020,BBTZ2020}. Remarkably, as was already observed in \cite{BIT2012_1}, this idea is even more general and can be adapted to other finite-dimensional function spaces and/or different bases. Following this route, in this paper we consider a class of Runge-Kutta methods based on the use of the Chebyshev polynomial basis. It turns out that, with this choice, our approach finally leads us back to the same formulae introduced by Costabile and Napoli by considering the classical collocation conditions based on the Chebyshev abscissae \cite{CoNa2001}. Exploiting the Routh-Hurwitz criterion, in \cite{CoNa2004} they derived the A-stability property of the formulae up to order 20 while, more recently, they also extended the methods to cope with $k$-th order problems \cite{CoNa2011}. In this context, we are mainly interested in a spectral-in-time implementation of these formulae, which means that the given approximation accuracy (usually close to the machine epsilon) is achieved by increasing the order of the formulae rather than reducing the integration stepsize. An interesting consequence of such a strategy is that, modulo the effects of round-off errors, the numerical solution will mimic the exact solution so that, when applied to a Hamiltonian system, the method will inherit both the symplecticity and the conservation properties. 
The advantage of using the Chebyshev basis stems from the fact that all the entries in the Butcher tableau of the corresponding Runge-Kutta methods can be given in closed form, thus avoiding the introduction of round-off errors when numerically computing them (as is the case with the Legendre basis, where the Gauss-Legendre nodes need to be numerically computed). In this respect, not only does the analysis provided within the new framework help in deriving the explicit expression of the coefficients of the methods for arbitrarily high orders, but it is also very useful for discussing the convergence rate when they are used as spectral methods in time and for deriving an efficient implementation strategy based on the discrete cosine transform: to the best of our knowledge, these latter aspects are new. Further, a generalization of the methods, similar to that characterizing HBVMs, is also sketched. With this premise, the structure of the paper is as follows: in Section~\ref{Hamil} we provide a brief introduction to the framework used to derive HBVMs, relying on the use of the Legendre polynomial basis; in Section~\ref{ceby} the approach is extended to cope with the Chebyshev polynomial basis; in Section~\ref{anal} a thorough analysis of the resulting methods is given, also when they are used as spectral methods in time; in Section~\ref{numtest} we provide some numerical tests, to assess the theoretical findings; finally, a few conclusions are given in Section~\ref{fine}. \section{Hamiltonian Boundary Value Methods (HBVMs)}\label{Hamil} In order to introduce HBVMs, let us consider a canonical Hamiltonian problem in the form \begin{equation}\label{Hprob} \dot y = J \nabla H(y), \qquad y(0) = y_0\in\mathbb{R}^{2m}, \qquad J=\left(\begin{array}{cc} &I_m\\-I_m\end{array}\right) = -J^\top, \end{equation} with $H:\mathbb{R}^{2m}\rightarrow\mathbb{R}$ the Hamiltonian function (or {\em energy}) of the system.\footnote{In fact, for isolated mechanical systems, $H$ has the physical meaning of the total energy.} As is easily understood, $H$ is a constant of motion, since $$\frac{\mathrm{d}}{\mathrm{d} t}H(y) = \nabla H(y)^\top\dot y = \nabla H(y)^\top J \nabla H(y)=0,$$ $J$ being skew-symmetric. The simple idea on which HBVMs rely is that of reformulating the previous conservation property in terms of the line integral of $\nabla H$ along the path defined by the solution $y(t)$: $$H(y(t)) = H(y_0) + \int_0^t \nabla H(y(\tau))^\top\dot y(\tau)\mathrm{d}\tau.$$ Clearly, the integral on the right-hand side of the previous equality vanishes, because the integrand is identically zero by virtue of (\ref{Hprob}), so that the conservation holds true for all $t>0$. The solution of (\ref{Hprob}) is the unique function satisfying such a conservation property for all $t>0$. However, if we consider a discrete-time dynamics, ruled by a {\em timestep} $h$, there exist infinitely many paths $\sigma$ such that: \begin{eqnarray}\nonumber \sigma(0) &=& y_0, \qquad \sigma(h) =: y_1\approx y(h),\\ 0&=&h\int_0^1 \nabla H(\sigma(ch))^\top\dot\sigma(ch)\mathrm{d} c.\label{lim} \end{eqnarray} The path $\sigma$ obviously defines a one-step numerical method that conserves the energy, since $$H(y_1)~=~H(y_0) + h\int_0^1 \nabla H(\sigma(ch))^\top\dot\sigma(ch)\mathrm{d} c ~=~ H(y_0),$$ even though now the integrand is no longer identically zero. The methods derived in this framework have been named {\em line integral methods}, due to the fact that the path $\sigma$ is defined so that the line integral in (\ref{lim}) vanishes. 
Line integral methods have been thoroughly analyzed in the monograph \cite{LIMBook2016} (see also the review paper \cite{BI2018}). Clearly, in the practical implementation of the methods, the integral is replaced by a quadrature of enough high-order, thus providing a fully discrete method, even though we shall not consider, for the moment, this aspect, for which the reader may refer to the above mentioned references.\footnote{This amounts to study HBVMs as {\em continuous-stage Runge-Kutta methods}, as it has been done in \cite{ABI2019_0,ABI2022_0}.} Interestingly enough, after some initial attempts to derive methods in this class \cite{IaPa2007,IaPa2008,IaTr2009,BIT2009,BIT2015}, a systematic way for their derivation was found in \cite{BIT2012_1}, which is based on a local Fourier expansion of the vector field in (\ref{Hprob}). In fact, by setting \begin{equation}\label{fy} f(y)=J\nabla H(y), \end{equation} and using hereafter, depending on the needs, either one or the other notation, one may rewrite problem (\ref{Hprob}), on the interval $[0,h]$, as: \begin{equation}\label{y1} \dot y(ch) = \sum_{j\ge0} P_j(c)\gamma_j(y), \qquad c\in[0,1], \qquad y(0) = y_0, \end{equation} where $\{P_j\}_{j\ge0}$ is the Legendre orthonormal polynomial basis on $[0,1]$, \begin{equation}\label{leg} P_i\in\Pi_i, \qquad \int_0^1 P_i(\tau)P_j(\tau)\mathrm{d} \tau = \delta_{ij}, \qquad i,j\ge0, \end{equation} $\Pi_i$, hereafter, denotes the vector space of polynomials of degree at most $i$, $\delta_{ij}$ is the Kronecker symbol, and \begin{equation}\label{gamj} \gamma_j(y) = \int_0^1P_j(\tau)f(y(\tau h))\mathrm{d}\tau, \qquad j \ge 0, \end{equation} are the corresponding Fourier coefficients. The solution of the problem is formally obtained, in terms of the unknown Fourier coefficients, by integrating both sides of Equation~(\ref{y1}): \begin{equation}\label{y} y(ch) = y_0 + h\sum_{j\ge0}\int_0^c P_j(x)\mathrm{d} x\, \gamma_j(y), \qquad c\in[0,1]. \end{equation} A polynomial approximation $\sigma\in\Pi_s$ can be formally obtained by truncating the previous series to finite sums: \begin{equation}\label{sig1} \dot\sigma(ch) = \sum_{j=0}^{s-1} P_j(c) \gamma_j(\sigma), \qquad c\in[0,1], \qquad \sigma(0)=y_0, \end{equation} and \begin{equation}\label{sig} \sigma(ch) = y_0 + h\sum_{j=0}^{s-1}\int_0^c P_j(x)\mathrm{d} x\, \gamma_j(\sigma), \qquad c\in[0,1], \end{equation} respectively, with $\gamma_j(\sigma)$ defined according to (\ref{gamj}), upon replacing $y$ with $\sigma$. Whichever the degree $s\ge1$ of the polynomial approximation, the following result holds true, where the same notation used above holds. \begin{theo} \label{Hcons} $H(y_1)=H(y_0)$. \end{theo} \noindent\underline{Proof}\quad In fact, one has: \begin{eqnarray*} \lefteqn{H(y_1)-H(y_0) ~=~ H(\sigma(h))-H(\sigma(0)) ~=~ h\int_0^1 \nabla H(\sigma(ch))^\top\dot\sigma(ch)\mathrm{d} c}\\ &=& h\int_0^1 \nabla H(\sigma(ch))^\top\sum_{j=0}^{s-1} P_j(c)\gamma_j(\sigma)\mathrm{d} c~ =~ h\sum_{j=0}^{s-1} \left[ \int_0^1 P_j(c)\nabla H(\sigma(ch))\mathrm{d} c\right]^\top \gamma_j(\sigma)\\ &=& h\sum_{j=0}^{s-1} \left[ \int_0^1 P_j(c)\nabla H(\sigma(ch))\mathrm{d} c\right]^\top J \left[ \int_0^1 P_j(c)\nabla H(\sigma(ch))\mathrm{d} c\right] ~=~ 0, \end{eqnarray*} due to the fact that $J$ is skew-symmetric.\,\mbox{~$\Box{~}$}\bigskip The next result states that the method has order $2s$. \begin{theo}\label{ord2s} $y_1-y(h)=O(h^{2s+1})$. 
\end{theo} \noindent\underline{Proof}\quad See \cite[Theorem~1]{BIT2012_1}.\,\mbox{~$\Box{~}$}\bigskip It must be emphasized that, when the method is used as a spectral method in time, the concept of order does not hold anymore. Instead, the following result can be proved, under regularity assumptions on $f$ in (\ref{fy}). \begin{theo}\label{spectral} Let $f(\sigma(t))$ be analytic in a closed ball of radius $r^*$ centered at $0$. Then, for all $h\in(0,h^*]$, with $h^*<r^*$, there exist $M=M(h^*)>0$ and $\rho>1$, $\rho\sim h^{-1}$, such that:\,\footnote{Hereafter, $|\cdot|$ will denote any convenient vector norm.} \begin{equation}\label{norma} \|\sigma-y\|~:=~\max_{c\in[0,1]}|\sigma(ch)-y(ch)| ~\le~ h M\rho^{-s}. \end{equation} \end{theo} \noindent\underline{Proof}\quad See \cite[Theorem~2]{ABI2020}.\,\mbox{~$\Box{~}$} \bigskip \begin{rem}\label{serve tutto} When using HBVMs as spectral methods in time, it is more important to consider the measure of the error (\ref{norma}) rather than the error at $h$. In fact, the timestep used, in such a case, may be large, or even huge, so that the whole approximation in the interval $(0,h)$ is needed. \end{rem} \begin{rem}\label{formachiusa} As previously stated, an important issue, when using numerical methods as spectral methods, stems from the fact that one needs the relevant coefficients to be computed for very high-order formulae. When some of these coefficients are evaluated numerically, as is the case for the abscissae of the Gauss-Legendre formulae, this may introduce errors that, even though small, may affect the accuracy of the resulting method. It is then useful to have formulae for which all the involved coefficients are known in closed form, and this motivates the present paper. \end{rem} \section{Chebyshev-Runge-Kutta methods}\label{ceby} An interesting way to interpret the approximation (\ref{sig1}) is that of looking for the coefficients $\gamma_0,\dots,\gamma_{s-1}$ in the polynomial approximation \begin{equation}\label{sig1_new} \dot\sigma(ch) = \sum_{j=0}^{s-1} P_j(c)\gamma_j, \qquad c\in[0,1],\qquad \sigma(0) = y_0\in\mathbb{R}^m, \end{equation} to\,\footnote{For the sake of simplicity, hereafter we shall assume $f$ to be an analytic function.} \begin{equation}\label{ode1} \dot y(ch) = f(y(ch)), \qquad c\in[0,1], \qquad y(0)=y_0\in\mathbb{R}^m, \end{equation} such that the residual function \begin{equation}\label{rch} r(ch) ~:=~ \dot\sigma(ch) - f(\sigma(ch)), \qquad c\in[0,1], \end{equation} be orthogonal to $\Pi_{s-1}$. That is, \begin{equation}\label{orto1} 0 ~=~ \int_0^1 P_i(c)r(ch)\mathrm{d} c ~=~ \int_0^1 P_i(c)\left( \dot\sigma(ch) - f(\sigma(ch))\right)\mathrm{d} c, \qquad i=0,\dots,s-1. \end{equation} By considering the orthogonality relations (\ref{leg}), this amounts to requiring that \begin{equation}\label{gami0} \gamma_i = \int_0^1 P_i(c) f\left( y_0+h\sum_{j=0}^{s-1}\int_0^cP_j(x)\mathrm{d} x\,\gamma_j\right)\mathrm{d} c, \qquad i=0,\dots,s-1,\end{equation} namely (\ref{gamj}) with $\sigma$ replacing $y$, and the new approximation given by \begin{equation}\label{ynew} y_1~:=~\sigma(h) ~\equiv~ y_0 + h\sum_{j=0}^{s-1}\int_0^1 P_j(x)\mathrm{d} x\,\gamma_j. \end{equation} Approximating the integrals in (\ref{gami0}) by a Gauss-Legendre quadrature of order $2k$, with $k\ge s$, then provides a HBVM$(k,s)$ method \cite{LIMBook2016,BI2018,BIT2010,BIT2012_1}. 
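For illustration, this construction can be coded in a few lines. The following sketch (not the implementation used for the numerical tests of Section~\ref{numtest}) applies a simple fixed-point iteration to the discrete counterpart of (\ref{gami0}), obtained by a Gauss-Legendre quadrature with $k$ nodes and the orthonormal shifted Legendre basis, to the harmonic oscillator; since the Hamiltonian is quadratic, the quadrature is exact and the energy error stays at round-off level, consistently with Theorem~\ref{Hcons}:
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def P(j, c):      # orthonormal shifted Legendre polynomial on [0,1]
    return np.sqrt(2*j + 1) * eval_legendre(j, 2*c - 1)

def intP(j, c):   # \int_0^c P_j(x) dx
    if j == 0:
        return c
    return (eval_legendre(j + 1, 2*c - 1)
            - eval_legendre(j - 1, 2*c - 1)) / (2*np.sqrt(2*j + 1))

# harmonic oscillator: H(q,p) = (q^2 + p^2)/2,  f(y) = J*grad H(y)
f = lambda y: np.array([y[1], -y[0]])
H = lambda y: 0.5*(y[0]**2 + y[1]**2)

s, k, h = 3, 6, 0.5                          # degree s, k quadrature nodes, timestep
x, w = np.polynomial.legendre.leggauss(k)    # Gauss-Legendre rule on [-1,1] ...
c, w = (x + 1)/2, w/2                        # ... mapped to [0,1]

y0, gamma = np.array([1.0, 0.0]), np.zeros((s, 2))
for _ in range(100):                         # fixed-point iteration for (gami0)
    Y = [y0 + h*sum(intP(j, ci)*gamma[j] for j in range(s)) for ci in c]
    gamma = np.array([sum(w[l]*P(i, c[l])*f(Y[l]) for l in range(k))
                      for i in range(s)])

y1 = y0 + h*sum(intP(j, 1.0)*gamma[j] for j in range(s))
print(H(y1) - H(y0))                         # ~ round-off for this quadratic H
\end{verbatim}
For larger timesteps or stiffer problems, the simple fixed-point iteration used here would typically be replaced by a Newton-type iteration.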
A generalization of the orthogonality requirement (\ref{gami0}) consists in considering a suitable weighting function \begin{equation}\label{wc} \omega(c)\ge 0, \qquad c\in[0,1], \qquad \int_0^1 \omega(c)\mathrm{d} c=1, \end{equation} and a polynomial basis orthonormal w.r.t. the induced product, \begin{equation}\label{legw} P_i\in\Pi_i, \qquad \int_0^1 \omega(\tau)P_i(\tau)P_j(\tau)\mathrm{d} \tau = \delta_{ij}, \qquad i,j\ge0, \end{equation} then requiring \begin{equation}\label{ortow} 0 ~=~ \int_0^1 \omega(c) P_i(c)r(ch)\mathrm{d} c ~\equiv~ \int_0^1 \omega(c)P_i(c)\left( \dot\sigma(ch) - f(\sigma(ch))\right)\mathrm{d} c, \qquad i=0,\dots,s-1. \end{equation} Consequently, now the coefficients of the polynomial approximation (\ref{sig1_new}) turn out to solve the following set of equations, \begin{equation}\label{gamiw} \gamma_i = \int_0^1 \omega(c)P_i(c) f\left( y_0+h\sum_{j=0}^{s-1}\int_0^cP_j(x)\mathrm{d} x\,\gamma_j\right)\mathrm{d} c, \qquad i=0,\dots,s-1,\end{equation} in place of (\ref{gami0}).\footnote{Clearly, (\ref{gami0}) is derived by considering the special case $\omega(c)\equiv1$.} In so doing, we obtain a polynomial approximation formally still given by (\ref{sig}), with the Fourier coefficients now defined as \begin{equation}\label{gamiws} \gamma_i ~\equiv~\gamma_i(\sigma) ~:=~ \int_0^1 \omega(c)P_i(c) f(\sigma(ch))\mathrm{d} c, \qquad i=0,\dots,s-1. \end{equation} Hereafter, we shall consider the weighting function \begin{equation}\label{cebyw} \omega(c) = \frac{1}{\pi \sqrt{c(1-c)}}, \qquad c\in(0,1), \end{equation} which satisfies (\ref{wc}) and provides the shifted and scaled Chebyshev polynomials of the first kind, i.e.,\footnote{As is usual, hereafter, $T_j(x)$, $j\ge0$, denote the Chebyshev polynomials of the first kind.} \begin{equation}\label{cebypol} P_0(c) ~=~ T_0(2c-1) ~\equiv~ 1, \qquad P_j(c) ~=~ \sqrt{2}\,T_j(2c-1), \quad j\ge1, \qquad c\in[0,1], \end{equation} satisfying (\ref{legw}). For later use, we also report the relations between such polynomials and their integrals \begin{eqnarray}\nonumber \int_0^c P_0(x)\mathrm{d} x &=& \frac{1}2\left[ \frac{P_1(c)}{\sqrt{2}}+P_0(c) \right],\\ \label{cebyint} \int_0^c P_1(x)\mathrm{d} x &=& \frac{1}8\left[ P_2(c)-\sqrt{2}P_0(c) \right],\\ \nonumber \int_0^c P_j(x)\mathrm{d} x &=& \frac{1}4\left[ \frac{P_{j+1}(c)}{j+1} - \frac{P_{j-1}(c)}{j-1} - \frac{(-1)^j2\sqrt{2}P_0(c)}{j^2-1}\right] , \qquad j\ge 2. \end{eqnarray} In particular, for $c=1$ one obtains, by considering that $P_0(1) = 1$ and $P_j(1) = \sqrt{2}$, for all $j\ge1$: \begin{equation}\label{intP1} \int_0^1 P_j(x)\mathrm{d} x = \left\{ \begin{array}{cl} 1, &~j~=~0,\\[2mm] 0, &~j\quad \mbox{odd,}\\[2mm] \frac{\sqrt{2}}{1-j^2}, &~j\quad\mbox{even}, \quad j\ge2. \end{array}\right. \end{equation} \subsection{Discretization} As is clear, the integrals involved in (\ref{gamiw})-(\ref{gamiws}) cannot be computed exactly, and need to be approximated by using a numerical method: the latter can be naturally chosen as the Gauss-Chebyshev interpolatory quadrature formula of order $2s$ on the interval $[0,1]$, with nodes \begin{equation}\label{ci} c_i = \frac{1+\cos\theta_i}2, \qquad \theta_i = \frac{2i-1}{2s}\pi, \qquad i=1,\dots,s, \end{equation} and weights $$\omega_i = \frac{1}s, \qquad i=1,\dots,s.$$ We recall that $P_s(c_i)=0$, $i=1,\dots,s$, due to the fact that (see (\ref{cebypol})) $x_i=\cos\theta_i$, $i=1,\dots,s$, are the roots of $T_s(x)$.
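The following short Python check (ours; purely illustrative) verifies numerically the relations just recalled: the discrete orthonormality of the basis (\ref{cebypol}) with respect to the quadrature rule based on the nodes (\ref{ci}), the fact that $P_s$ vanishes at those nodes, and the integral relations (\ref{cebyint})-(\ref{intP1}).

\begin{verbatim}
# Numerical sanity check (ours) of (cebypol), (cebyint), (intP1) and (ci).
import numpy as np
from numpy.polynomial.legendre import leggauss

def P(j, c):
    """Shifted and scaled Chebyshev polynomial P_j on [0,1], see (cebypol)."""
    c = np.asarray(c, dtype=float)
    scale = 1.0 if j == 0 else np.sqrt(2.0)
    return scale * np.cos(j * np.arccos(2.0 * c - 1.0))

s = 8
theta = (2 * np.arange(1, s + 1) - 1) * np.pi / (2 * s)
c = (1 + np.cos(theta)) / 2                       # nodes (ci); weights = 1/s

# discrete orthonormality: (1/s) sum_l P_i(c_l) P_j(c_l) = delta_ij, i,j < s
Pmat = np.array([P(j, c) for j in range(s)])      # (s, s)
print(np.allclose(Pmat @ Pmat.T / s, np.eye(s)))  # True
print(np.allclose(P(s, c), 0.0))                  # P_s vanishes at the nodes

xg, wg = leggauss(40)
def intP(j, cc):                                  # \int_0^cc P_j(x) dx
    return np.dot(wg * cc / 2, P(j, cc * (xg + 1) / 2))

j, cc = 4, 0.3                                    # third relation in (cebyint)
rhs = 0.25 * (P(j + 1, cc) / (j + 1) - P(j - 1, cc) / (j - 1)
              - (-1) ** j * 2 * np.sqrt(2) * P(0, cc) / (j ** 2 - 1))
print(np.isclose(intP(j, cc), rhs))               # True
print(np.isclose(intP(4, 1.0), np.sqrt(2) / (1 - 16)))    # (intP1), j = 4
\end{verbatim}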
Consequently, the Fourier coefficients (\ref{gamiw})-(\ref{gamiws}) will be approximated as: \begin{equation}\label{gamis} \hat\gamma_i = \frac{1}s\sum_{j=1}^s P_i(c_j)f\left( y_0+h\sum_{\ell=0}^{s-1}\int_0^{c_j}P_\ell(x)\mathrm{d} x\,\hat\gamma_\ell\right), \qquad i=0,\dots,s-1.\end{equation} In so doing, we obtain a new polynomial approximation, \begin{equation}\label{uch} u(ch) = y_0 + h\sum_{j=0}^{s-1}\int_0^cP_j(x)\mathrm{d} x\,\hat\gamma_j, \qquad c\in[0,1], \end{equation} in place of $\sigma$. Denoting by $Y_j$ the argument of $f$ in Equation (\ref{gamis}), and taking into account (\ref{uch}), we arrive at the $s$-stage Runge-Kutta method with stages \begin{equation}\label{Yi} Y_i ~:=~u(c_ih) ~=~ y_0 + h\sum_{j=1}^s\underbrace{\left(\frac{1}s\sum_{\ell=0}^{s-1} \int_0^{c_i} P_\ell(x)\mathrm{d} x P_\ell(c_j)\right)}_{=:\,a_{ij}} f(Y_j), \qquad i=1,\dots,s, \end{equation} with the new approximation given by: \begin{equation}\label{y1dig} y_1 ~:=~ u(h) = y_0 + h\sum_{i=1}^s \underbrace{\left(\frac{1}s\sum_{\ell=0}^{s-1} \int_0^1 P_\ell(x)\mathrm{d} x P_\ell(c_i)\right)}_{=:\,b_i} f(Y_i). \end{equation} Consequently, the abscissae and weights $(c_i,b_i)$, $i=1,\dots,s$, and the entries $a_{ij}$, $i,j=1,\dots,s$, define an $s$-stage Runge-Kutta method. An explicit expression of the abscissae has been given in (\ref{ci}). Let us now derive more refined expressions for the weights and the Butcher matrix. For this purpose, let us define the following matrices: \begin{equation}\label{PIO} {\cal P}_s = \Big( P_{j-1}(c_i)\Big)_{i,j=1,\dots,s}, \qquad {\cal I}_s = \left( \int_0^{c_i} P_{j-1}(x)\mathrm{d} x\right)_{i,j=1,\dots,s}, \qquad \Omega = \frac{1}s I_s, \quad \in~\mathbb{R}^{s\times s}, \end{equation} with $I_s$ the identity matrix, and \begin{equation}\label{Xs} X_s = \left(\begin{array}{ccccccc} \frac{1}2 & -\sqrt{2}\beta_2 & \alpha_3 & \alpha_4 &\dots & \alpha_{s-1} &\alpha_s\\[1mm] \sqrt{2}\beta_1 & 0 & -\beta_1 & \\ & \beta_2 & 0 & -\beta_2 & &O\\ & & \beta_3 &0 &-\beta_3\\ & & &\ddots &\ddots &\ddots\\ &O & & &\beta_{s-2} &0 &-\beta_{s-2}\\ & & & & &\beta_{s-1} & 0 \end{array}\right) \in\mathbb{R}^{s\times s}, \end{equation} with \begin{equation}\label{Xs1} \beta_j = \frac{1}{4j}, \quad j\ge 1,\qquad \alpha_j = (-1)^j8\sqrt{2}\beta_j\beta_{j-2}, \quad j\ge 3. \end{equation} The following results hold true. \begin{lem}\label{PIOX} With reference to (\ref{ci}) and the matrices (\ref{PIO}) and (\ref{Xs}), one has: $$ ({\cal P}_s)_{ij} = \left[\sqrt{2} + \delta_{j1}(1-\sqrt{2})\right] \cos (j-1)\theta_i, \qquad i,j=1,\dots,s,$$ $${\cal I}_s = {\cal P}_s X_s, \qquad {\cal P}_s^\top\Omega {\cal P}_s = I_s.$$ \end{lem} \noindent\underline{Proof}\quad The first statement follows from (\ref{cebypol}) and the well-known fact that $$T_{j-1}(\cos\theta_i) = \cos(j-1)\theta_i.$$ The second statement follows from (\ref{cebyint}), by considering that $P_s(c_i)=0$, $i=1,\dots,s$.
Finally, denoting hereafter by $e_i\in\mathbb{R}^s$ the $i$th unit vector, one has: \begin{eqnarray*} e_i^\top\left( {\cal P}_s^\top\Omega {\cal P}_s\right) e_j &=& \frac{1}s({\cal P}_s e_i)^\top({\cal P}_se_j) \\[1mm] &=& \frac{\left[\sqrt{2} + \delta_{i1}(1-\sqrt{2})\right]\left[\sqrt{2} + \delta_{j1}(1-\sqrt{2})\right]}s \sum_{k=1}^s\cos(i-1)\theta_k \cdot \cos(j-1)\theta_k \\ &=& \delta_{ij}, \end{eqnarray*} due to the fact that, for all $i,j=1,\dots,s$, $$\sum_{k=1}^s\cos(i-1)\theta_k \cdot \cos(j-1)\theta_k ~=~ \left\{ \begin{array}{cl} 0, & i\ne j,\\[1mm] s, & i=j=1,\\[1mm] \frac{s}2, &i=j\ge 2.\,\mbox{~$\Box{~}$} \end{array}\right.$$\smallskip Consequently, we can now state the following result. \begin{theo}\label{buttabc} The Butcher matrix of the Runge-Kutta method (\ref{Yi})-(\ref{y1dig}) is given by \begin{equation}\label{butA} A ~\equiv~(a_{ij})~ :=~ {\cal I}_s{\cal P}_s^\top\Omega ~\equiv~ {\cal P}_s X_s {\cal P}_s^\top\Omega ~\equiv~ {\cal P}_s X_s {\cal P}_s^{-1}. \end{equation} The corresponding weights are given by: \begin{equation}\label{bi} b_i := \frac{1}s\left[ 1 - 2\sum_{j=1}^{\lceil s/2\rceil-1} \frac{ \cos \frac{2i-1}s j\pi}{4j^2-1}\right], \qquad i=1,\dots,s. \end{equation} \end{theo} \noindent\underline{Proof}\quad From Lemma~\ref{PIOX}, the last two equalities in (\ref{butA}) easily follow. The first equality in (\ref{butA}) then follows from (\ref{Yi}), by taking into account (\ref{PIO}): $$a_{ij}~=~e_i^\top A e_j ~=~ e_i^\top {\cal I}_s{\cal P}_s^\top \Omega e_j ~=~ \frac{1}s (e_i^\top {\cal I}_s)(e_j^\top {\cal P}_s)^\top ~=~ \frac{1}s\sum_{\ell=0}^{s-1} \int_0^{c_i}P_\ell(x)\mathrm{d} x P_\ell(c_j),$$ which coincides with the formula given in (\ref{Yi}). The formula (\ref{bi}) of the weights follows by considering that from (\ref{y1dig}), taking into account (\ref{intP1}) and (\ref{ci}), one has: \begin{eqnarray*} b_i &=&\frac{1}s\sum_{\ell=0}^{s-1} \int_0^1 P_\ell(x)\mathrm{d} x P_\ell(c_i) ~=~ \left(\int_0^1P_0(c)\mathrm{d} c\,,\dots,\int_0^1P_{s-1}(c)\mathrm{d} c\right) {\cal P}_s^\top\Omega e_i\\ &=& \frac{1}s\left( 1,\,0,\,\frac{-\sqrt{2}}{2^2-1},\,0,\,\frac{-\sqrt{2}}{4^2-1},\,0,\,\frac{-\sqrt{2}}{6^2-1},\,\dots\right) \left(\begin{array}{c} 1\\ \sqrt{2}\cos\theta_i \\ \vdots \\ \sqrt{2}\cos(s-1)\theta_i\end{array}\right)\\[1mm] &=& \frac{1}s\left[ 1 - 2\sum_{j=1}^{\lceil s/2\rceil-1} \frac{ \cos 2j\theta_i}{4j^2-1}\right]~=~\frac{1}s\left[ 1 - 2\sum_{j=1}^{\lceil s/2\rceil-1} \frac{ \cos \frac{2i-1}s j\pi}{4j^2-1}\right].\,\mbox{~$\Box{~}$} \end{eqnarray*}\smallskip Moreover, concerning the weights of the method, the following result holds true.
\begin{theo}\label{bipos} The weights $\{b_i\}_{i=1,\dots,s}$ defined in (\ref{bi}) are all positive.\end{theo} \noindent\underline{Proof}\quad In fact, from (\ref{bi}), having fixed a given value of $s\ge 1$, one has: \begin{eqnarray*} b_i &=&\frac{1}s\left[ 1 - 2\sum_{j=1}^{\lceil s/2\rceil-1} \frac{ \cos \frac{2i-1}s j\pi}{4j^2-1}\right] ~\ge~ \frac{1}s\left[ 1 - \sum_{j=1}^{\lceil s/2\rceil-1} \frac{2}{4j^2-1}\right]\\ &=& \frac{1}s\left[ 1 - \sum_{j=1}^{\lceil s/2\rceil-1} \left(\frac{1}{2j-1}-\frac{1}{2j+1}\right)\right] ~=~\frac{1}s\left[ 1 -1 + \frac{1}{2(\lceil s/2\rceil-1)+1}\right]\\ &=& \frac{1}s\,\frac{1}{2\lceil s/2\rceil-1} ~\ge~\frac{1}s\,\frac{1}s ~=~\frac{1}{s^2}.\,\mbox{~$\Box{~}$} \end{eqnarray*}\smallskip In conclusion, we have derived the family of $s$-stage Runge-Kutta methods whose Butcher tableau, with reference to (\ref{ci}), (\ref{bi}), and (\ref{PIO})--(\ref{Xs1}), is given by: \begin{equation}\label{tabCCM} \begin{array}{c|c} {\bm{c}} & {\cal I}_s {\cal P}_s^\top\Omega\\ \hline \\[-3mm] & {\bm{b}}^\top \end{array} \quad \equiv \quad \begin{array}{c|c} {\bm{c}} & {\cal P}_s X_s{\cal P}_s^\top\Omega\\ \hline \\[-3mm] & {\bm{b}}^\top \end{array} \quad \equiv \quad \begin{array}{c|c} {\bm{c}} & {\cal P}_s X_s{\cal P}_s^{-1}\\ \hline \\[-3mm] & {\bm{b}}^\top \end{array} \end{equation} having set, as is usual, \begin{equation}\label{bc} {\bm{c}}=(c_1,\,\dots,\,c_s)^\top, \qquad {\bm{b}}=(b_1,\,\dots,\,b_s)^\top. \end{equation} In particular, the last form in (\ref{tabCCM}) can be regarded as a kind of $W$-transformation \cite{HW1996} of the method. \begin{rem} The Runge-Kutta methods in (\ref{tabCCM})-(\ref{bc}) coincide with the {\em Chebyshev Collocation Methods} derived by Costabile and Napoli in \cite{CoNa2001}. For the sake of brevity, we shall refer to the $s$-stage method as CCM$(s)$.\end{rem} \subsection{Collocation methods}\label{collo} As anticipated above, CCM$(s)$ methods are {\em collocation methods}. In fact, from (\ref{gamis}), (\ref{Yi}), and (\ref{tabCCM}), by setting $$Y ~=~ \left(\begin{array}{c} u(c_1h)\\ \vdots\\ u(c_sh)\end{array}\right) ~\equiv~ u({\bm{c}} h), \qquad \hat{{\bm{\gamma}}}=\left(\begin{array}{c}\hat\gamma_0\\ \vdots\\ \hat\gamma_{s-1}\end{array}\right),$$ the vectors of the stages and of the Fourier coefficients, respectively, one derives that: $$\dot u({\bm{c}} h) = {\cal P}_s\otimes I_m \hat{{\bm{\gamma}}},\qquad \hat{{\bm{\gamma}}} = {\cal P}_s^\top\Omega\otimes I_m f(u({\bm{c}} h)).$$ By combining the last two equations, and taking into account that, by virtue of Lemma~\ref{PIOX}, ${\cal P}_s{\cal P}_s^\top\Omega = I_s$, one eventually derives that $$\dot u({\bm{c}} h) = f(u({\bm{c}} h)),$$ i.e., $$\dot u(c_ih) = f(u(c_ih)), \qquad i=1,\dots,s.$$ Further, the collocation polynomial $u$ satisfies ~$u(0)=y_0$~ and ~$u(h)=:y_1$,~ as seen above.
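For illustration, the following Python fragment (ours; all variable names are arbitrary) assembles the CCM$(s)$ tableau (\ref{tabCCM})-(\ref{bc}) for a given $s$ and checks numerically the equivalent expressions of the Butcher matrix in (\ref{butA}), the closed formula (\ref{bi}) for the weights, and the standard consistency relations $A{\bm e}={\bm c}$ and ${\bm b}^\top{\bm e}=1$.

\begin{verbatim}
# A small sketch (ours) assembling the CCM(s) Butcher tableau and checking
# the equivalent forms in (butA) and the weights (bi).
import numpy as np
from numpy.polynomial.legendre import leggauss

def P(j, c):                                   # shifted Chebyshev basis (cebypol)
    c = np.asarray(c, dtype=float)
    return (1.0 if j == 0 else np.sqrt(2.0)) * np.cos(j * np.arccos(2 * c - 1))

s = 6
theta = (2 * np.arange(1, s + 1) - 1) * np.pi / (2 * s)
c = (1 + np.cos(theta)) / 2                                   # abscissae (ci)
Ps = np.array([[P(j, ci) for j in range(s)] for ci in c])     # P_s in (PIO)
xg, wg = leggauss(40)
Is = np.array([[np.dot(wg * ci / 2, P(j, ci * (xg + 1) / 2))  # int_0^{c_i} P_j
                for j in range(s)] for ci in c])

beta = lambda j: 1.0 / (4 * j)                 # entries (Xs1) of X_s in (Xs)
Xs = np.zeros((s, s))
Xs[0, 0], Xs[0, 1] = 0.5, -np.sqrt(2) * beta(2)
for j in range(3, s + 1):                      # alpha_j, j = 3, ..., s
    Xs[0, j - 1] = (-1) ** j * 8 * np.sqrt(2) * beta(j) * beta(j - 2)
for i in range(2, s + 1):                      # rows i = 2, ..., s
    Xs[i - 1, i - 2] = beta(i - 1) * (np.sqrt(2) if i == 2 else 1.0)
    if i < s:
        Xs[i - 1, i] = -beta(i - 1)

A = Is @ Ps.T / s                              # A = I_s P_s^T Omega
print(np.allclose(A, Ps @ Xs @ np.linalg.inv(Ps)))            # (butA)
print(np.allclose(A @ np.ones(s), c))          # consistency: A e = c

J = np.arange(1, int(np.ceil(s / 2)))          # closed-form weights (bi)
b = (1 - 2 * np.sum(np.cos(np.outer(2 * np.arange(1, s + 1) - 1, J) * np.pi / s)
                    / (4 * J ** 2 - 1), axis=1)) / s
I1 = np.array([np.dot(wg / 2, P(j, (xg + 1) / 2)) for j in range(s)])
print(np.allclose(b, Ps @ I1 / s), np.isclose(b.sum(), 1.0))  # True True
\end{verbatim}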
\subsection{Interesting implementation details}\label{itera} An interesting property of the Butcher matrix in the tableau (\ref{tabCCM}) derives from the fact that the multiplications $$\sqrt{s}\, {\cal P}_s^\top \Omega \otimes I_m V \equiv \frac{1}{\sqrt{s}}{\cal P}_s^\top\otimes I_m V, \qquad \mbox{and} \qquad \frac{1}{\sqrt{s}}{\cal P}_s\otimes I_m Z,$$ with $V,Z\in\mathbb{R}^{sm}$ given vectors, amount to the discrete cosine transform of $V$, ${\tt dct}(V)$, and the inverse discrete cosine transform of $Z$, ${\tt idct}(Z)$, respectively.\footnote{Here, we have used the names of the Matlab$^\copyright$ functions implementing the two transformations.} Consequently, a fixed-point iteration for computing the stages of the CCM$(s)$ method reads, by setting hereafter the vector ~${\bm{e}} = \left( 1,\, 1,\,\dots,\,1\right)^\top\in\mathbb{R}^s$: \begin{equation}\label{dct} Y^{(r+1)} = {\bm{e}}\otimes y_0 + h\,{\tt idct}\left( X_s\otimes I_m\,{\tt dct}(f(Y^{(r)}))\right), \qquad r=0,1,\dots, \end{equation} which can be advantageous for large values of $s$. In fact, by considering that the matrix $X_s$ in (\ref{Xs}) is {\em sparse}, the complexity of one iteration amounts to computing the vector field at the stage values, plus $O(ms\log(s))$ flops,\footnote{As is usual, 1 flop denotes a basic {\em fl}oating-point {\em op}eration.} whereas the standard implementation would require $2ms^2$ flops. \subsection{Generalizations}\label{chbvmks} We observe that a possible generalization of the collocation methods described above is that of using, in the discretization procedure of the integrals involved in (\ref{gamiw})-(\ref{gamiws}), a Gauss-Chebyshev interpolatory quadrature formula of order $2k$ on the interval $[0,1]$, with nodes and weights \begin{equation}\label{kges} c_i = \frac{1+\cos\theta_i}2, \qquad \theta_i = \frac{2i-1}{2k}\pi, \qquad \omega_i = \frac{1}k, \qquad i=1,\dots,k, \end{equation} for a convenient $k\ge s$. In such a case, for $k>s$, the methods are no longer collocation methods. This is analogous to what happens for HBVM$(k,s)$ methods \cite{LIMBook2016} that, when $k=s$, reduce to the $s$-stage Gauss-Legendre collocation methods but, for $k>s$, are no longer collocation methods. By choosing the abscissae (\ref{kges}), the matrices (\ref{PIO})-(\ref{Xs}) respectively become: \begin{equation}\label{PIO1} {\cal P}_s = \Big( P_{j-1}(c_i)\Big)_{\scriptsize\begin{array}{l}i=1,\dots,k\\j=1,\dots,s\end{array}}, ~ {\cal I}_s = \left( \int_0^{c_i} P_{j-1}(x)\mathrm{d} x\right)_{\scriptsize\begin{array}{l}i=1,\dots,k\\j=1,\dots,s\end{array}}\in\mathbb{R}^{k\times s}, \qquad \Omega = \frac{1}k I_k \in\mathbb{R}^{k\times k}, \end{equation} and (see (\ref{Xs})) $$ \hat X_s = \left(\begin{array}{ccccccc} \frac{1}2 & -\sqrt{2}\beta_2 & \alpha_3 & \alpha_4 &\dots & \alpha_{s-1} &\alpha_s\\[1mm] \sqrt{2}\beta_1 & 0 & -\beta_1 & \\ & \beta_2 & 0 & -\beta_2 & &O\\ & & \beta_3 &0 &-\beta_3\\ & & &\ddots &\ddots &\ddots\\ &O & & &\beta_{s-2} &0 &-\beta_{s-2}\\ & & & & &\beta_{s-1} & 0\\ \hline & & & & & &\beta_s \end{array}\right) \equiv \left(\begin{array}{c} X_s \\ \hline 0,\dots,0,\beta_s\end{array}\right) \in\mathbb{R}^{(s+1)\times s}.
$$ This latter matrix is such that, for $k>s$: \begin{equation}\label{IsPs} {\cal I}_s = {\cal P}_{s+1}\hat X_s, \end{equation} where ${\cal P}_{s+1}\in\mathbb{R}^{k\times (s+1)}$ is defined similarly to (\ref{PIO1}).\footnote{Clearly, when $k=s$, (\ref{IsPs}) reduces to ${\cal I}_s={\cal P}_sX_s$, according to Lemma~\ref{PIOX}.} Finally, the corresponding Butcher tableau becomes (compare with (\ref{tabCCM})-(\ref{bc})): $ \begin{array}{c|c} {\bm{c}} & {\cal I}_s {\cal P}_s^\top\Omega\\ \hline \\[-3mm] & {\bm{b}}^\top \end{array} \quad \equiv \quad \begin{array}{c|c} {\bm{c}} & {\cal P}_{s+1} \hat X_s{\cal P}_s^\top\Omega\\ \hline \\[-3mm] & {\bm{b}}^\top \end{array}\, ,\qquad {\bm{c}}=(c_1,\,\dots,\,c_k)^\top, \qquad {\bm{b}}=(b_1,\,\dots,\,b_k)^\top, $ with the weights given by $ b_i := \frac{1}k\left[ 1 - 2\sum_{j=1}^{\lceil s/2\rceil-1} \frac{ \cos \frac{2i-1}k j\pi}{4j^2-1}\right], \qquad i=1,\dots,k, $ in place of (\ref{bi}). Nevertheless, provided that $k\ge s$ holds, the analysis of such methods is similar to that of CCM$(s)$ methods (all the results in Section~\ref{anal} continue formally to hold), so that we shall not consider them further. \section{Analysis of the methods}\label{anal} In this section we carry out the analysis of the CCM$(s)$ method defined by the Butcher tableau (\ref{tabCCM})-(\ref{bc}). In particular, we study: \begin{enumerate} \item the symmetry of the method; \item its order of convergence; \item its linear stability; \item its accuracy when used as a spectral method in time. \end{enumerate} The analysis of items 2 and 3 uses different arguments from those provided in \cite{CoNa2004,CoNa2011}, whereas the analysis related to items 1 and 4 is novel. \subsection{Symmetry} To begin with, let us recall a few symmetry properties of the abscissae (\ref{ci}) and of the polynomials (\ref{cebypol}), which we state without proof (they derive from well-known properties of Chebyshev polynomials and abscissae): \begin{equation}\label{simprop} c_i = 1-c_{s-i+1}, \quad i=1,\dots,s, \qquad P_j(1-c) = (-1)^jP_j(c), \quad c\in[0,1], \quad j=0,\dots,s. \end{equation} For later use, we also set the vector \begin{equation}\label{Is1} {\cal I}_s(1) ~:=~ \left(\begin{array}{c} \int_0^1 P_0(x)\mathrm{d} x\\ \vdots \\ \int_0^1 P_{s-1}(x)\mathrm{d} x\end{array}\right) ~\equiv~ \left(\begin{array}{c} 1\\ 0\\ \frac{-\sqrt{2}}{2^2-1} \\ 0 \\ \frac{-\sqrt{2}}{4^2-1}\\ 0 \\ \frac{-\sqrt{2}}{6^2-1} \\ \vdots\end{array}\right), \end{equation} with the last equality following from (\ref{intP1}), such that, as seen in the proof of Theorem~\ref{buttabc}: \begin{equation}\label{bt} {\bm{b}}^\top = {\cal I}_s(1)^\top {\cal P}_s^\top\Omega. \end{equation} We also need to define the following matrices: \begin{equation}\label{ePD} P = \left(\begin{array}{ccc} & &1\\ &\udots\\ 1\end{array}\right), \quad D= \left(\begin{array}{cccccc} 1\\ &-1\\ &&1\\&&&-1\\&&&&\ddots\\&&&&&(-1)^{s-1}\end{array}\right) \in\mathbb{R}^{s\times s}. \end{equation} We can now state the following result. \begin{theo} The method (\ref{tabCCM})-(\ref{bc}) is symmetric.\end{theo} \noindent\underline{Proof}\quad Since $P{\bm{c}} = {\bm{e}}-{\bm{c}}$, see (\ref{simprop}), it is known that the symmetry of the method (\ref{tabCCM})-(\ref{bc}) is guaranteed, provided that \cite{GNI2006}: \begin{equation}\label{simme} P{\bm{b}} = {\bm{b}}, \qquad P\left({\cal I}_s{\cal P}_s^\top\Omega\right) P = {\bm{e}}\,{\bm{b}}^\top-{\cal I}_s{\cal P}_s^\top\Omega.
\end{equation} Let us start by proving the first equality which, in turn, amounts to showing that, by virtue of (\ref{bt}): $${\bm{b}}^\top ~=~ {\cal I}_s(1)^\top {\cal P}_s^\top\Omega P ~=~ {\bm{b}}^\top P.$$ In fact, by taking into account (\ref{PIO}), (\ref{simprop}), and (\ref{ePD}), one at first derives that: \begin{equation}\label{simPO} {\cal P}_s^\top\Omega P = D{\cal P}_s^\top\Omega. \end{equation} Moreover, by considering that $D{\cal I}_s(1)={\cal I}_s(1)$, since the even entries of ${\cal I}_s(1)$ are zero (see (\ref{Is1})), one has: $$ {\bm{b}}^\top ~=~ {\cal I}_s(1)^\top {\cal P}_s^\top\Omega ~=~ {\cal I}_s(1)^\top D {\cal P}_s^\top\Omega ~=~ {\cal I}_s(1)^\top {\cal P}_s^\top\Omega P ~=~ {\bm{b}}^\top P. $$ Consequently, the first part of the statement follows. Further, one obtains, by virtue of (\ref{simPO}): $$P({\cal I}_s{\cal P}_s^\top\Omega) P = P({\cal I}_s D^2 {\cal P}_s^\top\Omega) P = (P{\cal I}_s D)(D{\cal P}_s^\top\Omega P) =(P{\cal I}_s D){\cal P}_s^\top\Omega .$$ Therefore, from (\ref{bt}) it follows that the second statement in (\ref{simme}) holds true, provided that $$P{\cal I}_s D = {\bm{e}}\,{\cal I}_s(1)^\top-{\cal I}_s \qquad \Longleftrightarrow\qquad (P{\cal I}_sD)_{ij} = \int_{c_i}^1 P_{j-1}(\xi)\mathrm{d} \xi,\quad i,j=1,\dots,s.$$ From (\ref{PIO}) and (\ref{ePD}) one obtains, by virtue of (\ref{simprop}): \begin{eqnarray*} (P{\cal I}_sD)_{ij} &=& (-1)^{j-1}\int_0^{c_{s-i+1}}P_{j-1}(x)\mathrm{d} x ~=~\int_0^{1-c_i}(-1)^{j-1}P_{j-1}(x)\mathrm{d} x\\ &=& \int_0^{1-c_i}P_{j-1}(1-x)\mathrm{d} x ~=~ -\int_1^{c_i} P_{j-1}(\xi)\mathrm{d}\xi ~=~\int_{c_i}^1 P_{j-1}(\xi)\mathrm{d}\xi. \end{eqnarray*} Consequently, the statement is proved.\,\mbox{~$\Box{~}$}\bigskip \subsection{Stability}\label{Astab} Because of its symmetry, one concludes that for all $s\ge1$ the absolute stability region of a CCM$(s)$ method coincides with $\mathbb{C}^-$, the left half of the complex plane, provided that all the eigenvalues of $X_s$ have positive real part (in fact, $X_s$ is similar to the Butcher matrix, see (\ref{tabCCM})). We have numerically verified that all the eigenvalues of $X_s$ have positive real part for all $s$ up to $1000$. We can then conclude that CCM$(s)$ methods are {\em perfectly (or precisely)} $A$-stable for all $s\ge1$. \subsection{Order of convergence} Let us now study the order of convergence of a CCM$(s)$ method. For this purpose, we need the following preliminary results. \begin{lem}\label{symeven} The convergence order of a symmetric method is even.\end{lem} \noindent\underline{Proof}\quad See \cite[Theorem~3.2]{GNI2006}.\,\mbox{~$\Box{~}$}\bigskip \begin{lem}\label{Ghj} Assume that a function $G:[0,h]\rightarrow V$, with $V$ a vector space, admits a Taylor expansion at 0.
Then, with reference to the orthonormal basis (\ref{cebyw})-(\ref{cebypol}), one has: $$\psi_j ~:=~\int_0^1 \omega(c) P_j(c)G(ch)\mathrm{d} c ~=~ O(h^j), \qquad j=0,1,\dots.$$ \end{lem} \noindent\underline{Proof}\quad In fact, one has: \begin{eqnarray*} \psi_j &=& \int_0^1 \omega(c) P_j(c) G(ch)\mathrm{d} c ~=~ \int_0^1 \omega(c) P_j(c) \sum_{\ell\ge0}\frac{G^{(\ell)}(0)}{\ell!} (ch)^\ell \mathrm{d} c \\[1mm] &=& \sum_{\ell\ge0} \frac{G^{(\ell)}(0)}{\ell!} h^\ell \int_0^1 \omega(c) P_j(c)c^\ell \mathrm{d} c ~=~\sum_{\ell\ge j} \frac{G^{(\ell)}(0)}{\ell!} h^\ell \int_0^1 \omega(c) P_j(c)c^\ell \mathrm{d} c\\[1mm] &=&O(h^j).\,\mbox{~$\Box{~}$} \end{eqnarray*}\smallskip \begin{lem}\label{hgj} With reference to the approximate Fourier coefficients defined in (\ref{gamis}), one has: $$\hat\gamma_j=O(h^j), \qquad j=0,\dots,s-1.$$\end{lem} \noindent\underline{Proof}\quad In fact, from the previous Lemma~\ref{Ghj} applied to $G(ch)=f(u(ch))$, we know that (using the notation (\ref{gamiws})) \begin{equation}\label{bgamj} \gamma_j(u) ~=~ \int_0^1\omega(c)P_j(c)f(u(ch))\mathrm{d} c ~=~ O(h^j). \end{equation} On the other hand, the quadrature error, \begin{equation}\label{Dhj} \Delta_j(h) ~:=~ \gamma_j(u)-\hat\gamma_j ~=~ O(h^{2s-j}). \end{equation} Consequently, $$\hat\gamma_j ~=~ \gamma_j(u) -\Delta_j(h) ~=~ O(h^j), \qquad j=0,\dots,s-1.\mbox{~$\Box{~}$}$$\smallskip Finally, we need to recall some well-known perturbation results for ODE-IVPs. For this purpose, let us denote ~$y(t,\xi,\eta)$~ the solution of the problem (compare with (\ref{ode1})), $$\dot y = f(y), \qquad t>\xi, \qquad y(\xi)=\eta\in\mathbb{R}^m.$$ Then: $ \frac{\partial}{\partial t}y(t,\xi,\eta)~=~ f(y(t,\xi,\eta)),\qquad \frac{\partial}{\partial \xi}y(t,\xi,\eta) ~=~ -\Phi(t,\xi,\eta) f(\eta), $ having set $$\Phi(t,\xi,\eta) ~\equiv~ \frac{\partial}{\partial \eta}y(t,\xi,\eta),$$ the solution of the variational problem $$\dot\Phi(t,\xi,\eta) ~=~ f'(y(t,\xi,\eta))\Phi(t,\xi,\eta), \qquad t>\xi, \qquad \Phi(\xi,\xi,\eta)=I_m.$$ We can now state the convergence result. \begin{theo}\label{ords} With reference to the polynomial approximation (\ref{uch}) defined by the CCM$(s)$ method (\ref{tabCCM})-(\ref{bc}) to the solution of (\ref{ode1}), one has: $$\|u-y\| = O(h^{s+1}), \qquad y_1-y(h) = O(h^{r+1}),$$ with ~$r=s$~ if ~$s$~ is even, or ~$r=s+1$,~ otherwise. 
\end{theo} \noindent\underline{Proof}\quad By taking into account the above arguments, and using the notation (\ref{bgamj})-(\ref{Dhj}), one has: \begin{eqnarray*} u(ch)-y(ch)&=& y(ch,ch,u(ch))-y(ch,0,u(0)) ~=~ \int_0^{ch} \frac{\mathrm{d}}{\mathrm{d} t} y(ch,t,u(t))\,\mathrm{d} t\\[1mm] &=& \int_0^{ch}\left.\frac{\partial}{\partial \xi}y(ch,\xi,u(t))\right|_{\xi=t}+\left.\frac{\partial}{\partial \eta}y(ch,t,\eta)\right|_{\eta=u(t)}\dot u(t)\,\mathrm{d} t \\ \end{eqnarray*}\begin{eqnarray*} &=& h\int_0^{c}\left.\frac{\partial}{\partial \xi}y(ch,\xi, u(\tau h))\right|_{\xi=\tau h}+\left.\frac{\partial}{\partial \eta}y(ch,\tau h,\eta)\right|_{\eta=u(\tau h)}\dot u(\tau h)\,\mathrm{d} \tau \\[1mm] &=&-h\int_0^c \Phi(ch,\tau h,u(\tau h))\left[ f(u(\tau h))-\sum_{j=0}^{s-1}P_j(\tau)\hat\gamma_j\right]\mathrm{d}\tau \\[1mm] &=&-h\int_0^c \Phi(ch,\tau h,u(\tau h))\left[ \sum_{j\ge0} P_j(\tau)\gamma_j(u)-\sum_{j=0}^{s-1}P_j(\tau)\Big(\gamma_j(u)-\Delta_j(h)\Big)\right]\mathrm{d}\tau \\[1mm] &=&-h\int_0^c \Phi(ch,\tau h,u(\tau h))\left[ \sum_{j\ge s} P_j(\tau)\gamma_j(u)+\sum_{j=0}^{s-1}P_j(\tau)\Delta_j(h)\right]\mathrm{d}\tau \\[1mm] &=&-h\int_0^c \Phi(ch,\tau h,u(\tau h))\left[O(h^s)+O(h^{s+1})\right]\mathrm{d}\tau ~=~ O(h^{s+1}). \end{eqnarray*} Consequently, the first part of the statement holds. The last part of the statement then follows from Lemma~\ref{symeven}, by taking $c=1$.\,\mbox{~$\Box{~}$}\bigskip \subsection{Analysis as spectral method} As was pointed out in the introduction, one main reason for introducing CCMs is their use as spectral methods in time. This strategy allows for using large (sometimes huge) timesteps $h$. In this context, the classical notion of order of convergence, based on the fact that $h\rightarrow0$, does not apply anymore, so that a different analysis is needed. We shall here follow similar steps as those described in \cite{ABI2020} for spectral HBVMs. To begin with, we recall that, for the modified Chebyshev basis (\ref{cebyw})-(\ref{cebypol}) the following properties hold true:\footnote{The second property has already been used in the proof of Lemma~\ref{Ghj}.} \begin{equation}\label{prop} \forall j\ge0:\quad \|P_j\|\le\sqrt{2}, \qquad \int_0^1 \omega(c)P_j(c)c^\ell\mathrm{d} c = 0, \quad \ell=0,\dots,j-1. \end{equation} Moreover, we recall that, with reference to the notation (\ref{gamiws}):\footnote{Also this property has already been used in the proof of Theorem~\ref{ords}.} \begin{equation}\label{fych} f(y(ch)) = \sum_{j\ge0} P_j(c)\gamma_j(y), \qquad c\in[0,1]. \end{equation} We also state the following preliminary result. \begin{lem}\label{Qj} With reference to (\ref{cebyw}) and to the polynomials (\ref{cebypol}), let us define, for a given $\rho>1$: $$Q_j(\xi) ~=~ \int_0^1 \omega(c)\frac{P_j(c)}{\xi-c}\mathrm{d} c, \qquad \xi\in{\cal C}_\rho(0),$$ having set ~${\cal C}_\rho(0)=\{z\in\mathbb{C}:|z|=\rho\}$. Then \begin{equation}\label{normro} \|Q_j\|_\rho ~:=~ \max_{\xi\in{\cal C}_\rho(0)}|Q_j(\xi)| ~\le~ \sqrt{2}\frac{\rho^{-j}}{(\rho-1)}.
\end{equation} \end{lem} \noindent\underline{Proof}\quad By taking into account (\ref{prop}), for $|\xi|=\rho>1$ one has: \begin{eqnarray*} Q_j(\xi) &=&\int_0^1 \omega(c)\frac{P_j(c)}{\xi-c}\mathrm{d} c~=~\xi^{-1} \int_0^1 \omega(c)\frac{P_j(c)}{1-\xi^{-1}c}\mathrm{d} c\\[1mm] &=& \xi^{-1} \int_0^1 \omega(c)P_j(c)\sum_{\ell\ge0}\xi^{-\ell}c^\ell\mathrm{d} c ~=~ \xi^{-1}\sum_{\ell\ge j}\xi^{-\ell} \int_0^1 \omega(c)P_j(c)c^\ell\mathrm{d} c\\[1mm] &=& \xi^{-j-1}\sum_{\ell\ge 0}\xi^{-\ell} \int_0^1 \omega(c)P_j(c)c^{\ell+j}\mathrm{d} c. \end{eqnarray*} Evaluating the norms, one has, again by virtue of (\ref{prop}): $$\left|\int_0^1 \omega(c)P_j(c)c^{\ell+j}\mathrm{d} c\right| ~\le~ \|P_j\| \int_0^1 \omega(c)\mathrm{d} c ~\le~ \sqrt{2}.$$ Consequently, one obtains: $$|Q_j(\xi)|~\le~ \sqrt{2}\rho^{-j-1}\sum_{\ell\ge0} \rho^{-\ell} ~=~ \sqrt{2}\rho^{-j-1}\frac{1}{1-\rho^{-1}} ~=~ \sqrt{2}\frac{\rho^{-j}}{\rho-1}.\,\mbox{~$\Box{~}$}$$\smallskip Further, by setting hereafter, for $r>0$, $${\cal B}_r(0) = \{z\in\mathbb{C}:|z|\le r\},$$ we have the following straightforward result. \begin{lem}\label{ghlem} Let $g(z)$ be analytical in ${\cal B}_{r^*}(0)$, for a given $r^*>0$. Then, for all $h\in(0,h^*]$, with $h^*<r^*$, \begin{equation}\label{ghz} g_h(\xi) ~:=~ g(\xi h) \end{equation} is analytical in ${\cal B}_\rho(0)$, with: \begin{equation}\label{rostar} \rho ~=~ \rho(h) ~:=~ \frac{r^*}h ~\ge~ \frac{r^*}{h^*} ~=:~ \rho^*~>~1. \end{equation} \end{lem} We can now state the following result. \begin{theo}\label{grhoj} With reference to (\ref{ode1}), let assume that the function \begin{equation}\label{gdiz} g(z)~:=~f(y(z)) \end{equation} and the timestep $h$ satisfy the hypotheses of Lemma~\ref{ghlem}. Then there exists $\kappa=\kappa(h^*)$ and $\rho>1$, $\rho\sim h^{-1}$, such that \begin{equation}\label{gjkro} |\gamma_j(y)|~\le~ \kappa \rho^{-j}, \qquad j\ge0. \end{equation} \end{theo} \noindent\underline{Proof}\quad By taking into account (\ref{gamiws}), (\ref{fych}), and (\ref{ghz})-(\ref{rostar}), by virtue of Lemma~\ref{Qj} one has: \begin{eqnarray*} \gamma_j(y) &=& \int_0^1 \omega(c)P_j(c)f(y(ch))\mathrm{d} c ~\equiv~\int_0^1 \omega(c)P_j(c)g_h(c)\mathrm{d} c\\[1mm] &=&\int_0^1 \omega(c)P_j(c) \left[\frac{1}{2\pi i}\int_{{\cal C}_\rho(0)} \frac{g_h(\xi)}{\xi-c}\mathrm{d}\xi\right] \mathrm{d} c ~=~ \frac{1}{2\pi i}\int_{{\cal C}_\rho(0)} g_h(\xi) \left[\int_0^1\omega(c)\frac{P_j(c)}{\xi-c}\mathrm{d} c \right]\mathrm{d} \xi \\[1mm] &\equiv& \frac{1}{2\pi i}\int_{{\cal C}_\rho(0)} g_h(\xi) Q_j(\xi)\mathrm{d} \xi. \end{eqnarray*} Consequently (see (\ref{normro})), $$|\gamma_j(y)|~\le~ \frac{\rho}{2\pi} \|g_h\|_\rho\|Q_j\|_\rho.$$ From Lemma~\ref{Qj} we know that $$\|Q_j\|_\rho ~\le~ \sqrt{2} \frac{\rho^{-j}}{\rho-1}.$$ Further, $$\|g_h\|_\rho ~=~ \max_{|z|=\rho}|g_h(z)| ~\le~ \max_{|z|\le \rho^*}|g_h(z)| ~\equiv~ \max_{|z|\le r^*}|g(z)| ~=:~ \|g\|.$$ Consequently, one eventually derives: $$|\gamma_j(y)|~\le~ \frac{\rho\sqrt{2}\|g\|}{2\pi(\rho-1)} \rho^{-j} ~\le~ \overbrace{\frac{\rho^*\sqrt{2}\|g\|}{2\pi(\rho^*-1)}}^{=:\,\kappa} \rho^{-j}, $$ from which the statement follows by taking into account that (see (\ref{rostar})), $\rho^*=r^*/h^*$.\,\mbox{~$\Box{~}$}\bigskip Finally, we have the following result. \begin{theo}\label{valeancora} Let us consider the polynomial approximation (\ref{sig}) and (\ref{cebyw})-(\ref{cebypol}), with the Fourier coefficients given by (\ref{gamiws}). Then, Theorem~\ref{spectral} continues formally to hold. 
\end{theo} \medskip \noindent\underline{Proof}\quad Following similar steps as in the proof of Theorem~\ref{ords}, one has: \begin{eqnarray*} \sigma(ch)-y(ch)&=& y(ch,ch,\sigma(ch))-y(ch,0,\sigma(0)) ~=~ \int_0^{ch} \frac{\mathrm{d}}{\mathrm{d} t} y(ch,t,\sigma(t))\,\mathrm{d} t\\[1mm] &=& \int_0^{ch}\left.\frac{\partial}{\partial \xi}y(ch,\xi,\sigma(t))\right|_{\xi=t}+\left.\frac{\partial}{\partial \eta}y(ch,t,\eta)\right|_{\eta=\sigma(t)}\dot \sigma(t)\,\mathrm{d} t \\[1mm] \hspace{2.2cm} &=& h\int_0^{c}\left.\frac{\partial}{\partial \xi}y(ch,\xi, \sigma(\tau h))\right|_{\xi=\tau h}+\left.\frac{\partial}{\partial \eta}y(ch,\tau h,\eta)\right|_{\eta=\sigma(\tau h)}\dot \sigma(\tau h)\,\mathrm{d} \tau \\[1mm] &=&-h\int_0^c \Phi(ch,\tau h,\sigma(\tau h))\left[ f(\sigma(\tau h))-\sum_{j=0}^{s-1}P_j(\tau)\gamma_j(\sigma)\right]\mathrm{d}\tau \\[1mm] \end{eqnarray*}\begin{eqnarray*} &=&-h\int_0^c \Phi(ch,\tau h,\sigma(\tau h))\left[ \sum_{j\ge0} P_j(\tau)\gamma_j(\sigma)-\sum_{j=0}^{s-1}P_j(\tau)\gamma_j(\sigma)\right]\mathrm{d}\tau \\[1mm] &=&-h\int_0^c \Phi(ch,\tau h,\sigma(\tau h))\sum_{j\ge s} P_j(\tau)\gamma_j(\sigma)\mathrm{d}\tau. \end{eqnarray*} Consequently, from (\ref{prop}) and Theorem~\ref{grhoj}, also considering (\ref{rostar}), one derives: \begin{eqnarray*} \|\sigma-y\| &\le& h\sqrt{2}\kappa \max_{c,\tau\in[0,1]}\|\Phi(ch,\tau h,\sigma(\tau h))\| \sum_{j\ge s}\rho^{-j} \\ &\le& h\underbrace{\sqrt{2}\kappa \max_{c,\tau\in[0,1]}\|\Phi(ch^*,\tau h^*,\sigma(\tau h^*))\| \frac{1}{1-(\rho^*)^{-1}}}_{=:\,M}\rho^{-s} ~\equiv~ hM\rho^{-s}.\,\mbox{~$\Box{~}$} \end{eqnarray*}\medskip \begin{rem}\label{uedsig} When a CCM$(s)$ method is used as a spectral method in time, one obviously assumes that the value of $s$ is large enough that the quadrature error falls below the round-off error level. In other words, the polynomials $\sigma$ and $u$ are indistinguishable, within the round-off error level. \end{rem} \section{Numerical tests}\label{numtest} We here report a few numerical tests to assess the theoretical findings. They have been carried out on a 3 GHz Intel Xeon W10 core computer with 64GB of memory, running Matlab$^\copyright$ ~2020b. We consider the Kepler problem, \begin{equation}\label{kepl} \dot q_i = p_i, \qquad \dot p_i = \frac{-q_i}{(q_1^2+q_2^2)^{\frac{3}2}}, \qquad i=1,2, \end{equation} that, when considering the trajectory starting at \begin{equation}\label{kepl0} q_1(0) = 0.4, \qquad q_2(0) = p_1(0) = 0, \qquad p_2(0)=2, \end{equation} has a periodic solution of period $2\pi$. In Table~\ref{tab1} we list the numerical results obtained by solving this problem on one period by means of the CCM$(s)$ method, $s=1,\dots,4$, using timestep $h=2\pi/n$, namely, the error {\em err} after one period and the corresponding rate of convergence. As one may see, the listed results agree with what is stated in Theorem~\ref{ords}, in particular the fact that the order of the methods is even, due to their symmetry.
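As an illustration of how such results can be reproduced, we sketch below a plain Python implementation (ours; the experiments reported in the tables were carried out in Matlab) of the CCM$(s)$ method, with the stage equations (\ref{Yi}) solved by a simple fixed-point iteration, applied to problem (\ref{kepl})-(\ref{kepl0}) over one period. Since the exact solution is $2\pi$-periodic, the error after one period is just the distance of the numerical solution from the initial condition; for $s=4$ it should decrease roughly by a factor $2^4$ each time $n$ is doubled, consistently with the trend in Table~\ref{tab1}.

\begin{verbatim}
# A plain Python sketch (ours) of CCM(s) applied to the Kepler problem
# (kepl)-(kepl0) on [0, 2*pi]; compare with the trend reported in Table 1.
import numpy as np
from numpy.polynomial.legendre import leggauss

def P(j, c):
    c = np.asarray(c, dtype=float)
    return (1.0 if j == 0 else np.sqrt(2.0)) * np.cos(j * np.arccos(2 * c - 1))

def ccm_tableau(s):
    theta = (2 * np.arange(1, s + 1) - 1) * np.pi / (2 * s)
    c = (1 + np.cos(theta)) / 2
    Ps = np.array([[P(j, ci) for j in range(s)] for ci in c])
    xg, wg = leggauss(40)
    Is = np.array([[np.dot(wg * ci / 2, P(j, ci * (xg + 1) / 2))
                    for j in range(s)] for ci in c])
    I1 = np.array([np.dot(wg / 2, P(j, (xg + 1) / 2)) for j in range(s)])
    return Is @ Ps.T / s, Ps @ I1 / s            # Butcher matrix A, weights b

def kepler(y):
    q1, q2, p1, p2 = y
    r3 = (q1 * q1 + q2 * q2) ** 1.5
    return np.array([p1, p2, -q1 / r3, -q2 / r3])

def integrate(s, n, T=2 * np.pi, iters=100):
    A, b = ccm_tableau(s)
    h = T / n
    y = np.array([0.4, 0.0, 0.0, 2.0])           # initial condition (kepl0)
    for _ in range(n):
        Y = np.tile(y, (s, 1))
        for _ in range(iters):   # fixed-point iteration (needs h small enough)
            Y = y + h * A @ np.array([kepler(Yi) for Yi in Y])
        y = y + h * b @ np.array([kepler(Yi) for Yi in Y])
    return y

y0 = np.array([0.4, 0.0, 0.0, 2.0])
for n in (200, 400, 800):
    print(n, np.linalg.norm(integrate(s=4, n=n) - y0))   # ~ O(h^4) decay
\end{verbatim}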
\begin{table}[t] \caption{Numerical results when solving problem (\ref{kepl})-(\ref{kepl0}) on the interval $[0,2\pi]$ with timestep $h=2\pi/n$.}\label{tab1} \centering \begin{tabular}{|r|cc|cc|cc|cc|} \hline &\multicolumn{2}{c|}{$s=1$}&\multicolumn{2}{c|}{$s=2$}&\multicolumn{2}{c|}{$s=3$}&\multicolumn{2}{c|}{$s=4$}\\ \hline $n$ & \em err & rate & \em err & rate &\em err & rate & \em err & rate \\ \hline 50 & 2.98e+0 & --- & 2.24e+0 & --- & 7.36e-03 & --- & 7.33e-03 & --- \\ 100 & 1.66e+0 & 0.8 & 9.45e-01 & 1.2 & 6.15e-04 & 3.6 & 4.46e-04 & 4.0 \\ 200 & 5.23e-01 & 1.7 & 2.53e-01 & 1.9 & 4.03e-05 & 3.9 & 2.78e-05 & 4.0 \\ 400 & 1.34e-01 & 2.0 & 6.34e-02 & 2.0 & 2.55e-06 & 4.0 & 1.73e-06 & 4.0 \\ 800 & 3.35e-02 & 2.0 & 1.58e-02 & 2.0 & 1.60e-07 & 4.0 & 1.08e-07 & 4.0 \\ 1600 & 8.38e-03 & 2.0 & 3.96e-03 & 2.0 & 1.00e-08 & 4.0 & 6.77e-09 & 4.0 \\ \hline \end{tabular} \bigskip \caption{Numerical results when solving problem (\ref{kepl})-(\ref{kepl0}) on the interval $[0,20\pi]$ by using the CCM(50) method with timestep $h=2\pi/n$.}\label{tab2} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $n$ & 3 & 6 & 9 & 12 & 15 \\ \hline period & \em err & \em err & \em err & \em err & \em err \\ \hline 1 & 5.04e-12 & 9.14e-14 & 7.37e-14 & 1.27e-13 & 7.44e-14 \\ 2 & 9.72e-12 & 7.96e-14 & 1.24e-13 & 3.05e-13 & 5.72e-14 \\ 3 & 1.34e-11 & 3.54e-13 & 2.81e-13 & 7.23e-13 & 5.49e-14 \\ 4 & 1.90e-11 & 3.00e-13 & 7.80e-13 & 1.27e-12 & 1.49e-13 \\ 5 & 2.55e-11 & 4.69e-13 & 1.34e-12 & 2.30e-12 & 2.77e-13 \\ 6 & 3.04e-11 & 5.68e-13 & 1.75e-12 & 3.25e-12 & 3.59e-13 \\ 7 & 3.44e-11 & 2.13e-13 & 1.73e-12 & 4.23e-12 & 2.37e-13 \\ 8 & 3.94e-11 & 2.72e-13 & 1.45e-12 & 5.19e-12 & 2.04e-13 \\ 9 & 4.38e-11 & 6.93e-13 & 1.18e-12 & 6.15e-12 & 3.29e-13 \\ 10 & 4.77e-11 & 1.54e-12 & 8.38e-13 & 7.01e-12 & 5.00e-13 \\ \hline \end{tabular} \end{table} Next, we consider the use of the methods as a spectral methods in time. In Figure~\ref{fig1} we plot the values of $|\gamma_j|$, $j=0,\dots,29$, for the CCM(30) method used with timestep $h=2\pi/n$, $n=5,10,15,20$, for the first integration step. As one may see, the Fourier coefficients decrease exponentially, with $j$, and the basis of the exponential decreases with $h$ (i.e., they decrease as $\rho^j$, with $\rho\sim h^{-1}$). Moreover, when using $h=\pi/10$ the last Fourier coefficients reach the round-off error level, thus stagnating. \begin{figure}[t] \centerline{ \includegraphics[width=12cm]{roj.eps}} \caption{Plot of $|\gamma_j|$, $j=0,\dots,29$, for the CCM$(30)$ method, for the first integration step of problem (\ref{kepl})-(\ref{kepl0}) by using the given timestep $h$.}\label{fig1} \end{figure} Now, let us solve problem (\ref{kepl})-(\ref{kepl0}) on ten periods, by using the CCM(50) method with timestep $h=2\pi/n$, $n=3,6,9,12,15$. The errors ({\em err}), at the end of each period, are listed in Table~\ref{tab2}. As one my see, the errors are uniformly small, as is expected from a spectral method. At last, since problem (\ref{kepl})-(\ref{kepl0}) is Hamiltonian with Hamiltonian $$H(q,p) = \frac{1}2 \|p\|_2^2 - \frac{1}{\|q\|_2},$$ let us consider the solution of problem (\ref{kepl})-(\ref{kepl0}) on the interval $[0,10^3]$ with timestep $h=0.1$, by using the CCM(3) and CCM(30) methods. The former method is only symmetric, whereas the second one is spectral, for the considered timestep. 
On the left of Figure~\ref{fig2} is the plot of the Hamiltonian error for the CCM(3) method: though the method is not energy-conserving, the Hamiltonian error remains bounded, since the method is symmetric and the problem is reversible. On the right of the same figure is the plot of the Hamiltonian error for the CCM(30) method: in this case, the spectral accuracy results in the practical conservation of the Hamiltonian (as well as of the additional constants of motion, such as the angular momentum and the Lenz vector, not displayed here). It must be emphasized that the execution times for running the two methods (implemented in the same code) are comparable: 2.9 sec vs. 3.7 sec. Considering that a similar behavior is observed when solving other classes of problems (see also \cite{ABI2020,BIMR2019} for a related analysis on HBVMs), it is then clear that the use of CCMs as spectral methods can be very effective. \begin{figure}[t] \centerline{ \includegraphics[width=7.5cm]{H03.eps}\quad \includegraphics[width=7.5cm]{H30.eps}} \caption{Plot of the Hamiltonian error when solving the problem (\ref{kepl})-(\ref{kepl0}) with timestep $h=0.1$ on the interval $[0,10^3]$ by using the CCM(3) method (left plot) and the CCM(30) method (right plot).}\label{fig2} \end{figure} \section{Conclusions}\label{fine} In this paper we have re-derived, in a novel framework, the Chebyshev collocation methods of Costabile and Napoli \cite{CoNa2001}, thus providing a more comprehensive analysis of the methods w.r.t. \cite{CoNa2001,CoNa2004}, in particular when used as spectral methods in time. The methods, derived by considering an expansion of the vector field along the Chebyshev orthonormal polynomial basis, turn out to be symmetric and perfectly $A$-stable, can have arbitrarily high order, and the entries of their Butcher tableau can be written in closed form. Their use as spectral methods in time has also been studied. A few numerical tests confirm the theoretical findings. As a further direction of investigation, the efficient implementation of the methods will be considered, as well as their application to different kinds of differential problems. \subsection*{Declarations} The authors declare no conflict of interest. \subsection*{Acknowledgements} The authors wish to thank the {\tt mrSIR} project \cite{mrSIR} for the financial support.
\section{Solving the equations of motion} Integrating the equations \begin{align} i\partial_t c_{p,R}(t) &= (v_Fp + e V_b(t) ) \mbox{ } c_{p,R}(t) + \frac{ \Gamma }{L} \mbox{ } c_{.,L}(t) \nonumber \\ i\partial_t c_{p,L}(t) &= -v_Fp \mbox{ } c_{p,L}(t) + \frac{ \Gamma }{L} \mbox{ } c_{.,R}(t) \mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ }\mbox{ } \label{eqom} \end{align} we get \begin{align} c_{p,R}(t)\mbox{ } =\mbox{ } c_{p,R}(t_0) \mbox{ }e^{ - i \int _{t_0 }^t (p v_F+e V_{b}(s))ds }-i \frac{\Gamma}{L} \int _{t_0 }^t e^{ - i \int _{ \tau }^t (p v_F+e V_{b}(s))ds } \mbox{ } c_{L,.}(\tau )\mbox{ } d\tau \label{A1} \end{align} and \begin{align} c_{p,L}(t) \mbox{ } = \mbox{ } c_{p,L}(t_0) \mbox{ }e^{ i \int _{t_0 }^t (p v_F)ds }-i \frac{\Gamma}{L}\int _{t_0 }^t e^{-i p v_F (s-t)} \mbox{ } c_{R,.}(s) \mbox{ } ds \label{A2} \end{align} where $t_{0}$ that appears in the integrating factor is the time that specifies the initial conditions. Setting, \begin{align} U(\tau,t) \equiv e^{ - i \int _{ \tau }^t e V_{b}(s)ds } \end{align} it follows that, \begin{align} c_{p,R}(t)\mbox{ } = \mbox{ } c_{p,R}(t_0) \mbox{ }e^{ - i (t-t_0) p v_F } \mbox{ } U(t_0,t) -i \frac{\Gamma}{L} \int _{t_0 }^t e^{ - i (t-\tau) p v_F } \mbox{ } U(\tau,t) \mbox{ } c_{L,.}(\tau )\mbox{ } d\tau \end{align} and \begin{align} c_{p,L}(t) \mbox{ } = \mbox{ } c_{p,L}(t_0) \mbox{ }e^{ i (t-t_0) p v_F } -i \frac{\Gamma}{L}\int _{t_0 }^t e^{-i p v_F (s-t)} \mbox{ } c_{R,.}(s) \mbox{ } ds \end{align} Performing a Fourier transform and writing in real space and time \begin{align} \psi_{R}(x,t)\mbox{ } =\mbox{ } \psi_{R}(x-v_F(t-t_0),t_0) \mbox{ } U(t_0,t) -i \Gamma \int _{t_0 }^t \delta(x- v_F (t-\tau)) \mbox{ } U(\tau,t) \mbox{ } \psi_{L}(0,\tau )\mbox{ } d\tau \end{align} and \begin{align} \psi_{L}(x,t)\mbox{ } = \mbox{ } \psi_{L}(x+v_F(t-t_0),t_0) -i \Gamma \int _{t_0 }^t \delta(x- v_F (s-t)) \mbox{ }\psi_{R}(0,s) \mbox{ } ds \end{align} this gives, \begin{align} \psi_{R}(x,t)\mbox{ } =& \mbox{ } \psi_{R}(x-v_F(t-t_0),t_0) \mbox{ } U(t_0,t)\nonumber \\ & -i \frac{\Gamma }{v_F} [ \theta(x ) -\theta(v_F t_0 + x- v_F t ) ] \mbox{ }\nonumber \\ & U(t - \frac{x}{v_F},t) \mbox{ }\psi_{L}(0,t - \frac{x}{v_F}) \label{a8} \end{align} and \begin{align} \psi_{L}&(x,t)\mbox{ } = \mbox{ } \psi_{L}(x+v_F(t-t_0),t_0) -i \frac{\Gamma }{v_F} [ \theta(- x ) - \theta(v_F t_0- x - v_F t ) ] \mbox{ }\psi_{R}(0,t + \frac{x}{v_F}) \label{a9} \end{align} Here $\theta(x)$ is the Heaviside step function but regularized using the Dirichlet criterion such that $\theta(0) = \frac{1}{2}$. 
Now we write, \begin{align} \psi_{R}(0,t)\mbox{ } = \mbox{ } \psi_{R}(-v_F(t-t_0),t_0) \mbox{ } U(t_0,t)-i \frac{\Gamma }{v_F} [ \frac{1}{2} -\theta( t_0 - t ) ] \mbox{ } \psi_{L}(0,t) \label{a10} \end{align} and \begin{align} \psi_{L}(0,t)\mbox{ } = \mbox{ } \psi_{L}(v_F(t-t_0),t_0) -i \frac{\Gamma }{v_F} [ \frac{1}{2} - \theta( t_0 - t ) ] \mbox{ }\psi_{R}(0,t) \label{a11} \end{align} Combining \ref{a10} and \ref{a11} we get, \begin{align} \psi_{R}(0,t)\mbox{ }=& \mbox{ } \frac{2 v_F (i \Gamma \psi_{L}(v_F(t-t_0),t_0)(2 \theta(t_0-t)-1)}{\Gamma ^2-4 \Gamma ^2 \theta(t-t_0) \theta(t_0-t)+4 v_F^2}\nonumber \\&+\frac{4 v_{F}^{2} \mbox{ } \psi_{R}(-v_F(t-t_0),t_0) \mbox{ } U(t_0,t))}{\Gamma ^2-4 \Gamma ^2 \theta(t-t_0) \theta(t_0-t)+4 v_F^2} \end{align} and \begin{align} \psi_{L}(0,t)\mbox{ } =& \mbox{ } \frac{i \Gamma \psi_{R}(-v_F(t-t_0),t_0) \mbox{ } U(t_0,t)(2 \theta(t_{0}-t)-1)}{\Gamma ^2-4 \Gamma ^2 \theta(t-t_{0}) \theta(t_{0}-t)+4 v_F^2} \nonumber \\&+\frac{(4 v_{F}^{2} \psi_{L}(v_F(t-t_0),t_0)}{\Gamma ^2-4 \Gamma ^2 \theta(t-t_{0}) \theta(t_{0}-t)+4 v_F^2} \end{align} By appropriately shifting the time coordinate we get expressions for $\psi_{R}(0,t + \frac{x}{v_F})$ and $\psi_{L}(0,t - \frac{x}{v_F})$. The terms $\theta(t-t_{0}) \theta(t_{0}-t) = 0$ in our regularization. Substituting these last two formulae in Eqs \ref{a8} and \ref{a9} we get the final solution. \section{Equation of motion for the NEGF} Here we show that the non-equilibrium Green's function that we have obtained does indeed satisfy the equation of motion. This is an important consistency check for our method and ensures that we have not made any unjustifyable assumptions. After some algebraic simplification we may conclude that, \begin{widetext} \begin{align} <\psi^{\dagger}_{R}(x^{'},t^{'})&(i\partial_t + i v_F \partial_x) \psi_{R}(x,t) > \mbox{ } = \mbox{ } \nonumber \\&- V_b(t)<\psi^{\dagger}_{R}(x^{'},t^{'}) \psi_{R}(x,t) >\nonumber \\& - \frac{i}{2\pi} \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (-x^{'}-v_F(t-t^{'}) ) ) } \delta(x) \bigg( U(t^{'},t) \left[1 - \theta(x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]\mbox{ } \left[ - i v_F \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \bigg)\nonumber \\ &- \frac{i}{2\pi} \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (-x^{'}-v_F(t-t^{'}) ) ) } \delta(x)\bigg(- i \frac{\Gamma }{v_F} \theta(x^{'} ) \mbox{ } U(t^{'},t^{'} - \frac{x^{'}}{v_F}) \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\mbox{ } i \frac{\Gamma }{v_F} i v_F \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2} \bigg) \end{align} But \begin{align} <\psi^{\dagger}_{R}(x^{'},t^{'})&\psi_{L}(0,t)>\mbox{ } = \mbox{ } \nonumber \\ &- \frac{i}{2\pi} \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (-x^{'}-v_F(t-t^{'}) ) ) } \bigg( - U(t^{'},t) \left[1 - \theta(x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]\mbox{ } i \frac{\Gamma }{v_F} \frac{1}{2} \mbox{ } \frac{ ( 2 v_F )^2 }{\Gamma ^2 +4 v_F^2} \bigg)\nonumber \\ &- \frac{i}{2\pi} \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (-x^{'}-v_F(t-t^{'}) ) ) }\bigg( i \frac{\Gamma }{v_F} \theta(x^{'} ) \mbox{ } U(t^{'},t^{'} - \frac{x^{'}}{v_F}) \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\mbox{ } \left[\frac{4 v_{F}^{2} }{\Gamma ^2 +4 v_F^2} \right] \bigg) \end{align} Hence it satisfies the equation of motion, \begin{align} <\psi^{\dagger}_{R}(x^{'},t^{'})(i\partial_t + i v_F \partial_x) \psi_{R}(x,t) > \mbox{ } = \mbox{ } - 
V_b(t)<\psi^{\dagger}_{R}(x^{'},t^{'}) \psi_{R}(x,t) > + \Gamma \delta(x) <\psi^{\dagger}_{R}(x^{'},t^{'}) \psi_{L}(0,t) > \end{align} \end{widetext} \section{Density-density correlation function} The connected parts of the density-density correlations $<T \mbox{ } \rho_\nu(x,t) \rho_\nu(x^{'},t^{'}) >-<\rho_\nu(x,t) ><\rho_\nu(x^{'},t^{'}) >$ are written using Wick's theorem. The slow part of the density is $\rho_{s} = \psi^{\dagger}_{R}\psi_{R} \mbox{ }+\mbox{ } \psi^{\dagger}_{L}\psi_{L}$. Then the slow part of the net DDCF can be written as given below, \begin{align} <T \mbox{ } \rho_s(x,t) \rho_s(x^{'},t^{'}) >_{c} \mbox{ } = \sum_{\nu,\nu^{'} = \pm 1} \bigg( \frac{ \frac{ i }{2\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (\nu x-\nu^{'} x^{'}-v_F(t-t^{'}) ) ) } \bigg)^2 \eta_{\nu,\nu^{'}} \end{align} where $\nu,\nu^{'} = \pm1$ with $R = 1$ and $L = -1$. $\eta_{\nu,\nu^{'}}$ is given in Eqs, \ref{e2} and \ref{e3} below. \begin{widetext} \begin{align} \eta_{\nu,\nu} \mbox{ }=\mbox{ }& \left( \left[1 - \theta(\nu x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]^2\mbox{ } \left[1 - \theta(\nu x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]^2 + \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^4 \mbox{ } \theta(\nu x ) \theta(\nu x^{'} ) \right) \nonumber \\ & + \left[1 - \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]^2 \mbox{ } \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \theta(\nu x ) \theta(\nu x^{'} ) \mbox{ } \left( U(t^{'} - \frac{\nu x^{'}}{v_F},t - \frac{\nu x}{v_F}) + U(t - \frac{\nu x}{v_F},t^{'} - \frac{\nu x^{'}}{v_F}) \right) \label{e2} \end{align} and for $ \nu^{'} = -\nu $, \begin{align} \eta_{\nu,-\nu} \mbox{ }=\mbox{ }& \left( - \left[ 1 - \theta(\nu x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2} \right]^2 \theta(-\nu x^{'}) - \left[1 - \theta(-\nu x^{'} ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]^2 \theta(\nu x) \right) \nonumber \\ &+ \left( i \frac{\Gamma }{v_F}\frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2} \right)^2 \mbox{ } \left( U(t - \frac{\nu x}{v_F},t^{'} + \frac{\nu x^{'}}{v_F} ) + U(t^{'} + \frac{\nu x^{'}}{v_F},t - \frac{\nu x}{v_F}) \right) \mbox{ } \left[ 1 - \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2} \right]^2 \theta(\nu x) \theta(-\nu x^{'} ) \label{e3} \end{align} \end{widetext} \section{Solving the Equation of Motion when $|\Gamma_{TD}|$ is time-dependent} Writing the equations \begin{align} b_{p,R}(t) \mbox{ } = \mbox{ }\bigg(b_{p,R}(t_0) \mbox{ } e^{-i p (t-t_0) v_F } -\frac{i }{L} \int_{t_0}^te^{-i p (t-t_1) v_F } \mbox{ } b_{.,L}(t_1) \mbox{ } \Gamma_{TD}(t_1)\mbox{ } dt_1\bigg) \label{br} \end{align} and \begin{align} b_{p,L}(t) \mbox{ } = \mbox{ } \bigg( b_{p,L}(t_0) \mbox{ } e^{i p v_F (t-t_0)} -\frac{i }{L} \int _{t_0}^t e^{i p v_F (t- t_2) } \mbox{ } b_{.,R}(t_2) \mbox{ } \Gamma^*_{TD}(t_2) \mbox{ } dt_2 \bigg) \label{bl} \end{align} in real space and time \begin{align} \psi_{R}(x,t) \mbox{ } = \mbox{ }\psi_{R}(x - (t-t_0) v_F,t_0) -\frac{i}{v_F} \mbox{ } [ \theta(x ) - \theta(x - (t-t_0) v_F) ] \mbox{ } \Gamma_{TD}(t- \frac{x}{v_F}) \mbox{ } \psi_{L}(0,t- \frac{x}{v_F}) \end{align} and \begin{align} \psi_{L}(x,t) \mbox{ } = \mbox{ } \psi_{L}(x + (t-t_0) v_F,t_0) + \frac{ i }{v_F} [ \theta(x ) - \theta(x + (t-t_0) v_F) ] \mbox{ } \Gamma^*_{TD}(t + \frac{ x }{ v_F} ) \psi_{R}(0,t + \frac{ x }{ v_F} ) \end{align} Setting $x=0$ we get \begin{align} \psi_{R}(0,t) \mbox{ } = \mbox{ }\psi_{R}( - (t-t_0) v_F,t_0) -\frac{i}{v_F} \mbox{ } [ 
\frac{1}{2} - \theta( - (t-t_0) v_F) ] \mbox{ } \Gamma_{TD}(t ) \mbox{ } \psi_{L}(0,t ) \end{align} and \begin{align} \psi_{L}(0,t) \mbox{ } = \mbox{ } \psi_{L}( (t-t_0) v_F,t_0) + \frac{ i }{v_F} [ \frac{1}{2} - \theta( (t-t_0) v_F) ] \mbox{ } \Gamma^*_{TD}(t ) \psi_{R}(0,t ) \end{align} \mbox{ }\\ Decoupling the above two equations we get\\\mbox{ }\\\mbox{ }\\ \begin{widetext} \begin{align} \psi_{R}(0,t) \mbox{ } = \mbox{ } \frac{\psi_{R}( - (t-t_0) v_F,t_0)- \psi_{L}( (t-t_0) v_F,t_0) \frac{i}{v_F} \mbox{ } [ \frac{1}{2} - \theta( - (t-t_0) v_F) ] \mbox{ } \Gamma_{TD}(t )}{\frac{i}{v_F} \mbox{ } [ \frac{1}{2} - \theta( - (t-t_0) v_F) ] \mbox{ } \Gamma_{TD}(t ) \mbox{ } \frac{ i }{v_F} [ \frac{1}{2} - \theta( (t-t_0) v_F) ] \mbox{ } \Gamma^*_{TD}(t )+1} \end{align} and \begin{align} \psi_{L}(0,t) \mbox{ } = \mbox{ } \frac{ \psi_{L}( (t-t_0) v_F,t_0)+ \frac{ i }{v_F} [ \frac{1}{2} - \theta( (t-t_0) v_F) ] \mbox{ } \Gamma^*_{TD}(t ) \psi_{R}( - (t-t_0) v_F,t_0)}{\frac{i}{v_F} \mbox{ } [ \frac{1}{2} - \theta( - (t-t_0) v_F) ] \mbox{ } \Gamma_{TD}(t ) \mbox{ } \frac{ i }{v_F} [ \frac{1}{2} - \theta( (t-t_0) v_F) ] \mbox{ } \Gamma^*_{TD}(t )+1} \end{align} Appropriately shifting the time coordinates we can write \begin{align} \psi_{R}(0,t + \frac{ x }{ v_F} ) \mbox{ } = \mbox{ } \frac{\psi_{R}( -x- (t-t_0) v_F,t_0)- \psi_{L}( x+ (t-t_0) v_F,t_0) \frac{i}{v_F} \mbox{ } [ \frac{1}{2} - \theta(-x - (t-t_0) v_F) ] \mbox{ } \Gamma_{TD}(t + \frac{x}{v_F})}{\frac{i}{v_F} \mbox{ } [ \frac{1}{2} - \theta(-x - (t-t_0) v_F) ] \mbox{ } \Gamma_{TD}(t + \frac{x}{v_F} ) \mbox{ } \frac{ i }{v_F} [ \frac{1}{2} - \theta( x+ (t-t_0) v_F) ] \mbox{ } \Gamma^*_{TD}(t + \frac{x}{v_F} )+1} \end{align} and \begin{align} \psi_{L}(0,t- \frac{x}{v_F}) \mbox{ } = \mbox{ } \frac{ \psi_{L}( -x+ (t-t_0) v_F,t_0)+ \frac{ i }{v_F} [ \frac{1}{2} - \theta( -x+ (t-t_0) v_F) ] \mbox{ } \Gamma^*_{TD}(t- \frac{x}{v_F}) \psi_{R}(x - (t-t_0) v_F,t_0)}{\frac{i}{v_F} \mbox{ } [ \frac{1}{2} - \theta( x- (t-t_0) v_F) ] \mbox{ } \Gamma_{TD}(t- \frac{x}{v_F} ) \mbox{ } \frac{ i }{v_F} [ \frac{1}{2} - \theta(-x+ (t-t_0) v_F) ] \mbox{ } \Gamma^*_{TD}(t- \frac{x}{v_F} )+1} \end{align} \end{widetext} \end{document} \section{Introduction} \noindent It has been over half a century since pioneering theoretical insights shed light on the physics of electron-electron interactions in one-dimensional quantum systems. It was found that a Fermi-liquid description of a 1D electron gas would be destabilised even for a weak repulsive interaction. Later Haldane \cite{haldane1981effective} coined the term ``Luttinger Liquid" to describe the generic state of a one dimensional system of interacting electrons. With advancement in experimental techniques, it has become possible to realise one-dimensional electron systems in semiconductor hetero-structures \cite{meirav1989one,altshuler2012mesoscopic} and recently in carbon nanotubes \cite{bockrath1999luttinger,kim2007tomonaga}. However, the presence of impurities in such experimentally fabricated samples make it very difficult to look for Luttinger Liquid (LL) properties in the lab. This sparked a huge theoretical interest in the problem of impurities in LLs. Kane and Fisher in their pioneering works \cite{kane1992transport,kane1992resonant} showed that for repulsive interactions the electrons are always reflected by a weak link and are transmitted even through a strong barrier in the case of attractive interactions. 
In addition to this, several analytical\cite{grishin2004functional,matveev1993tunneling,samokhin1998lifetime} and numerical approaches \cite{qin1996impurity,hamamoto2008numerical,freyn2011numerical,ejima2009luttinger,moon1993resonant} have made their mark in this topic. \\ One-dimensional chiral Luttinger liquids are realised as the edge states of fractional quantum Hall fluids. By using contacts placed at the opposite sides of a tunnel barrier the transport properties related to electron tunneling into fractional quantum hall edges (FQH) with a potential bias between them can be measured. The application of a potential bias drives the system out of equilibrium. The study of non-equilibrium dynamics of the particles is crucial to understand the transport properties. Wen \cite{wen1991edge} predicted a universal scaling form for the tunneling behaviour as a function of bias voltage $V$ and temperature $T$ using a perturbative approach. When the energy scale is determined by the thermal energy the prediction is that the tunneling current obeys a power law in temperature $I_{tun} \propto T^{\alpha-1}$ while depending linearly on $V$ and when the energy scale is dominated by $eV$ the current is nonlinear in bias voltage $I_{tun} \propto V^{\alpha}$. The exponent $\alpha$ is determined by the topological properties of the bulk FQH liquid. Alternatively, Kane and Fisher \cite{kane1992resonant,kane1992transmission} used a renormalisation group (RG) analysis to obtain an expression for the tunneling current in the weak-tunneling limit, which exhibits the same limiting behaviours as in Wen's work. They were able to obtain a duality relation between the strong and weak-tunneling limits. Power law behaviour of the I-V ($I \propto V^{\alpha}$) characteristics was observed for a continuum of filling fractions $\nu$ \cite{chang2003chiral,chang1996observation}. The $\nu=1$ quantum Hall edge states are realized as non-interacting chiral quantum wires. Experimental studies involving one-dimensional constrictions defined by split-gates similar to the schematic shown in Fig.\ref{fig1} have been of interest since early days of quantum transport research \cite{PhysRevB.48.8840,PhysRevB.56.7477,PhysRevB.58.4846,PhysRevLett.61.2797,PhysRevLett.61.2801,PhysRevLett.62.2523,PhysRevLett.76.2145,PhysRevLett.77.135} Even when inter-particle interactions are absent, investigation of non-equilibrium dynamics is a non-trivial task. Experimental techniques have advanced in recent years to observe ultrafast transient responses in nanoscale electronic devices. Also experimental methods using tunneling spectroscopy have assisted researchers in understanding the non-equilibrium dynamics of low-dimensional systems better \cite{PhysRevLett.102.036804,altimiras2010non}. Computational Wigner-function approach has been used in the past to study steady state as well as transient regime in a resonant tunneling diode \cite{PhysRevB.39.7720}. But the theoretical tool of choice to study the subtle physics of non-equilibrium systems is the non-equilibrium Green function (NEGF) method (also known as Keldysh formalism). 
Although there are several techniques such as reduced density matrix methods \cite{bonitz2016quantum}, time-dependent DFT \cite{casida2009time,burke2005time}, quantum master equation \cite{harbola2006quantum}, density matrix renormalisation group \cite{cazalilla2002time,daley2004time,white2004real}, quantum Langevin equation \cite{PhysRevB.86.094503,PhysRevB.75.195110}, Bohm trajectories \cite{leavens1993bohm,oriols2007quantum}, Wigner transport equation \cite{frensley1987wigner,buot1990lattice,agarwal1991exact}, none are well suited to naturally include interactions in the system. The Keldysh formalism to handle the non-equilibrium Green function (NEGF) is a popular choice \cite{keldysh1965ionization,rammer1986quantum}. This method has been extensively used to study tunneling conductance in generic tight binding junctions \cite{berthod2011tunneling}, nonequilibrium currents in quantum dots \cite{doyon2006universal} as well as time dependent transport in double barrier resonant tunneling systems \cite{jauho1994time}. \begin{figure} \centering \includegraphics[scale=0.7]{schem1} \caption{\small Schematic diagram of a tunneling point-contact between two chiral (unidirectional) quantum wires labelled $R$ and $L$. An arbitrary time dependent potential $V_{b}(t)$ is applied on the $R$ branch. A point contact is formed by applying an electrostatic gate voltage.} \label{fig1} \end{figure} \\ In this work, we explore the problem of non-equilibrium transport across a point-like contact between two chiral (i.e. unidirectional) non-interacting one-dimensional quantum wires (see Fig.\ref{fig1}). In \cite{PhysRevLett.105.156802} the authors consider tunneling between interacting chiral quantum wires albeit treating the point contact perturbatively. Since we treat the point contact exactly, our work complements this earlier work \cite{PhysRevLett.105.156802} and contributes to a comprehensive understanding of this model. We exactly solve for the non-equilibrium Green functions (NEGF) in real space and time from the equation of motion of the Fermi fields as opposed to Fourier transformed energy domain. We extend our analysis beyond the infinite bandwidth limit and consider the case of a finite bandwidth in the point-contact. We obtain a transient in the tunneling current and this term involves an integral over the past history of the system and hence exhibits non-Markovian dynamics. Non-Markovian effects in quantum transport in nanodevices has attracted much attention in recent years over potential applications in quantum computing and nanotechnology. The non-Markovian master equation approach \cite{PhysRevB.83.115439}, analyses based on Keldysh non-equilibrium Green function formalism \cite{jauho1994time,PhysRevB.74.085324,PhysRevB.78.235110,PhysRevB.77.075302} and theories based on the Feynman-Vernon influence functional \cite{Jin_2010} have proven to be very useful in studying transient transport beyond the infinite bandwidth limit. In this work we treat the infinite bandwidth case to be the zeroth order result of a systematic perturbation approach and consider the effects of a finite bandwidth in the point-contact upto first order.\\ This paper is organised as follows. In Section \ref{sec1} we describe the model Hamiltonian and show in detail the method employed to calculate the finite temperature NEGF in terms of simple functions of position and time in presence of an arbitrary time-dependent voltage bias. 
In Section \ref{sec2} we calculate the tunneling current and differential tunneling conductance for the non-interacting system in the presence of an arbitrary time-dependent voltage bias using the NEGF. In Section \ref{sec3} we point out certain unique non-equilibrium features of the model under study, which we choose to call \textit{bias-induced anomalies}. In Section \ref{sec4} we compute the dynamical density of states (DDOS). In Section \ref{sec6} we discuss the case of a complex, time-dependent tunneling parameter, calculate the tunneling current for this case, and show under what circumstances this approach is equivalent to the previously discussed one. We show in Section \ref{db} how our method can be easily applied to the case of resonant tunneling through a double barrier. We then introduce in Section \ref{secfinite} a finite bandwidth in the point-contact and develop a systematic perturbative approach to calculate the transient transport properties, which are non-Markovian in this case. Finally, in Section \ref{conc} we summarise our main results and their implications.\\ Exact analyses of time-dependent transport properties in out-of-equilibrium systems that go beyond the infinite bandwidth limit have been carried out by other authors \cite{PhysRevB.74.085324}. In our approach, however, we analytically obtain the exact non-equilibrium Green functions in the space and time domain (between two space-time points), in contrast to these other approaches, which consider only a constant bias and obtain the Green functions in the Fourier-transformed frequency domain. When the point-contact has a finite bandwidth, closed formulas for the full NEGF are possible only as a perturbative series in the inverse of the bandwidth. It is remarkable that a closed formula for the out-of-equilibrium space-time Green function with a general bias has not been written down until now for a problem as important as the present one. This formula provides a transparent derivation of the I-V characteristics and is able to account for a variety of situations, such as a time-dependent bias and nonlinearities in the I-V characteristics. Given these observations, it is hard to overstate the importance of the present work. \\ \mbox{ } \\ \noindent \hyperref[AppendixA]{Appendix A} includes some details on how to calculate the tunneling current from the Green functions. \hyperref[AppendixB]{Appendix B} computes the local dynamical density of states in the presence of an arbitrary time-dependent bias. \hyperref[AppendixC]{Appendix C} derives the connection between the present formalism and the conventional Keldysh Green functions. Finally, \hyperref[AppendixD]{Appendix D} shows the details of the calculation of the tunneling current when a finite bandwidth in the point-contact is introduced. \section{Model and formalism} \label{sec1} Tunneling through a point-contact is described by the addition of a tunneling Hamiltonian \cite{PhysRevLett.8.316}. The full Hamiltonian for the system under consideration is, \begin{align} H = \sum_{p}(v_Fp+e V_{b}(t) )c^{\dagger}_{p,R}c_{p,R} &+ \sum_{p}(-v_Fp)c^{\dagger}_{p,L}c_{p,L}+ \frac{\Gamma}{L}(c^{\dagger}_{.,R}c_{.,L}+c^{\dagger}_{.,L}c_{.,R}) \label{eqham} \end{align} where $R$ and $L$ label the right- and left-moving chiral spinless modes and $V_{b}(t)$ is a generic time-dependent bias voltage applied to one of the contacts, the right-moving ($R$) one in this case. 
Here $c^{\dagger}_{p}$ and $c_{p}$ are the spinless fermion creation and annihilation operators in momentum space, and we use the notation $c^{\dagger}_{.,R} = \sum_{p}c^{\dagger}_{p,R}$. The quantity $\Gamma = \Gamma^{*}$ is the tunneling amplitude of the symmetric point-contact junction, and $L$ (when it does not appear as a subscript) is the system size. {\it{ Note that a regularisation scheme is implicit when writing down Eq.\ref{eqham}. We call this Dirichlet's regularisation. Here summing over all momenta implicitly means summing over an interval $ p \in [-\Lambda,\Lambda] $ and then setting $ \Lambda \rightarrow \infty $. This leads us to conclude, among other things, that the values of discontinuous functions are always the average of the left- and right-hand limits. Specifically, in the Dirichlet regularisation the step function that appears repeatedly in the rest of the paper is defined as $ \theta(x > 0) = 1, \theta(x < 0 ) = 0, \theta(x=0) = \frac{1}{2} $. }} We postulate that, at all spatial locations, the right mover experiences a potential exceeding that of the left mover by $ V_b(t) $. Note that we have considered a generic time-dependent bias, and hence our method is not restricted to the case of a dc bias alone. \subsection{Equations of Motion} We now write down the equations of motion for the Fermi fields of the Hamiltonian in Eq.\ref{eqham} and systematically solve them to obtain the position-time non-equilibrium Green function. The equations of motion are \begin{align} i\partial_t c_{p,R}(t) &= (v_Fp + e V_b(t) ) \mbox{ } c_{p,R}(t) + \frac{ \Gamma }{L} \mbox{ } c_{.,L}(t) \nonumber \\ i\partial_t c_{p,L}(t) &= -v_Fp \mbox{ } c_{p,L}(t) + \frac{ \Gamma }{L} \mbox{ } c_{.,R}(t) \label{eqom} \end{align} Solving these coupled differential equations, we write down the solutions explicitly in real space and time (where $ \psi_{R/L}(x) = \frac{1}{\sqrt{L}} \sum_p e^{ipx } \mbox{ } c_{p,R/L} $): \begin{widetext} \begin{align} \psi_{R}(x,t)\mbox{ } = \mbox{ } U(t_0,t) \left[1 +2 v_F \Gamma \mbox{ } sgn(t_{0}-t + \frac{x}{v_F}) \xi(x,t)\right]\mbox{ } \psi_{R}(x-v_F(t -t_0),t_0) \nonumber \\ -i(2 v_F)^2 U(t - \frac{x}{v_F},t) \mbox{ }\xi(x,t)\mbox{ } \psi_{L}(-x+v_F(t -t_0),t_0) \label{eqR} \end{align} and \begin{align} \psi_{L}(-x,t)\mbox{ } = \mbox{ } \left[ 1 +2 v_F \Gamma \mbox{ } sgn(t_0-t + \frac{x}{v_F}) \mbox{ }\xi(x,t) \right]\mbox{ } \psi_{L}(-x+v_F(t-t_0),t_0) \nonumber \\ -i ( 2 v_F )^2 \mbox{ } U(t_0,t - \frac{x}{v_F} ) \mbox{ }\xi(x,t) \mbox{ } \psi_{R}(x-v_F(t-t_0),t_0) \label{eqL} \end{align} where we have set \begin{align} \xi(x,t) \equiv \frac{\Gamma }{v_F} \mbox{}\frac{ [ \theta(x ) -\theta(v_F t_0 + x- v_F t ) ]}{\Gamma ^2-4 \Gamma ^2 \theta(t - \frac{x}{v_F}-t_{0}) \theta(t_{0}-t + \frac{x}{v_F})+4 v_F^2} \end{align} \end{widetext} and we have defined $U(\tau,t) \equiv e^{ - i \int _{ \tau }^t e V_{b}(s)ds }$. The time $t_{0}$ specifies, at this stage, some (arbitrary) initial time. Here $\theta(x)$ is the Heaviside step function regularised using the Dirichlet criterion (i.e. $ \theta(0) = \frac{1}{2} $), and $ sgn(x) \equiv \theta(x)-\theta(-x) $. 
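As a simple numerical aside (with an arbitrarily chosen step-bias profile and parameter values, and $e = 1$), the phase factor $U(\tau,t)$ defined above can be evaluated by direct quadrature for any bias profile and checked against the closed form available for a step bias; this is only an illustrative sketch and is not used in the derivations that follow.
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) step bias: e*V_b(s) = eV0 * theta(s)
eV0 = 0.5

def eVb(s):
    return np.where(s > 0.0, eV0, 0.0)

def U(tau, t, n=200001):
    """U(tau,t) = exp(-i * int_tau^t e V_b(s) ds), by trapezoidal quadrature."""
    s = np.linspace(tau, t, n)
    ds = s[1] - s[0]            # negative if t < tau, which keeps the sign right
    integral = np.sum((eVb(s[:-1]) + eVb(s[1:])) * 0.5) * ds
    return complex(np.exp(-1j * integral))

# For the step bias, int_tau^t e V_b = eV0*(max(t,0) - max(tau,0)).
U_exact = lambda tau, t: complex(np.exp(-1j * eV0 * (max(t, 0.0) - max(tau, 0.0))))

for tau, t in [(-2.0, 1.0), (0.5, 3.0), (1.0, -1.0)]:
    print(f"U({tau:+.1f},{t:+.1f}) = {U(tau, t):.4f}   exact: {U_exact(tau, t):.4f}")
\end{verbatim}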
It is easy to verify that the above expressions for the fields do satisfy the equations of motion in real space and time which are \begin{align} & i \partial_t \psi_R(x,t) \mbox{ } = \mbox{ } ( - i v_F \partial_x + e V_b(t)) \psi_R(x,t) + \Gamma \mbox{ }\delta(x)\mbox{ } \psi_L(0,t) \nonumber \\ & i \partial_t \psi_L(x,t) \mbox{ } = \mbox{ } i v_F \partial_x \psi_L(x,t) + \Gamma\mbox{ } \delta(x) \mbox{ } \psi_R(0,t) \end{align} \subsection{The non-equilibrium Green functions} We assume that the system is in thermal equilibrium with a reservoir at temperature $ T_{temp} = \frac{1}{\beta} $ in the remote past since the voltage bias is zero at these early times. Now setting the time $t_{0} = - \infty$ means the same as the system being in the equilibrium state, i.e. before the bias voltage is applied. Using this condition in Eqs.\ref{eqR} and \ref{eqL} we may write, \begin{align} & \psi_{R}(x,t)\mbox{ } = \mbox{ } U(-\infty,t) \left[1 - \theta(x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \psi_{R,0}(x-v_F(t -t_0) < 0,t_0) \nonumber\\ &-i \frac{\Gamma }{v_F} \theta(x ) \mbox{ } U(t - \frac{x}{v_F},t) \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\mbox{ } \psi_{L,0}(-x+v_F(t -t_0) > 0,t_0) \label{psiR} \end{align} and \begin{align} & \psi_{L}(-x,t)\mbox{ } = \mbox{ } \left[ 1 - \theta(x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2} \right] \psi_{L,0}(-x+v_F(t-t_0) > 0,t_0)\nonumber \\ &-i \frac{\Gamma }{v_F} \theta( x ) \mbox{ } \frac{ ( 2 v_F )^2 \mbox{ } U(-\infty,t - \frac{x}{v_F} ) }{\Gamma ^2 +4 v_F^2} \mbox{ }\psi_{R,0}(x-v_F(t-t_0) < 0,t_0) \label{psiL} \end{align} Before calculating the two-point Green's function it is crucial to note how the point-contact tunneling amplitude is related to the reflection ($R$) and transmission ($T$) amplitudes ($ |T|^2 + |R|^2 = 1 $) when modelled as an isolated impurity as shown in the reference \cite{babu2020density}, \begin{align} \Gamma = \Gamma^{*} = \frac{4 i v_{F} R T}{2T^{2} + 2 T} \end{align} This means, \begin{align} R \mbox{ } = \mbox{ } -i \frac{4\Gamma v_F }{\Gamma ^2+4 v_F^2} \mbox{ };\mbox{ }T \mbox{ } = \mbox{ } \frac{4v_F^2-\Gamma ^2 }{\Gamma ^2+4 v_F^2} \label{eqrt} \end{align} Note that since $t_{0}$ is in the equilibrium region ($t_{0} = -\infty$) the quantities $x-v_F(t-t_0) < 0$ and $-x+v_F(t-t_0) > 0$ for any fixed $ x,t $. The correlation functions at time $t_{0} = -\infty $ then are the well-known finite temperature equilibrium Green functions \cite{das2018quantum}. Set $z \equiv x-v_F(t-t_0),\mbox{ } z^{'} \equiv x^{'}-v_F(t^{'}-t_0) $, \begin{align} & < \psi^{\dagger}_{R,0}(z^{'}<0,t_0) \psi_{R,0}(z<0,t_0)>\mbox{ } = \mbox{ } <\psi^{\dagger}_{L,0}(-z^{'} >0,t_0)\psi_{L,0}(-z>0,t_0)>\mbox{ } = \mbox{ }\nonumber \\& - \frac{i}{2\pi} \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (x-x^{'}-v_F(t-t^{'}) ) ) } \label{eqrrz} \end{align} where $\beta$ is the inverse temperature. The $R,L$ and $L,R$ correlations with the position coordinates at the opposite sides of the point-contact are zero. 
\begin{align} &< \psi^{\dagger}_{R,0}(z^{'}<0,t_0)\psi_{L,0}(-z>0,t_0)> \nonumber \\ &=\mbox{ } <\psi^{\dagger}_{L,0}(-z^{'}>0,t_0)\psi_{R,0}(z<0,t_0)> \mbox{ }=\mbox{ } 0 \label{eqllz} \end{align} Using Eqs.\ref{eqrrz}-\ref{eqllz} in Eqs.\ref{psiR}-\ref{psiL} and the corresponding conjugates, we get the full non-equilibrium Green functions (NEGF), \begin{align} <\psi^{\dagger}_{\nu^{'}}(x^{'},t^{'})\psi_{\nu}(x,t)> \mbox{ }=\mbox{ } -\frac{i}{2\pi} \frac{\frac{\pi}{\beta v_{F}}}{\sinh( \frac{ \pi }{\beta v_F } (\nu x-\nu^{'}x^{'}-v_F(t-t^{'}) ) )} \kappa_{\nu,\nu^{'}} \label{eqnoneq} \end{align} where $\nu$,$\nu^{'} = \pm 1$ with $R=1$ and $L=-1$ and \begin{widetext} \small \begin{align} \kappa_{1,1} \mbox{ }=\mbox{ } \bigg( U(t^{'},t) \left[1 - \theta(x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]&\mbox{ } \left[1 - \theta(x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \nonumber \\ &+ \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \theta(x ) \theta(x^{'} ) \mbox{ } U(t^{'},t^{'} - \frac{x^{'}}{v_F}) \mbox{ } U(t - \frac{x}{v_F},t) \bigg) \label{eqkappa1} \end{align} \begin{align} \kappa_{-1,-1} \mbox{ }=\mbox{ } \bigg( \left[ 1 - \theta(-x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2} \right] & \left[ 1 - \theta(-x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2} \right] \nonumber \\ &+ \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \theta(-x ) \theta( -x^{'} ) \mbox{ } U(t^{'} + \frac{x^{'}}{v_F},t + \frac{x}{v_F} ) \bigg) \label{eqkappa2} \end{align} \begin{align} \kappa_{1,-1} \mbox{ }= \mbox{ }\bigg( -U(t - \frac{x}{v_F},t) \mbox{ } &\left[ 1 - \theta(-x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2} \right] \theta(x ) \mbox{ } \nonumber \\ &+ U(t^{'} + \frac{x^{'}}{v_F},t) \mbox{ } \left[1 - \theta(x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \theta( -x^{'} ) \bigg) i \frac{\Gamma }{v_F}\frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2} \label{eqkappa3} \end{align} \begin{align} \kappa_{-1,1} \mbox{ }=\mbox{ }\bigg( - U(t^{'},t + \frac{x}{v_F} ) &\left[1 - \theta(x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \theta( -x ) \nonumber \\ &+ U(t^{'},t^{'} - \frac{x^{'}}{v_F}) \mbox{ } \left[ 1 - \theta(-x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2} \right] \theta(x^{'} ) \bigg) \mbox{ } i \frac{\Gamma }{v_F} \frac{ ( 2 v_F )^2 }{\Gamma ^2 +4 v_F^2} \label{eqkappa4} \end{align} \normalsize \end{widetext} \begin{figure*} \centering \subfigure[]{\includegraphics[width=45mm,height=45mm]{gtrans1mod}}\quad \quad \subfigure[]{\includegraphics[width=45mm,height=45mm]{gtrans2mod}}\quad \quad \subfigure[]{\includegraphics[width=45mm,height=45mm]{gtrans3mod}}\\ \subfigure[]{\includegraphics[width=45mm,height=45mm]{gtrans4mod}}\quad \quad \subfigure[]{\includegraphics[width=45mm,height=45mm]{gtrans5mod}}\quad \quad \subfigure[]{\includegraphics[width=45mm,height=45mm]{gtransmod6}}\\ \subfigure[]{\includegraphics[width=45mm,height=45mm]{gtransmod7}}\quad \quad \subfigure[]{\includegraphics[width=45mm,height=45mm]{gtransmod8}}\quad \quad \subfigure[]{\includegraphics[width=45mm,height=45mm]{gtransmod9}} \caption{Transient in NEGF: \small This figure shows density plots of the real part of the equal-time RR Green function $Re[<\psi^{\dagger}_{R}(x^{'},t)\psi_{R}(x,t)>]$ vs time ($t$) in presence of a step bias that is switched on at $t=0$. 
The real part of the non-equilibrium Green functions shows transient dynamics (\textbf{a})-(\textbf{e}) before reaching a steady state (\textbf{f})-(\textbf{i}). The other parameters are chosen to be $\Gamma=2$ and $v_{F} = 1$.} \label{gtrans} \end{figure*} For illustrative purposes, let us consider a step bias that is initially zero but is turned on suddenly at $t=0$ and remains constant for all positive times. We can show by plotting the equal-time NEGF versus the spatial coordinates for different times that at small times the NEGF captures the transient behaviour of the system and reaches a steady state at large positive times. In Fig.\ref{gtrans} the zero temperature equal-time RR non-equilibrium Green function for various times is plotted, clearly showing the initial transient regime followed by the appearance of a steady state at sufficiently long times after the switching on of a constant bias. We show in \hyperref[AppendixC]{Appendix C} how the space-time non-equilibrium Green function we have written in Eq.\ref{eqnoneq} can be interpreted as the Keldysh contour ordered NEGF. \subsection{Consistency check in the equilibrium limit} In a steady-state, the NEGF is a function of the time-difference $t-t^{'}$. The equilibrium Green's function also depends only on the time difference but the non-equilibrium nature of the problem is evident in the fact that the NEGF does not obey the KMS boundary conditions in imaginary time \cite{kubo1957statistical,martin1959theory} even when the system has reached a steady state. It is an important consistency check to make sure that the NEGF reduces to the equilibrium Green's function when the times $t$ and $t^{'}$ are set to be in the equilibrium region (remote past). This is equivalent to setting $U(t^{'},t) =1$ since there is no voltage bias when the system is in equilibrium. Note that when $xx^{'} > 0$, \begin{align} \bigg( \left[1 - \theta(x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]\mbox{ } \left[1 - \theta(x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] + \theta(x ) \theta(x^{'} ) \mbox{ } \left( \frac{\Gamma }{v_F} \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2} \right)^2 \mbox{ } \bigg) = 1 \end{align} whereas from Eq.\ref{eqrt} it is clear that when $xx^{'} < 0$, \begin{align} &\bigg( \left[1 - \theta(x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]\mbox{ } \left[1 - \theta(x ) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] + \theta(x ) \theta(x^{'} ) \mbox{ } \left( \frac{\Gamma }{v_F} \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2} \right)^2 \mbox{ } \bigg) \mbox{ } \nonumber \\ &= \mbox{ } \left[1 - \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] = T = T^* \end{align} The NEGF in this case, reduces to the well-known \cite{das2018quantum} equilibrium Green functions given in Eq.\ref{eqg}. 
\begin{align} <T \psi^{\dagger}_{\nu^{'}}(x^{'},t^{'})\psi_{\nu}(x,t) >_{equil}\mbox{ } = \mbox{ }- \sum_{\gamma,\gamma^{'} = \pm 1} \frac{\pi}{\beta v_{F}} \frac{\theta(\gamma x)\theta(\gamma^{'}x^{'} )g_{\gamma,\gamma^{'}}(\nu,\nu^{'})}{\sinh( \frac{ \pi }{\beta v_F } (\nu x-\nu^{'} x^{'}-v_F(t-t^{'}) ) )} \label{eqg} \end{align} where, in terms of the reflection and transmission amplitudes, \begin{align} g_{\gamma,\gamma^{'}} (\nu,\nu^{'})\mbox{ } = \mbox{ } \frac{i}{2\pi} \bigg( \delta_{\nu,\nu^{'}} \delta_{\gamma,\gamma^{'}} +(T \delta_{\nu,\nu^{'}}+R \delta_{\nu,-\nu^{'}})\delta_{\gamma,\nu}\delta_{\gamma^{'},-\nu^{'}}+(T^{*} \delta_{\nu,\nu^{'}}+R^{*} \delta_{\nu,-\nu^{'}})\delta_{\gamma,-\nu}\delta_{\gamma^{'},\nu^{'}}\bigg) \end{align} It is easy to show that the non-equilibrium Green functions satisfy the equation of motion for the two-point functions. Now that we have obtained the NEGF, we can use it to study the transport properties of the system, particularly the tunneling current and the differential tunneling conductance for a general time-dependent bias voltage, of which the well-studied problem of a dc bias \cite{berthod2011tunneling,ferrer1988contact,chen1991dynamic} is a special case. In the infinite bandwidth case only steady-state behaviour of the I-V characteristics is observed. When we consider the case of a finite bandwidth in the point-contact, we observe non-Markovian transient transport dynamics in the presence of an arbitrary time-dependent bias voltage. \section{Tunneling current and conductance in the infinite bandwidth case} \label{sec2} In this section, we evaluate the tunneling current and differential tunneling conductance. The tunneling current is usually defined as the rate of change of the difference between the numbers of right and left movers, \begin{align} I_{tun}(t) = e\mbox{ }\partial_{t}\frac{\Delta N}{2} = e\frac{i}{2}\left[H,\Delta N\right] = e\frac{i}{2}\left[H,N_{R}-N_{L}\right] \label{tundef} \end{align} We use the convention of \cite{shah2016consistent}, where $\mu_{L} - \mu_{R} = e V$ is considered. In our case the bias is applied to the right movers, so that $\mu_{R} = e V_{b}(t)$ and $\mu_{L} = 0$; hence we use $\mu_{L}-\mu_{R} = -e V_{b}(t) = e V(t)$. The current becomes, \begin{align} I_{tun}(t) \mbox{ } = \mbox{ }-i e \Gamma \mbox{ } \lim_{t^{'} \rightarrow t } \bigg( < \psi^{\dagger}_R(0,t^{'}) \psi_L(0,t) > - < \psi^{\dagger}_L(0,t) \psi_R(0,t^{'}) > \bigg) \label{eqitun} \end{align} The detailed calculation of the tunneling current is shown in \hyperref[AppendixA]{Appendix A}. Using the NEGF obtained above in Eq.\ref{eqitun}, we finally get ($\hbar = 1$), \begin{align} I_{tun}(t) \mbox{ } = \mbox{ } - \Gamma^2 \mbox{ } \mbox{ } \frac{ 4 }{\Gamma ^2 +4 v_F^2} \left[1 - \frac{ \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \mbox{ } \frac{e^2}{2\pi } V_b(t) \end{align} We define the tunneling amplitude in terms of a tunneling parameter $t_{p}$ as $\Gamma = 2 v_{F} t_{p}$ \cite{filippone2016tunneling}. 
Upon restoring dimensional units and using $V_{b}(t) = -V(t)$, the tunneling current \begin{align} I_{tun}(t) \mbox{ } = \mbox{ } \mbox{ } \frac{ 4 t_{p}^{2} }{(t_{p}^{2} +1)^{2}} \mbox{ } \frac{e^2}{h } V(t) \label{eqtunc} \end{align} and the differential tunneling conductance \begin{align} G = \frac{d I_{tun}}{d V(\tau)} \mbox{ }=\mbox{ } G_{tun} \mbox{ } = \mbox{ } \frac{ 4 t_{p}^{2} }{(t_{p}^{2} +1)^{2}} \mbox{ } \frac{e^2}{ h } \label{eqcond} \end{align} are obtained as functions of the tunneling parameter and agree with the predictions of standard scattering theory \cite{LANDAUER198191,PhysRevB.31.6207}. The tunneling conductance exhibits the $t_{p} \rightarrow \frac{1}{t_{p}}$ duality between the strong- and weak-tunneling regimes, as predicted \cite{filippone2016tunneling,shah2016consistent}. Although our calculation assumes that the system was at a finite temperature in the distant past, the temperature dependence naturally drops out of the expression for the tunneling current when there are no interparticle interactions. It is also worth mentioning that the current depends linearly on the time-dependent voltage bias (linear response); a nonlinear dependence arises only when interactions between the fermions are taken into account (which is not done in the present paper).\\ For a step bias it is clear from Eq.\ref{eqtunc} that, in the case of an infinite bandwidth, no transients appear in the tunneling current even though the NEGF (Eq.\ref{eqnoneq}) shows transients. These transients drop out when one evaluates the equal-space, equal-time two-point functions at the origin, such as $< \psi^{\dagger}_R(0,t) \psi_L(0,t) >$ and $< \psi^{\dagger}_L(0,t) \psi_R(0,t) >$, which appear in the expression for the tunneling current (Eq.\ref{eqitun}). This can be understood by regarding the variable $|x - x^{'}|$ as a length scale, or inverse bandwidth (for $< \psi^{\dagger}_R(x,t) \psi_R(x^{'},t) > $ and $< \psi^{\dagger}_L(x,t) \psi_L(x^{'},t) > $). This proxy for an inverse bandwidth is present in the full NEGF but vanishes when one evaluates the tunneling current (since then $x = x^{'}$). Therefore, in order to investigate transients in the tunneling current one has to introduce a finite bandwidth explicitly into the problem, which is what we do in Sec.\ref{secfinite}. In that case one has to forgo an exact solution and resort to a systematic perturbative approach. \section{Bias-induced anomalies} \label{sec3} In this section, we point out two interesting features of the problem we are studying. They are caused by the presence of a bias in the system, which causes physical quantities that normally vanish identically to become non-zero. Specifically, we contrast the behaviour of (a) the time rate of change of the difference between the local right- and left-mover densities and (b) the time rate of change of the difference between the total numbers of right movers and left movers. \\ \mbox{ } \\ The quantity in (a) is, \begin{align} \partial_t \Delta \rho(x,t) \mbox{ } = \mbox{ } \frac{d}{dt} (\rho_R(x,t) - \rho_L(x,t)) \end{align} where $\rho_{R}(x,t)$ and $\rho_{L}(x,t)$ are the (normal-ordered) right- and left-moving particle densities respectively, whereas the quantity in (b) is, \begin{align} \partial_t \Delta N(t) \mbox{ } = \mbox{ } \frac{d}{dt} (N_R(t) - N_L(t)) \end{align} where $ N_{R/L}(t) = \int dx \mbox{ } \rho_{R/L}(x,t) $. 
Naively, $ \Delta N(t) \mbox{ } = \mbox{ }\int dx\mbox{ } \Delta \rho(x,t) $. \\ \mbox{ } \\ The striking result is this: in the limit when the bias becomes time-independent, the answer for (a) is zero but the answer for (b) is non-zero. This is what we mean by a bias-induced anomaly. Normally we would expect both to be zero, since (b) is obtained by spatially integrating (a). The reason for this anomaly is that integrating over an infinite domain of $x$-values effectively multiplies the quantity by an infinite factor. Thus, even if this quantity tends to zero, multiplying it by a quantity tending to infinity can lead to a value that is finite, as it is in this case. \\ \mbox{ } \\ To see this mathematically we write, \begin{align} \partial_t \Delta \rho(x,t) \mbox{ } &= \mbox{ } \lim_{x^{'} \rightarrow x } v_F\frac{d}{dt}\bigg(<\psi^{\dagger}_{R}(x^{'},t)\psi_{R}(x,t) > -<\psi^{\dagger}_{L}(x^{'},t)\psi_{L}(x,t)>\bigg) \nonumber \\ \mbox{ } &= \mbox{ }- \frac{v_F}{\pi} \mbox{ } \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \frac{1}{2v_F} \mbox{ } e V^{'}_b(t - \frac{|x|}{v_F}) \end{align} whereas, \begin{align} \partial_t \Delta N(t) \mbox{ } &= \mbox{ } - \frac{v_F}{\pi} \mbox{ } \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \mbox{ } e \mbox{ } [ V_b(t) - V_b(-\infty) ] \end{align} Choose $ V_b(t) = e^{ \alpha t } V_b(0) $ with $ \alpha \rightarrow 0^{+} $. This means the bias is switched on slowly from zero in the remote past and remains on at least until time $ t $. In the limit $ \alpha \rightarrow 0^{+} $, the bias is always on but the endpoints are mathematically well defined. In this case, \begin{align} \partial_t \Delta \rho(x,t) \mbox{ } &= \mbox{ }- \lim_{\alpha \rightarrow 0^{+} } \frac{v_F}{\pi} \mbox{ } \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \frac{1}{2v_F} \mbox{ } e V^{'}_b(t - \frac{|x|}{v_F}) \approx 0 \end{align} whereas, \begin{align} \partial_t \Delta N(t) \mbox{ } &= \mbox{ } - \lim_{\alpha \rightarrow 0^{+} } \frac{v_F}{\pi} \mbox{ } \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \mbox{ } e \mbox{ } [ V_b(t) - V_b(-\infty) ] \nonumber \\&= \mbox{ } - \lim_{\alpha \rightarrow 0^{+} } \frac{v_F}{\pi} \mbox{ } \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \mbox{ } e \mbox{ } V_b(0) \neq 0 \end{align} This is the first of the ``bias-induced anomalies''. \\ \mbox{ } \\ \mbox{ } \\ The second example is the one-particle Green function itself, specifically the one that involves turning a right mover into a left mover, i.e. $ < \psi^{\dagger}_L(x,t) \psi_R(x^{'},t^{'})> $. In the absence of a bias, this quantity vanishes identically when $ x $ and $ x^{'} $ are on opposite sides of the origin. In the present case, however, we obtain something interesting, \begin{align} <\psi^{\dagger}_{L}(x^{'} < 0,t^{'})\psi_{R}(x > 0,t)> = -\frac{i}{2\pi} \frac{\frac{\pi}{\beta v_{F}} \mbox{ } q_1}{\sinh( \frac{ \pi }{\beta v_F } ( x + x^{'}-v_F(t-t^{'}) ) )} \mbox{ } ( -U(t - \frac{x}{v_F},t) + U(t^{'} + \frac{x^{'}}{v_F},t) ) \end{align} where $ q_1 = \left[ 1 - \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2} \right] \mbox{ } i \frac{\Gamma }{v_F}\frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2} $. 
This quantity is identically zero when there is no bias, since the point contact causes reflection and turns a right mover into a left mover on the same side of the origin, but not on opposite sides. The presence of a bias, however, leads to a non-equilibrium situation in which points on opposite sides of the origin at one time are effectively on the same side of the origin at other times, as reflected in the evolution factor $ ( -U(t - \frac{x}{v_F},t) + U(t^{'} + \frac{x^{'}}{v_F},t) ) $. \section{Dynamical density of states} \label{sec4} In this section, we present the formula for the dynamical density of states $D(\omega;x,T)$ at position $x$. $D(\omega;x,T) \mbox{ } d\omega$ is the number of fermionic states per unit length with energy between $\hbar \omega$ and $\hbar(\omega + d\omega)$ relative to the Fermi energy. The DDOS at $x^{'}=x$ is given by the formula \begin{align} D(\omega;x,T) \mbox{ } = \mbox{ }\int d\tau \mbox{ } e^{ -i \omega \tau } < \{ \psi(x,T + \frac{\tau }{2} ) , \psi^{\dagger}(x, T - \frac{ \tau }{2} ) \}> \label{Domega} \end{align} A closed expression for $D(\omega;x,T)$ can be obtained by using the formulas for the non-equilibrium Green functions obtained earlier. We evaluate the DDOS for the right movers in \hyperref[AppendixB]{Appendix B}; it can be evaluated in a similar manner for the left movers as well. The DDOS for the right movers is obtained as \begin{align} D(\omega;x,T) \mbox{ }= \mbox{ } D_{equil}(\omega;x,T) = \frac{1}{2v_F} \end{align} In other words, it is the same as what one would expect when there is no bias, no mutual interactions between particles, and the system is in equilibrium. The reason for this simple result is that, in the absence of mutual interactions between the fermions, the anticommutator in Eq.\ref{Domega} is a Dirac delta function at $ \tau = 0 $ (since the space dependence and time dependence appear additively for chiral fermions). \section{Time-dependent tunneling parameter} \label{sec6} In this section we consider the case of a time-dependent tunneling parameter $t_p$, which means that $\Gamma$ is now time-dependent, $\Gamma(t)$. We show that when the time-dependence of $\Gamma$ is present only as a phase factor, $\Gamma_{TD}(t) = \Gamma e^{i \theta_{TD}(t)}$, the results are the same as in the previous sections. When the magnitude of $\Gamma$ itself is time-dependent, i.e. $\Gamma_{TD}(t) = \Gamma(t)e^{i \theta_{TD}(t)}$, the two approaches are not equivalent and we get different results. In general we can write a modified Hamiltonian for the system with a complex $\Gamma_{TD}(t)$, \begin{align} H = \sum_{p} v_Fp \mbox{ } b^{\dagger}_{p,R}b_{p,R} + \sum_{p}(-v_Fp) b^{\dagger}_{p,L}b_{p,L}+ \frac{ \Gamma_{TD}(t) }{L} b^{\dagger}_{.,R}b_{.,L} +\frac{ \Gamma^*_{TD}(t) }{L}b^{\dagger}_{.,L}b_{.,R} \label{timedepham} \end{align} Here we have used the same notation as in Section \ref{sec1}, and the Fermi fields are modified by the addition of a time-dependent phase: $b_{p,R}(t) \mbox{ } = \mbox{ }c_{p,R}(t)\mbox{ }e^{ i \theta_{TD}(t) }$ and $b_{p,L}(t) \mbox{ } = \mbox{ }c_{p,L}(t) $. 
The equations of motion for the modified fields then become, \begin{align} i\partial_t b_{p,R}(t) \mbox{ } = \mbox{ } v_Fp \mbox{ } b_{p,R}(t) + \frac{ \Gamma_{TD}(t) }{L} b_{.,L}(t)\nonumber \\ i\partial_t b_{p,L}(t) \mbox{ } = \mbox{ } -v_Fp \mbox{ } b_{p,L}(t) + \frac{ \Gamma^{*}_{TD}(t) }{L} b_{.,R}(t) \label{eqomb} \end{align} \subsection{ \texorpdfstring{$ |\Gamma_{TD}(t)| $}{Lg} is independent of time} When the magnitude of the tunneling amplitude $|\Gamma_{TD}(t)|$ is independent of time we can write \begin{align} \Gamma_{TD}(t) = \Gamma \mbox{ }e^{ i \theta_{TD}(t) } \end{align} Rewriting Eq.\ref{eqomb} in terms of the original fields gives \begin{align} i\partial_t c_{p,R}(t) \mbox{ } &= \mbox{ }( v_Fp + \partial_t\theta_{TD}(t) ) \mbox{ } c_{p,R}(t) + \frac{ |\Gamma| }{L}c_{.,L}(t) \nonumber \\ i\partial_t c_{p,L}(t) \mbox{ } &= \mbox{ } -v_Fp \mbox{ }c_{p,L}(t) + \frac{ |\Gamma| }{L}c_{.,R}(t) \end{align} and, on comparing with Eq.\ref{eqom}, we see that the case of $|\Gamma_{TD}(t)|$ independent of time is equivalent to the case of a time-independent real $\Gamma$ considered in the previous sections, provided one identifies $\partial_{t} \theta_{TD}(t) = e V_{b}(t)$. However, when the time-dependence of the tunneling amplitude is such that the magnitude $|\Gamma_{TD}(t)|$ itself depends on time, the two approaches are not equivalent. \subsection{$|\Gamma_{TD}(t)|$ is time-dependent} When the magnitude of the tunneling amplitude is itself time-dependent we may write, \begin{align} \Gamma_{TD}(t) = \Gamma(t) \mbox{ }e^{ i \theta_{TD}(t) } \end{align} Following a procedure similar to that of Section \ref{sec1}, we write down the finite temperature non-equilibrium Green functions for the case of a time-dependent tunneling parameter, \begin{align} <\psi^{\dagger}_{\nu^{'}}(x^{'},t^{'})\psi_{\nu}(x,t)> \mbox{ }=\mbox{ } -\frac{i}{2\pi} \frac{\frac{\pi}{\beta v_{F}}}{\sinh( \frac{ \pi }{\beta v_F } (\nu x-\nu^{'}x^{'}-v_F(t-t^{'}) ) )} \zeta_{\nu,\nu^{'}} \label{timedepnegf} \end{align} where $\nu$,$\nu^{'} = \pm 1$ with $R=1$ and $L=-1$ and \begin{widetext} \begin{align} \zeta_{1,1} = \left( 1 + \frac{ i }{2v_F} \mbox{ } \Gamma^*_{TD}(t- \frac{x}{v_F}) \mbox{ } \Xi_R(x,t) \right) \left( 1 - \frac{ i }{2v_F} \mbox{ } \Gamma_{TD}(t^{'} - \frac{x^{'}}{v_F}) \mbox{ } \Xi^*_R(x^{'},t^{'}) \right) + \Xi_R(x,t) \mbox{ } \Xi^*_R(x^{'},t^{'}) \end{align} \begin{align} \zeta_{1,-1} = \Xi^*_L(x^{'},t^{'})\mbox{ }\left( 1 + \frac{ i }{2v_F} \mbox{ } \Gamma^*_{TD}(t- \frac{x}{v_F}) \mbox{ } \Xi_R(x,t) \right)-\Xi_R(x,t) \mbox{ } \left( 1 + \frac{i}{2v_F} \mbox{ } \Gamma^*_{TD}(t^{'} + \frac{x^{'}}{v_F}) \mbox{ } \Xi^*_L(x^{'},t^{'})\right) \end{align} \begin{align} \zeta_{-1,1} = \Xi_L(x,t) \left( 1 - \frac{ i }{2v_F} \mbox{ } \Gamma_{TD}(t^{'} - \frac{x^{'}}{v_F}) \mbox{ } \Xi^*_R(x^{'},t^{'}) \right)-\Xi^*_R(x^{'},t^{'})\mbox{ } \left( 1 - \frac{i}{2v_F} \mbox{ } \Gamma_{TD}(t + \frac{x}{v_F}) \mbox{ } \Xi_L(x,t)\right) \end{align} \begin{align} \zeta_{-1,-1} = \left( 1 - \frac{i}{2v_F} \mbox{ } \Gamma_{TD}(t + \frac{x}{v_F}) \mbox{ } \Xi_L(x,t)\right) \mbox{ } \left( 1 + \frac{i}{2v_F} \mbox{ } \Gamma^*_{TD}(t^{'} + \frac{x^{'}}{v_F}) \mbox{ } \Xi^*_L(x^{'},t^{'})\right) + \Xi_L(x,t) \mbox{ } \Xi^*_L(x^{'},t^{'}) \end{align} \end{widetext} where \begin{align} \Xi_R(x,t)\mbox{ } = \mbox{ } \frac{i}{v_F} \mbox{ } \frac{ \theta(x ) \mbox{ } \Gamma_{TD}(t- \frac{x}{v_F}) }{ 1 + \frac{ |\Gamma_{TD}(t- \frac{x}{v_F} )|^2 }{(2v_F)^2 } }; \mbox{ } \mbox{ } \Xi_L(x,t)\mbox{ } = \mbox{ } - \frac{ i }{v_F} \frac{ \theta( -x ) \mbox{ } \Gamma^*_{TD}(t + \frac{ x }{ v_F} ) }{1 + \frac{ |\Gamma_{TD}(t + \frac{x}{v_F} )|^2 }{(2v_F)^2 }} \label{xiequil} \end{align} 
The tunneling current can be computed using the same definition as in Eq.\ref{tundef} but with the Hamiltonian given in Eq.\ref{timedepham}. This gives, \begin{align} I_{tun}(t) \mbox{ } = \mbox{ }-i \mbox{ }e \lim_{t^{'} \rightarrow t } \bigg( \Gamma_{TD}(t) \mbox{ } < \psi^{\dagger}_R(0,t^{'}) \psi_L(0,t) > - \Gamma^*_{TD}(t) \mbox{ } < \psi^{\dagger}_L(0,t) \psi_R(0,t^{'}) > \bigg) \label{ituntimedep} \end{align} After some effort we conclude, \begin{align} I_{tun}(t) \mbox{ } = \mbox{ }\frac{4 i e v_F^2 \left( \Gamma_{ TD }^*(t) \Gamma_{TD}^{'}(t)- \Gamma_{TD}(t) \Gamma_{TD}^{*'}(t)\right)}{\pi \left( | \Gamma_{TD}(t) |^2+4 v_F^2\right)^2} \label{itungammat} \end{align} As the magnitude of $\Gamma_{TD}(t)$ is time-dependent in this case, we set $ \Gamma_{TD}(t) \mbox{ } = \mbox{ } \Gamma (t) \mbox{ }e^{i \theta_{TD}(t)} $ and, using the relation $ \theta_{TD}'(t) = e V_b(t) $, we get \begin{align} I_{tun}(t) \mbox{ } = \mbox{ }-\frac{8 e^2 v_{F}^2 \Gamma (t)^2 }{\pi \left(\Gamma (t)^2+4 v_{F}^2\right)^2}\mbox{ }\mbox{ } V_b(t) \end{align} The time-dependent tunneling amplitude can be written in terms of the tunneling parameter $t_{p}(t)$ as $\Gamma(t) = 2 v_{F} t_{p}(t)$. Using the convention of \cite{shah2016consistent}, we also pick up a minus sign, $V_{b}(t) = -V(t)$. Finally, after restoring dimensional units, the following expression for the time-dependent tunneling current is obtained \begin{align} I_{tun}(t) \mbox{ } = \mbox{ }\frac{ e^2 }{h } \mbox{ } \frac{ 4\mbox{ } t_p (t)^2 }{ \left(1+t_p(t)^2\right)^2}\mbox{ }V(t) \end{align} The above expression is similar in form to Eq.\ref{eqtunc}, except that $t_{p}$ is now time-dependent. This tells us that as long as $ t_p(t) $ is switched on and held at a constant value, the current is proportional to the voltage and there are no transients in the current. However, if $ t_p(t) $ itself depends in some complicated way on the bias, this may not be the case; in such situations this dependence has to be specified separately. \section{Double barrier resonant tunneling} \label{db} The type of impurity present at the point contact is given by the form of $\Gamma_{TD}(t)$. Here we show that the results of the previous section can be used to study resonant tunneling in a straightforward manner, by treating the point-contact impurity as a double-barrier structure such as a lead-device-lead junction at the origin. We neglect Coulomb interactions and dynamics inside the device, and focus only on the point-contact couplings between the left lead and the device and between the right lead and the device. In this case the tunneling amplitude takes the form \begin{align} \Gamma_{TD}(t) = \Gamma(t) e^{i \Theta_{TD}(t)} = (W_{L} e^{2 i \xi_{0}} + W_{R} e^{-2 i \xi_{0}})e^{i \Theta_{TD}(t)} \label{dbgamma} \end{align} where $\Theta^{'}_{TD}(t) = e V_{b}(t) = -e V(t)$, and $W_{L}$ and $W_{R}$ are respectively the couplings of the double barrier at the origin to the left and right chiral quantum wires. These couplings can be time-dependent in general, but we assume them to be independent of time for simplicity, as this does not change the form of the I-V characteristics. Eq.\ref{dbgamma} is valid in the limit $k_{F} \rightarrow \infty$ and inter-barrier separation $a \rightarrow 0$ such that $0 < k_{F} a = \xi_{0} < \infty$ is fixed. 
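Before specialising to the symmetric case below, we note that Eqs.\ref{itungammat} and \ref{dbgamma} are easy to evaluate numerically. The following short Python sketch (purely illustrative; the coupling values, the step-bias profile and the units $\hbar = e = 1$ are arbitrary choices made only for this example) computes the current of Eq.\ref{itungammat} by finite-differencing $\Gamma_{TD}(t)$, compares it with the closed form $I_{tun} = \frac{8 e^{2} v_{F}^{2} |\Gamma_{0}|^{2}}{\pi(4 v_{F}^{2} + |\Gamma_{0}|^{2})^{2}} V(t)$ with $\Gamma_{0} = W_{L} e^{2 i \xi_{0}} + W_{R} e^{-2 i \xi_{0}}$ (derived analytically just below), and scans $\xi_{0}$ to locate the conductance maximum.
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) parameters, with hbar = e = 1
v_F = 1.0
W_L, W_R = 2.0, 2.0        # barrier couplings (the symmetric case treated below)
eV0 = 0.3                  # step bias: V(t) = eV0 * theta(t), so V_b(t) = -V(t)

def V(t):
    return eV0 if t > 0 else 0.0

def Gamma_TD(t, xi0):
    """Double-barrier amplitude of Eq.(dbgamma); Theta'(t) = e V_b(t) = -V(t)."""
    Gamma0 = W_L * np.exp(2j * xi0) + W_R * np.exp(-2j * xi0)
    Theta = -eV0 * max(t, 0.0)     # Theta(t) = -int_0^t V(s) ds for the step bias
    return Gamma0 * np.exp(1j * Theta)

def I_tun(t, xi0, dt=1e-4):
    """Eq.(itungammat), with the derivative of Gamma_TD taken by central differences."""
    G = Gamma_TD(t, xi0)
    dG = (Gamma_TD(t + dt, xi0) - Gamma_TD(t - dt, xi0)) / (2 * dt)
    num = 4j * v_F**2 * (np.conj(G) * dG - G * np.conj(dG))
    return (num / (np.pi * (abs(G)**2 + 4 * v_F**2)**2)).real

def I_closed(t, xi0):
    """Closed form: 8 v_F^2 |Gamma0|^2 V(t) / (pi (4 v_F^2 + |Gamma0|^2)^2)."""
    g2 = abs(W_L * np.exp(2j * xi0) + W_R * np.exp(-2j * xi0))**2
    return 8 * v_F**2 * g2 * V(t) / (np.pi * (4 * v_F**2 + g2)**2)

t = 1.0
for xi0 in [0.2, 0.5, 1.0]:
    print(f"xi0 = {xi0:.2f}:  finite diff {I_tun(t, xi0):+.6f}   closed form {I_closed(t, xi0):+.6f}")

# The conductance G(xi0) = I/V peaks at e^2/(2 pi) when |Gamma0|^2 = 4 v_F^2 (resonance).
xi_grid = np.linspace(0.0, np.pi / 2, 2001)
G_of_xi = np.array([I_closed(t, x) / eV0 for x in xi_grid])
print("max G =", G_of_xi.max(), "  e^2/(2 pi) =", 1.0 / (2.0 * np.pi))
\end{verbatim}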
For the case of a symmetric double barrier, $W_{L} = W_{R} = W$, substituting Eq.\ref{dbgamma} in Eq.\ref{itungammat} we obtain \begin{align} I_{tun}(t) = \frac{8 e^{2} v_{F}^{2} W^{2}(e^{2 i \xi_{0}} + e^{-2 i \xi_{0}})^{2}}{\pi(4 v_{F}^{2} + W^{2}(e^{2 i \xi_{0}} + e^{-2 i \xi_{0}})^{2})^{2}} V(t) \end{align} The tunneling conductance is given by \begin{align} G = \frac{8 e^{2} v_{F}^{2} W^{2}(e^{2 i \xi_{0}} + e^{-2 i \xi_{0}})^{2}}{\pi(4 v_{F}^{2} + W^{2}(e^{2 i \xi_{0}} + e^{-2 i \xi_{0}})^{2})^{2}} \end{align} This is of the form $G = \frac{8 e^{2} v_{F}^{2} |\Gamma|^{2}}{\pi(4 v_{F}^{2} + |\Gamma|^{2})^{2}}$, which peaks at $G = \frac{e^{2}}{2\pi} = \frac{e^{2}}{h}$ (since $\hbar = 1$) when $|\Gamma|^{2} = 4 v_{F}^{2}$; this determines the condition for resonant tunneling. One can tune the inter-barrier separation in the device region by external means in order to achieve resonance. When the resonance condition is satisfied the conductance attains its peak value $G = G_{0} = \frac{e^{2}}{h}$ (see Fig.\ref{resonant}). The condition for resonance in the symmetric double barrier case is determined to be \begin{align} W^{2}(2 \cos(4 \xi_{0}) + 2)-4 v_{F}^{2} = 0 \label{rescondsym} \end{align} For the more general case of an asymmetric double barrier, the expression for the current is obtained as \begin{align} I_{tun}(t) = \frac{8 e^{2} v_{F}^{2}\left(e^{2 i \xi_{0}} W_{L}+e^{-2 i \xi_{0}} W_{R}\right) \left(e^{-2 i \xi_{0}} W_{L}+e^{2 i \xi_{0}} W_{R}\right)}{\pi (4 v_{F}^{2} + \left(e^{2 i \xi_{0}} W_{L}+e^{-2 i \xi_{0}} W_{R}\right) \left(e^{-2 i \xi_{0}} W_{L}+e^{2 i \xi_{0}} W_{R}\right))^{2}}V(t) \end{align} The tunneling conductance attains its maximum value $G_{0} = \frac{e^{2}}{h}$ when the resonance condition for this case is satisfied, which is \begin{align} W_{L}^{2} + 2 \cos(4 \xi_{0}) W_{L} W_{R} + W_{R}^{2} - 4 v_{F}^{2} = 0 \end{align} \begin{figure}[H] \centering \includegraphics[scale=0.9]{symdb} \caption{This figure shows $G/G_{0}$ vs $\xi_{0}$ for a symmetric double barrier. The conductance peaks when the resonance condition (Eq.\ref{rescondsym}) is satisfied. The figure is plotted choosing $W=2$ and $v_{F}=1$.} \label{resonant} \end{figure} \section{Transient quantum transport in the case of finite bandwidth} \label{secfinite} So far we have considered the simplest possible scenario of non-interacting fermions with a linear dispersion and no momentum cutoffs (infinite bandwidth). In such a circumstance the I-V characteristic (Eq.\ref{eqtunc}) for point-contact tunneling is linear and shows only steady-state behaviour when a constant voltage is turned on suddenly. We expect deviations from these properties when we take into account the effects of a momentum cutoff (finite bandwidth) $\Lambda$ in the point contact, such that $|p| < \Lambda$. This implicitly means that there is a short-distance cutoff ($\frac{1}{\Lambda}$) in the problem, which makes the quantum point contact (QPC) region occupy a non-zero size. Here we calculate the leading correction to the tunneling current due to a finite bandwidth and show the appearance of a transient in the current in the presence of a (subsequently constant) bias that is suddenly switched on. The correction to the current involves an integration over the past history of the system and thus encodes memory effects, i.e. 
it displays non-Markovian behaviour.\\ \mbox{ } \\ In the presence of a finite bandwidth, the Hamiltonian is written as \begin{align} H = \sum_p(v_F p + eV_b(t))c^{\dagger}_{p,R}c_{p,R} + \sum_p(-v_F p)c^{\dagger}_{p,L}c_{p,L} + \frac{\Gamma}{L}(c^{\dagger}_{.,R}c_{.,L} + c^{\dagger}_{.,L}c_{.,R}) \label{hamfinite} \end{align} where now we have defined $ c_{.,\nu} = \sum_{|p|<\Lambda}c_{p,\nu} $ with the momentum restricted to a finite bandwidth $\Lambda$. We set $ c_{.,\nu} = c^{\infty}_{.,\nu} + \delta c_{.,\nu} $ where $ c^{\infty}_{.,\nu} = \sum_pc_{p,\nu} $ and $\delta c_{.,\nu}$ is the deviation from the infinite bandwidth case. The other quantities have their usual meanings, as in the previous sections. The equations of motion for the right- and left-moving Fermi fields in this case are \begin{align} i \partial_t c_{p,R}(t) = (v_Fp + eV_b(t)) c_{p,R}(t) + \theta(\Lambda-|p|)\frac{\Gamma}{L}c_{.,L}(t) \nonumber \\ i \partial_t c_{p,L}(t) = -v_Fp c_{p,L}(t) + \theta(\Lambda-|p|)\frac{\Gamma}{L}c_{.,R}(t) \label{bandeqom} \end{align} with $\theta(\Lambda-|p|)$ taken to be the Dirichlet-regularised step function. An exact solution to the problem is possible only in the simplest case of infinite bandwidth. If we wish to investigate the case of a finite bandwidth analytically, we have to settle for perturbative corrections in powers of $\frac{1}{\Lambda}$. In this section we are interested in the finite bandwidth correction to the current, $\delta I_{tun}(t)$, and we shall evaluate it up to $O(\frac{1}{\Lambda})$, i.e. we assume a large but finite bandwidth. In this case we may write, \begin{align} \delta I_{tun}(t) =& - \frac{ i e \Gamma}{L} \lim_{t^{'} \rightarrow t }(<\delta c^{\dagger}_{.,R}(t^{'})c^{\infty}_{.,L}(t)> - <\delta c^{\dagger}_{.,L}(t)c^{\infty}_{.,R}(t^{'})>)\nonumber \\& - \frac{ i e \Gamma}{L} \lim_{t^{'} \rightarrow t }(<c^{\dagger \infty}_{.,R}(t^{'})\delta c_{.,L}(t)>- <c^{\dagger \infty }_{.,L}(t)\delta c_{.,R}(t^{'})>) \label{ditun} \end{align} Upon solving the equations in Eq.\ref{bandeqom} we may write, \begin{align} c_{.,R}(t) = \left( \sum_{|p|<\Lambda} c_{p,R}(t_0) e^{ -i (t-t_0) p v_F }\right)\mbox{ } U(t_0,t)- i \Gamma \int_{t_0}^t c_{.,L}(t_2) \delta_{\Lambda} (v_F(t-t_2))\mbox{ } U(t_2,t) \, dt_2 \label{cdotR} \end{align} and \begin{align} c_{.,L}(t) = \left( \sum_{|p|<\Lambda} c_{p,L}(t_0) e^{ i (t-t_0) p v_F }\right)- i \Gamma \int_{t_0}^t c_{.,R}(t_2) \delta_{\Lambda}(v_F(t-t_2)) \, dt_2 \label{cdotL} \end{align} where we have defined the broadened Dirac delta as $\delta_{\Lambda}(x) = \frac{1}{L}\sum_{|p|<\Lambda}e^{ipx}$. We also introduce a quantity $ \Delta(x) $ that corresponds to the difference between this broadened delta function and the actual Dirac delta function. In terms of the usual Dirac delta function $\delta(X) = \frac{1}{L} \sum_{p} e^{i p X}$ this can be written as \begin{align} \delta_{\Lambda} (X) = \delta(X) - \Delta(X) \label{deltalambda} \end{align} Computing the finite bandwidth correction to leading order in $ 1/\Lambda $ is itself not a straightforward task and involves a tedious calculation. In order to calculate $\delta I_{tun}(t)$ we have to evaluate correlations of the type $<\delta c^{\dagger}_{.,\nu^{'}}(t^{'})c^{\infty}_{.,\nu}(t)>$ and $<c^{\dagger \infty}_{.,\nu^{'}}(t^{'})\delta c_{.,\nu}(t)>$. In \hyperref[AppendixD]{Appendix D} we discuss the procedure used to calculate these correlations. 
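As a brief aside, in the continuum limit $L \rightarrow \infty$ the broadened delta function defined above becomes $\delta_{\Lambda}(X) = \frac{\sin(\Lambda X)}{\pi X}$ (the explicit form used below Eq.\ref{deltaifinal}). The following minimal Python check (with arbitrary illustrative numbers) confirms that $\delta_{\Lambda}$ acts like a Dirac delta on a smooth test function as $\Lambda$ grows, which is why the difference $\Delta = \delta - \delta_{\Lambda}$ produces corrections that are small for a large bandwidth.
\begin{verbatim}
import numpy as np

def delta_broadened(x, lam):
    """delta_Lambda(x) = sin(lam*x)/(pi*x), written via np.sinc to handle x = 0."""
    return (lam / np.pi) * np.sinc(lam * np.asarray(x) / np.pi)

f = lambda x: np.exp(-x**2)        # smooth test function with f(0) = 1
x = np.linspace(-50.0, 50.0, 400001)
dx = x[1] - x[0]
for lam in [1.0, 10.0, 100.0]:
    val = np.sum(delta_broadened(x, lam) * f(x)) * dx
    print(f"Lambda = {lam:6.1f}   integral of delta_Lambda * f = {val:.6f}   (f(0) = 1)")
\end{verbatim}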
After a lengthy procedure we obtain the following expression \begin{widetext} \begin{align} \delta I_{tun}(t) = &-\frac{i e \Gamma}{L}\bigg( -\frac{4 L v_F^2 \left(4 v_F^2-\Gamma^2\right) (U(t,t_0)-U(t_0,t))}{ (\Gamma ^2+4 v_F^2)^2} \int dx \frac{\Delta(x-v_{F}(t-t_{0}))}{2\pi i} \frac{ \frac{ \pi }{\beta v_F } \mbox{ }\theta(-x) (R-R^*) }{\sinh( \frac{ \pi }{\beta v_F } (-x + v_{F}(t-t_{0}) ) ) } \mbox{ } \mbox{ } \nonumber \\ &+ \frac{32 i \Gamma^3 L v_F^4}{(\Gamma^2+4 v_F^2)^3} \mbox{ } \int_{t_0}^{t} \Delta(v_F(t-t_2))\mbox{ } \frac{1}{2\pi i} \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta} (t-t_{2}) ) } \mbox{ } (U(t,t_2) U(t,t_2)- U(t_2,t) \mbox{ } U(t_2,t) ) \, dt_2 \bigg) \label{deltaifin} \end{align} \end{widetext} \begin{figure} \centering \includegraphics[scale=1]{ditunvst} \caption{Current transients: \small This figure shows $\delta I_{tun}(t)$ vs $t$ for different values of the dimensionless bias $e V_{dl}$, defined as $e V_{dl} = \frac{e V_{0}}{v_{F} \Lambda}$, where we have considered a step bias $V(t) = \theta(t) V_{0}$, and for dimensionless temperature $T_{dl} = \frac{T}{v_{F} \Lambda} = 0.1$. The other parameters are $\Gamma \Lambda = 100$ and $v_{F} \Lambda = 100 $ in appropriate units. The red dashes indicate the $tDMRG$ result for the time evolution of the current in an Anderson dot model (see Fig.1 in \cite{Eckel_2010}), which we have rescaled and overlaid on our plot to show the qualitative similarity of the current transients even in a model quite different from ours.} \label{deltaivst} \end{figure} Here $R$ is the reflection amplitude (for the infinite bandwidth point contact), $\beta$ is the inverse temperature, $x$ is position and $t_{0}$ is an arbitrary initial time at which the system is in equilibrium. Since we are going to set $t_{0} \rightarrow -\infty$, as we did in the case of infinite bandwidth, the first term in Eq.\ref{deltaifin} drops out and we are left with only the second term, which involves an integral over the past history of the system. \begin{widetext} \begin{align} \delta I_{tun}(t) = -\frac{i e \Gamma}{L} \frac{32 i \Gamma^3 L v_F^4}{(\Gamma^2+4 v_F^2)^3} \mbox{ } \int_{-\infty}^{t} \frac{\Delta(v_F(t-t_2))}{2\pi i} \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta} (t-t_{2}) ) } (U(t,t_2) U(t,t_2)- U(t_2,t) \mbox{ } U(t_2,t) ) \, dt_2 \label{deltaifinal} \end{align} \end{widetext} Note that the function $\Delta $ may be written as $\Delta(v_F(t-t_2)) = \delta(v_F(t-t_2)) - \frac{\sin(\Lambda v_F(t-t_2))}{\pi v_F(t-t_2)} $. The expression for $\delta I_{tun}(t)$ in Eq.\ref{deltaifinal} exhibits non-Markovian behaviour, as the current at a given time depends not only on the voltage applied at that time but also on the voltage at all previous times. When the bias is switched on suddenly and remains constant thereafter, the tunneling current shows a transient (gradual) build-up before settling into its steady-state constant value. Even when a constant voltage is present eternally, there is a nonlinear dependence of the current on this bias voltage, which is seen in terms such as $U(t,t_2)$, where $U(t,t_{2}) \equiv e^{ - i \int _{ t}^{t_{2}} e V_{b}(s)ds }$. Note that according to our convention $V_{b}(t) = -V(t)$ and hence $U(t,t_{2}) \equiv e^{ i \int _{ t}^{t_{2}} e V(s)ds }$ (where $ V(t) $ is the convention of \cite{shah2016consistent}). 
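To make the history dependence concrete, the short Python sketch below (an illustrative aside with arbitrarily chosen parameters, not a reproduction of Fig.\ref{deltaivst}) evaluates, for a step bias, the smooth part of the memory integral in Eq.\ref{deltaifinal}, i.e. the contribution of the $\frac{\sin(\Lambda v_F(t-t_2))}{\pi v_F(t-t_2)}$ piece of $\Delta$, up to the overall constant prefactor; the delta-function piece of $\Delta$ only yields an instantaneous (Markovian) term proportional to the bias at time $t$ and is omitted here.
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) parameters, with hbar = e = 1
v_F, Lam = 1.0, 100.0      # Fermi velocity and bandwidth cutoff
beta = 10.0                # inverse temperature
eV0 = 0.5                  # step bias: e V(s) = eV0 * theta(s)

def phase(t, t2):
    """phi(t,t2) = int_{t2}^{t} e V(s) ds for the step bias."""
    return eV0 * (np.maximum(t, 0.0) - np.maximum(t2, 0.0))

def memory_integral(t, tau_max=60.0, n=200000):
    """Smooth (sinc) part of the history integral in Eq.(deltaifinal), up to a constant prefactor."""
    tau = (np.arange(n) + 0.5) * (tau_max / n)     # midpoint grid in tau = t - t2 avoids tau = 0
    sinc_kernel = np.sin(Lam * v_F * tau) / (np.pi * v_F * tau)
    thermal = (np.pi / (beta * v_F)) / np.sinh(np.pi * tau / beta)
    oscillation = np.sin(2.0 * phase(t, t - tau))  # from U(t,t2)^2 - U(t2,t)^2
    return np.sum(sinc_kernel * thermal * oscillation) * (tau_max / n)

for t in [-0.5, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0]:
    print(f"t = {t:+5.2f}   memory integral {memory_integral(t):+.4e}")
\end{verbatim}
Before the switch-on the integral vanishes, while for $t>0$ it depends on the entire history of the bias, which is the non-Markovian feature discussed above.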
In Fig.\ref{deltaivst} we plot $\delta I_{tun}(t)$ vs $t$ for the case of a step bias (sudden switching on), $V(t) = V_{0} \mbox{ }\theta(t)$, showing the transients in the current before a steady state is reached. The transients are qualitatively similar to those observed in the experimental investigation of split-gate quantum point-contacts \cite{doi:10.1063/1.2337865} and also in numerical simulations of non-equilibrium transport through an Anderson dot using methods like the time-dependent density matrix renormalization group ($tDMRG$) \cite{Eckel_2010} and the iterative summation of path integrals ($ISPI$) \cite{PhysRevB.77.195316} approach.\\ Our analysis shows that, even in non-equilibrium transport through a simple quantum point-contact, transient dynamics appear in the tunneling current when a short-distance cutoff is introduced in the problem. This is the reason why numerical methods like $tDMRG$, which work on a lattice, predict a transient in the current before a steady state is reached. Fig.\ref{deltaivdltdl} shows the steady-state current $\delta I_{tun}$ as a function of a constant, time-independent bias ($e V_{dl} = \frac{e V}{v_{F} \Lambda}$) for different temperatures $T_{dl} = \frac{T}{v_{F} \Lambda}$. The three energy scales in the problem are the temperature ($T$), the bias potential ($V$) and the energy scale associated with the bandwidth ($v_{F} \Lambda$). The energy scale due to the bandwidth is the dominant one, as we assume the bandwidth to be large but finite in our calculation. \begin{figure}[H] \centering \includegraphics[scale=0.9]{divsvdl} \caption{Nonlinear I-V characteristics: \small The temperature dependence of $\delta I_{tun}$ in the steady state is shown when plotted vs $e V_{dl} = \frac{e V}{v_{F} \Lambda}$. The dimensionless temperature is defined as $ T_{dl} = \frac{T}{v_{F} \Lambda}$. The other parameters are $\Gamma \Lambda = 100 $ and $v_{F} \Lambda = 100$ in appropriate units.} \label{deltaivdltdl} \end{figure} \section{Summary and Conclusions} \label{conc} In this paper, we have investigated the non-equilibrium transport between two non-interacting chiral quantum wires by computing exactly the non-equilibrium Green function (NEGF), expressed in terms of simple functions of positions and times, unlike previous analytical approaches that deal only with the steady-state response to a constant bias. We have calculated the time-dependent tunneling current and differential conductance across an infinite bandwidth point contact for an arbitrary time-dependent bias. In this case no transients in the tunneling current are seen when a step voltage bias $V_{b}(t) = \theta(t)\mbox{ } V_{0}$ is considered: the steady state in the current is reached instantaneously, which may be attributed to the extreme idealisation of no interactions between the fermions and an infinite bandwidth in the point contact. The NEGF method allows us to study transient phenomena as well as steady-state properties. The full space-time NEGF that we have obtained exhibits transient behaviour upon the sudden switch-on of a bias, even when the bandwidth of the point-contact is infinite. We also examine the situation where, in addition to the bias voltage, the tunneling amplitude also becomes time-dependent, and we calculate the NEGF and the time-dependent transport properties in such a scenario. 
We also demonstrate how resonant tunneling through a simple double-barrier structure can be easily studied using our method.\\ We go beyond the infinite bandwidth limit and consider the situation where a finite bandwidth ($\Lambda$) in the point-contact is introduced into the problem. Although exact treatments of time-dependent non-equilibrium transport in similar systems have been given before using the Keldysh NEGF formalism, our method is different in that we work in the position and time domain, and it applies to an arbitrary time-dependent bias, unlike previous works \cite{PhysRevB.74.085324} that are restricted to special cases such as a sharp step bias or a square pulse voltage bias. When a finite bandwidth is introduced in the point-contact, a short-distance cutoff ($\frac{1}{\Lambda}$) becomes implicit. A systematic perturbative treatment in this parameter allows the calculation of the correction to the tunneling current up to $O(\frac{1}{\Lambda})$. We have shown that the transport properties are non-Markovian in this case. The tunneling current now shows transient behaviour before reaching a steady state (for a bias that is suddenly switched on), which is merely a consequence of the presence of a short-distance cutoff in the problem description and does not depend on other details.\\ In addition to the non-equilibrium two-point function, it is also possible to write down (using Wick's theorem) the four-point functions. Using these correlations in conjunction with powerful novel bosonization techniques \cite{das2018quantum}, it is possible to extend our NEGF approach to study non-equilibrium transport between chiral fermionic edges with mutually interacting particles, as in the case of fractional quantum Hall edge states \cite{chang2003chiral}. This approach could shed more light on the universality of power-law exponents or, more generally, of the scaling functions \cite{PhysRevB.52.8934} in systems with chiral Luttinger liquid character.\\ \section*{Acknowledgements} We gratefully acknowledge the reactions of the authors cited in this paper, especially C.J. Bolech, who suggested that we investigate the appearance of a transient in the current, as predicted in earlier works using numerical methods such as $tDMRG$. 
\section*{APPENDIX A: Calculation of tunneling current} \label{AppendixA} \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} The tunneling current is \begin{align} I_{tun}(t) \mbox{ } = \mbox{ }-i e \Gamma \mbox{ } \lim_{t^{'} \rightarrow t } \bigg( < \psi^{\dagger}_R(0,t^{'}) \psi_L(0,t) > - < \psi^{\dagger}_L(0,t) \psi_R(0,t^{'}) > \bigg) \end{align} From Eqs.\ref{eqnoneq}-\ref{eqkappa4} and using the Dirichlet criterion $\theta(0) = \frac{1}{2}$ we can write \begin{align} <\psi^{\dagger}_{R}&(0,t^{'})\psi_{L}(0,t)> \mbox{ } = \mbox{ } - \frac{i}{2\pi} \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (-v_F(t-t^{'}) ) ) } \left( - U(t^{'},t) + 1 \right)\mbox{ }i \frac{\Gamma }{v_F} \frac{ 2 v_F^2 }{\Gamma^2 +4 v_F^2} \left[1 - \frac{ \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \end{align} then we get, \begin{align} I_{tun}(t) \mbox{ } = \mbox{ }i e \Gamma \mbox{ } \lim_{t^{'} \rightarrow t }\mbox{ } \bigg( \frac{1}{2\pi} \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (v_F(t-t^{'}) ) ) } \left( U(t,t^{'}) - U(t^{'},t) \right) \mbox{ } \frac{\Gamma }{v_F} \frac{ 2 v_F^2 }{\Gamma ^2 +4 v_F^2} \left[1 - \frac{ \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \bigg) \end{align} Note that we have defined $U(t,t^{'}) \mbox{ } = \mbox{ } e^{ -i \int^{t^{'}}_{t} d\tau \mbox{ } e V_b(\tau) }$. Evaluating the limit using L'Hospital's rule, \begin{align} I_{tun}(t) \mbox{ } = \mbox{ } - \Gamma^2 \mbox{ } \mbox{ } \frac{ 4 }{\Gamma ^2 +4 v_F^2} \left[1 - \frac{ \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \mbox{ } \frac{e^2}{2\pi } V_b(t) \end{align} Expressing the tunneling amplitude in terms of a tunneling parameter $t_{p}$ we get the expression for the tunneling current as in the main text Eq.\ref{eqtunc}. \section*{APPENDIX B: DDOS for right movers} \label{AppendixB} \setcounter{equation}{0} \renewcommand{\theequation}{B.\arabic{equation}} The dynamical density of states for the right movers is given by the equation \begin{align} D(\omega;x,T) \mbox{ } = \mbox{ }\int d\tau \mbox{ } e^{ -i \omega \tau } < \{ \psi(x,T + \frac{\tau }{2} ) , \psi^{\dagger}(x, T - \frac{ \tau }{2} ) \}> \end{align} Using Eq.\ref{eqnoneq} we can write, \begin{align} & < \{ \psi_{R}(x,T + \frac{\tau }{2} ) , \psi^{\dagger}_{R}(x^{'},T-\frac{\tau }{2}) \}> \mbox{ }= \mbox{ } \frac{i}{2\pi} \bigg( \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (x-x^{'}-v_F \tau + i v_F \epsilon ) ) } - \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (x-x^{'}-v_F \tau - i v_F \epsilon ) ) } \bigg) \nonumber \\ & \mbox{ } \bigg( U(T - \frac{ \tau }{2} ,T + \frac{\tau }{2}) \left[1 - \theta(x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]\mbox{ } \left[1 - \theta(x) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]\nonumber \\ & + \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \theta(x ) \theta(x^{'} ) \mbox{ } U(T - \frac{ \tau }{2} ,T - \frac{ \tau }{2} - \frac{x^{'}}{v_F}) \mbox{ } U(T + \frac{\tau }{2} - \frac{x}{v_F},T + \frac{\tau }{2} ) \bigg) \end{align} where we have taken $ t = T + \frac{\tau }{2} $ and $ t^{'} = T - \frac{ \tau }{2} $ and $\epsilon > 0$. 
In the zero temperature limit $\beta \rightarrow \infty$ doing an expansion in powers of $\epsilon$ and finally taking the limit $\epsilon \rightarrow 0$ we can write \begin{align} & \frac{i}{2\pi} \bigg( \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (x-x^{'}-v_F \tau + i v_F \epsilon ) ) } - \frac{ \frac{ \pi }{\beta v_F } }{\sinh( \frac{ \pi }{\beta v_F } (x-x^{'}-v_F \tau - i v_F \epsilon ) ) } \bigg) \mbox{ }= \mbox{ }\\ & \frac{\epsilon/\pi }{2 v_F \left((\tau + \frac{ -x+x^{'} }{v_F} )^2+\epsilon ^2\right)}= \frac{1}{2v_F} \delta(\tau + \frac{ -x+x^{'} }{v_F} ) \end{align} This means, \begin{align} &< \{ \psi_{R}(x,T + \frac{\tau }{2} ) , \psi^{\dagger}_{R}(x^{'}, T - \frac{ \tau }{2} ) \}> \mbox{ } \nonumber \\ =& \mbox{ } \frac{1}{2v_F} \delta(\tau + \frac{ -x+x^{'} }{v_F} )U(T - \frac{ x-x^{'} }{2v_F} ,T + \frac{ x-x^{'} }{2v_F})\mbox{ } \mbox{ } \bigg( \left[1 - \theta(x^{'}) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]\mbox{ } \left[1 - \theta(x) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right] \nonumber \\ & + \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \theta(x ) \theta(x^{'} ) \bigg) \end{align} At $x^{'} = x$, the DDOS for the right movers is, \begin{align} & D(\omega;x,T) \mbox{ }= \mbox{ }\int d\tau \mbox{ } e^{ -i \omega \tau } < \{ \psi_{R}(x,T + \frac{\tau }{2} ) , \psi^{\dagger}_{R}(x, T - \frac{ \tau }{2} ) \}> \mbox{ }\nonumber \\ =& \mbox{ } \frac{1}{2v_F} \mbox{ } \bigg( \left[1 - \theta(x) \mbox{ } \frac{ 2 \Gamma^2 }{\Gamma ^2 +4 v_F^2}\right]^2 + \left( \frac{\Gamma }{v_F} \mbox{ } \frac{(2 v_F)^2 }{\Gamma ^2 +4 v_F^2}\right)^2 \mbox{ } \theta(x ) \bigg)\mbox{ } = \mbox{ }\frac{1}{2v_F} \end{align} \section*{APPENDIX C: Relation to the Keldysh non-equilibrium Green function} \label{AppendixC} \setcounter{equation}{0} \renewcommand{\theequation}{C.\arabic{equation}} It is easy to reinterpret the space-time dependent Green functions of the main text as Keldysh Green functions where the times are on the Keldysh contour as shown in Fig.\ref{figkeldysh}. For this we reinterpret $ U $ as follows, \begin{align} U(t,t^{'}) \mbox{ } = \mbox{ } e^{ -i \int_Cd\tau \mbox{ } \theta_C(t-\tau)\theta_C(\tau-t^{'})\mbox{ }e V_b(\tau) } \end{align} The standard meaning of the contour ordering $ \theta_C(t-t^{'}) $ is that $ \theta_C(t-t^{'}) = 1 $ if $ t $ is to the right of $ t^{'} $ on the contour $ C $ and $ \theta_C(t-t^{'}) = 0 $ if $ t $ is to the left of $ t^{'} $ on this contour and $ \theta_C(0) = \frac{1}{2} $ consistent with Dirichlet regularisation. The claim is that the Keldysh contour ordered Green function of the system may simply be written down as, \begin{align} <T_C\mbox{ } \psi_{\nu}(x,t)\psi^{\dagger}_{\nu^{'}}(x^{'},t^{'})> \mbox{ }=\mbox{ }\frac{i}{2\pi} \frac{\frac{\pi}{\beta v_{F}}}{\sinh( \frac{ \pi }{\beta v_F } (\nu x-\nu^{'}x^{'}-v_F(t-t^{'}) ) )} \kappa_{\nu,\nu^{'}} \end{align} where $\nu$,$\nu^{'} = \pm 1$. The reader may recall from the main text that the quantities $ \kappa $ contains evolution functions such as $ U(t+ \frac{x}{v_F},t^{'}+\frac{x^{'}}{v_F}) $. 
For example, if $ t $ is on the lower branch and $ t^{'} $ is on the upper branch of the contour, and $ V_b(\tau) = 0 $ when $ \tau $ is purely imaginary, then \begin{align} U(t+ \frac{x}{v_F}&,t^{'}+\frac{x^{'}}{v_F}) \mbox{ } = \mbox{ } e^{ -i \int_Cd\tau \mbox{ } \theta_C(t + \frac{x}{v_F} -\tau)\theta_C(\tau-t^{'}-\frac{x^{'}}{v_F})\mbox{ }e V_b(\tau) } \end{align} But since \begin{align} \int_Cd\tau \mbox{ } \theta_C(t + \frac{x}{v_F} -\tau)\theta_C(\tau-t^{'}-\frac{x^{'}}{v_F})\mbox{ }e V_b(\tau) \mbox{ } = \mbox{ } \int_{t^{'}+\frac{x^{'}}{v_F}}^{t + \frac{x}{v_F}}d\tau \mbox{ } e V_b(\tau) \end{align} as before, we see that the reinterpretation has no effect on the results. Similarly, if $ t,t^{'} $ are on the upper branch, \begin{align} \int_Cd\tau \mbox{ } \theta_C(t + \frac{x}{v_F} -\tau)\theta_C(\tau-t^{'}-\frac{x^{'}}{v_F})\mbox{ }e V_b(\tau) \mbox{ } = \mbox{ } \theta(t + \frac{x}{v_F}-t^{'}-\frac{x^{'}}{v_F}) \int_{t^{'}+\frac{x^{'}}{v_F}}^{t + \frac{x}{v_F}}d\tau \mbox{ } e V_b(\tau) \end{align} where $ \theta $ is now the ordinary Heaviside step function with a real argument. \begin{figure}[H] \subfigure[]{\includegraphics[scale=0.4]{ctink1}}\quad \quad \subfigure[]{\includegraphics[scale=0.4]{ctink2}}\quad \quad \subfigure[]{\includegraphics[scale=0.4]{ctink3}} \caption{\small (\textbf{a}) The extended complex-time Keldysh contour on which Keldysh Green function theory is constructed. Times on the lower branch are greater, in the contour sense, than those on the upper branch. The contour is extended along the imaginary axis in a third branch to include the possibility of finite temperature Green functions. (\textbf{b}) $t$ is on the lower branch and $t^{'}$ is on the upper branch, so $t>t^{'}$ although on the real time axis it appears that $t^{'}>t$. (\textbf{c}) Both $t$ and $t^{'}$ are on the same branch of the contour.} \label{figkeldysh} \end{figure} Therefore, reinterpreting the evolution function in the manner shown is sufficient to recast the Green function of the main text as a Keldysh Green function in which the time ordering is taken on the Keldysh contour.
\section*{APPENDIX D: Calculation of finite bandwidth tunneling current} \label{AppendixD} \setcounter{equation}{0} \renewcommand{\theequation}{D.\arabic{equation}} We already know from the derivation for the infinite bandwidth case the following expressions, \begin{align} c^{\infty}_{.,R}(t) = \left( \sum_{p} c^{\infty}_{p,R}(t_0) e^{ -i (t-t_0) p v_F }\right)\mbox{ } U(t_0,t) - i \Gamma \int_{t_0}^t c^{\infty}_{.,L}(t_2) \delta(v_F(t-t_2))\mbox{ } U(t_2,t) \, dt_2 \label{cinfr} \end{align} and \begin{align} c^{\infty}_{.,L}(t) = \left( \sum_{p} c^{\infty}_{p,L}(t_0) e^{ i (t-t_0) p v_F }\right) - i \Gamma \int_{t_0}^t c^{\infty}_{.,R}(t_2) \delta(v_F(t-t_2)) \, dt_2 \label{cinfl} \end{align} Making use of the relation $c_{.,\nu}(t) = c^{\infty}_{.,\nu}(t) + \delta c_{.,\nu}(t)$ and Eqs.\ref{cinfr}, \ref{cinfl}, \ref{cdotR} and \ref{cdotL} we obtain the following coupled equations \begin{widetext} \begin{align} \delta c_{.,R}(t)\mbox{ } = \mbox{ } (&- \sum_{|p|>\Lambda}c^{\infty}_{p,R}(t_0) + \sum_p \delta c_{p,R}(t_0)) e^{ -i (t-t_0) p v_F }\mbox{ } U(t_0,t) \nonumber \\&- i \Gamma \int_{t_0}^t (\delta(v_F(t-t_2))\delta c_{.,L}(t) - c^{\infty}_{.,L}(t_2) \Delta(v_F(t-t_2)))\mbox{ } U(t_2,t) \, dt_2 \end{align} and \begin{align} \delta c_{.,L}(t) \mbox{ }=\mbox{ } (&- \sum_{|p| > \Lambda}c^{\infty}_{p,L}(t_0) + \sum_p\delta c_{p,L}(t_0) ) e^{ i (t-t_0) p v_F } \nonumber \\ &- i \Gamma \int_{t_0}^t (\delta(v_F(t-t_2))\delta c_{.,R}(t_2) - \Delta(v_F(t-t_2))c^{\infty}_{.,R}(t_2)) \, dt_2 \end{align} We solve the above two equations and write separate expressions for $\delta c_{.,R}(t)$ and $\delta c_{.,L}(t)$. \begin{align} \delta c_{.,R}(t)\mbox{ } = \mbox{ } &\frac{2 v_F }{\Gamma ^2+4v_F^2}(2v_F ((- \sum_{|p|>\Lambda}c^{\infty}_{p,R}(t_0) + \sum_p \delta c_{p,R}(t_0)) e^{ -i (t-t_0) p v_F }\mbox{ } U(t_0,t) \nonumber \\ &+ i \Gamma \int_{t_0}^t c^{\infty}_{.,L}(t_2) \Delta(v_F(t-t_2))\mbox{ } U(t_2,t) \, dt_2)-i \Gamma ( (- \sum_{|p| > \Lambda}c^{\infty}_{p,L}(t_0) + \sum_p\delta c_{p,L}(t_0) ) e^{ i (t-t_0) p v_F } \nonumber \\ &+ i \Gamma \int_{t_0}^t \Delta(v_F(t-t_2))c^{\infty}_{.,R}(t_2) \, dt_2)) \label{deltacr} \end{align} and \begin{align} \delta c_{.,L}(t)\mbox{ } = \mbox{ }&\frac{2 v_F }{\Gamma ^2+4v_F^2}(2v_F ( (- \sum_{|p| > \Lambda}c^{\infty}_{p,L}(t_0) + \sum_p\delta c_{p,L}(t_0) ) e^{ i (t-t_0) p v_F } \nonumber \\ &+ i \Gamma \int_{t_0}^t \Delta(v_F(t-t_2))c^{\infty}_{.,R}(t_2) \, dt_2) -i \Gamma ((- \sum_{|p|>\Lambda}c^{\infty}_{p,R}(t_0) + \sum_p \delta c_{p,R}(t_0)) e^{ -i (t-t_0) p v_F }\mbox{ } U(t_0,t) \nonumber \\ &+ i \Gamma \int_{t_0}^t c^{\infty}_{.,L}(t_2) \Delta(v_F(t-t_2))\mbox{ } U(t_2,t) \, dt_2) ) \label{deltacl} \end{align} Also Eqs.\ref{cinfr} and \ref{cinfl} reduce to \begin{align} c^{\infty}_{.,R}(t)\mbox{ } = \mbox{ } \frac{2 v_F (2 v_F ( \sum_p c^{\infty}_{p,R}(t_0) e^{ -i (t-t_0) p v_F } U(t_0,t)) -i \Gamma (\sum_p c^{\infty}_{p,L}(t_0) e^{ i (t-t_0) p v_F }))}{\Gamma ^2+4 v_F^2} \label{cinfrfinal} \end{align} and \begin{align} c^{\infty}_{.,L}(t)\mbox{ } = \mbox{ } \frac{2 \left(2 v_F^2(\sum_p c^{\infty}_{p,L}(t_0) e^{ i (t-t_0) p v_F })-i \Gamma v_F ( \sum_p c^{\infty}_{p,R}(t_0) e^{ -i (t-t_0) p v_F } U(t_0,t)) \right)}{\Gamma ^2+4 v_F^2} \label{cinflfinal} \end{align} \end{widetext} Using Eqs. 
\ref{deltacr}, \ref{deltacl}, \ref{cinfrfinal} and \ref{cinflfinal} and the corresponding complex conjugates we can write down expressions for the correlations of the type $<\delta c^{\dagger}_{.,\nu^{'}}(t^{'})c^{\infty}_{.,\nu}(t)>$ and $<c^{\dagger \infty}_{.,\nu^{'}}(t^{'})\delta c_{.,\nu}(t)>$. After some simplification, these correlation functions are obtained as non-trivial combinations of the bias $V_{b}(t)$ and the equal-time equilibrium infinite bandwidth Green functions, all of which are already known except for $<T\mbox{ } \delta \psi_{R}(x,t_{0})\psi^{\dagger,\infty}_{\nu^{'}}(x^{'},t_{0}) >_{eq} $ and $<T\mbox{ } \delta \psi_{L}(x,t_{0})\psi^{\dagger,\infty}_{\nu^{'}}(x^{'},t_{0}) >_{eq}$ (where $\nu^{'} = R,L$), which we are required to calculate explicitly. Note that the equal-time Green functions at $t_{0}$ are equilibrium Green functions, since we take $t_{0} \rightarrow -\infty$, i.e.\ long before the bias is switched on. \begin{widetext} In equilibrium we have \begin{align} &i \partial_t \delta c_{p,R}(t) = v_Fp \mbox{ } \delta c_{p,R}(t) + \frac{\Gamma}{L}\delta c_{.,L}(t)- \theta(|p|-\Lambda)\frac{\Gamma}{L}c^{\infty}_{.,L}(t) \nonumber \\& i \partial_t \delta c_{p,L}(t) = -v_Fp \mbox{ } \delta c_{p,L}(t) + \frac{\Gamma}{L} \delta c_{.,R}(t)- \theta(|p|-\Lambda)\frac{\Gamma}{L}c^{\infty}_{.,R}(t) \end{align} We transform from time to discrete Matsubara frequency ($z_{n} = \frac{(2n+1) \pi}{\beta}$) and write down the correlations, \begin{align} <T\mbox{ } \delta c_{p,R}(n)c^{\dagger,\infty}_{p^{'},\nu^{'}}(n) > = - \theta(|p|-\Lambda)\frac{\Gamma}{L}\frac{1}{(i z_n-v_Fp) }<T\mbox{ } c^{\infty}_{.,L}(n)c^{\dagger,\infty}_{p^{'},\nu^{'}}(n) > + \frac{\Gamma}{L}\frac{1}{(i z_n-v_Fp) }<T\mbox{ } \delta c_{.,L}(n)c^{\dagger,\infty}_{p^{'},\nu^{'}}(n) > \end{align} and \begin{align} <T\mbox{ } \delta c_{p,L}(n)c^{\dagger,\infty}_{p^{'},\nu^{'}}(n) > = - \theta(|p|-\Lambda)\frac{\Gamma}{L}\frac{1}{(i z_n + v_Fp) }<T\mbox{ } c^{\infty}_{.,R}(n)c^{\dagger,\infty}_{p^{'},\nu^{'}}(n) > + \frac{\Gamma}{L} \frac{1}{(i z_n + v_Fp) }<T\mbox{ } \delta c_{.,R}(n)c^{\dagger,\infty}_{p^{'},\nu^{'}}(n) > \end{align} After some algebra we Fourier transform to real space, which allows for further simplification. We then transform the discrete frequencies back to time, taking the time interval $t-t^{'}$ to be small.
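For the reader's convenience we quote the standard identity used to perform sums of this type over the fermionic Matsubara frequencies $z_{n}$ (here $\xi$ denotes a generic single-particle energy and $n_F$ the Fermi function; the formula is quoted only as an aid and is not specific to the present model), \begin{align*} \frac{1}{\beta} \sum_{n} \frac{ e^{ i z_n 0^{+} } }{ i z_n - \xi } \mbox{ } = \mbox{ } n_F(\xi) \mbox{ } = \mbox{ } \frac{ 1 }{ e^{ \beta \xi } + 1 } \end{align*}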
Finally we obtain the following correlations for small $t-t^{'}$ and large finite bandwidth \begin{align} <T\mbox{ } \delta \psi_{R}(x,t)\psi^{\dagger,\infty}_{\nu^{'}}(x^{'},t^{'}) >_{eq} \mbox{ }\approx\mbox{ } 4 i \Gamma v_F \Gamma \frac{ (2 v_F)^2 }{ (\Gamma ^2+4 v_F^2)^2 } \frac{ i }{ -i\beta } \delta_{\nu^{'},-1} \mbox{ } \frac{i \Gamma \theta (x x^{'}) \coth \left(\frac{\pi (x+x^{'})}{\beta v_F}\right) \text{csch}\left(\frac{\pi (x+x^{'})}{\beta v_F}\right)}{2 \beta \Lambda v_F^4} \nonumber \\ + ( \Gamma^2 - (2v_F)^2 ) \mbox{ } \Gamma \frac{ (2 v_F)^2 }{ (\Gamma ^2+4 v_F^2)^2 }\frac{ i }{ -i\beta } \delta_{\nu^{'},1} \mbox{ } \frac{1}{L^2} \mbox{ } \mbox{ } \frac{i \Gamma L^2 (\text{sgn}(x) \theta (-x x^{'})) \coth \left(\frac{\pi (x^{'}-x)}{\beta v_F}\right) \text{csch}\left(\frac{\pi (x^{'}-x)}{\beta v_F}\right)}{2 \beta \Lambda v_F^4} \label{deltapsipsir} \end{align} and \begin{align*} <T\mbox{ } \delta \psi_{L}(x,t)\psi^{\dagger,\infty}_{\nu^{'}}(x^{'},t^{'}) >_{eq} \mbox{ } \approx\mbox{ } \end{align*} \begin{align} -\frac{\Gamma}{L^2} \mbox{ } \mbox{ } \frac{ iL }{ v_F } \mbox{ } \frac{ (2 v_F)^2 }{ (\Gamma ^2+4 v_F^2)^2 } \frac{i \Gamma }{\pi \Lambda v_F^2} \frac{ i }{ -i\beta } 4 i v_F \Gamma \delta_{\nu^{'},1} \mbox{ } \theta( x x^{'}) \mbox{ } \frac{ iL }{ v_F } \mbox{ } \left( \frac{\pi \coth \left(\frac{\pi (x+ x^{'} )}{\beta v_F}\right) \text{csch}\left(\frac{\pi (x+ x^{'})}{\beta v_F}\right)}{2 \beta } \right) \nonumber \\ - \frac{\Gamma}{L^2} \mbox{ } \mbox{ } \frac{ iL }{ v_F } \mbox{ } \frac{ (2 v_F)^2 }{ (\Gamma ^2+4 v_F^2)^2 } \frac{i \Gamma }{\pi \Lambda v_F^2} \frac{ i }{ -i\beta } (\Gamma^2 -(2v_F)^2) \delta_{\nu^{'},-1} \mbox{ } \mbox{ } ( \text{sgn}(x^{'}) \theta (-x x^{'})) \mbox{ } \frac{ iL }{ v_F } \left( \frac{\pi \coth \left(\frac{\pi (x-x^{'})}{\beta v_F}\right) \text{csch}\left(\frac{\pi (x-x^{'})}{\beta v_F}\right)}{2 \beta }\right) \label{deltapsipsil} \end{align} \end{widetext} Now we have all the ingredients to calculate $<\delta c^{\dagger}_{.,\nu^{'}}(t^{'})c^{\infty}_{.,\nu}(t)>$ and $<c^{\dagger \infty}_{.,\nu^{'}}(t^{'})\delta c_{.,\nu}(t)>$. Substituting in Eq.\ref{ditun} and simplifying we obtain Eq.\ref{deltaifinal} in the main text. \newpage \section*{References} \bibliographystyle{iopart-num}
\section{Introduction} The question of whether the solar diameter changes on timescales of years to centuries is very controversial. A recent paper (Thuillier et al 2005a) presents a detailed summary of the issue. In essence, different measurement and analysis techniques, and sometimes even identical instruments and similar analysis methods, yield incompatible results. It is not presumptuous to infer that the cause of this controversy is that for the majority of the techniques, the results are at the borderline of the sensitivity of the technique. The major exception to the above statement is the technique of helioseismology, particularly of the f-modes of oscillation. Schou et al. (1997) and Antia (1998) have demonstrated that the frequencies of f-modes can be used to estimate the solar radius. Since these frequencies have been measured with a precision of one part in $10^5$, one may expect to determine the solar radius to similar precision. Changes in the f-mode frequencies have been used to determine changes in the solar radius (see e.g., Dziembowski et al. 1998, Antia et al. 2000, etc.). The radius changes are estimated assuming that the fractional change in radius is uniform in the range of sensitivity of the method. The radius change determined by f-modes is the change at the radius where the f-modes are concentrated. One way of quantifying the depth at which the f-modes are sensitive is to look at the depth at which energy of the f-modes is concentrated. This is shown in Fig.~1, where we plot the density for f-modes of several degrees. The range of degrees on the plot reflects the range of available f-mode frequencies. We see that for the lowest degree mode in the figure ($\ell=140$), the peak in the energy is at about 6.3 Mm (where temperature $T$ is about 41000K), and for the highest degree ($\ell=300$) it is at 3.3 Mm ($T=24000$K). There are of course, other criteria by which one can determine radius (see e.g. Cox 1980, Unno et al. 1989). However, here we show the energy density because that is the criterion used by various authors when discussing their radius-change results. In any event, the different methods do not significantly alter the conclusions of this paper. It should be noted that although the f-modes have a precision of one part in $10^5$ or so, the changes in radius cannot be determined with this precision since that depends on how large the changes in f-modes are, and these changes happen to be very small. As a result, the radius-change measurements are not always very precise as can be seen from the results shown below. Consequently, even in the case of radius determination using f-mode oscillations, there does not seem to be consensus yet as to the exact amount by which the radius changes. The results obtained so far (Dziembowski et al. 1998, 2000, 2001; Antia et al. 2000, 2001; Antia \& Basu 2004) are not in agreement with each other. Dziembowski et al. (1998), using MDI data, found that the solar radius reached a minimum around the minimum activity period in 1996 and was larger, by about 5 km, 6 months before and after the minimum. However, later results, using longer time intervals, did not find any systematic changes (Dziembowski et al. 2000). On the other hand, Antia et al. (2000) using Global Oscillation Network Group (GONG) data found that the solar radius decreased by about 5 km between 1995 and 1998, and this variation appeared to be correlated (but in antiphase) with the level of solar activity. Subsequently Antia et al. 
(2001), using both GONG and MDI data, put an upper limit of 1 km/year for the change in solar radius. Meanwhile, in a related paper, Dziembowski et al. (2001) claimed a solar radius decrease at a rate of 1.5 km yr$^{-1}$ during 1996-2000. Antia et al. (2001) made some sense of all these discrepant results by showing that the variation in f-mode frequencies could be divided into at least two components: one oscillatory, with a period of 1 yr, and a second, non-oscillatory, and probably correlated with solar activity. They argued that the oscillatory component is most likely an artifact introduced by the orbital period of the Earth. They also showed that most of the discrepancy between different results could be explained by the use of data sets that cover different time periods and by the failure to remove the oscillatory component. Upon performing those corrections, all the different investigations appear to indicate that the solar radius decreases with increasing solar activity. In a more recent investigation Antia \& Basu (2004) examined the changes in f-mode frequencies using eight years of MDI data. They obtained an upper limit of about 1 km/year for radius changes during the entire solar cycle. It is to be noted, however, that even this result is not very clear-cut, since different degree ranges of f-modes implied different radius changes. When the available higher degree modes were used ($ 140 < \ell \le 300$), they got an average change of $-0.91\pm0.03$ km yr$^{-1}$ between 1996 and 2004. F-modes in the range $140\le\ell\le 250$ showed a change of $-0.41\pm 0.04$ km yr$^{-1}$ for the same period, but for $\ell < 140$, no observable change was obtained ($\Delta R =0.13\pm 0.20$ km per year). Antia \& Basu (2004) suggested that the difference in the results yielded by the different degree ranges indicated that the evidence for radius change was not conclusive. In this paper we will present an alternative interpretation of these observations, i.e., that the Sun does not expand or contract homologously with the change in solar activity. \section{Model calculations} We construct models to calculate the change in solar radius with changing solar activity. The numerical code that we use to compute the structure and evolution of the solar model is an outgrowth of YREC (Winnick et al. 2002) into which the effects of magnetic fields and turbulence have been included. The starting values of the basic solar parameters are: $R_\sun = 6.9598 \times 10^{10}$ cm and $L_\sun = 3.8515\times 10^{33}$ erg/s. These particular choices have negligible effects on the results. The version of the code used in these calculations is one-dimensional. The inclusion of magnetic fields considers their contribution to pressure and internal energy, and their modification of energy transfer, primarily convection. The dynamical effects modify turbulent pressure and energy transport. The detailed formulation of the modifications to YREC is based on the approach first presented by Lydon \& Sofia (1995), and subsequently expanded by Li \& Sofia (2001), and Li et al. (2002, 2003). Because the location, magnitude and temporal behavior of the internal field are not known, we made two general assumptions: (1) the magnitude of the magnetic field would be that required to cause a luminosity change of 0.1 percent over the cycle, and (2) the temporal behavior is assumed to be sinusoidal, mimicking the shape of the activity cycle determined, for example, by the averaged sunspot number.
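As an illustration of assumption (2) (a schematic form only, not necessarily the exact parametrization implemented in the code), a time dependence of the type \[ B(r,t) = B_{\rm max}(r)\, \left|\sin\left(\frac{\pi\,(t-t_{\rm min})}{P_{\rm cyc}}\right)\right|, \qquad P_{\rm cyc} \simeq 11 \ {\rm yr}, \] where $t_{\rm min}$ is an epoch of activity minimum and the amplitude $B_{\rm max}(r)$ is set by requirement (1), rises and falls in step with the averaged sunspot number.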
We computed four cases (listed in Table 1), three with only magnetic fields at different depths, and one with both magnetic fields and turbulence. For the cases with only magnetic fields, the field configuration was Gaussian. Guided by the observation of p-mode oscillations, we were led to the inclusion of turbulence (Li et al. 2003). In this case, the properties of the turbulence were derived from numerical simulations of the outer region of the solar convective envelope (Robinson et al. 2003), and the magnetic field distribution was dictated by a feedback process between turbulence and magnetic field. Although the specific details of the calculations reported here are contained in Li et al. (2003), we present in this paper those results that are relevant to the radius problem and are not contained in that paper. The lower panel of Fig.~2 shows the difference in magnetic energy per unit mass ($\chi_m\equiv B^2/8\pi\rho$) between the years 2000 and 1996 for all cases where only magnetic fields are taken into account. The upper panel of the figure shows the ratio of the radius change as a function of radius to the radius change at 5 Mm. Fig.~3 is similar to Fig.~2, but for the case in which turbulence (modulated by the magnetic field) is included. The lower panel shows the difference in turbulent ($\chi_t\equiv \frac{1}{2}(v'')^2$, where $v''$ is the magnitude of the turbulent velocity) plus magnetic ($\chi_m$) energy per unit mass between the years 2000 and 1996 ($\chi=\chi_m+\chi_t$), and the upper panel the ratio of the radius change as a function of radius to the radius change at 5 Mm. From Fig.~2 we notice that in all cases the radius increases with increasing solar activity. This is to be expected, since the entire contribution of the magnetic field to pressure and internal energy is positive, and consequently, it can only lead to an increase of the radius. We also notice that the increase of the radius is monotonic towards the surface. This is because the increase at a given radius is made up of the sum of the increases at all levels below it. Finally, we notice that the expansion, which increases only in the magnetic region, is accelerated towards the shallower layers. This can be understood since, for a given value of the magnetic field, the ratio of magnetic to total pressure increases with increasing radius, and so does the expansion. Fig.~3 represents the case that, according to Li et al. (2003), meets all the observational requirements imposed by helioseismology. In particular, it produces the correct cycle-related variations of the p-mode oscillations, it does not alter the depth of the convection zone, and it produces diameter changes in opposite phase to the activity cycle. In this case, the magnetic field slows down turbulent flows so that the increase of magnetic pressure when the magnetic field grows is overcompensated by the corresponding decrease in turbulent pressure. In all cases, the radius variation at the solar surface [which is measured by any limb-observing instrument, such as the Solar Disk Sextant (Sofia et al. 1994) and PICARD (Thuillier et al. 2005b)] can be hundreds of times larger than the radius variation inferred from f-mode oscillations, which represents changes at a depth of several Mm. To determine the depth of the level of the ``diameter'' provided by the f-mode oscillations, we refer to Fig. 1, which represents the kinetic energy of the modes of different $\ell$-values (abscissa in arbitrary units).
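A rough guide to this depth (an order-of-magnitude estimate that is not used in the calculations) follows from the fact that the f-mode is essentially a surface gravity wave whose eigenfunction decays with depth $d$ as $e^{-k_h d}$, where $k_h = \sqrt{\ell(\ell+1)}/R_\sun \approx \ell/R_\sun$ is the horizontal wavenumber. The mode energy is therefore concentrated within a depth of order \[ d_\ell \sim k_h^{-1} \approx \frac{R_\sun}{\ell} , \] which is about 2--5 Mm for the observed range $140 \le \ell \le 300$, comparable to the location of the peaks shown in Fig.~1.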
We can see that for higher $\ell$s, the peak energy occurs at shallower layers than for lower-$\ell$ modes. It would appear that 5 Mm is a good representative depth for the layer probed by the f-mode oscillations of all observed degrees. Thus, the upper panels of Figs. 2 and 3 give the magnification factor between radius changes determined from f-mode oscillations and the radius changes that can be expected at the photospheric level, and thus observed by all limb-observing instruments. Because $\Delta R$ increases in the shallower layers, the high-$\ell$ modes, which peak in shallower layers, should show a larger radius change than the low-$\ell$ modes, which peak at deeper layers. We believe this is precisely what the results obtained by Antia \& Basu (2004) imply. \section{Summary and Conclusions} We have shown that the model of variability of the solar interior that obeys all observational constraints (global parameters and p-mode and f-mode oscillations) produces variations of the solar radius that increase by a factor of approximately 1000 from a depth of 5 Mm to the solar surface. This model includes the effects of a variable dynamo magnetic field and of a field-modulated turbulence, and it explains features of the f-mode oscillations in different degree ranges that were previously not understood. On the basis of the above argument, we conclude that the result from f-mode oscillations that the solar radius changes by only about 1 km/year does not preclude the less-sensitive efforts to measure variations of the solar radius at the photosphere by limb observations, since the latter variations are likely to be much larger than the former. Limb observations are made by a number of ground-based instruments, and from above the atmosphere by the Solar Disk Sextant (SDS) balloon-borne experiment (Sofia et al. 1994), and will be made starting in 2008 by the PICARD microsatellite (Thuillier et al. 2005b). In thermal equilibrium the space-based instruments have a theoretical precision of about 1 milli arc s (about 1 km). PICARD should easily reach such a precision. Balloon-based observations, however, cannot reach this precision because the short duration of the flights prevents the instrument from reaching thermal equilibrium. The current SDS results have reached a precision of the order of $\pm 0.05$ arc s (Egidi et al. 2005). \acknowledgments This work was supported in part by NSF grants ATM 0206130 and ATM 0348837 to SB. SS and PD were supported in part by NASA grant NAG5-13299.
\section{Introduction} Cluster categories were introduced by Buan, Marsh, Reineke, Reiten and Todorov~\cite{BMRRT06} as a means to model the combinatorics of cluster algebras with acyclic skew-symmetric exchange matrices within the framework of quiver representations (see also the work of Caldero, Chapoton and Schiffler~\cite{CCS06} for the case of $A_n$ quivers). The cluster category of an acyclic quiver is the orbit category of the bounded derived category of its path algebra with respect to the autoequivalence $F=\nu \Sigma^{-2}$ where $\nu$ denotes the Serre functor and $\Sigma$ is the suspension functor. By a result of Keller~\cite{Keller05}, the cluster category is a triangulated 2-Calabi-Yau category. Particular role is played by the 2-cluster-tilting objects within this category (the precise definitions will be given in Section~\ref{ssec:notat} below) which model the clusters in the corresponding cluster algebra. As already shown in~\cite{Keller05}, by replacing $\Sigma^{-2}$ by $\Sigma^{-m}$ for $m>2$ and considering the orbit category with respect to the autoequivalence $\nu \Sigma^{-m}$, one gets an $m$-Calabi-Yau triangulated category with $m$-cluster-tilting objects. The categories obtained in this way, called \emph{$m$-cluster categories}, were the subject of many investigations, see~\cite{BaurMarsh08,Thomas07,Wraalsen09,ZhouZhu09}. The endomorphism algebras of 2-cluster-tilting objects in cluster categories are known as \emph{cluster-tilted algebras} and they possess many remarkable representation-theoretic and homological properties~\cite{ABS08,BMR06,BMR07,KellerReiten07}. More generally, consider an \emph{$m$-Calabi-Yau-tilted algebra}, i.e.\ an algebra $A=\End_{\cC}(T)$ where $\cC$ is a $K$-linear, triangulated, $\Hom$-finite, $m$-Calabi-Yau category over a field $K$ and $T$ is an $m$-cluster-tilting object in $\cC$ for some positive integer $m$. Keller and Reiten have shown the following results in the case $m=2$ (for the first two points, see Sections~2 and~3 of~\cite{KellerReiten07} and for the third one, see \cite[\S2]{KellerReiten08}): \begin{itemize} \item $A$ is Gorenstein of dimension at most $m-1$ (i.e.\ $\id_A A \leq m-1$ and $\pd_A DA \leq m-1$, where $D(-)=\Hom_K(-,K)$); \item The stable category of Cohen-Macaulay $A$-modules is $(m+1)$-Calabi-Yau; \item If $K$ is algebraically closed, $\cC$ is algebraic and $A \cong KQ$ for an acyclic quiver $Q$, then $\cC$ is triangle equivalent to the $m$-cluster category of~$Q$. \end{itemize} In addition, they have shown that these results hold also in the case $m>2$ provided that one imposes an additional condition on the $m$-cluster-tilting object $T$ stated in terms of the vanishing of some of its negative extensions, namely \begin{equation} \label{e:vosnex} \tag{$\star$} \Hom_{\cC}(T, \Sigma^{-i} T)=0 \text{ for any $0 < i < m-1$}, \end{equation} see \cite[\S4]{KellerReiten08}. Note that by the $m$-Calabi-Yau property of $\cC$ this condition is equivalent to the vanishing of the positive extensions $\Hom_{\cC}(T, \Sigma^i T)$ for all $m < i < 2m-1$, whereas these extensions for $0 < i < m$ always vanish since $T$ is $m$-cluster-tilting. 
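For the reader's convenience, let us spell out this equivalence: the $m$-Calabi-Yau property of $\cC$ yields \[ \Hom_{\cC}(T, \Sigma^{-i} T) \cong D\Hom_{\cC}(\Sigma^{-i} T, \Sigma^{m} T) \cong D\Hom_{\cC}(T, \Sigma^{m+i} T) , \] so the vanishing of $\Hom_{\cC}(T, \Sigma^{-i} T)$ for all $0 < i < m-1$ amounts to the vanishing of $\Hom_{\cC}(T, \Sigma^{j} T)$ for all $m < j < 2m-1$.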
In the terminology of~\cite{Beligiannis15}, the condition~\eqref{e:vosnex} means that $T$ is $(m-2)$-corigid, and in~\cite[Theorem~B]{Beligiannis15} Beligiannis presents a more refined result connecting the corigidity property of an $m$-cluster-tilting object with the Gorenstein property of its endomorphism algebra and the Calabi-Yau property of its stable category of Cohen-Macaulay modules. We note also that a condition analogous to~\eqref{e:vosnex}, stated for $m$-cluster-tilting subcategories inside bounded derived categories of modules and abbreviated ``vosnex'', appears in the works of Amiot-Oppermann~\cite[Definition~4.9]{AmiotOppermann14a} and Iyama-Oppermann~\cite[Notation~3.5]{IyamaOppermann13}. Whereas for $m=2$ the condition~\eqref{e:vosnex} is empty and hence automatically holds, this is no longer the case for $m \geq 3$. In particular, there are examples of triangulated $m$-Calabi-Yau categories $\cC$ and $m$-cluster-tilting objects $T \in \cC$ for which the vosnex condition~\eqref{e:vosnex} does not hold and moreover the endomorphism algebra $\End_{\cC}(T)$ is a path algebra of a connected acyclic quiver, see Iyama and Yoshino \cite[Theorem~9.3]{IyamaYoshino08} for $m=3$ and \cite[Theorem~10.2]{IyamaYoshino08} for $m$ odd, and also \cite[Example~4.3]{KellerReiten08}. Another example, where the algebra $\End_{\cC}(T)$ is not Gorenstein, is given in~\cite[Example~5.3]{KellerReiten07}. The purpose of this note is to extend this class of examples by showing that given $m \geq 3$, \emph{any} finite-dimensional algebra over an algebraically closed field is the endomorphism algebra of an $m$-cluster-tilting object in an $m$-Calabi-Yau triangulated category. Hence, without any further assumptions on $\cC$ and $T$, one cannot say too much about the algebras $\End_{\cC}(T)$. To this end we invoke the construction of generalized cluster categories due to Amiot~\cite{Amiot09} in the case $m=2$ and generalized by Guo~\cite{Guo11} to the case $m>2$. This construction produces an $m$-Calabi-Yau triangulated category with an $m$-cluster-tilting object from any dg-algebra which is homologically smooth, bimodule $(m+1)$-Calabi-Yau and satisfies additional finiteness conditions. A rich source of such dg-algebras is provided by the deformed Calabi-Yau completions defined and investigated by Keller~\cite{Keller11}. Given a basic finite-dimensional algebra $A$, we choose a dg-algebra $B$ with two properties; firstly, the underlying graded algebra of $B$ is the path algebra of a graded quiver whose arrows are concentrated in degrees $0$ and $-1$, and secondly, $\hh^0(B) \cong A$. Such dg-algebra can be constructed from any presentation of $A$ as a quotient of a path algebra of a quiver by an ideal generated by a finite sequence of elements. Conversely, any such dg-algebra arises in this way. It turns out that for any $m \geq 2$, the $(m+1)$-Calabi-Yau completion of $B$ is a Ginzburg dg-algebra $\Gamma$ of a graded quiver with homogeneous superpotential of degree $2-m$ which can be written explicitly in terms of the quiver and the sequence of elements. In the case $m=2$, the zeroth homology $\hh^0(\Gamma)$ is a split extension of $A$, whereas when $m>2$ it is isomorphic to $A$, hence $\Gamma$ satisfies the finiteness conditions required in the construction of~\cite{Amiot09,Guo11} and thus gives rise to a $\Hom$-finite $m$-Calabi-Yau triangulated category with an $m$-cluster-tilting object whose endomorphism algebra is isomorphic to~$A$. 
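To illustrate the construction on a minimal example (stated here informally, with the notation of Construction~\ref{con:QW} and Lemma~\ref{l:H0} below), let $A = K[x]/(x^2)$, presented by the quiver $Q$ with one vertex, one loop $\alpha$ and the single relation $\rho = \alpha^2$. The associated graded quiver $\wt{Q}$ has, in addition to $\alpha$ in degree $0$, a loop $\eps_{\rho}$ of degree $2-m$, and the superpotential is $W = \eps_{\rho} \alpha^2$. For any $m>2$ the zeroth homology of the Ginzburg dg-algebra $\Gamma_{m+1}(\wt{Q}, W)$ is isomorphic to $A$, so the corresponding generalized cluster category is a $\Hom$-finite triangulated $m$-Calabi-Yau category containing an $m$-cluster-tilting object whose endomorphism algebra is isomorphic to $K[x]/(x^2)$.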
The $m$-cluster-tilting object we get almost never satisfies the vosnex condition~\eqref{e:vosnex}. More precisely, that condition holds if and only if $A$ is the path algebra of an acyclic quiver and $B$ is chosen such that $B=A$. Moreover, the flexibility in the choice of the dg-algebra $B$ allows to construct, for certain algebras $A$ and any $m>2$, inequivalent triangulated $m$-Calabi-Yau categories $\cC \not \simeq \cC'$ with $m$-cluster-tilting objects $T \in \cC$ and $T' \in \cC'$ such that $\End_{\cC}(T) \cong \End_{\cC}(T') \cong A$. \section{Recollections} \subsection{Notations} \label{ssec:notat} We recall the definitions of Calabi-Yau triangulated categories and cluster-tilting objects. Throughout, we fix a field $K$. \begin{defn} Let $\cC$ be a $K$-linear triangulated category with suspension functor $\Sigma$. \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item We say that $\cC$ is \emph{$\Hom$-finite} if the spaces $\Hom_{\cC}(X,Y)$ are finite-dimensional over $K$ for any $X, Y \in \cC$. \item Let $m \in \bZ$. We say that $\cC$ is \emph{$m$-Calabi-Yau} if $\cC$ is $\Hom$-finite and there exist functorial isomorphisms \[ \Hom_{\cC}(X,Y) \cong D\Hom_{\cC}(Y, \Sigma^m X) \] for any $X, Y \in \cC$, where $D$ denotes the duality $D(-) = \Hom_K(-, K)$. \end{enumerate} \end{defn} For an object $X$ in an additive category $\cC$, denote by $\add X$ the full subcategory of $\cC$ whose objects are finite direct sums of direct summands of $X$. For a subcategory $\cY$ of $\cC$, let ${^\perp}\cY = \left\{ X \in \cC \,:\, \Hom_{\cC}(X,Y)=0 \text{ for any $Y \in \cY$} \right\}$. \begin{defn} Let $\cC$ be a triangulated $m$-Calabi-Yau category for some integer $m \geq 1$. An object $T$ of $\cC$ is \emph{$m$-cluster-tilting} if: \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \item $\Hom_{\cC}(T, \Sigma^i T) = 0$ for any $0 < i < m$; and \item if $X \in \cC$ is such that $\Hom_{\cC}(X, \Sigma^i T) = 0$ for any $0 < i < m$, then $X \in \add T$. \end{enumerate} Equivalently, $\add T = \bigcap_{0 < i < m} {^\perp} \Sigma^i(\add T)$. \end{defn} \begin{defn} Let $m \geq 1$. A $K$-algebra $\Lambda$ is \emph{$m$-Calabi-Yau-tilted} (\emph{$m$-CY-tilted} for short) if there exist a triangulated $m$-Calabi-Yau category $\cC$ and an $m$-cluster-tilting object $T$ of $\cC$ such that $\Lambda \cong \End_{\cC}(T)$. \end{defn} \subsection{Generalized cluster categories} \sloppy In this section we briefly review the construction of triangulated $m$-Calabi-Yau categories with $m$-cluster-tilting object due to Amiot~\cite{Amiot09} (for the case $m=2$) and its generalization by Guo~\cite{Guo11} (for the case $m>2$). Let $\Gamma$ be a differential graded (dg) algebra over a field $K$. Denote by $\cD(\Gamma)$ its derived category and by $\per \Gamma$ its smallest full triangulated subcategory containing $\Gamma$ and closed under taking direct summands. Let $\cD_{\fd}(\Gamma)$ denote the full subcategory of $\cD(\Gamma)$ whose objects are those of $\cD(\Gamma)$ with finite-dimensional total homology. \begin{defn} $\Gamma$ is said to be \emph{homologically smooth} if $\Gamma \in \per \Gamma^e$, where $\Gamma^e = \Gamma^{op} \otimes_K \Gamma$. \end{defn} Let $\Gamma$ be homologically smooth and let $\Omega = \RHom_{\Gamma^e}(\Gamma, \Gamma^e)$. By definition, $\Omega \in \cD((\Gamma^e)^{op})$. We can view $\Omega$ as an object of $\cD(\Gamma^e)$ via restriction of scalars along the morphism $\tau \colon \Gamma^e \xrightarrow{\sim} (\Gamma^e)^{op}$ given by $\tau(x \otimes y) = y \otimes x$. 
By~\cite[Lemma~3.4]{Keller11} (see also~\cite[Lemma~4.1]{Keller08} and~\cite[Lemma~2.1]{Guo11}) one has $\cD_{\fd}(\Gamma) \subseteq \per \Gamma$ and \begin{equation} \label{e:Serre} \Hom_{\cD(\Gamma)}(L \otimes_{\Gamma} \Omega, M) \cong D \Hom_{\cD(\Gamma)}(M, L) \end{equation} for any $L \in \cD(\Gamma)$, $M \in \cD_{\fd}(\Gamma)$. \begin{defn} Let $m \in \bZ$. If $\RHom_{\Gamma^e}(\Gamma, \Gamma^e) \cong \Sigma^{-m} \Gamma$ in $\cD(\Gamma^e)$ we say that $\Gamma$ is \emph{bimodule $m$-Calabi-Yau}. \end{defn} If $\Gamma$ is homologically smooth and bimodule $m$-Calabi-Yau then the triangulated category $\cD_{\fd}(\Gamma)$ is $m$-Calabi-Yau. More precisely, \eqref{e:Serre} yields functorial isomorphisms \[ \Hom_{\cD(\Gamma)}(\Sigma^{-m} L, M) \cong D \Hom_{\cD(\Gamma)}(M, L) \] for any $L \in \cD(\Gamma)$, $M \in \cD_{\fd}(\Gamma)$. \begin{theorem}[\protect{\cite[\S2]{Amiot09}}, \protect{\cite[\S2]{Guo11}}] \label{t:cluster} Let $m \geq 1$ and let $\Gamma$ be a dg-algebra satisfying the following conditions: \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \item $\Gamma$ is homologically smooth; \item $\hh^i(\Gamma)=0$ for any $i>0$; \item $\dim_K \hh^0(\Gamma) < \infty$; \item $\Gamma$ is bimodule $(m+1)$-Calabi-Yau. \end{enumerate} Consider the triangulated category $\cC_{\Gamma} = \per \Gamma / \cD_{\fd}(\Gamma)$. Then: \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item $\cC_{\Gamma}$ is $\Hom$-finite and $m$-Calabi-Yau. \item For any $i \in \bZ$, set $\cD^{\leq i} = \left\{ X \in \cD(\Gamma) \,:\, \hh^p(X)=0 \text{ for all $p>i$} \right\}$ and let $\cF = \cD^{\leq 0} \cap {^\perp}\cD^{\leq -m} \cap \per \Gamma$. The restriction of the canonical projection $\pi \colon \per \Gamma \to \cC_{\Gamma}$ to $\cF$ induces an equivalence of $K$-linear categories $\cF \xrightarrow{\sim} \cC_{\Gamma}$. \item \label{it:t:CT} The image $\pi \Gamma$ of $\Gamma$ in $\cC_{\Gamma}$ is an $m$-cluster-tilting object and $\End_{\cC_{\Gamma}}(\pi \Gamma) \cong \hh^0(\Gamma)$. \end{enumerate} \end{theorem} The object $\pi \Gamma$ occurring in part~\eqref{it:t:CT} of the theorem is called the \emph{canonical $m$-cluster-tilting} object in $\cC_{\Gamma}$. The statement of the next lemma concerning negative extension groups of the canonical $m$-cluster-tilting object is implicit in the proof of~\cite[Corollary~3.4]{Guo11}. \begin{lemma} \label{l:snex} Let $\Gamma$ be a dg-algebra satisfying the conditions of Theorem~\ref{t:cluster} and let $T=\pi \Gamma$ be the canonical $m$-cluster-tilting object in $\cC_{\Gamma}$. Then $\Hom_{\cC}(T, \Sigma^{-i} T) \cong \hh^{-i}(\Gamma)$ for any $0 \leq i \leq m-1$. \end{lemma} \begin{proof} As observed in~\cite{Guo11}, the objects $\Gamma, \Sigma \Gamma, \dots, \Sigma^{m-1} \Gamma$ are in the fundamental domain $\cF$. Therefore, for any $0 \leq i \leq m-1$, \begin{align*} \Hom_{\cC_{\Gamma}}(\pi \Gamma, \Sigma^{-i} \pi \Gamma) &\cong \Hom_{\cC_{\Gamma}}(\Sigma^i \pi \Gamma, \pi \Gamma) = \Hom_{\cC_{\Gamma}}(\pi \Sigma^i \Gamma, \pi \Gamma) \cong \Hom_{\cD(\Gamma)}(\Sigma^i \Gamma, \Gamma) \\ &\cong \Hom_{\cD(\Gamma)}(\Gamma, \Sigma^{-i} \Gamma) \cong \hh^{-i}(\Gamma) . \end{align*} \end{proof} \subsection{Ginzburg dg-algebras} Ginzburg dg-algebras were introduced by Ginzburg in~\cite{Ginzburg06}. We recall their definition in the case of a graded quiver with homogeneous superpotential and quote the result of Keller~\cite{Keller11} that they are homologically smooth and bimodule Calabi-Yau (see also the paper~\cite{VandenBergh15} by Van den Bergh). 
We note that a graded version of quivers with potentials (and their mutations) has also been introduced in~\cite{AmiotOppermann14b} and~\cite{dTdVVdB13}, however in these papers one still implicitly considers Ginzburg dg-algebras which are bimodule 3-Calabi-Yau. In contrast, in the setting described below the degree of the superpotential affects the Calabi-Yau dimension of its Ginzburg dg-algebra. A \emph{quiver} is a finite directed graph. More precisely, it is a quadruple $Q=(Q_0, Q_1, s, t)$, where $Q_0$ and $Q_1$ are finite sets (of \emph{vertices} and \emph{arrows}, respectively) and $s,t \colon Q_1 \to Q_0$ are functions specifying for each arrow its starting and terminating vertex, respectively. A quiver $Q$ is \emph{graded} if we are given a grading $|\cdot| \colon Q_1 \to \bZ$. A \emph{path} $p$ in $Q$ is a sequence of arrows $\alpha_1 \alpha_2 \dots \alpha_n$ such that $s(\alpha_{i+1})=t(\alpha_i)$ for all $1 \leq i < n$. For a path $p$ we denote by $s(p)$ its starting vertex $s(\alpha_1)$ and by $t(p)$ its terminating vertex $t(\alpha_n)$. A path $p$ is a \emph{cycle} if it starts and ends at the same vertex, i.e.\ $s(p)=t(p)$. Any vertex $i \in Q_0$ gives rise to a cycle $e_i$ of length zero with $s(e_i)=t(e_i)=i$. The \emph{path algebra} $KQ$ has a basis consisting of the paths of $Q$, and the product of two paths $p$ and $q$ is their concatenation $pq$ if $s(q)=t(p)$ and zero otherwise. The path algebra is graded if $Q$ is graded, with the degree of a path being the sum of the degrees of its arrows. The degree of a homogeneous element $x$ of $KQ$ will be denoted by $|x|$. Consider the $K$-bilinear map $[-,-] \colon KQ \times KQ \to KQ$ whose value on a pair of homogeneous elements $x, y$ is given by their \emph{supercommutator} $[x,y] = xy - (-1)^{|x||y|}yx$. Denote by $[KQ,KQ]$ the linear subspace of $KQ$ spanned by all the supercommutators. The quotient $KQ/[KQ,KQ]$ has a basis consisting of cycles considered up to cyclic permutation ``with signs''. \begin{defn} A \emph{superpotential} on $Q$ is a homogeneous element in $KQ/[KQ,KQ]$. \end{defn} Any arrow $\alpha \in Q_1$ gives rise to a linear map $\partial_\alpha \colon KQ/[KQ,KQ] \to KQ$ (called \emph{cyclic derivative with respect to $\alpha$}) whose value on any cycle $p$ is given by \begin{equation} \label{e:deriv} \partial_\alpha p = (-1)^{|\alpha|} \sum_{p=u \alpha v} (-1)^{|u|(|\alpha|+|v|)} vu \end{equation} where the sum runs over all possible decompositions $p=u \alpha v$ with $u, v$ paths of length $\geq 0$. More explicitly, if $p=\alpha_1 \alpha_2 \dots \alpha_n$ has degree $w$, then \[ \partial_\alpha (\alpha_1 \alpha_2 \dots \alpha_n) = (-1)^{|\alpha|} \sum_{\ell \,:\, \alpha_{\ell} = \alpha} (-1)^{(w-1)(|\alpha_1|+\dots+|\alpha_{\ell-1}|)} \alpha_{\ell+1} \dots \alpha_n \alpha_1 \dots \alpha_{\ell-1} \] is homogeneous of degree $w-|\alpha|$. \begin{defn} \label{def:Ginzburg} Let $Q$ be a graded quiver and let $m \in \bZ$. Let $\wb{Q}$ be the graded quiver whose vertices are those of $Q$ and its set of arrows consists of \begin{itemize} \item the arrows of $Q$ (with their degree unchanged); \item an arrow $\alpha^* \colon j \to i$ of degree $1-m-|\alpha|$ for each arrow $\alpha \colon i \to j$ of $Q$; \item a loop $t_i \colon i \to i$ of degree $-m$ for each vertex $i \in Q_0$. \end{itemize} Let $W$ be a superpotential on $Q$ of degree $2-m$. 
The \emph{Ginzburg dg-algebra of $(Q,W)$}, denoted $\Gamma_{m+1}(Q,W)$, is the dg-algebra whose underlying graded algebra is the path algebra $K\wb{Q}$ and the differential $d$ is defined by its action on the generators as \begin{itemize} \item $d(\alpha) = 0$ and $d(\alpha^*) = \partial_{\alpha} W$ for each $\alpha \in Q_1$; \item $d(t_i) = e_i (\sum_{\alpha \in Q_1} [\alpha, \alpha^*]) e_i$ for each $i \in Q_0$. \end{itemize} \end{defn} \begin{remark} Note that for each $\alpha \in Q_1$ the element $\partial_\alpha W$ is homogeneous of degree $2-m-|\alpha|$ and the supercommutator $[\alpha,\alpha^*]$ is homogeneous of degree $1-m$, hence the definition of the differential $d$ makes sense. Moreover, as $[\alpha,\alpha^*]=\alpha \alpha^* - (-1)^{|\alpha||\alpha^*|} \alpha^* \alpha$ for any $\alpha \in Q_1$, by using the sign conventions in~\eqref{e:deriv} one verifies that $d^2(t_i)=0$ for any $i \in Q_0$, so that $d$ is indeed a differential. Note also that in~\cite{Keller11} the differential of $t_i$ is $(-1)^{m-1}$ times the one given here, but of course by replacing each $t_i$ by $(-1)^{m-1}t_i$ one sees that the dg-algebras are isomorphic. \end{remark} \begin{theorem}[\protect{\cite[Theorem~6.3]{Keller11}}] \label{t:GinzburgCY} Let $m \in \bZ$ and let $W$ be a superpotential of degree $2-m$ on a graded quiver $Q$. Then $\Gamma_{m+1}(Q,W)$ is homologically smooth and bimodule $(m+1)$-Calabi-Yau. \end{theorem} We record two useful observations. Let $Q$ be a graded quiver, $m \in \bZ$ an integer and $W$ a homogeneous superpotential on $Q$ of degree $2-m$. \begin{lemma} \label{l:reparrow} Suppose that $\alpha \colon i \to j$ is an arrow in $Q$ such that no term of $W$ contains $\alpha$. Define a graded quiver $Q'$ by $Q'_0 = Q_0$ and $Q'_1 = Q_1 \setminus \{\alpha\} \cup \{\alpha^*\}$ where $\alpha^* \colon j \to i$ has degree $1-m-|\alpha|$. Then $W$ can be naturally viewed as a superpotential $W'$ on $Q'$ and $\Gamma_{m+1}(Q,W) \cong \Gamma_{m+1}(Q',W')$. \end{lemma} \begin{proof} Since no term of $W$ contains $\alpha$, we can view $W$ as an element $W'$ in the path algebra $KQ'$. The graded quivers $\wb{Q'}$ and $\wb{Q}$ are isomorphic by the map $\varphi \colon \wb{Q'} \to \wb{Q}$ sending $(\alpha^*)^*$ to $\alpha$ and fixing all other arrows. Moreover, since $\partial_\alpha W = 0 = \partial_{\alpha^*} W'$ and $\partial_\beta W = \partial_\beta W'$ for any $\beta \in Q_1 \cap Q'_1$, the map $\varphi$ induces an isomorphism of the Ginzburg dg-algebras $\Gamma_{m+1}(Q',W') \cong \Gamma_{m+1}(Q,W)$. \end{proof} For a subset of arrows $\Omega \subseteq Q_1$, let $Q_{\Omega}$ be the subquiver of $Q$ with $(Q_{\Omega})_0=Q_0$ and $(Q_{\Omega})_1 = \Omega$. For the definition of Calabi-Yau completion, see~\cite[\S4]{Keller11}. \begin{lemma} \label{l:CYcompl} Suppose that $\Omega \subseteq Q_1$ is a set of arrows such that $W = \sum_{\beta \in Q_1 \setminus \Omega} \beta \omega_\beta$ with $\omega_\beta \in KQ_{\Omega}$ for each $\beta \not \in \Omega$. Consider the subquiver $Q'$ of $\wb{Q}$ defined by $Q'_0 = Q_0$ and $Q'_1 = \Omega \cup \{\beta^* : \beta \in Q_1 \setminus \Omega\}$. Let $d$ be the differential on $\Gamma_{m+1}(Q,W)$. Then: \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item $d(KQ') \subseteq KQ'$, hence $B=(KQ',d)$ is a sub-dg-algebra of $\Gamma_{m+1}(Q,W)$. \item $\Gamma_{m+1}(Q,W)$ is isomorphic to the $(m+1)$-Calabi-Yau completion of $B$. 
\end{enumerate} \end{lemma} \begin{proof} The first claim holds since $d(\alpha)=0$ for any $\alpha \in \Omega \subseteq Q_1$ and $d(\beta^*) = \partial_\beta W = (-1)^{|\beta|} \omega_\beta \in K Q_{\Omega} \subseteq KQ'$ for any $\beta \not \in \Omega$. This argument also shows that the differential $d$ on $KQ'$ satisfies the condition in~\cite[\S3.6]{Keller11} via the filtration $\varnothing \subseteq \Omega \subseteq Q'$ of the set of arrows. Hence we can use~\cite[Proposition~6.6]{Keller11} to compute the $(m+1)$-Calabi-Yau completion of $B$ and get that it is isomorphic to $(K\wb{Q'},d')$ with the differential $d'$ given on the generators by \begin{align*} d'(\alpha) = \partial_{\alpha^*} W' = 0 &,& d'((\beta^*)^*) = \partial_{\beta^*} W' = 0 &,& d'(\alpha^*) = \partial_\alpha W' &,& d'(\beta^*) = \partial_{(\beta^*)^*} W' \end{align*} and $d(t_i) = (-1)^{m+1} e_i (\sum_{\gamma \in Q'_1} [\gamma, \gamma^*]) e_i$, where $W' \in K\wb{Q'}$ is the element \[ W' = \sum_{\alpha \in \Omega} (-1)^{|\alpha|} \alpha^* d(\alpha) + \sum_{\beta \in Q_1 \setminus \Omega} (-1)^{|\beta^*|} (\beta^*)^* d(\beta^*) = \sum_{\beta \in Q_1 \setminus \Omega} (-1)^{m-1} (\beta^*)^* \omega_\beta . \] Finally, the isomorphism $\varphi \colon K\wb{Q'} \to K\wb{Q}$ defined on the generators by \[ \varphi(\gamma) = \begin{cases} (-1)^{m-1} \gamma & \text{if $\gamma=\alpha^*$ for some $\alpha \in Q_1$}, \\ \beta & \text{if $\gamma=(\beta^*)^*$ for some $\beta \in Q_1 \setminus \Omega$}, \\ \gamma & \text{otherwise} \end{cases} \] induces an isomorphism $(K\wb{Q'},d') \cong \Gamma_{m+1}(Q,W)$ of dg-algebras. \end{proof} \subsection{Non-positively graded Ginzburg dg-algebras} In this section we restrict attention to Ginzburg dg-algebras which are concentrated in non-positive degrees and quote the construction of Guo~\cite{Guo11} (generalizing that of Amiot in~\cite{Amiot09} for the case $m=2$) of the generalized cluster category associated to a quiver with superpotential. Let $Q$ be a graded quiver and let $w \in \bZ$. Denote by $Q^{(w)}$ the subquiver of $Q$ consisting of the arrows of degree $w$, in other words, $Q^{(w)}_0 = Q_0$ and $Q^{(w)}_1 = \{ \alpha \in Q_1 \,:\, |\alpha|=w \}$. Now fix a graded quiver $Q$ and let $m \in \bZ$. Let $\wb{Q}$ denote the quiver constructed in Definition~\ref{def:Ginzburg}. Fix a homogeneous superpotential $W$ on $Q$ of degree $2-m$ and let $\Gamma=\Gamma_{m+1}(Q,W)$ be the Ginzburg dg-algebra. We immediately observe: \begin{remark} $\Gamma^i=0$ for any $i>0$ if and only if all the arrows of $\wb{Q}$ have non-positive degrees. This condition implies that $\hh^i(\Gamma)=0$ for any $i>0$. \end{remark} \begin{lemma} \label{l:h0} Assume that all the arrows of $\wb{Q}$ have non-positive degrees. Then $\hh^0(\Gamma) \cong K{\wb{Q}}^{(0)}/(d\alpha : \alpha \in \wb{Q}^{(-1)}_1)$. \end{lemma} \begin{proof} Since there are no arrows of $\wb{Q}$ of positive degree, the graded piece $\Gamma^0$ equals the path algebra of $\wb{Q}^{(0)}$ and the graded piece $\Gamma^{-1}$ is spanned by the elements of the form $u \alpha v$ where $u,v \in K{\wb{Q}}^{(0)}$ and $\alpha \in \wb{Q}^{(-1)}_1$. As $d(u \alpha v) = u(d\alpha)v$, the claim follows. \end{proof} \begin{lemma} \label{l:wbQnonpos} The following conditions are equivalent: \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item Each arrow of $\wb{Q}$ has non-positive degree; \item $m \geq 0$ and for any $\alpha \in Q_1$ one has $1-m \leq |\alpha|$ and $|\alpha| \leq 0$. 
\end{enumerate} \end{lemma} \begin{proof} If $m<0$ then each of the loops $t_i$ in $\wb{Q}$ has positive degree $-m$. In addition, each other arrow of $\wb{Q}$ is either $\alpha$ or $\alpha^*$ for some arrow $\alpha \in Q_1$, and its degree is $|\alpha|$ or $1-m-|\alpha|$, respectively. These observations imply the statement of the lemma. \end{proof} In the next two examples we discuss the cases $m=0$ and $m=1$. \begin{example} Assume that $m=0$. Lemma~\ref{l:wbQnonpos} implies that the arrows of $\wb{Q}$ have non-positive degrees if and only if $Q$ has no arrows. In this case $\wb{Q}$ is a disjoint union of graded quivers of the form \[ \xymatrix{ {\bullet} \ar@{-}@(ur,dr)[]^{t} } \] with $|t|=0$ and $dt=0$. Hence $\Gamma$ is concentrated in degree $0$ and it is a finite direct product of polynomial rings $K[t]$. The algebra $K[t]$ is $1$-Calabi-Yau, see~\cite[\S4.2]{Keller08}. \end{example} \begin{example} Assume that $m=1$. Lemma~\ref{l:wbQnonpos} implies that the arrows of $\wb{Q}$ have non-positive degrees if and only if all the arrows of $Q$ are in degree $0$, so we can regard $Q$ as an ungraded quiver. Since the superpotential on $Q$ is homogeneous of degree $1$, it must vanish. In this case all the arrows $\alpha$ and $\alpha^*$ of $\wb{Q}$ are in degree $0$ and the differential of each loop $t_i$, whose degree is $-1$, is given by $d(t_i) = e_i (\sum_{\alpha \in Q_1} [\alpha,\alpha^*]) e_i$. By Lemma~\ref{l:h0}, $\hh^0(\Gamma)$ is isomorphic to the \emph{preprojective algebra} of the quiver $Q$. \end{example} When $m \geq 2$, the next lemma shows that we may assume that $Q^{(1-m)}$ has no arrows. \begin{lemma} Assume that $m \geq 2$ and that $1-m \leq |\alpha| \leq 0$ for any $\alpha \in Q_1$. Then there exist a graded quiver $Q'$ with $2-m \leq |\alpha'| \leq 0$ for each $\alpha' \in Q'_1$ and a homogeneous superpotential $W'$ on $Q'$ of degree $2-m$ such that $\Gamma_{m+1}(Q,W) \cong \Gamma_{m+1}(Q',W')$. \end{lemma} \begin{proof} We define the graded quiver $Q'$ as a subquiver of $\wb{Q}$; we set $Q'_0=Q_0$ and \[ Q'^{(w)}_1 = \begin{cases} Q_1^{(0)} \cup \{\alpha^* : \alpha \in Q_1^{(1-m)}\} & \text{if $w=0$,} \\ Q_1^{(w)} & \text{if $1-m < w < 0$,} \end{cases} \] with $Q'^{(w)}_1$ empty for any other $w \in \bZ$. Then $2-m \leq |\alpha'| \leq 0$ for any arrow $\alpha' \in Q'_1$ by construction. Moreover, since the superpotential $W$ is of degree $2-m$ and all the arrows of $Q$ have non-positive degrees, no term of $W$ can contain any arrows of $Q^{(1-m)}$. The result now follows by iterated application of Lemma~\ref{l:reparrow} for each of the arrows in $Q^{(1-m)}$. \end{proof} In particular, when $m=2$ one can always reduce to the classical setting of an ungraded quiver with potential. \begin{cor} Let $Q$ be a graded quiver such that $-1 \leq |\alpha| \leq 0$ for any $\alpha \in Q_1$ and let $W$ be a homogeneous superpotential on $Q$ of degree $0$. Then there exist a quiver $Q'$ concentrated in degree $0$ and a superpotential $W'$ on $Q'$ such that $\Gamma_3(Q,W) \cong \Gamma_3(Q',W')$. \end{cor} The next lemma generalizes~\cite[Lemma~2.11]{KellerYang11}. \begin{lemma} \label{l:h0Q} Assume that $m \geq 2$ and $2-m \leq |\alpha| \leq 0$ for any $\alpha \in Q_1$. Then \[ \hh^0(\Gamma_{m+1}(Q,W)) \cong KQ^{(0)}/(\partial_\alpha W : \alpha \in Q^{(2-m)}_1) .
\] \end{lemma} \begin{proof} Observe that since the arrows of $Q$ have non-positive degrees and $W$ is of degree $2-m$, the cyclic derivative $\partial_\alpha W$ with respect to any arrow $\alpha$ of degree $2-m$ lies in the path algebra of $Q^{(0)}$ and the quotient in the right hand side makes sense. The arrows of $\wb{Q}$ have non-positive degrees by Lemma~\ref{l:wbQnonpos}, hence we may apply Lemma~\ref{l:h0} to deduce that $\hh^0(\Gamma_{m+1}(Q,W)) \cong K \wb{Q}^{(0)}/(d \alpha : \alpha \in \wb{Q}_1^{(-1)})$. Observe that $\wb{Q}^{(0)} = Q^{(0)}$ since $Q_1^{(1-m)}$ is empty by assumption and $\wb{Q}^{(-1)}_1 = Q^{(-1)}_1 \cup \{\alpha^* : \alpha \in Q^{(2-m)}_1 \}$ with $d(\alpha)=0$ and $d(\alpha^*) = \partial_\alpha W$ for any $\alpha \in Q_1$, hence the claim follows. \end{proof} \begin{theorem}[\protect{\cite[Theorem~3.3]{Guo11}}] \label{t:mCY} Let $m \geq 1$ be an integer, let $Q$ be a graded quiver, let $W$ be a superpotential on $Q$ of degree $2-m$ and denote by $\Gamma = \Gamma_{m+1}(Q,W)$ the Ginzburg dg-algebra. Assume that: \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \item \label{it:t:deg} $1 - m \leq |\alpha| \leq 0$ for all $\alpha \in Q_1$; \item \label{it:t:fd} $\hh^0(\Gamma)$ is finite-dimensional. \end{enumerate} Then the triangulated category $\cC_{(Q,W)} = \per \Gamma / \cD_{\fd}(\Gamma)$ is $\Hom$-finite and $m$-Calabi-Yau. Moreover, the image of $\Gamma$ in $\cC_{(Q,W)}$ is an $m$-cluster-tilting object whose endomorphism algebra is isomorphic to $\hh^0(\Gamma)$. \end{theorem} \begin{proof} The condition~\eqref{it:t:deg} implies that $\hh^i(\Gamma)=0$ for all $i>0$. Now Theorem~\ref{t:GinzburgCY} and condition~\eqref{it:t:fd} imply that $\Gamma$ satisfies the conditions of Theorem~\ref{t:cluster} and the result is now a consequence of that theorem. \end{proof} \section{The construction} \subsection{Superpotentials from quivers with relations} In this section we construct, given a quiver $Q$, a finite sequence $R$ of relations on $Q$ and an integer $m \geq 2$, a graded quiver with homogeneous superpotential of degree $2-m$. The construction generalizes that of Keller in~\cite[\S6.9]{Keller11} for the case where $m=2$ and $KQ/(R)$ has global dimension $2$. The idea of adding, for each relation, an arrow in the opposite direction appears already in the description of relation-extension algebras by Assem, Br\"{u}stle and Schiffler~\cite{ABS08}. \begin{defn} Let $Q$ be a quiver and let $\fr$ be the ideal of $KQ$ generated by all the arrows. A \emph{relation} on $Q$ is an element of $e_i \fr e_j$ for some $i, j \in Q_0$. In other words, a relation is a linear combination of paths of positive lengths starting at $i$ and ending at~$j$. \end{defn} We start with some preparations concerning split extensions of algebras which will be needed for the case $m=2$. \begin{defn} An algebra $\wt{A}$ is a \emph{split extension} of an algebra $A$ if there exist algebra homomorphisms $\iota \colon A \to \wt{A}$ and $\pi \colon \wt{A} \to A$ such that $\pi \iota = id_A$. \end{defn} Let $Q$ be a quiver and let $\wt{Q}$ be a quiver such that $\wt{Q}_0=Q_0$ and $Q_1 \subseteq \wt{Q}_1$ (in other words, $\wt{Q}$ is obtained from $Q$ by adding arrows). Then the path algebra $K\wt{Q}$ is a split extension of the path algebra $KQ$. 
Indeed, there are algebra homomorphisms \[ KQ \xrightarrow{\iota_{Q,\wt{Q}}} K\wt{Q} \xrightarrow{\pi_{\wt{Q},Q}} KQ \] whose values on the generators are given by \begin{align*} \iota_{Q,\wt{Q}}(\alpha) = \alpha \quad (\alpha \in Q_1) \qquad \text{and} \qquad \pi_{\wt{Q},Q}(\alpha) = \begin{cases} \alpha & \text{if $\alpha \in Q_1$,} \\ 0 & \text{if $\alpha \in \wt{Q}_1 \setminus Q_1$.} \end{cases} \end{align*} Denote by $\fr'$ the ideal of $K\wt{Q}$ generated by the arrows in the set $\wt{Q}_1 \setminus Q_1$. Let $R$ be a set of relations in $KQ$ and let $\wt{R}$ be a set of relations in $K\wt{Q}$ such that $R \subseteq \wt{R}$ (via the natural embedding $\iota_{Q,\wt{Q}} \colon KQ \hookrightarrow K\wt{Q}$). \begin{lemma} \label{l:splitext} Assume that $\wt{R} \setminus R \subseteq \fr'$. Then the algebra $K\wt{Q}/(\wt{R})$ is a split extension of the algebra $KQ/(R)$. \end{lemma} \begin{proof} Consider the composition $KQ \xrightarrow{\iota_{Q,\wt{Q}}} K\wt{Q} \twoheadrightarrow K\wt{Q}/(\wt{R})$. Since $R \subseteq \wt{R}$, the image of any relation $\rho \in R$ vanishes and we get an algebra homomorphism $\iota \colon KQ/(R) \to K\wt{Q}/(\wt{R})$. Consider now the composition $K\wt{Q} \xrightarrow{\pi_{\wt{Q},Q}} KQ \twoheadrightarrow KQ/(R)$ and let $\rho \in \wt{R}$. If $\rho \in R$, then its image obviously vanishes. Otherwise, $\rho \not \in R$ and our assumption that $\wt{R} \setminus R \subseteq \fr'$ implies that $\pi_{\wt{Q},Q}(\rho)=0$, so its image vanishes as well. Hence we get a well defined algebra homomorphism $\pi \colon K\wt{Q}/(\wt{R}) \to KQ/R$. The composition $\pi \iota$ maps the image of any arrow $\alpha \in Q$ in $KQ/(R)$ to itself, therefore it is the identity on $KQ/(R)$. \end{proof} Any quiver with a finite sequence of relations gives rise to a dg-algebra whose underlying graded algebra is the path algebra of a graded quiver with arrows concentrated in degrees $0$ and $-1$. The details are given in the construction below. \begin{constr} \label{con:dgQR} Let $(Q,R)$ be a pair where $Q$ is a quiver and $R = \bigcup_{i,j \in Q_0} R_{i,j}$, where each $R_{i,j}$ is a finite sequence of relations inside $e_i \fr e_j$ and $R$ is the concatenation of these sequences (there may be repetitions inside each sequence $R_{i,j}$ and moreover the zero element can appear inside several such sequences). For a relation $\rho \in R_{i,j}$, set $s(\rho)=i$ and $t(\rho)=j$. We define a graded quiver $Q'$ as follows: \begin{itemize} \item The set of vertices of $Q'$ equals that of $Q$; \item The set of arrows consists of \begin{itemize} \item the arrows of $Q$, with their degree set to $0$; \item an arrow $\eta_\rho \colon s(\rho) \to t(\rho)$ of degree $-1$ for each relation $\rho \in R$; \end{itemize} \end{itemize} and denote by $B(Q,R)$ the dg-algebra whose underlying graded algebra is the path algebra $KQ'$ with the differential acting on the generators by \begin{itemize} \item $d(\alpha)=0$ for any $\alpha \in Q_1$; \item $d(\eta_\rho)=\rho$ for any $\rho \in R$. \end{itemize} \end{constr} Obviously, $B(Q,R)$ is concentrated in non-positive degrees and $\hh^0(B(Q,R))$ is isomorphic to $KQ/(R)$, but in general $B(Q,R)$ is not quasi-isomorphic to its zeroth homology. \begin{remark} If $B=(KQ',d)$ is any dg-algebra whose underlying graded algebra is the path algebra of a graded quiver with arrows concentrated in degrees $0$ and $-1$ and the image of the differential $d$ lies in the ideal generated by the arrows, then $B \cong B(Q,R)$ for some $(Q,R)$. 
Indeed, we can take $Q=Q'^{(0)}$ and for each $i,j \in Q_0$ let $R_{i,j}$ be the list of $d(\alpha)$ where $\alpha$ runs over the arrows in $Q'^{(-1)}_1$ starting at $i$ and ending at $j$. \end{remark} It turns out (see Lemma~\ref{l:preprojB} below) that for any $m \geq 2$, the $(m+1)$-Calabi-Yau completion of $B(Q,R)$ is a Ginzburg dg-algebra of a graded quiver with homogeneous superpotential of degree $2-m$ whose construction is described below. \begin{constr} \label{con:QW} Let $(Q,R,m)$ be a triple where $(Q,R)$ is as in Construction~\ref{con:dgQR} and $m \geq 2$ is an integer. We construct a graded quiver $\wt{Q}$ with homogeneous superpotential $W$ of degree $2-m$ as follows. \begin{itemize} \item The set of vertices of $\wt{Q}$ equals that of $Q$; \item The set of arrows of $\wt{Q}$ consists of \begin{itemize} \item the arrows of $Q$, with their degree set to $0$; \item an arrow $\eps_\rho \colon t(\rho) \to s(\rho)$ of degree $2-m$ for each relation $\rho \in R$; \end{itemize} \item The superpotential $W$ is the image of the element $\sum_{\rho \in R} \eps_\rho \rho$ in $K\wt{Q}/[K\wt{Q},K\wt{Q}]$. \end{itemize} We denote by $\Gamma(Q,R,m)$ the Ginzburg dg-algebra $\Gamma_{m+1}(\wt{Q},W)$. \end{constr} \begin{remark} The quiver with superpotential $(\wt{Q},W)$ depends on the particular choice of the sequence $R$ and not only on the two-sided ideal $(R)$ it generates in $KQ$, as we shall see in Example~\ref{ex:nonuniq}. \end{remark} For the rest of this section, we fix a triple $(Q,R,m)$ where $Q$ is a quiver, $R$ is a finite sequence of relations on $Q$ and $m \geq 2$. We denote by $(\wt{Q},W)$ the graded quiver with superpotential of degree $2-m$ associated to $(Q,R,m)$ as in Construction~\ref{con:QW}, and by $\Gamma = \Gamma(Q,R,m) = \Gamma_{m+1}(\wt{Q},W)$ its Ginzburg dg-algebra. We denote the elements of $R$ by $\rho_1, \rho_2, \dots, \rho_n$ and write $|R|=n$. For simplicity, we denote by $\eps_k$ the arrow $\eps_{\rho_k}$ of degree $2-m$ in $\wt{Q}$ corresponding to the relation $\rho_k$, so that $W$ is the image of $\sum_{k=1}^n \eps_k \rho_k$ modulo $[K\wt{Q},K\wt{Q}]$. Similarly, denote by $\eta_k$ the arrow $\eta_{\rho_k}$ and let $B=B(Q,R)$ be the dg-algebra of Construction~\ref{con:dgQR}. We start by describing the graded quiver $\wb{\wt{Q}}$ underlying $\Gamma$ occurring in Definition~\ref{def:Ginzburg}. \begin{lemma} \label{l:QQarrows} The arrows of the graded quiver $\wb{\wt{Q}}$, their degrees and their differentials are as given in Table~\ref{tab:arrows}. \end{lemma} \begin{proof} The description of the arrows and their degrees is evident from Definition~\ref{def:Ginzburg}. For the differentials, note that since none of the arrows $\eps_k$ occur in any $\rho \in R$, we have \[ \partial_{\eps_k}W = \partial_{\eps_k} \Bigl(\sum_{\ell=1}^{n} \eps_{\ell} \rho_{\ell} \Bigr) = \partial_{\eps_k} (\eps_k \rho_k) = (-1)^{|\eps_k|} \rho_k = (-1)^m \rho_k \] for any $1 \leq k \leq n$. \end{proof} \begin{table} \[ \begin{array}{cccl} \text{arrow} & \text{degree} & \text{differential} \\ \hline \alpha & 0 & 0 & (\alpha \in Q_1) \\ \eps_k^* & -1 & (-1)^m \rho_k & (1 \leq k \leq n) \\ \eps_k & 2-m & 0 & (1 \leq k \leq n) \\ \alpha^* & 1-m & \partial_{\alpha} W & (\alpha \in Q_1) \\ t_i & -m & d(t_i) & (i \in Q_0) \end{array} \] \caption{The arrows of the graded quiver $\wb{\wt{Q}}$ and their differentials.} \label{tab:arrows} \end{table} \begin{lemma} \label{l:H0} Consider the algebras $A = KQ/(R)$ and $\wt{A} = \hh^0(\Gamma)$. 
\begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item \label{it:l:mgt2} If $m>2$ then $\wt{A} \cong A$. \item \label{it:l:meq2} If $m=2$ then $\wt{A}$ is a split extension of $A$. \end{enumerate} \end{lemma} \begin{proof} By Lemma~\ref{l:h0Q}, $\wt{A} \cong K\wt{Q}^{(0)}/(\partial_\alpha W : \alpha \in \wt{Q}_1^{(2-m)})$. If $m>2$ then $\wt{Q}^{(0)} = Q$, the arrows of $\wt{Q}^{(2-m)}$ are $\eps_1, \eps_2, \dots, \eps_n$ and $\partial_{\eps_k} W = (-1)^m \rho_k$ according to Table~\ref{tab:arrows}. Hence $\wt{A} \cong KQ/(\rho_1, \rho_2, \dots, \rho_n)$. This shows part~\eqref{it:l:mgt2}. If $m=2$ then all the arrows of the quiver $\wt{Q}$ have degree $0$, and we can think of $\wt{Q}$ as an ungraded quiver consisting of the arrows of $Q$ and the arrows $\eps_k$ for $1 \leq k \leq n$. In other words, $\wt{Q} = \wt{Q}^{(0)}$ and $Q \subseteq \wt{Q}$ with $\wt{Q} \setminus Q = \{\eps_1, \eps_2, \dots, \eps_n\}$. Moreover, $\wt{A} = K\wt{Q}/(\wt{R})$ for $\wt{R} = \{\partial_\alpha W : \alpha \in \wt{Q}_1\}$. Now $R \subseteq \wt{R}$ since $\partial_{\eps_k} W = \rho_k$ for each $1 \leq k \leq n$. In addition, for any $\alpha \in Q_1$ the element $\partial_\alpha W = \sum_{k=1}^n \partial_\alpha (\eps_k \rho_k)$ lies in the ideal generated by $\eps_1, \eps_2, \dots, \eps_n$ in $\wt{Q}$. Part~\eqref{it:l:meq2} now follows from Lemma~\ref{l:splitext}. \end{proof} The next two lemmas relate the dg-algebra $B$ and the Ginzburg dg-algebra $\Gamma$. For a similar result in the case $m=2$, see~\cite[\S6.7 and Proposition~6.8]{Keller11}. \begin{lemma} \label{l:preprojB} $\Gamma$ is the $(m+1)$-Calabi-Yau completion of $B$. \end{lemma} \begin{proof} This is a consequence of Lemma~\ref{l:CYcompl}. Indeed, we may take $\Omega = Q_1$ so that $\wt{Q}_1 \setminus \Omega = \{\eps_1, \dots, \eps_n\}$ and the superpotential $W = \sum_{k=1}^n \eps_k \rho_k$ has the required form. We only note that the differential of $\Gamma$ restricts to the path algebra of the quiver with arrows $Q_1 \cup \{\eps^*_1, \dots, \eps^*_n\}$ and the resulting dg-algebra is isomorphic to $B$ by mapping each arrow $\eps^*_k$ to $(-1)^m \eta_{\rho_k}$ and sending each $\alpha \in Q_1$ to itself. \end{proof} \begin{lemma} $\hh^{-i}(\Gamma) \cong \hh^{-i}(B)$ for any $i < m-2$. \end{lemma} \begin{proof} From Table~\ref{tab:arrows} we see that $\Gamma^{-i} \cong B^{-i}$ for any $i < m-2$, hence $\hh^{-i}(\Gamma) \cong \hh^{-i}(B)$ for any $i < m-3$. The graded piece $\Gamma^{2-m}$ can be decomposed as $\Gamma^{2-m} \cong B^{2-m} \oplus F$, where the space $F$ is spanned by the elements of the form $u \eps_k v$ where $1 \leq k \leq n$ and $u, v$ are paths in $Q$. Since the differential of each such element vanishes, one has $d(F)=0$, hence $d(\Gamma^{2-m}) \cong d(B^{2-m})$ and therefore $\hh^{3-m}(\Gamma) \cong \hh^{3-m}(B)$ as well. \end{proof} \begin{lemma} \label{l:Qempty} If $R$ is empty then $\hh^{-i}(\Gamma)=0$ for any $1 \leq i \leq m-2$. \end{lemma} \begin{proof} If $R$ is empty, then by Lemma~\ref{l:QQarrows} the arrows in the graded quiver $\wb{\wt{Q}}$ have degrees $0$, $1-m$ or $-m$. Hence the graded piece $\Gamma^{-i}$ vanishes for each $1 \leq i \leq m-2$ and the claim follows. \end{proof} The next lemma provides a partial converse to Lemma~\ref{l:Qempty}. \begin{lemma} \label{l:H2mR} If $m>2$ and $R \subseteq \fr^2$, then $\dim_K \hh^{2-m}(\Gamma) \geq |R|$. 
\end{lemma} \begin{proof} The graded piece $\Gamma^{2-m}$ is spanned by two types of elements: \begin{enumerate} \item \label{it:typ01} $u_1 \eps_{k_1}^* u_2 \eps_{k_2}^* u_3 \dots \eps_{k_{m-2}}^* u_{m-1}$, where $u_1, u_2, \dots, u_{m-1}$ are paths in $Q$ and $1 \leq k_j \leq n$ for each $1 \leq j \leq m-2$; \item \label{it:typ02} $u \eps_k v$, where $u, v$ are paths in $Q$ and $1 \leq k \leq n$; \end{enumerate} As a $K$-vector space, we may thus decompose $\Gamma^{2-m}$ into a direct sum $E \oplus E'$, where $E$ is the $n$-dimensional subspace $E = \{\lambda_1 \eps_1 + \lambda_2 \eps_2 + \dots + \lambda_n \eps_n : (\lambda_1, \lambda_2, \dots, \lambda_n) \in K^n\}$ and $E'$ is spanned by all the elements of type~\eqref{it:typ01} and those of type~\eqref{it:typ02} such that at least one of $u, v$ has positive length. We will show that $d(\Gamma^{1-m}) \subseteq E'$ and hence no non-zero element in $E$ lies in the image of the differential $d$ acting on the graded piece $\Gamma^{1-m}$. Since $d$ vanishes on $E$, this will yield an $n$-dimensional subspace inside $\hh^{2-m}(\Gamma)$. If $m>3$, the graded piece $\Gamma^{1-m}$ is spanned by three types of elements: \begin{enumerate} \item \label{it:typ1} $u_1 \eps_{k_1}^* u_2 \eps_{k_2}^* u_3 \dots \eps_{k_{m-1}}^* u_m$, where $u_1, u_2, \dots, u_m$ are paths in $Q$ and $1 \leq k_j \leq n$ for each $1 \leq j \leq m-1$; \item \label{it:typ2} $u \eps_k v \eps_l^* w$ and $u \eps_k^* v \eps_l w$, where $u, v, w$ are paths in $Q$ and $1 \leq k, l \leq n$; \item \label{it:typ3} $u \alpha^* v$, where $u, v$ are paths in $Q$ and $\alpha \in Q_1$. \end{enumerate} If $m=3$, in addition to the elements above there is a fourth type \begin{enumerate} \setcounter{enumi}{3} \item \label{it:typ4} $u \eps_k v \eps_l w$, where $u, v, w$ are paths in $Q$ and $1 \leq k, l \leq n$. \end{enumerate} It suffices to prove that the differential of any of these elements belongs to $E'$. This is clear for the elements of type~\eqref{it:typ1} since \[ d(u_1 \eps_{k_1}^* u_2 \eps_{k_2}^* u_3 \dots \eps_{k_{m-1}}^* u_m) = \sum_{j=1}^{m-1} \pm u_1 \eps_{k_1}^* u_2 \dots \eps_{k_{j-1}}^* u_j \rho_{k_j} u_{j+1} \eps_{k_{j+1}}^* \dots \eps_{k_{m-1}}^* u_m \] and none of the arrows $\eps_1, \eps_2, \dots, \eps_n$ can appear in the right hand side. This is also clear for the elements of type~\eqref{it:typ4} since their differential vanishes. Consider an element of type~\eqref{it:typ2}. Then $d(u \eps_k v \eps_l^* w) = \pm u \eps_k v \rho_l w$, hence the differential is a linear combination of elements of the form $u' \eps_k v'$ where $u', v'$ are paths in $Q$ and $v'$ has positive length. The case of $d(u \eps_k^* v \eps_l w)$ is similar. Consider an element of type~\eqref{it:typ3}. Then $d(u \alpha^* v) = u (\partial_{\alpha} W) v = \sum_{k=1}^n u \partial_\alpha (\eps_k \rho_k) v$. Our assumption that $R \subseteq \fr^2$ implies that each $\partial_\alpha (\eps_k \rho_k)$ is a linear combination of terms $u'' \eps_k v''$ where at least one of the paths $u'', v''$ has positive length. \end{proof} \begin{cor} If $m>2$ and $R \subseteq \fr^2$, then $\hh^{2-m}(\Gamma)=0$ if and only if $R$ is empty. \end{cor} Consider the triangulated category $\cC_{(Q,R,m)} = \per \Gamma(Q,R,m) / \cD_{\fd}(\Gamma(Q,R,m))$. \begin{theorem} \label{t:mCYtilted} Let $Q$ be a quiver, let $R$ be a finite sequence of relations on $Q$ such that the algebra $A=KQ/(R)$ is finite-dimensional and let $m>2$ be an integer.
Then: \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item The category $\cC = \cC_{(Q,R,m)}$ is $\Hom$-finite and $m$-Calabi-Yau. \item The image $T$ of $\Gamma(Q,R,m)$ in $\cC$ is an $m$-cluster-tilting object with $\End_{\cC}(T) \cong A$. \item If $R \subseteq \fr^2$ then $\dim_K \Hom_{\cC}(T, \Sigma^{-(m-2)} T) \geq |R|$. \end{enumerate} \end{theorem} \begin{proof} Recall that $\Gamma(Q,R,m)= \Gamma_{m+1}(\wt{Q},W)$ where $(\wt{Q},W)$ is the graded quiver with superpotential of degree $2-m$ associated to the triple $(Q,R,m)$ as in Construction~\ref{con:QW}. We claim that $(\wt{Q},W)$ satisfies the conditions of Theorem~\ref{t:mCY}. Indeed, condition~\eqref{it:t:deg} holds since the degree of any arrow in $\wt{Q}$ is either $0$ or $2-m$, and condition~\eqref{it:t:fd} holds since by Lemma~\ref{l:H0}, $\hh^0(\Gamma) \cong A$ and the algebra $A$ is assumed to be finite-dimensional. Hence, by Theorem~\ref{t:mCY}, the triangulated category $\cC = \cC_{(Q,R,m)} = \cC_{(\wt{Q},W)}$ is $\Hom$-finite, $m$-Calabi-Yau and the image $T$ of $\Gamma = \Gamma(Q,R,m)$ under the canonical projection $\per \Gamma \to \cC$ is an $m$-cluster-tilting object whose endomorphism algebra is $\End_{\cC}(T) \cong \hh^0(\Gamma) \cong A$. The last assertion follows from Lemma~\ref{l:snex} and Lemma~\ref{l:H2mR}. \end{proof} \begin{remark} The $m$-Calabi-Yau category $\cC$ of Theorem~\ref{t:mCYtilted} depends on $Q$ and $R$ and not only on the algebra $A$. There exist quivers $Q$ and sequences of relations $R$ and $R'$ on $Q$ such that the algebras $KQ/(R)$ and $KQ/(R')$ are finite-dimensional and isomorphic but for any $m>2$ the categories $\cC_{(Q,R,m)}$ and $\cC_{(Q,R',m)}$ are not equivalent, see Example~\ref{ex:nonuniq} below. \end{remark} \begin{remark} \label{rem:endproj} Keep the notations and assumptions of Theorem~\ref{t:mCYtilted} and write $A$ as $A = \oplus_{i \in Q_0} P_i$ where $P_i = e_i A$ are the indecomposable projective right $A$-modules. The decomposition $\Gamma = \oplus_{i \in Q_0} e_i \Gamma$ for $\Gamma=\Gamma(Q,R,m)$ induces a decomposition $T = \oplus_{i \in Q_0} T_i$ with $T_i$ being the image of $e_i \Gamma$ under the canonical projection to $\cC$. Hence for any finitely generated projective $A$-module $P = \oplus_{i \in Q_0} P_i^{e_i}$ such that $e_i>0$ for all $i \in Q_0$ there exists an $m$-cluster-tilting object $T_P = \oplus_{i \in Q_0} T_i^{e_i}$ in $\cC$ with $\End_{\cC}(T_P) \cong \End_A(P)$. \end{remark} The next result shows that the $m$-cluster category of any acyclic quiver can be realized as a category of the form $\cC(Q,R,m)$. Recall that a quiver is \emph{acyclic} if it has no cycles of positive length. We refer to~\cite[Corollary~3.4]{Guo11} for a related result. \begin{prop} \label{p:vosnex} Let $Q$ be a quiver, let $R$ be a finite sequence of relations on $Q$ such that $R \subseteq \fr^2$ and the algebra $KQ/(R)$ is finite-dimensional, and let $m>2$ be an integer. Then the following conditions are equivalent, where $\cC$ denotes the $m$-Calabi-Yau category $\cC_{(Q,R,m)}$ and $T$ is the canonical $m$-cluster-tilting object in $\cC$. \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item \label{it:p:KQ} The quiver $Q$ is acyclic and the sequence $R$ is empty; \item \label{it:p:B0} The dg-algebra $B(Q,R)$ is concentrated in degree $0$ and has finite total dimension; \item \label{it:p:vosnex} $\Hom_{\cC}(T, \Sigma^{-i} T) = 0$ for any $0 < i < m-1$; \item \label{it:p:m2} $\Hom_{\cC}(T, \Sigma^{-(m-2)} T) = 0$. 
\end{enumerate} Moreover, if any of these equivalent conditions holds and the field $K$ is algebraically closed, then $\cC$ is triangle equivalent to the $m$-cluster category of $Q$. \end{prop} \begin{proof} The equivalence of~\eqref{it:p:KQ} and~\eqref{it:p:B0} is clear. Let $\Gamma = \Gamma(Q,R,m)$. For the implication \eqref{it:p:KQ} $\Rightarrow$ \eqref{it:p:vosnex}, note that if $Q$ is acyclic and $R$ is empty then $\Hom_{\cC}(T, \Sigma^{-i} T) \cong \hh^{-i}(\Gamma) = 0$ for any $0 < i < m-1$ by Lemma~\ref{l:snex} and Lemma~\ref{l:Qempty}. The implication \eqref{it:p:vosnex} $\Rightarrow$ \eqref{it:p:m2} is clear. For the implication \eqref{it:p:m2} $\Rightarrow$ \eqref{it:p:KQ}, note that by Lemma~\ref{l:snex} and Lemma~\ref{l:H2mR} \[ \dim_K \Hom_{\cC}(T, \Sigma^{-(m-2)} T) = \dim_K \hh^{2-m}(\Gamma) \geq |R|, \] hence if $\Hom_{\cC}(T, \Sigma^{-(m-2)} T)$ vanishes $R$ must be empty and then $Q$ is acyclic by our assumption that $KQ/(R)$ is finite-dimensional. If any of these conditions holds, then $\cC$ is an algebraic $\Hom$-finite, $m$-Calabi-Yau triangulated category with an $m$-cluster-tilting object $T$ such that $\End_{\cC}(T) \cong KQ$ for an acyclic quiver $Q$ and $\Hom_{\cC}(T, \Sigma^{-i} T) = 0$ for any $0 < i < m-1$. By the characterization of higher cluster categories of Keller and Reiten \cite[Theorem~4.2]{KellerReiten08}) if $K$ is algebraically closed, then $\cC$ is triangle equivalent to the $m$-cluster-category of the quiver $Q$. \end{proof} \subsection{Systems of relations} Let $Q$ be a quiver and let $\fr$ be the two-sided ideal of $KQ$ generated by the arrows of $Q$. An ideal $I$ of $KQ$ is \emph{admissible} if there exists some $N \geq 2$ such that $\fr^N \subseteq I \subseteq \fr^2$. \begin{defn}[\protect{\cite{Bongartz83}}] Let $I$ be an ideal of $KQ$. A \emph{system of relations for $I$} is a set $R$ of relations such that $R$, but no proper subset of it, generates $I$ as a two-sided ideal. \end{defn} The following statement is well-known. \begin{lemma} \label{l:sysrel} If $I$ is admissible then there exists a finite system of relations for $I$. \end{lemma} \begin{proof} By assumption, there is some $N \geq 2$ such that $\fr^N \subseteq I$. The algebra $KQ/\fr^N$ is finite-dimensional, as it is spanned by all the paths of $Q$ of length smaller than $N$. Therefore the space $I/\fr^N$ is also finite-dimensional. For each $i,j \in Q_0$, choose a basis of $e_i (I/\fr^N) e_j$ and choose a set $R_{i,j}$ inside $e_i I e_j$ whose image modulo $\fr^N$ equals that basis. Then $I=(R)$ for the finite set $R$ given by \[ R = \{\text{the paths of length $N$ in $Q$}\} \cup \bigcup_{i,j \in Q_0} R_{i,j} . \] If $R$ is not a system of relations for $I$ then there is a proper subset $R'$ of $R$ such that $I=(R')$. In this way we can repeatedly remove elements and still have a set generating $I$. Since $R$ is finite, this process must terminate and we eventually end with a system of relations for $I$. \end{proof} Under some conditions, another approach to the construction of systems of relations involves lifting of basis elements of the space $I/(I \fr + \fr I)$, see the discussion in~\cite[\S7]{BIKR08}. \begin{lemma} \label{l:fdIrrI} If $I$ is admissible, then $I/(I \fr + \fr I)$ is finite-dimensional. \end{lemma} \begin{proof} Let $N \geq 2$ be such that $\fr^N \subseteq I$. Then $\fr^{N+1} \subseteq (I \fr + \fr I)$ and we have an inclusion and a surjection \[ KQ/\fr^{N+1} \supset I/\fr^{N+1} \twoheadrightarrow I/(I \fr + \fr I). 
\] The claim now follows since the quotient $KQ/\fr^{N+1}$ is finite-dimensional. \end{proof} \begin{lemma} \label{l:minrel} Let $I$ be an ideal of $KQ$ and let $R$ be a set of relations inside $I$. \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item \label{it:l:span} If $I=(R)$ then the image of $R$ modulo $I \fr + \fr I$ spans the vector space $I/(I \fr + \fr I)$. \item \label{it:l:lift} Assume that the ideal (R) is admissible and the image of $R$ modulo $I \fr + \fr I$ spans the vector space $I/(I \fr + \fr I)$. Then $I=(R)$. \item \label{it:l:liftbasis} Assume that the ideal (R) is admissible and the image of $R$ modulo $I \fr + \fr I$ is a basis of the vector space $I/(I \fr + \fr I)$. Then $R$ is a system of relations for $I$. \end{enumerate} \end{lemma} \begin{proof} For part~\eqref{it:l:span}, observe that since each $\rho \in R$ is a relation, multiplying it from the left or from the right by an element of the form $\sum_{i \in Q_0} \lambda_i e_i$ gives a scalar multiple of $\rho$. For part~\eqref{it:l:lift}, we slightly modify the argument in~\cite[Lemma~3.6]{BMR06}. Let $N \geq 2$ such that $\fr^N \subseteq (R)$ and let $x \in I$. By assumption, we can write \begin{equation} \label{e:IbyR} x = \lambda_1 \rho_1 + \dots + \lambda_n \rho_n + x'_1 r'_1 + \dots + x'_k r'_k + r''_1 x''_1 + \dots + r''_{\ell} x''_{\ell} \end{equation} with scalars $\lambda_1, \dots, \lambda_n \in K$, relations $\rho_1, \dots, \rho_n \in R$ and $x'_1, \dots, x'_k, x''_1, \dots, x''_{\ell} \in I$, $r'_1, \dots, r'_k, r''_1, \dots, r''_{\ell} \in \fr$. Writing each of the elements $x'_1, \dots, x'_k, x''_1, \dots, x''_{\ell}$ using~\eqref{e:IbyR} and repeating this process $N$ times, we conclude that $x = \rho + r$ where $\rho \in (R)$ and $r \in \fr^N$, hence $x \in (R)$. Finally, part~\eqref{it:l:liftbasis} follows from~\eqref{it:l:span} and~\eqref{it:l:lift}. Moreover, $R$ is finite by Lemma~\ref{l:fdIrrI}. \end{proof} \begin{example} \label{ex:minrel} The assumption in parts~\eqref{it:l:lift} and~\eqref{it:l:liftbasis} that the ideal $(R)$ is admissible cannot be dropped. For example, consider the quiver $Q$ given by \[ \xymatrix{ {\bullet} \ar@(ul,dl)[]_{\alpha} \ar@(ur,dr)[]^{\beta} } \] and let $I=(\alpha^2-\beta \alpha \beta, \beta^2 - \alpha \beta \alpha, \alpha^2 \beta)$. One can check that $\fr^5 \subseteq I \subseteq \fr^2$ hence the ideal $I$ is admissible. The $8$-dimensional algebra $KQ/I$ is an algebra of quaternion type in the sense of Erdmann~\cite{Erdmann90}. When $K$ is algebraically closed of characteristic $2$, this algebra is isomorphic to the group algebra of the quaternion group. The image of $\alpha^2 \beta$ in $I \fr + \fr I$ vanishes, as the following calculation shows: \[ \alpha^2 \beta = (\alpha^2 - \beta \alpha \beta) \beta + \beta \alpha (\beta^2 - \alpha \beta \alpha) + \beta \alpha^2 \beta \alpha \in I \fr + \fr I, \] hence $I/(I \fr + \fr I)$ is spanned by the images of the elements $\alpha^2 - \beta \alpha \beta$ and $\beta^2 - \alpha \beta \alpha$. Nevertheless, the ideal $I' = (\alpha^2-\beta \alpha \beta, \beta^2 - \alpha \beta \alpha)$ is not equal to $I$. Indeed, by letting $\alpha$ and $\beta$ act on $K$ as the identity we get a one-dimensional module over $KQ/I'$ with a non-zero action of $\alpha^2 \beta$, hence $\alpha^2 \beta \not \in I'$. \end{example} \subsection{Finite-dimensional algebras are $(m>2)$-CY-tilted} In this section we assume that the field $K$ is algebraically closed. 
A finite-dimensional algebra $A$ over $K$ is called \emph{basic} if $A_A \cong P_1 \oplus \dots \oplus P_r$ where $P_1, \dots, P_r$ are representatives of the isomorphism classes of the indecomposable projective right $A$-modules. \begin{theorem}[Gabriel] \label{t:Gabriel} Let $A$ be a basic, finite-dimensional algebra over $K$. Then there exist a quiver $Q$ and an admissible ideal $I$ of $KQ$ such that $A \cong KQ/I$. \end{theorem} \begin{proof} See~\cite[\S4.3]{Gabriel80}. \end{proof} Let $A$ be a basic, finite-dimensional algebra. By Theorem~\ref{t:Gabriel}, we can write $A=KQ/I$ for a quiver $Q$ and an admissible ideal $I$ of $Q$. We denote by $S_i$ the simple $A$-module corresponding to a vertex $i \in Q_0$ and consider the $A$-module $S = \bigoplus_{i \in Q_0} S_i$. \begin{lemma} \label{l:mincardR} If $R$ is a system of relations for $I$ then $|R| \geq \dim_K \Ext^2_A(S,S)$. \end{lemma} \begin{proof} We have $|R| \geq \dim_K I/(I \fr + \fr I) = \dim_K \Ext^2_A(S,S)$ where the left inequality is a consequence of Lemma~\ref{l:minrel}\eqref{it:l:span} and the right equality is~\cite[Corollary~1.1]{Bongartz83}. \end{proof} Note that Example~\ref{ex:minrel} shows that the inequality in Lemma~\ref{l:mincardR} can be strict. \begin{theorem} \label{t:mCYbasic} Let $A$ be a basic finite-dimensional algebra. Then the set of pairs $(Q,R)$ consisting of a quiver $Q$ and a sequence of relations $R \subseteq \fr^2$ such that $A \cong KQ/(R)$ is not empty. For any such pair $(Q,R)$ and any integer $m>2$, the triangulated category $\cC=\cC_{(Q,R,m)}$ is $\Hom$-finite, $m$-Calabi-Yau and its canonical $m$-cluster-tilting object $T$ satisfies $\End_{\cC}(T) \cong A$ and $\dim_K \Hom_{\cC}(T, \Sigma^{-(m-2)}T) \geq \dim_K \Ext^2_A(S,S)$. \end{theorem} \begin{proof} The first claim is a consequence of Theorem~\ref{t:Gabriel} and Lemma~\ref{l:sysrel}. The second claim is a consequence of Theorem~\ref{t:mCYtilted} and Lemma~\ref{l:mincardR}. \end{proof} \begin{cor} A finite-dimensional algebra over an algebraically closed field is $m$-CY-tilted for any $m>2$. \end{cor} \begin{proof} We can write $A \cong \End_{\bar{A}}(P)$ for a finite-dimensional, basic algebra $\bar{A}$ and a finitely generated projective $\bar{A}$-module $P$ containing as direct summands all the indecomposable projective $\bar{A}$-modules. Hence the claim is a consequence of Theorem~\ref{t:mCYbasic} and Remark~\ref{rem:endproj}. \end{proof} \begin{remark} If $\gldim A \geq 2$, then for any of the $m$-Calabi-Yau categories $\cC$ of Theorem~\ref{t:mCYbasic}, the small negative extension $\Hom_{\cC}(T, \Sigma^{-(m-2)}T)$ of the canonical $m$-cluster-tilting object $T$ cannot vanish. Had it vanished, Proposition~\ref{p:vosnex} would then imply that $A$ is the path algebra of an acyclic quiver, a contradiction. \end{remark} \subsection{Examples} Our first example is similar in spirit to~\cite[Example~3.5]{Guo11}. \begin{example} Consider the algebra $A=KQ/I$ where $Q$ is the left quiver \begin{align*} \xymatrix@=1.5pc{ & {\bullet} \ar[dr]^{\beta} \\ {\bullet} \ar[ur]^{\alpha} \ar[dr]_{\gamma} && {\bullet} \\ & {\bullet} \ar[ur]_{\delta} } & & \xymatrix@=1.5pc{ & {\bullet} \ar[dr]^{\beta} \\ {\bullet} \ar[ur]^{\alpha} \ar[dr]_{\gamma} \ar[rr]^{\eta} && {\bullet} \\ & {\bullet} \ar[ur]_{\delta} } & & \xymatrix@=1.5pc{ & {\bullet} \ar[dr]^{\beta} \\ {\bullet} \ar[ur]^{\alpha} \ar[dr]_{\gamma} && {\bullet} \ar[ll]^{\eps} \\ & {\bullet} \ar[ur]_{\delta} } \end{align*} and $I=(R)$ for the system of relations $R=\{\alpha \beta - \gamma \delta\}$. 
The algebra $A$ has global dimension $2$, hence it cannot be 2-CY-tilted by~\cite[Corollary~2.1]{KellerReiten07}. Let $B=B(Q,R)$ be the dg-algebra of Construction~\ref{con:dgQR}. Its graded quiver is shown in the middle; the arrows $\alpha, \beta, \gamma$ and $\delta$ have degree $0$ while $\eta$ has degree $-1$ and $d(\eta)=\alpha \beta - \gamma \delta$. Observe that the dg-algebra $B$ is quasi-isomorphic to $A \cong \hh^0(B)$. Let $m \geq 2$ and let $(\wt{Q},W)$ be the graded quiver with homogeneous superpotential of degree $2-m$ of Construction~\ref{con:QW}. The graded quiver $\wt{Q}$ is shown on the right; the degrees of the arrows $\alpha, \beta, \gamma, \delta$ are $0$, that of $\eps$ is $2-m$ and the superpotential is $W=\eps(\alpha \beta - \gamma \delta)$. The Ginzburg dg-algebra $\Gamma = \Gamma_{m+1}(\wt{Q},W)$ is the $(m+1)$-Calabi-Yau completion of $B$ (Lemma~\ref{l:preprojB}). Since $B$ is quasi-isomorphic to $A$ and $A$ is derived equivalent to the path algebra of the Dynkin quiver $D_4$, the Morita invariance of Calabi-Yau completions~\cite[Proposition~4.2]{Keller11} implies that $\Gamma$ is Morita equivalent to $\Gamma_{m+1}(D_4,0)$, hence by~\cite[Corollary~3.4]{Guo11} (or Proposition~\ref{p:vosnex}) the generalized $m$-cluster category $\cC_{\Gamma}$ is triangle equivalent to the $m$-cluster category of type $D_4$. If $m=2$ then $\hh^0(\Gamma) \cong K\wt{Q}/(\wt{R})$, where $\wt{R}=\{\alpha \beta - \gamma \delta, \eps \alpha, \beta \eps, \eps \gamma, \delta \eps\}$. This is a cluster-tilted algebra~\cite{BMR06} of type $D_4$, which is the relation-extension~\cite{ABS08} of the tilted algebra $A$. If $m>2$ then $\hh^0(\Gamma) \cong A$ and $A$ is $m$-CY-tilted by Theorem~\ref{t:mCYtilted}. \end{example} \begin{example} \label{ex:nonuniq} Let $K$ be algebraically closed and consider the algebra $A=K$ whose quiver $Q$ is $\bullet$. Consider the two sequences of relations $R=\{\}$ and $R'=\{0\}$ on $Q$. Let $m > 2$. The Ginzburg dg-algebras $\Gamma=\Gamma(Q,R,m)$ and $\Gamma'=\Gamma(Q,R',m)$ are given by the following graded quivers with differentials \begin{align*} \begin{array}{lc} \begin{array}{l} _{|t|=-m} \\ _{d(t)=0} \end{array} & \begin{array}{c} \xymatrix{ {\bullet} \ar@(ur,dr)[]^t } \end{array} \end{array} && \begin{array}{cl} \begin{array}{c} \xymatrix{ {\bullet} \ar@(ur,dr)[]^t \ar@(l,u)[]^{\eps^*} \ar@(l,d)[]_{\eps} } \end{array} & \begin{array}{l} _{|\eps^*|=-1,\, |\eps|=-(m-2),\, |t|=-m} \\ _{d(\eps) = d(\eps^*) = 0} \\ _{d(t) = \eps \eps^* - (-1)^m \eps^* \eps} \end{array} \end{array} \end{align*} A basis for the graded piece $\Gamma'^i$ is given by $(\eps^*)^i$ if $0 < i < m-2$; $(\eps^*)^{m-2}, \eps$ if $i = m-2$; $(\eps^*)^{m-1}, \eps \eps^*, \eps^* \eps$ if $i=m-1$ and $m>3$; and $(\eps^*)^{m-1}, \eps \eps^*, \eps^* \eps, \eps^2$ if $i=m-1$ and $m=3$, hence one computes \begin{align} \label{e:hiGamma} \dim_K \hh^{-i}(\Gamma) = \begin{cases} 1 & \text{if $i=0$,} \\ 0 & \text{if $0 < i < m$,} \end{cases} && \dim_K \hh^{-i}(\Gamma') = \begin{cases} 1 & \text{if $0 \leq i < m-2$,} \\ 2 & \text{if $i=m-2$,} \\ 2 + \delta_{3,m} & \text{if $i=m-1$.} \end{cases} \end{align} Let $\cC=\cC(Q,R,m)$ and $\cC'=\cC(Q,R',m')$ be the corresponding $m$-Calabi-Yau categories and let $T \in \cC$ and $T' \in \cC'$ be the canonical $m$-cluster-tilting objects. We have $\End_{\cC}(T) \cong \End_{\cC'}(T') \cong K$. By Proposition~\ref{p:vosnex}, the category $\cC$ is triangle equivalent to the $m$-cluster category of $Q$. 
Hence the $m$-cluster-tilting objects in $\cC$ are the indecomposable objects $\Sigma^j T$ for $0 \leq j < m$. Since $\Hom_{\cC}(T, \Sigma^{-i} T) = 0$ for any $0 < i < m$, this remains true if we replace $T$ by any of the other $m$-cluster-tilting objects $\Sigma^j T$ in $\cC$. On the other hand, $T'$ is an $m$-cluster-tilting object in $\cC'$ with $\Hom_{\cC'}(T', \Sigma^{-i} T') \neq 0$ for any $0 < i < m$ by~\eqref{e:hiGamma}, hence $\cC'$ cannot be triangle equivalent to $\cC$. \end{example} The previous example can be generalized as follows. A \emph{Dynkin quiver} is a quiver obtained by orienting the edges of a Dynkin diagram of type $A_n$ ($n \geq 1$), $D_n$ ($n \geq 4$) or $E_n$ ($n=6,7,8$). \begin{prop} Let $Q$ be a Dynkin quiver and let $m > 2$. There exists an $m$-Calabi-Yau triangulated category $\cC'$ with an $m$-cluster-tilting object $T'$ such that $\End_{\cC'}(T') \cong KQ$ but $\cC'$ is not triangle equivalent to the $m$-cluster category of $Q$. \end{prop} \begin{proof} Let $\cC$ be the $m$-cluster category of $Q$. Since $Q$ is Dynkin, the category $\cC$ has only finitely many indecomposable objects, hence the number of $m$-cluster-tilting objects $T$ in $\cC$ such that $\End_{\cC}(T) \cong KQ$ is finite. Therefore there exists an integer $n$ such that $\dim_K \Hom_{\cC}(T, \Sigma^{-(m-2)} T) < n$ for any such $T$. Let $R = \{0, 0, \dots, 0\}$ be a sequence consisting of $n$ zero elements (it does not matter which starting and ending vertex we assign to each zero element) and let $\cC' = \cC_{(Q,R,m)}$. By Theorem~\ref{t:mCYtilted}, the category $\cC'$ is $\Hom$-finite and $m$-Calabi-Yau with an $m$-cluster-tilting object $T'$ satisfying $\End_{\cC'}(T') \cong KQ$ and $\dim_K \Hom_{\cC'}(T', \Sigma^{-(m-2)} T') \geq n$. If $F \colon \cC' \simeq \cC$ were a triangulated equivalence, then $T=FT'$ would be an $m$-cluster-tilting object in $\cC$ with $\End_{\cC}(T) \cong KQ$ and $\dim_K \Hom_{\cC}(T, \Sigma^{-(m-2)}T) \geq n$, a contradiction. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} Recently, the study of transverse momentum dependent distribution functions has become one of the special issues in hadronic physics. Of particular interest are two leading-twist (na\"{i}ve) time-reversal odd transverse momentum dependent distribution functions: the Sivers function~\cite{sivers90,abm95} $f_{1T}^{\perp}(x,\mathbf{k}_\perp^2)$ and its chiral-odd partner $h_1^{\perp}(x,\mathbf{k}_\perp^2)$~\cite{bm98,boer99}. The Sivers function represents the unpolarized parton distribution in a transversely polarized hadron, while $h_1^{\perp}(x,\mathbf{k}_\perp^2)$ denotes the parton transversity distribution in an unpolarized hadron. One main motivation to investigate these two distributions is that they are possible sources of the unsuppressed azimuthal asymmetries observed in hadronic reactions. The former distribution function was first proposed by Sivers~\cite{sivers90} to illustrate that it can lead to large single-spin azimuthal asymmetries. This nontrivial correlation between the transverse momentum of the quark and the polarization of the hadron was thought to be forbidden by time-reversal invariance~\cite{collins93}. Recently, a direct calculation~\cite{bhs02a,bhs02b} of the Sivers asymmetry including the final-state (in semi-inclusive deep inelastic scattering (SIDIS)) or initial-state interaction (in the Drell-Yan process) showed that the asymmetry is in principle non-zero. It was then found that the presence of the Wilson lines in the operators defining the parton densities allows for the Sivers effect without a violation of time-reversal invariance~\cite{collins02}, and that the final- or initial-state interaction can be factorized into a fully gauge-invariant definition of transverse momentum dependent distribution functions~\cite{jy02,bmp03}. These theoretical developments open a wide range of phenomenological applications. Several model calculations~\cite{yuan03,gg03,bsy04,lm04} of the Sivers function have been performed to estimate single-spin asymmetries in the SIDIS process, which is under investigation in current experiments~\cite{hermes04}. On the other hand, it has been shown~\cite{gg03} that a non-zero $h_1^{\perp}(x,\mathbf{k}_\perp^2)$ can arise from the same mechanism which produces $f_{1T}^{\perp}(x,\mathbf{k}_\perp^2)$. It has been demonstrated~\cite{boer99} that $h_1^{\perp}(x,\mathbf{k}_\perp^2)$ can account for the substantial cos$2\phi$ asymmetries in unpolarized Drell-Yan lepton pair production from pion-nucleon scattering: $\pi^-N\rightarrow \mu^+\mu^-X$~\cite{na10,conway89}. In Ref.~\cite{bbh03}, Boer, Brodsky, and Hwang computed $h_1^{\perp}(x,\mathbf{k}_\perp^2)$ of the proton in a quark-scalar-diquark model with soft gluon exchange. They found that $h_1^{\perp}(x,\mathbf{k}_\perp^2)$ is equal to $f_{1T}^{\perp}(x,\mathbf{k}_\perp^2)$ obtained from the same model. The maximum magnitude of the cos$2\phi$ asymmetry in $p\bar{p}\rightarrow l\bar{l}X$ was then estimated to be $\sim$30$\%$ using the calculated $h_1^{\perp}(x,\mathbf{k}_\perp^2)$. In this paper, we perform the first computation of $h_1^{\perp}(x,\mathbf{k}_\perp^2)$ of the pion (denoted as $h_{1\pi}^{\perp}(x,\mathbf{k}_\perp^2)$) in a quark-spectator-antiquark model in the presence of final-state interactions, similar to the quark-scalar-diquark model of the proton.
We find that the transverse momentum dependence of $h_{1\pi}^{\perp}(x,\mathbf{k}_\perp^2)$ in our model is the same as that of $h_1^{\perp}(x,\mathbf{k}_\perp^2)$ of the proton from the quark-scalar-diquark model, although the $x$ dependence is different. This feature allows one to expect that $h_{1\pi}^{\perp}(x,\mathbf{k}_\perp^2)$ and $h_1^{\perp}(x,\mathbf{k}_\perp^2)$ are closely related. With the present model result we investigate the cos2$\phi$ asymmetry in the unpolarized $\pi^-p$ Drell-Yan process and obtain an unsuppressed result. The shape of the asymmetry is similar to that of the cos$2\phi$ asymmetries in $p\bar{p}\rightarrow l\bar{l}X$ estimated in Ref.~\cite{bbh03}. The result suggests a connection between cos2$\phi$ asymmetries in Drell-Yan processes with different initial hadrons. \section{Calculation of $h_{1\pi}^{\perp}(x,\mathbf{k}_\perp^2)$ of the pion} In this section, we present the calculation of $h_{1\pi}^{\perp}(x,\mathbf{k}_\perp^2)$ of the pion. We start our computation from the quark light-cone correlation function of the pion in Feynman gauge (we will perform our calculation in this gauge): \begin{eqnarray} \Phi_{\alpha\beta}(x,\mathbf{k}_\perp)&=&\int \frac{d \xi^- d^2 \mathbf{\xi}_\perp}{(2\pi)^3}e^{ik\cdot\xi}\langle P_\pi|\bar{\psi}_\beta(0) \mathcal{L}_0^\dag(0^-,\infty^-)\nonumber\\ &&\times\mathcal{L}_\xi(\infty^-,\xi^-)\psi_\alpha(\xi)|P_\pi\rangle\bigg{|}_{\xi^+=0}. \end{eqnarray} We use the notation $a^\pm=a^0\pm a^3$, $a\cdot b=\frac{1}{2}(a^+b^-+a^-b^+)-\mathbf{a}_\perp\cdot\mathbf{b}_\perp$. The pion momentum is denoted by $P_\pi^{\mu}=(P_\pi^+, P_\pi^-, \mathbf{P}_{\pi\perp})=(P_\pi^+, M_\pi^2/P_\pi^+, \mathbf{0}_{\perp})$. $\mathcal{L}_0(0,\infty)$ is the path-ordered exponential (Wilson line) of the form: \begin{equation} \mathcal{L}_0(0,\infty)=\mathcal{P}~\textmd{exp}\left (-ig\int_{0^-}^{\infty^-}d\xi^-\cdot A(0,\xi^-,\mathbf{0}_\perp)\right). \end{equation} Relaxing the constraint of (na\"{i}ve) time-reversal invariance and keeping parity invariance and hermiticity, the quark correlation function of the pion can be parameterized into a set of transverse momentum dependent distribution functions in leading twist as follows~\cite{tm96,bm98}: \begin{equation} \Phi(x,\mathbf{k}_\perp)=\frac{1}{2}\left [f_{1\pi}(x,\mathbf{k}_\perp^2)n\!\!\!/+h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)\frac{\sigma_{\mu\nu}\mathbf{k}_\perp^\mu n^{\nu}}{M_\pi}\right ], \end{equation} where $n$ is the light-like vector with components $(n^+, n^-, \mathbf{n}_\perp)=(1, 0, \mathbf{0}_\perp)$, $\sigma_{\mu\nu}=\frac{i}{2}[\gamma^\mu,\gamma^\nu]$, and $f_{1\pi}(x,\mathbf{k}_\perp^2)$ and $h_{1\pi}^{\perp}(x,\mathbf{k}_\perp^2)$ denote the unpolarized quark distribution and the quark transversity of the pion, respectively. Knowing $\Phi(x,\mathbf{k}_\perp)$, one can obtain these distributions from the equations: \begin{eqnarray} &f_{1\pi}(x,\mathbf{k}_\perp^2)&=\textmd{Tr}[\Phi(x,\mathbf{k}_\perp)\gamma^+];\\ &\frac{2h_{1\pi}^{\perp}(x,\mathbf{k}_\perp^2)\mathbf{k}_\perp^i}{M_\pi}&= \textmd{Tr}[\Phi(x,\mathbf{k}_\perp)\sigma^{i+}]. \end{eqnarray} We will calculate the above distribution functions in the quark-spectator-antiquark model.
It is similar to the quark-scalar-diquark model for calculating the Sivers function and $h_1^{\perp}$ of the proton, the differences being that the intermediate state here is the constituent antiquark instead, as shown in Fig.~\ref{lo}, and that the pion-quark-antiquark interaction is modelled by a pseudoscalar coupling: \begin{equation} \mathcal{L}_I=-g_\pi\bar{\psi}\gamma_5\psi\varphi-e_2\bar{\psi}\gamma^\mu\psi A_\mu, \end{equation} in which $g_\pi$ is the pion-quark-antiquark coupling constant, and $e_2$ is the charge of the antiquark. $f_{1\pi}(x,\mathbf{k}_\perp^2)$ can be calculated from the lowest order $\Phi(x,\mathbf{k}_\perp)$ without the path-ordered exponential. From Fig.~\ref{lo} we obtain \begin{eqnarray} f_{1\pi}(x,\mathbf{k}_\perp^2)&=&-\frac{1}{4(1-x)P_\pi^+}\frac{g_\pi^2}{2(2\pi)^3}\sum_s\bar{v}^s\gamma_5 \frac{k\!\!\!/+m}{k^2-m^2}\nonumber\\ &&\times\gamma^+\frac{k\!\!\!/+m}{{k^2-m^2}}\gamma_5v^s, \end{eqnarray} where $m$ is the mass of the outgoing quark, which is the same as that of the antiquark, $v^s$ is the spinor of the antiquark, and the quark momentum is $k=(xP^+,(\mathbf{k}_\perp^2+m^2)/xP^+,\mathbf{k}_\perp)$. We take the spin sum as $\sum_s v^s\bar{v}^s=(P\!\!\!/_\pi-k\!\!\!/-m)$, which is a little different from the spin sum adopted in Ref.~\cite{jmr97}. Immediately we arrive at \begin{eqnarray} f_{1\pi}(x,\mathbf{k}_\perp^2)&=&\frac{g_\pi^2}{2(2\pi)^3}\frac{(1-x)[\mathbf{k}_\perp^2+(1+x)^2m^2]} {(\mathbf{k}_\perp^2+m^2-x(1-x)M_\pi^2)^2}\nonumber\\ &=&C_{\pi}\frac{\mathbf{k}_\perp^2+D_{\pi}}{(\mathbf{k}_\perp^2+B_{\pi})^2}, \end{eqnarray} where we set $C_{\pi}=g_{\pi}^2(1-x)/[2(2\pi)^3]$, $D_{\pi}=(1+x)^2m^2$ and \begin{equation} B_\pi=m^2-x(1-x)M_\pi^2.\label{bpi} \end{equation} \begin{figure} \begin{center} \scalebox{0.85}{\includegraphics{fig1.eps}}\caption{\small Diagram which gives $\Phi$ of the pion in the quark-spectator-antiquark model at lowest order.}\label{lo} \end{center} \end{figure} The T-odd distribution $h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)$, however, is absent in the lowest order $\Phi(x,\mathbf{k}_\perp)$. In order to produce this T-odd distribution, the path-ordered exponential, which ensures gauge invariance of the distribution function, has to be included. The exponential serves as the final-state interaction~(FSI) or initial-state interaction~(ISI) between the struck quark and the remnant of the hadron, which can also be viewed as soft gluon scattering, and provides the nontrivial phase needed to generate a T-odd distribution function. In our calculation we expand the path-ordered exponential to first order, which means that the final- and/or initial-state interaction is modelled by one gluon exchange, as shown in Fig.~\ref{fsi}. Thus the nonzero $h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)$ can be calculated from the expression \begin{eqnarray} &&\frac{2h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)\mathbf{k}_\perp^i}{M_\pi}=\sum_{\bar{q}}\frac{1}{2}\int\frac{d\xi^-d^2\mathbf{\xi}_\perp}{(2\pi)^3} e^{ik\cdot\xi}\langle P_\pi|\bar{\psi}_\beta(0)|\bar{q}\rangle\langle \bar{q}|\nonumber\\ &&\times\left (-ie_1 \int_{\xi^-}^{\infty^-} A^+(0,\xi^-,\mathbf{0}_\perp)d\xi^-\right ) \sigma^{i+}_{\beta\alpha}\psi_{\alpha}(\xi)|P_\pi\rangle\bigg{|}_{\xi^+=0}\nonumber\\ &&+h.c., \end{eqnarray} in which $|\bar{q}\rangle$ represents the antiquark spectator state, and $e_1$ is the charge of the struck quark.
In momentum space we can write this as \begin{eqnarray} &&\frac{2h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)\mathbf{k}_\perp^i}{M_\pi}=\frac{-ie_1e_2}{8(2\pi)^3(1-x)P_\pi^+} \sum_{s}\int\frac{d^4q}{(2\pi)^4}\bar{v}^s\gamma_5\nonumber\\ &&\times\frac{k\!\!\!/+m}{k^2-m^2} \sigma^{i+}\frac{k\!\!\!/+q\!\!\!/+m}{(k+q)^2-m^2}\gamma_5 \frac{k\!\!\!/+q\!\!\!/-P\!\!\!/_\pi+m}{(k+q-P_\pi)^2-m^2}\nonumber\\ &&\times\gamma^+v^s\frac{1}{q^++i\epsilon}\frac{1}{q^2-i\epsilon}+h.c. \label{h1t} \end{eqnarray} The $\gamma^+$ in the second line of Eq.~(\ref{h1t}) comes from the antiquark-gluon interaction vertex. The $q^-$ integral can be done by the contour method. The $q^+$ integral is carried out by taking the imaginary part of the eikonal propagator: $1/(q^++i\epsilon)\rightarrow -i\pi\delta(q^+)$. Then we obtain \begin{eqnarray} \frac{h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)\mathbf{k}_\perp^i}{M_\pi}&=&-\frac{g_\pi^2e_1e_2(1-x)}{2(2\pi)^3(\mathbf{k}_\perp^2+B_\pi)} \int\frac{d^2\mathbf{q}_\perp}{(2\pi)^2}\nonumber\\ &&\times\frac{2m\mathbf{q}_\perp^i}{\mathbf{q}_\perp^2[(\mathbf{k}_\perp+\mathbf{q}_\perp)^2+B_\pi]}. \end{eqnarray} \begin{figure} \begin{center} \scalebox{0.85}{\includegraphics{fig2.eps}}\caption{\small Diagram which yields $\Phi$ with the final-state interaction modelled by one gluon exchange.}\label{fsi} \end{center} \end{figure} To arrive at the above equation, we have calculated the trace in the numerator of Eq.~(\ref{h1t}) as follows: \begin{eqnarray} &&\sum_{s}\bar{v}^s\gamma_5(k\!\!\!/+m)\sigma^{i+}(k\!\!\!/+q\!\!\!/+m)\gamma_5 \nonumber\\ &&\times(k\!\!\!/+q\!\!\!/-P\!\!\!/_\pi+m)\gamma^+v^s\\ &=&\textmd{Tr}[(P\!\!\!/_\pi-k\!\!\!/-m)(k\!\!\!/-m)\gamma^i\gamma^+(k\!\!\!/+q\!\!\!/-m)\nonumber\\ &&\times(k\!\!\!/+q\!\!\!/-P\!\!\!/_\pi+m)\gamma^+]\\ &=&8(P_\pi^+)^2m\mathbf{q}_\perp^i ~~~~~~~~~~~~~~~~\textmd{when}~~q^+=0. \end{eqnarray} After integrating over $\mathbf{q}_\perp$, we obtain the expression of $h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)$ in the antiquark spectator model: \begin{eqnarray} h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)&=&-\frac{g_\pi^2}{2(2\pi)^3}\frac{e_1e_2}{4\pi}\frac{mM_\pi(1-x)}{\mathbf{k}_\perp^2(\mathbf{k}_\perp^2+B_\pi)} \textmd{ln}\left (\frac{\mathbf{k}_\perp^2+B_\pi}{B_\pi}\right )\nonumber\\ &=&\frac{A_{\pi}}{\mathbf{k}_\perp^2(\mathbf{k}_\perp^2+B_\pi)}\textmd{ln}\left (\frac{\mathbf{k}_\perp^2+B_\pi}{B_\pi}\right ) \label{h1p}, \end{eqnarray} with $B_{\pi}$ given in Eq.~(\ref{bpi}) and \begin{equation} A_\pi=\frac{g_\pi^2}{2(2\pi)^3}\left (-\frac{e_1e_2}{4\pi}\right)mM_\pi(1-x). \end{equation} In Ref.~\cite{bbh03}, $h_1^\perp(x,\mathbf{k}_\perp^2)$ of the proton is computed in the quark-scalar-diquark model as \begin{equation} h_{1}^\perp(x,\mathbf{k}_\perp^2)=\frac{A_p}{\mathbf{k}_\perp^2(\mathbf{k}_\perp^2+B_p)}\textmd{ln}\left (\frac{\mathbf{k}_\perp^2+B_p}{B_p}\right ).\label{h1} \end{equation} Comparing Eq.~(\ref{h1p}) with Eq.~(\ref{h1}), one finds that the calculated $h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)$ in the antiquark spectator model has the same transverse momentum dependence as $h_{1}^\perp(x,\mathbf{k}_\perp^2)$ of the proton obtained from the quark-scalar-diquark model, although the $x$ dependence, encoded in $A_{p/\pi}$ and $B_{p/\pi}$, is different due to the different mass parameters entering them. This feature may not hold generally, but one can expect a close relation between the $h_1^\perp$ distributions of different hadrons, since both functions are generated by the same underlying mechanism.
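As a simple check of Eq.~(\ref{h1p}), note that, despite the explicit factor $1/\mathbf{k}_\perp^2$, the result is regular at small transverse momentum: expanding the logarithm for $\mathbf{k}_\perp^2\ll B_\pi$ gives \begin{equation} h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)\rightarrow \frac{A_\pi}{B_\pi^2} \qquad \textmd{for}~~\mathbf{k}_\perp^2\rightarrow 0, \end{equation} and the analogous statement holds for Eq.~(\ref{h1}) with $A_\pi$, $B_\pi$ replaced by $A_p$, $B_p$.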
We can also calculate the T-odd transversity distribution of the valence antiquark, $\bar{h}_{1\pi}^\perp$, of the pion from the same model, by replacing the antiquark spectator with the quark spectator. A similar calculation yields $\bar{h}_{1\pi}^\perp=h_{1\pi}^\perp$, showing that the magnitudes of the distributions for the two valence quarks are the same. \begin{figure*} \begin{center} \scalebox{0.65}{\includegraphics{fig3.eps}}\caption{\small Model prediction of $|\mathbf{k}_\perp h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)|/[M_\pi f_{1\pi}(x,\mathbf{k}_\perp^2)]$ as a function of $x$ and $|\mathbf{k}_\perp|$.}\label{ratio} \end{center} \end{figure*} With $f_{1\pi}(x,\mathbf{k}_\perp^2)$ and $h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)$ we estimate $|\mathbf{k}_\perp h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)|/[M_\pi f_{1\pi}(x,\mathbf{k}_\perp^2)]$, which is proportional to the cos2$\phi$ asymmetries in the unpolarized Drell-Yan process. We choose $m=0.1$~GeV for the constituent quark mass and $M_\pi=0.137$~GeV for the pion mass. To generalize our model result to QCD we replace the coupling constant, $e_1e_2/4\pi\rightarrow C_F\alpha_s$, and take $\alpha_s=0.3$, the value adopted in Ref.~\cite{bhs02a}. We plot the $x$ dependence of the ratio at $|\mathbf{k}_\perp|=0.3$~GeV in Fig.~\ref{ratio}a and the $|\mathbf{k}_\perp|$ dependence at $x=0.15$ in Fig.~\ref{ratio}b. The ratios are comparable in magnitude to $|\mathbf{k}_\perp h_{1}^\perp(x,\mathbf{k}_\perp^2)|/[M f_{1}(x,\mathbf{k}_\perp^2)]$ of the proton in the quark-scalar-diquark model (see Ref.~\cite{bhs02a}, where $|\mathbf{k}_\perp f_{1T}^\perp|/(M f_1)$ is given, since $h_1^\perp=f_{1T}^\perp$), where $M$ is the proton mass and $f_{1}(x,\mathbf{k}_\perp^2)$ is the unpolarized quark distribution of the proton. \section{Cos2$\phi$ asymmetries in unpolarized Drell-Yan process} The general form of the angular differential cross section for the unpolarized $\pi^- p$ Drell-Yan process is \begin{eqnarray} \frac{1}{\sigma}\frac{d\sigma}{d\Omega}&=&\frac{3}{4\pi}\frac{1}{\lambda+3} \left (1+\lambda\textmd{cos}^2\theta+\mu\textmd{sin}^2\theta\textmd{cos}\phi\right .\nonumber\\ &&\left .+\frac{\nu}{2}\textmd{sin}^2\theta\textmd{cos}2\phi\right ), \end{eqnarray} where $\phi$ is the angle between the lepton plane and the plane of the incident hadrons in the lepton pair center of mass frame (see Fig.~\ref{drell-yan}). The experimental data show a large value of $\nu$, close to 30\%, which cannot be explained by perturbative QCD. Many theoretical approaches have been proposed to interpret the experimental data, such as the high-twist effect~\cite{bbkd94,ehvv94} and the factorization breaking mechanism~\cite{bnm93}. In Ref.~\cite{boer99}, Boer demonstrated that unsuppressed cos$2\phi$ asymmetries can arise from a product of two chiral-odd, transverse momentum dependent $h_1^\perp$ distributions. In Ref.~\cite{bbh03} the cos2$\phi$ asymmetry in the unpolarized $p\bar{p}\rightarrow l\bar{l}X$ Drell-Yan process was estimated from the $h_1^\perp(x,\mathbf{k}_\perp^2)$ of the proton computed in the quark-scalar-diquark model. The maximum of $\nu$ in that case is of the order of 30$\%$. In this section we give a simple estimate of the cos2$\phi$ asymmetry in the unpolarized $\pi^-p$ Drell-Yan process, using the $h_{1\pi}^\perp$ computed in our model.
The leading order unpolarized Drell-Yan cross section expressed in the Collins-Soper frame~\cite{cs77} is \begin{eqnarray} &&\frac{d\sigma(h_1h_2\rightarrow l\bar{l}X)}{d\Omega dx_1dx_2d^2\mathbf{q}_\perp}= \frac{\alpha^2}{3Q^2}\sum_{a,\bar{a}}\Bigg{\{} A(y)\mathcal{F}[f_1\bar{f_1}] +B(y)\nonumber\\ &&\times\textmd{cos}2\phi\mathcal{F}\left [(2\hat{\mathbf{h}}\cdot \mathbf{p}_\perp\hat{\mathbf{h}}\cdot \mathbf{k}_\perp) -(\mathbf{p}_\perp\cdot \mathbf{k}_\perp)\frac{h_1^\perp\bar{h}_1^\perp}{M_1M_2}\right ]\Bigg{\}},\nonumber\\ \label{cs} \end{eqnarray} where $Q^2=q^2$ is the invariance mass square of the lepton pair, and the vector $\hat{\mathbf{h}}=\mathbf{q}_\perp/Q_T$. We have used the notation \begin{eqnarray} \mathcal{F}[f_1\bar{f}_1]&=&\int d^2\mathbf{p}_\perp d^2\mathbf{k}_\perp\delta^2(\mathbf{p}_\perp+\mathbf{k}_\perp-\mathbf{q}_\perp) f_1^a(x,\mathbf{p}_\perp^2)\nonumber\\ &&\times\bar{f}_1^a(\bar{x},\mathbf{k}_\perp^2). \end{eqnarray} From Eq.~\ref{cs} one can give the expression for the asymmetry coefficient $\nu$~\cite{boer99}: \begin{eqnarray} \nu&=&2\sum_{a,\hat{a}}e_a^2\mathcal{F}\left [(2\hat{\mathbf{h}}\cdot \mathbf{p}_\perp\hat{\mathbf{h}}\cdot \mathbf{k}_\perp) -(\mathbf{p}_\perp\cdot \mathbf{k}_\perp)\frac{h_1^\perp\bar{h}_1^\perp}{M_1M_2}\right ]\Bigg{/}\nonumber\\ &&\sum_{a,\bar{a}}e_a^2\mathcal{F}[f\bar{f}]\nonumber\\ &=&\frac{2}{M_1M_2}\frac{\sum_{a,\bar{a}}e_a^2F_a}{\sum_{a,\bar{a}}e_a^2G_a}. \label{nu} \end{eqnarray} \begin{figure} \begin{center} \scalebox{1}{\includegraphics{fig4.eps}}\caption{\small Angular definitions of unpolarized Drell-Yan process in the lepton pair center of mass frame.}\label{drell-yan} \end{center} \end{figure} Our model calculation has shown $\bar{h}_{1\pi}^\perp=h_{1\pi}^\perp$. Thus in $\pi^-p$ unpolarized Drell-Yan process we can assume $u$-quark dominance, which means the main contribution to asymmetry comes from $\bar{h}_{1\pi}^{\perp,\bar{u}}(\bar{x},\mathbf{k}_\perp^2)\times h_1^{\perp,u}(x,\mathbf{p}_\perp^2)$, since $\bar{u}$ in $\pi^-$ and $u$ in proton are both valence quarks. Then we have $\nu \approx 2F_u/(M_\pi MG_u)$. To evaluate $\nu$, we use our model result for $\bar{h}_{1\pi}^{\perp,\bar{u}}$ and $\bar{f}_{1\pi}$, and we adopt $h_1^{\perp,u}$ and $f_1$ from Ref.~\cite{bbh03}. Using the $\mathbf{p}_\perp$ integration to eliminate the delta function in the denominator and numerator in Eq.~(\ref{nu}) one arrives at \begin{eqnarray} F_u&=&\int_0^\infty d|\mathbf{k}_\perp|\int_0^{2\pi} d\theta_{qk}\frac{A_{\pi}A_p |\mathbf{k}_\perp|} {\mathbf{k}_\perp^2(\mathbf{k}_\perp^2+B_\pi)}\textmd{ln}\left (\frac{\mathbf{k}_\perp^2+B_\pi}{B_\pi}\right )\nonumber\\ &&\times\frac{(\mathbf{k}_\perp^2-2\mathbf{k}_\perp^2\textmd{cos}^2\theta_{qk}+|\mathbf{k}_\perp| |\mathbf{q}_\perp|\textmd{cos}\theta_{qk})}{(\mathbf{k}_\perp^2+f)(\mathbf{k}_\perp^2+f+B_p)}\nonumber\\ &&\times\textmd{ln}\left (\frac{\mathbf{k}_\perp^2+f+B_p}{B_p}\right ),\label{fu}\\ G_u&=&\int_0^\infty d|\mathbf{k}_\perp|\int_0^{2\pi} d\theta_{qk}\frac{C_{\pi}C_{p}|\mathbf{k}_\perp|(\mathbf{k}_\perp^2+D_{\pi})}{(\mathbf{k}_\perp^2+B_{\pi})^2}\nonumber\\ &&\times\frac{(\mathbf{k}_\perp^2+f+D_{p})}{(\mathbf{k}_\perp^2+f+B_{p})^2},\label{gu} \end{eqnarray} where $f=\mathbf{q}_\perp^2-2|\mathbf{q}_\perp||\mathbf{k}_\perp|\textmd{cos}\theta_{qk}$, and $\theta_{qk}$ is the angle between $\mathbf{k}_\perp$ and $\mathbf{q}_\perp$. In above equations we change the integration variables into polar coordinates. 
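For reference, the numerical evaluation of Eq.~(\ref{nu}) can be reproduced with a short script along the following lines. This is only a minimal sketch (not the code used to produce Fig.~\ref{cos2phi}): it keeps only the dominant $u$-quark term, assumes the parameter values quoted in the next paragraph, a proton mass of $0.938$~GeV and a finite upper cutoff on $|\mathbf{k}_\perp|$.
\begin{verbatim}
# Minimal sketch of nu(q_T) = 2 F_u / (M_pi M G_u), Eqs. (fu) and (gu).
# Only the ratios A/C enter; the proton mass M and the |k_perp| cutoff
# are assumptions made for this illustration.
import numpy as np
from scipy.integrate import dblquad

M_PI, M = 0.137, 0.938            # pion and (assumed) proton mass, GeV
R_PI, R_P = 0.01, 0.3             # A_pi/C_pi and A_p/C_p, GeV^2
B_PI, D_PI = 0.007, 0.014         # GeV^2 (D_pi = 2 B_pi)
B_P, D_P = 1.0 / 16, 1.0 / 4      # GeV^2 (D_p = 4 B_p)
K_MAX = 5.0                       # GeV, |k_perp| cutoff (assumption)

def nu(qT):
    def f(k, th):                 # f = q_T^2 - 2 |q_T||k_perp| cos(theta)
        return qT ** 2 - 2.0 * qT * k * np.cos(th)

    def num(th, k):               # integrand of F_u / (C_pi C_p)
        F = f(k, th)
        return (R_PI * R_P * k / (k ** 2 * (k ** 2 + B_PI))
                * np.log((k ** 2 + B_PI) / B_PI)
                * (k ** 2 - 2 * k ** 2 * np.cos(th) ** 2 + k * qT * np.cos(th))
                / ((k ** 2 + F) * (k ** 2 + F + B_P))
                * np.log((k ** 2 + F + B_P) / B_P))

    def den(th, k):               # integrand of G_u / (C_pi C_p)
        F = f(k, th)
        return (k * (k ** 2 + D_PI) / (k ** 2 + B_PI) ** 2
                * (k ** 2 + F + D_P) / (k ** 2 + F + B_P) ** 2)

    F_u, _ = dblquad(num, 1e-4, K_MAX, lambda k: 0.0, lambda k: 2.0 * np.pi)
    G_u, _ = dblquad(den, 1e-4, K_MAX, lambda k: 0.0, lambda k: 2.0 * np.pi)
    return 2.0 * F_u / (M_PI * M * G_u)

for qT in (0.5, 1.0, 2.0):        # GeV
    print(qT, nu(qT))
\end{verbatim}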
For the parameters in Eq.~(\ref{fu}) and Eq.~(\ref{gu}), we take $A_p/C_p=0.3$ (GeV$^2$) and $D_p=4B_p\approx1/4$, which are the values used in Ref.~\cite{bbh03}, and $A_\pi/C_\pi=0.01$ (GeV$^2$), $D_\pi=2B_\pi\approx0.014$, which corresponds to $\bar{x}=0.2$. We give the numerical evaluation of the asymmetry as a function of $Q_T$ in Fig.~\ref{cos2phi}, which shows an unsuppressed result of $\mathcal{O}(10\%)$. Besides this, the shape of the asymmetry is similar to that of the cos2$\phi$ asymmetry in the unpolarized $\bar{p}p$ Drell-Yan process estimated in the quark-scalar-diquark model~\cite{bbh03}. \begin{figure} \begin{center} \scalebox{0.7}{\includegraphics{fig5.eps}}\caption{\small Numerical result for the cos2$\phi$ asymmetry in the unpolarized $\pi^-p$ Drell-Yan process (solid line), using $M_\pi=0.137$~GeV, $m=0.1$~GeV, and $x=\bar{x}=0.2$. The dashed line corresponds to the same asymmetry in the $\bar{p}p$ Drell-Yan process estimated in the quark-scalar-diquark model (see Ref.~\cite{bbh03}).}\label{cos2phi} \end{center} \end{figure} \section{Conclusion} In this paper we calculate the (na\"{\i}ve) T-odd $\mathbf{k}_\perp$-dependent quark transversity distribution $h_1^\perp(x,\mathbf{k}_\perp^2)$ of the pion for the first time in a quark-spectator-antiquark model in the presence of final-state interactions, and investigate the cos2$\phi$ asymmetry in the unpolarized Drell-Yan process with our model result. The calculated $h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)$ shows the same form of transverse momentum dependence as that of $h_1^\perp(x,\mathbf{k}_\perp^2)$ for the proton computed from a quark-scalar-diquark model. The similarity of $h_1^\perp(x,\mathbf{k}_\perp^2)$ for different hadrons (for example, the nucleon and the pseudoscalar meson) implies that these functions are closely related, because the mechanism that generates them is the same. Besides this, $h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)$ is an interesting observable that can account for the large cos2$\phi$ asymmetry in the unpolarized $\pi^-N$ Drell-Yan process. The asymmetry contributed by $h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)$ is proportional to $|\mathbf{k}_\perp h_{1\pi}^\perp|/(M_\pi f_{1\pi})$, which is comparable in magnitude to $|\mathbf{k}_\perp h_{1}^\perp|/(M f_{1})$ of the proton. With a nonzero $h_{1\pi}^\perp(x,\mathbf{k}_\perp^2)$, our calculation reveals an unsuppressed cos2$\phi$ asymmetry in the unpolarized $\pi^- p$ Drell-Yan process. The shape of the asymmetry is similar to that of the cos$2\phi$ asymmetry in the $p\bar{p}$ Drell-Yan process estimated in Ref.~\cite{bbh03}, suggesting a connection between cos2$\phi$ asymmetries in Drell-Yan processes with different initial hadrons. \begin{acknowledgments} This work is partially supported by the National Natural Science Foundation of China under Grant Numbers 10025523 and 90103007. \end{acknowledgments}
\section{CONCLUSIONS AND FUTURE WORK} \label{sec:conclusions} In this paper, a novel drone detection approach using depth maps has been successfully validated for obstacle avoidance. A rich dataset of 6k synthetic depth maps using 3 different drone models has been generated using AirSim and released to the public, in order to enable further exploration of this technique. A subset of these depth maps, generated only using a model resembling an AR Drone, were used to train YOLOv2, a deep learning model for real-time object detection. Experiments in a real scenario show that the model achieves high precision and recall not only when detecting using depth maps from a real Parrot AR Drone, but also from a DJI Matrice 100. In other words, the model generalizes well for different types of drones. An advantage of depth sensing versus other detection methods is that a depth map is able to provide 3D relative localization of the detected objects. This is particularly useful for collision avoidance onboard drones, as the localization of the drone can be useful for effective collision-free path planning. The quality of this localization method has been assessed through a series of depth measurements with a Parrot AR Drone hovering at different positions while it was being detected. A record max depth range of 9510 mm was achieved, with an average error of 254 mm. To the best of our knowledge, this is the first time that depth maps are proposed for drone detection and subsequent localization. As for future work, our immediate objective is to test the system onboard a flying drone. A C++ implementation of the model and inference will be explored in order to increase the execution speed. Additionally, a multi-object tracking approach using joint probabilistic data association will be implemented to provide continuous, real-time detection. \section{DATASET} \label{sec:dataset} In order to train a deep neural network for successful drone detection and evaluate it, we create a synthetic dataset of depth and segmentation maps for several sample drone platforms\footnote{The dataset can be found at: \url{https://vision4uav.com/Datasets}}. We utilize the UAV simulator Microsoft AirSim to construct simulated environments inside which drones are instantiated. Microsoft AirSim \cite{airsim2017fsr} is a recently released simulator for unmanned aerial vehicles which is built upon Unreal Engine: a popular videogame engine that provides capabilities such as high fidelity and high resolution textures, realistic post-processing, soft shadows, photometric lighting etc. These features make the combination of AirSim and Unreal Engine a particularly good choice for modeling cameras onboard drones and obtain the resultant images. Over the base functionality of Unreal Engine, AirSim provides flight models for drones as well as basic flight control features. \begin{figure} \centering \subfigure[]{\includegraphics[width=3cm,height=2cm]{quad1.png}} \subfigure[]{\includegraphics[width=3cm,height=2cm]{quad2.png}} \subfigure[]{\includegraphics[width=3cm,height=2cm]{hex.png}} \caption{Figures of the three drone models that are part of the training dataset. (a) Quadrotor, resembling a Parrot AR Drone. (b) Quadrotor, resembling a 3DR Solo. (c) Hexrotor, resembling a DJI S800.} \label{fig:drone_models} \end{figure} To create our synthetic dataset, we enhance the current functionality of AirSim by adding multiple models of drones. AirSim provides a base model for a quadrotor which resembles a Parrot AR Drone. 
For a more diverse representation of the appearance of drones, we create two additional models: a hexrotor platform resembling the DJI S800 and another quadrotor platform resembling a 3DR Solo. In Fig.~\ref{fig:drone_models}, we show images of the three models used in AirSim that are included in the released dataset. AirSim contains an internal camera model, which we replicate to create stereo camera functionality for the drones. Through this functionality, we have generated more than 6k images of the three aforementioned types of drones, in various types of environments. For this purpose, we build and use custom environments within Unreal Engine. In particular, our dataset includes three different environments: an indoor office space environment, an outdoor environment with trees, buildings, etc., and a simple environment containing only a table with two chairs. In all the scenes, one of the drones is considered to be a `host', from which depth maps are obtained, and the other drone(s) that are visible in the depth maps are considered to be `target' drones, which are being observed. Figure \ref{fig:environments} shows pictures of the indoor and outdoor environments used. \begin{figure} \centering \subfigure[]{\includegraphics[width=4.2cm,height=2.5cm]{outdoorResized.png}} \subfigure[]{\includegraphics[width=4.2cm,height=2.5cm]{indoorResized.png}} \caption{Environments created within Unreal Engine simulate an urban outdoor environment (left) and an indoor environment (right), within which we instantiate multiple drones and obtain depth maps for training images.} \label{fig:environments} \end{figure} \begin{figure*} \centering \subfigure[]{\includegraphics[width=54mm]{rgb_sample.png}} \subfigure[]{\includegraphics[width=54mm]{depth_sample.png}} \subfigure[]{\includegraphics[width=54mm]{seg_sample.png}} \caption{Sample images from the dataset. In (a), the RGB image from the `host' drone's perspective is shown for reference, where it views a `target' drone, a hexrotor. The corresponding depth map is shown in (b), and (c) shows the segmentation image that isolates only the target drone.} \label{fig:dataset_images} \end{figure*} In our dataset, we include at least two types of images. First, we render and record the disparity image obtained from the drone's viewpoint as per the preset baseline and resolution. Secondly, we include a segmentation image in which the drone(s) being observed are isolated. As Unreal Engine has the ability to keep track of all object materials in a scene, we identify only the materials that make up the drone and isolate them to create the corresponding segmentation image. The location of the drone in the segmentation image is used later in order to create bounding boxes for the target drone, which are subsequently used for training the deep neural network. For the indoor and outdoor environments, we also include the RGB images. We record these images of the target drone from various distances, viewpoints and angles in three dimensions, attempting to simulate the observation of a drone hovering as well as in motion. In Fig.~\ref{fig:dataset_images}, we show sample depth and segmentation images generated from AirSim for the hexrotor model in an outdoor environment, along with the corresponding RGB image for reference. \section{DETECTION AND LOCALIZATION METHOD} \label{sec:detection_method} The proposed method for drone detection relies only on depth maps.
Given a depth map, a trained deep neural network is first used to predict the bounding boxes containing a drone and a confidence value for each bounding box. In order to localize the drone with respect to the camera, as the next step, a 2D point in each bounding box is chosen as actually belonging to the drone. The chosen point is then reprojected to 3D to obtain the relative position of the drone. \subsection{Deep Neural Network} YOLOv2 \cite{redmon2016yolo9000} is currently one of the fastest object detection algorithms, having obtained one of the highest performances in both speed and precision reported for the VOC 2007 challenge (see Fig.~\ref{fig:voc}). It is also a very versatile model, as the input image size can be modified even after the model has been trained, allowing for an easy trade-off between speed and precision. \begin{figure} \includegraphics[width=80mm]{YoLov2.png} \caption{Accuracy and speed of different object detection models on VOC 2007. The blue dots correspond to arbitrary input image sizes at which the YOLOv2 model can operate, even after it has been trained with a different input image size. In this way, the model provides a customizable trade-off between speed and accuracy.} \label{fig:voc} \end{figure} In YOLOv2, a single convolutional neural network predicts bounding boxes and class probabilities directly from full images in a single forward pass. This model has also been proposed for drone detection using RGB images \cite{8078539}. \subsection{2D position of the drone in the depth image} The bounding boxes predicted by the model do not always accurately indicate the actual position of the drone in the depth image. In the case of stereo vision, this happens mainly due to noise or errors in the stereo matching process. We propose the following method as a means to handle these potential inaccuracies. Let $P=\{P_1,P_2,\ldots,P_n\}$ be the set of 2D points within the bounding box and $Z =\{Z_1,Z_2,\ldots,Z_n\}$ the set of their associated depths. We wish to choose a point $P_i \in P$ in the depth map which best indicates the position of the drone. We do this by choosing $P_i$ such that $i = \arg\min_j |Z_j-Z_{ref}|$. Let $Q_1$ be the first quartile of $Z$. Three different methods for choosing $Z_{ref}$ are proposed: \begin{itemize} \item Method 1 simply consists of choosing the 2D point with the minimum depth within the bounding box, or equivalently: \begin{equation} Z_{ref} = \min_i Z_i \end{equation} \item Method 2 picks the 2D point with the closest depth to the mean of the 25\% smallest depths within the bounding box: \begin{equation} Z_{ref} = \mathrm{mean}\{Z_i \in Z : Z_i<Q_1\} \end{equation} \item Method 3 picks the 2D point with the closest depth to the median of the 25\% smallest depths within the bounding box: \begin{equation} Z_{ref} = \mathrm{median}\{Z_i \in Z : Z_i<Q_1\} \end{equation} \end{itemize} In these methods, points that are further away are discarded, as the object to be detected should be closer to the camera than the background. Method 1 is the simplest, but also the most sensitive to spurious depth measurements as it relies on a single measurement. Methods 2 and 3 are intended to be robustified versions of Method 1. \subsection{3D localization} In the case of a stereo camera, it is possible to estimate the 3D coordinates corresponding to the previously designated point $P_i(u,v)$ with disparity $d$ using Eq.~\ref{eq:reprojection}.
\begin{align} \begin{bmatrix} X \\ Y \\Z \end{bmatrix} &= \frac{T}{c_x^l-c_x^r-d} \begin{bmatrix} u-c_x^l \\ v-c_y^l \\ f^l \end{bmatrix} \label{eq:reprojection} \end{align} where $C^l(c_x^l,c_y^l)$ and $f^l$ are the principal point and focal length of the left camera, $C^r(c_x^r,c_y^r)$ is the principal point of the right camera, and $T$ is the stereo baseline. \section{EXPERIMENTAL SETUP} \label{sec:implementation} \subsection{Hardware} Once the deep neural network was trained with images from the synthetic depth map dataset, our experiments were aimed at using this model to detect real drones, assuming deployment onboard a mini UAV. Hardware for the experiments was selected trying to minimize size, weight and power demands, considering the limitations of applications onboard small drones. The StereoLabs ZED stereo camera \cite{ZED} was selected as our imaging sensor due to its excellent features: high FOV (110$^\circ$ diagonal), small size and weight (175 $\times$ 30 $\times$ 33 mm, 159 g) and acquisition speed (16 fps with HD1080 images). HD1080 video mode was selected in order to improve the detection of smaller/more distant objects. An NVIDIA Jetson TX2 module (85 grams) was used for the image acquisition and processing. \subsection{Model and Inference} For compatibility with the ZED stereo camera API, Darkflow \cite{darkflow}, a Python implementation of YOLO based on TensorFlow, was used. By doing this, images can be acquired with the ZED camera and passed directly to the model for inference. A smaller version of the YOLOv2 model called Tiny YOLOv2 was chosen to obtain faster performance. This model was reported to have 57.1\% mean average precision (mAP) on the VOC 2007 dataset running at 207 fps on an NVIDIA Titan X GPU. The model runs at 20 fps on a Jetson TX2. In our implementation, we modify the model configuration to perform single object detection and increase the input image size from its original value of 416$\times$416 to 672$\times$672, in order to improve the detection of smaller or more distant objects. Input depth maps were encoded as 8-bit, 3-channel images. For this, we reduce the bit depth of the single-channel depth maps provided by the camera from 32 bits to 8 bits and store the same information in each of the three channels. This was done for simplicity of the implementation, as the objective was to explore the feasibility of drone detection with depth maps. \section{INTRODUCTION} Unmanned aerial vehicles (UAVs), or drones, are a popular choice for robotic applications given their advantages such as small size, agility and ability to navigate through remote or cluttered environments. Drones are currently widely used for surveying and mapping, with many more applications, such as reconnaissance and disaster management, being researched; therefore, the ability of a system to detect drones has multiple applications. Such technologies can be deployed in security systems to prevent drone attacks on critical infrastructures (e.g., government buildings, nuclear plants) or to provide enhanced security in large-scale venues, such as stadiums. At the same time, this technology can be used on-board drones themselves to avoid drone-to-drone collisions. As an exteroceptive sensing mechanism, electro-optical sensors provide a small, passive, low-cost and low-weight solution for drone detection and are therefore suitable for this specific application. Additionally, drone detection typically requires large detection ranges and wide FOVs, as these provide more time for effective reaction.
In the literature, drone detection using image sensors has been proposed mainly in the visible spectrum \cite{7299040, 5938030, wu2017vision}. Thermal infrared imaging has also been proposed for drone detection \cite{ANDRASI2017183}. Thermal images typically have lower resolutions than those in visible spectrum cameras, but they have the advantage that they can operate at night. Several other sensing technologies have been applied for drone detection (radar \cite{7497351} and other RF-based sensors \cite{Nguyen:2016:ICR:2935620.2935632}, acoustic sensors \cite{7382945} and LIDAR \cite{7778108}). Hybrid approaches have as well been proposed \cite{christnacher2016optical}. However, some of these technologies have limitations for being integrated on-board small drones, mainly their high power consumption, weight and size requirements and cost. Image-based detection systems typically rely either on background subtraction methods \cite{ganti2016implementation}, or on the extraction of visual features, either manually, using morphological operations to extract background contrast features \cite{lai2011airborne} or automatically using deep learning methods \cite{8078541, 8078539}. Rozantsev et al. \cite{7299040} present a comparison between the performance of various of these methods. The aforementioned detection techniques rely on the assumption that there is enough contrast between the drone and the background. Depth maps, which can be obtained from different sensors (stereo cameras, \mbox{RGB-D} sensors or LIDAR), do not have these requirements. 3D point clouds have been recently proposed for obstacle avoidance onboard drones using an RGB-D camera \cite{7989677}, but focusing on the detection of static obstacles. An alternative representation for point clouds are depth maps, which have been proposed for general object detection \cite{song2014sliding} and human detection \cite{xia2011human}, providing better detection performance as compared to RGB images. In the context of drone detection, a key concept that explains the usefulness of depth maps is that any flying object in a depth map appears with depth contrast with respect to the background. This happens as there are typically no objects with consistently the same depth around it. In other words, a flying object should generate a discontinuity in the depth map, which can be used as a distinct visual feature for drone detection. This concept is depicted in Fig. \ref{fig:sample_images}. An additional advantage of detecting using depth maps is that, while data from other sensing technologies can generally provide relative altitude and azimuth of the object only, depth maps can provide full 3D relative localization of the objects. This is particularly useful in the case of obstacle avoidance for drones, since the 3D position of the drone can be exploited to perform effective collision-free path planning. \begin{figure*} \centering \subfigure[]{\includegraphics[width=81mm]{ImageRGB_259_edited.png}} \subfigure[]{\includegraphics[width=81mm]{Image_259.png}} \caption{The above images, RGB (left) and depth map (right), captured simultaneously, intuitively illustrate how the concept of depth contrast, as opposed to visual contrast, can be a better choice for drone detection. Based on this concept, we propose a novel, alternative method for drone detection only using depth maps.} \label{fig:sample_images} \end{figure*} In this paper, we present a dataset of synthetic, annotated depth maps for drone detection. 
Furthermore, we propose a novel method for drone detection using deep neural networks, which relies only on depth maps and provides 3D localization of the detected drone. To the best of the authors' knowledge, this is the first time that depth maps are used for drone detection. The proposed detection method has been evaluated in a series of real experiments in which different types of drones fly towards a stereo camera. The reason for choosing a stereo camera as the depth sensor for this work is the trade-off it provides in terms of detection range, FOV, weight and size. The remainder of this paper is organized as follows. Firstly, in Section \ref{sec:detection_method}, we present our drone detection method. Secondly, in Section \ref{sec:dataset}, details about the synthetic drone depth map dataset are presented. Thirdly, in Section \ref{sec:implementation}, we describe the implementation details. In Section \ref{sec:results}, we present the results of the proposed method and finally, in Section \ref{sec:conclusions}, we present the conclusions and future work. \section{RESULTS} \label{sec:results} \subsection{Training results} From the dataset presented in Section \ref{sec:dataset}, a subset of 3263 images containing depth maps corresponding only to the Parrot AR Drone model was extracted. This was done in an effort to evaluate the generalization capability of the model, as it would later be evaluated on depth maps containing different drones. The Tiny YOLOv2 model was trained on these images using a desktop computer equipped with an NVIDIA GTX 1080Ti. 80\% of the images were used for training and 20\% for validation. After 420k iterations (about 4--5 days) the model achieved a validation IOU of 86.41\% and a recall rate of 99.85\%. \subsection{Precision and recall} In order to obtain live measurements of the precision and recall of the model in real flights, a Parrot AR Drone and a DJI Matrice 100 were flown in an indoor space. The drones were manually flown at up to 2 m/s towards the camera, which was kept static. The live video stream obtained from the ZED camera was processed using a Jetson TX2 development board. The average processing time measured was about 200 ms per frame. For a drone flying at 2 m/s, this is equivalent to a detection every 0.4 m, which should be acceptable for collision avoidance as long as the detector can also provide a large enough detection range. The low frame rate is caused by the GPU being simultaneously used by the ZED camera for stereo matching and by Darkflow for inference of the detection model. An optimized version of the detection software is currently under development. We use precision and recall as evaluation metrics for the detector. Precision here indicates the number of frames with correct drone detections with respect to the number of frames for which the model predicted a drone, while recall here indicates the number of frames with correct detections with respect to the number of frames containing drones. The detection results using a threshold of 0.7 for the detection confidence are shown in Table \ref{tab:precision_recall}. The model successfully generalizes from AR Drone depth maps, on which it was trained, to depth maps generated by other types of drones. While processing a live stream of depth maps, it achieves an average precision of 98.7\% and an average recall of 74.7\%\footnote{A video showing some detection results can be found at the following link: \url{https://vimeo.com/259441646}}.
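In terms of frame counts, and writing $TP$, $FP$ and $FN$ for the number of frames with a correct detection, with a false detection and with a missed drone, respectively (notation introduced here only to make the above definitions explicit), the two metrics correspond to
\[
\mathrm{Precision} = \frac{TP}{TP+FP}, \qquad \mathrm{Recall} = \frac{TP}{TP+FN}.
\]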
\begin{table}[] \centering \caption{Precision and recall in online detection} \label{tab:precision_recall} \begin{tabular}{ c | c | c | c | c } \begin{tabular}{c} Video \\ sequence \end{tabular} & \begin{tabular}{c} No. of \\ frames \end{tabular} & \begin{tabular}{c} Drone \\ model \end{tabular} & \begin{tabular}{c} Precision \\ (\%) \end{tabular} & \begin{tabular}{c} Recall \\ (\%) \end{tabular} \\ \hline 1 & 77 & AR Drone & 96.6 & 74.0 \\ 2 & 48 & AR Drone & 95.3 & 85.4 \\ 3 & 39 & AR Drone & 100.0 & 66.6 \\ 4 & 33 & AR Drone & 100.0 & 66.6 \\ 5 & 27 & AR Drone & 100.0 & 77.7 \\ 6 & 64 & DJI Matrice & 100.0 & 67.1 \\ 7 & 35 & DJI Matrice & 100.0 & 77.1 \\ \hline \multicolumn{3}{r}{Averaged precision and recall} & 98.7 & 74.7 \\ \hline \end{tabular} \end{table} \subsection{Depth range} For assessing the depth range and its reliability, frames were acquired with the camera in a static position while a Parrot AR Drone hovered at different distances, ranging from 1.5 to almost 10 m. For each of those hovering positions, 10 image detections were registered and the depth of the drone was measured using a laser ranger with $\pm$3 mm accuracy, which was recorded as the ground truth. The averaged depth error for those 10 detections was computed using each of the 3 methods proposed in Section \ref{sec:detection_method}. While the proposed method has been proven valid to detect the drone while flying at up to 2 m/s, here it was put in a hovering position only to enable accurate depth assessment and never to increase its observability. The results are shown in Table \ref{tab:depth}. \begin{table}[] \centering \caption{Comparison of depth estimation methods} \label{tab:depth} \begin{tabular}{ c | c | c | c } & \multicolumn{3}{c}{Averaged depth RMS error (mm)} \\ \begin{tabular}{c} Hovering \\ distance (mm) \end{tabular} & Method 1 & Method 2 & Method 3 \\ \hline 1555 &56 & 101 &89 \\ 2303 & 235 & 315 & 171\\ 2750 & 149 & 213 &184\\ 3265 & 30 & 1436 & 1356\\ 4513 & 151 & 118 &126\\ 5022 & 401 & 230 & 239 \\ 5990 & 69 & 823 & 616\\ 7618 & 292 & 147 & 139\\ 8113 & 108 & 760 & 610\\ 9510 & 254 & 937& 1042\\ \hline Average per method & 175 & 508 & 457\\ \end{tabular} \end{table} The best method is the one that assigns to the drone the 2D point with the minimum depth in the bounding box (i.e. Method 1). It appears to be robust enough for the application, with a maximum error of 401 mm. The failure of other methods can be explained by the fact that in many cases, the points belonging to the drone are less than 25\% of the points with depth in the bounding box. Accurate drone detections at a distance of up to 9510 mm have been achieved using this method. At this record distance, depth measurements using the minimum distance method had a minimum error of 143 mm and a maximum error of 388 mm. This depth range greatly exceeds the one recently reported for collision avoidance onboard small drones using point clouds in \cite{7989677}. In their work, a max indoor range of 4000 mm was obtained using the Intel\textregistered RealSense\textsuperscript{TM} R200 RGB-D sensor. \section*{ACKNOWLEDGMENT} The authors would like to thank Hriday Bavle and \'Angel Luis Mart\'inez for their help with the hardware used in the experiments. \bibliographystyle{IEEEtran}
\section{GENERAL FORMATTING INSTRUCTIONS} Papers are in 2 columns with the overall line width of 6.75~inches (41~picas). Each column is 3.25~inches wide (19.5~picas). The space between the columns is .25~inches wide (1.5~picas). The left margin is 1~inch (6~picas). Use 10~point type with a vertical spacing of 11~points. Times Roman is the preferred typeface throughout. Paper title is 16~point, caps/lc, bold, centered between 2~horizontal rules. Top rule is 4~points thick and bottom rule is 1~point thick. Allow 1/4~inch space above and below title to rules. Author descriptions are center-justified, initial caps. The lead author is to be listed first (left-most), and the Co-authors are set to follow. If up to three authors, use a single row of author descriptions, each one center-justified, and all set side by side; with more authors or unusually long names or institutions, use more rows. (But, do not include author names in the initial double-blind submission! Instead leave a row of ``Anonymous Author'' descriptions as above.) One-half line space between paragraphs, with no indent. \section{FIRST LEVEL HEADINGS} First level headings are all caps, flush left, bold, and in point size 12. One line space before the first level heading and 1/2~line space after the first level heading. \subsection{Second Level Heading} Second level headings are initial caps, flush left, bold, and in point size 10. One line space before the second level heading and 1/2~line space after the second level heading. \subsubsection{Third Level Heading} Third level headings are flush left, initial caps, bold, and in point size 10. One line space before the third level heading and 1/2~line space after the third level heading. \paragraph{Fourth Level Heading} Fourth level headings must be flush left, initial caps, bold, and Roman type. One line space before the fourth level heading, and place the section text immediately after the heading with, no line break, but an 11 point horizontal space. \subsection{CITATIONS, FIGURES, REFERENCES} \subsubsection{Citations in Text} Citations within the text should include the author's last name and year, e.g., (Cheesman 1985). References should follow any style that you are used to using, as long as their style is consistent throughout the paper. Be sure that the sentence reads correctly if the citation is deleted: e.g., instead of ``As described by (Cheesman, 1985), we first frobulate the widgets,'' write ``As described by Cheesman (1985), we first frobulate the widgets.'' Be sure to avoid accidentally disclosing author identities through citations. \subsubsection{Footnotes} Indicate footnotes with a number\footnote{Sample of the first footnote.} in the text. Use 8 point type for footnotes. Place the footnotes at the bottom of the column in which their markers appear, continuing to the next column if required. Precede the footnote section of a column with a 0.5 point horizontal rule 1~inch (6~picas) long.\footnote{Sample of the second footnote.} \subsubsection{Figures} All artwork must be centered, neat, clean, and legible. All lines should be very dark for purposes of reproduction, and art work should not be hand-drawn. Figures may appear at the top of a column, at the top of a page spanning multiple columns, inline within a column, or with text wrapped around them, but the figure number and caption always appear immediately below the figure. Leave 2 line spaces between the figure and the caption. The figure caption is initial caps and each figure should be numbered consecutively. 
Make sure that the figure caption does not get separated from the figure. Leave extra white space at the bottom of the page rather than splitting the figure and figure caption. \begin{figure}[h] \vspace{.3in} \centerline{\fbox{This figure intentionally left non-blank}} \vspace{.3in} \caption{Sample Figure Caption} \end{figure} \subsubsection{Tables} All tables must be centered, neat, clean, and legible. Do not use hand-drawn tables. Table number and title always appear above the table. See Table~\ref{sample-table}. One line space before the table title, one line space after the table title, and one line space after the table. The table title must be initial caps and each table numbered consecutively. \begin{table}[h] \caption{Sample Table Title} \label{sample-table} \begin{center} \begin{tabular}{ll} {\bf PART} &{\bf DESCRIPTION} \\ \hline \\ Dendrite &Input terminal \\ Axon &Output terminal \\ Soma &Cell body (contains cell nucleus) \\ \end{tabular} \end{center} \end{table} \newpage \section{INSTRUCTIONS FOR CAMERA-READY PAPERS} For the camera-ready paper, if you are using \LaTeX, please make sure that you follow these instructions. (If you are not using \LaTeX, please make sure to achieve the same effect using your chosen typesetting package.) \begin{enumerate} \item Install the package \texttt{fancyhdr.sty}. The \texttt{aistats2e.sty} file will make use of it. \item Begin your document with \begin{flushleft} \texttt{\textbackslash documentclass[twoside]\{article\}}\\ \texttt{\textbackslash usepackage[accepted]\{aistats2e\}} \end{flushleft} The \texttt{twoside} option for the class article allows the package \texttt{fancyhdr.sty} to include headings for even and odd numbered pages. The option \texttt{accepted} for the package \texttt{aistats.sty} will write a copyright notice at the end of the first column of the first page. This option will also print headings for the paper. For the \emph{even} pages, the title of the paper will be used as heading and for \emph{odd} pages the author names will be used as heading. If the title of the paper is too long or the number of authors is too large, the style will print a warning message as heading. If this happens additional commands can be used to place as headings shorter versions of the title and the author names. This is explained in the next point. \item If you get warning messages as described above, then immediately after $\texttt{\textbackslash begin\{document\}}$, write \begin{flushleft} \texttt{\textbackslash runningtitle\{Provide here an alternative shorter version of the title of your paper\}}\\ \texttt{\textbackslash runningauthor\{Provide here the surnames of the authors of your paper, all separated by commas\}} \end{flushleft} The text that appears as argument in \texttt{\textbackslash runningtitle} will be printed as a heading in the \emph{even} pages. The text that appears as argument in \texttt{\textbackslash runningauthor} will be printed as a heading in the \emph{odd} pages. If even the author surnames do not fit, it is acceptable to give a subset of author names followed by ``et al.'' \item Use the file sample2e.tex as an example. \item Both submitted and camera-ready versions of the paper are 8 pages, plus any additional pages needed for references. 
If you need to include additional appendices during submission, you can either include them in the supplementary material file, or include them as additional pages after the end of the references in the submitted file, so long as they are clearly marked ``APPENDIX---SUPPLEMENTARY MATERIAL''. \item Please, don't change the layout given by the above instructions and by the style file. \end{enumerate} \subsubsection*{Acknowledgements} Use unnumbered third level headings for the acknowledgements. All acknowledgements go at the end of the paper. Be sure to omit any identifying information in the initial double-blind submission! \subsubsection*{References} References follow the acknowledgements. Use an unnumbered third level heading for the references section. Any choice of citation style is acceptable as long as you are consistent. Please use the same font size for references as for the body of the paper---remember that references do not count against your page length total. J.~Alspector, B.~Gupta, and R.~B.~Allen (1989). Performance of a stochastic learning microchip. In D. S. Touretzky (ed.), {\it Advances in Neural Information Processing Systems 1}, 748-760. San Mateo, Calif.: Morgan Kaufmann. F.~Rosenblatt (1962). {\it Principles of Neurodynamics.} Washington, D.C.: Spartan Books. G.~Tesauro (1989). Neurogammon wins computer Olympiad. {\it Neural Computation} {\bf 1}(3):321-323. \end{document} \section{Introduction} Kernel-based nonparametric models, such as support vector machines and Gaussian processes (\gp{}s), have been one of the dominant paradigms for supervised machine learning over the last 20 years. These methods depend on defining a kernel function, $\kernel(\inputVar,\inputVar')$, which specifies how similar or correlated outputs $\outputVar$ and $\outputVar'$ are expected to be at two inputs $\inputVar$ and $\inputVar'$. By defining the measure of similarity between inputs, the kernel determines the pattern of inductive generalization. Most existing techniques pose kernel learning as a (possibly high-dimensional) parameter estimation problem. Examples include learning hyperparameters \cite{rasmussen38gaussian}, linear combinations of fixed kernels \cite{Bach_HKL}, and mappings from the input space to an embedding space \cite{salakhutdinov2008using}. However, to apply existing kernel learning algorithms, the user must specify the parametric form of the kernel, and this can require considerable expertise, as well as trial and error. To make kernel learning more generally applicable, we reframe the kernel learning problem as one of structure discovery, and automate the choice of kernel form. In particular, we formulate a space of kernel structures defined compositionally in terms of sums and products of a small number of base kernel structures. This provides an expressive modeling language which concisely captures many widely used techniques for constructing kernels. We focus on Gaussian process regression, where the kernel specifies a covariance function, because the Bayesian framework is a convenient way to formalize structure discovery. Borrowing discrete search techniques which have proved successful in equation discovery \cite{todorovski1997declarative} and unsupervised learning \cite{grosse2012exploiting}, we automatically search over this space of kernel structures using marginal likelihood as the search criterion. 
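In outline, the search can be pictured as a greedy loop over kernel expressions. The following schematic sketch is illustrative pseudocode only, not the implementation used in this paper; application of the operators to subexpressions and the hyperparameter optimization behind the scoring are elided.

\begin{verbatim}
# Schematic greedy search over composite kernel expressions.
# `score` stands for an approximation to the GP marginal likelihood
# of the best-fitting member of a kernel family (elided here).
BASE = ["SE", "Per", "Lin", "RQ"]

def expand(expr):
    """Candidates obtained by adding or multiplying in a base kernel."""
    return ([f"({expr}) + {b}" for b in BASE] +
            [f"({expr}) * {b}" for b in BASE])

def greedy_search(score, max_depth=10):
    best = max(BASE, key=score)
    for _ in range(max_depth):
        challenger = max(expand(best), key=score)
        if score(challenger) <= score(best):
            break
        best = challenger
    return best
\end{verbatim}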
We found that our structure discovery algorithm is able to automatically recover known structures from synthetic data as well as plausible structures for a variety of real-world datasets. On a variety of time series datasets, the learned kernels yield decompositions of the unknown function into interpretable components that enable accurate extrapolation beyond the range of the observations. Furthermore, the automatically discovered kernels outperform a variety of widely used kernel classes and kernel combination methods on supervised prediction tasks. While we focus on Gaussian process regression, we believe our kernel search method can be extended to other supervised learning frameworks such as classification or ordinal regression, or to other kinds of kernel architectures such as kernel SVMs. We hope that the algorithm developed in this paper will help replace the current and often opaque art of kernel engineering with a more transparent science of automated kernel construction. \section{Expressing structure through kernels} \label{sec:Structure} Gaussian process models use a kernel to define the covariance between any two function values: ${\textrm{Cov}(\outputVar, \outputVar') = \kernel(\inputVar,\inputVar')}$. The kernel specifies which structures are likely under the \gp{} prior, which in turn determines the generalization properties of the model. In this section, we review the ways in which kernel families\footnotemark can be composed to express diverse priors over functions. \footnotetext{When unclear from context, we use `kernel family' to refer to the parametric forms of the functions given in the appendix. A kernel is a kernel family with all of the parameters specified.} There has been significant work on constructing \gp{} kernels and analyzing their properties, summarized in Chapter 4 of \cite{rasmussen38gaussian}. Commonly used kernel families include the squared exponential (\kSE), periodic (\kPer), linear (\kLin), and rational quadratic (\kRQ) (see Figure~\ref{fig:basic_kernels} and the appendix). \input{tables/simple_kernels_table_v3.tex} \paragraph{Composing Kernels} Positive semidefinite kernels (\ie those which define valid covariance functions) are closed under addition and multiplication. This allows one to create richly structured and interpretable kernels from well-understood base components. All of the base kernels we use are one-dimensional; kernels over multidimensional inputs are constructed by adding and multiplying kernels over individual dimensions. These dimensions are represented using subscripts, e.g. $\SE_2$ represents an \kSE{} kernel over the second dimension of $\inputVar$. \input{tables/example_structures_table_v4.tex} \paragraph{Summation} By summing kernels, we can model the data as a superposition of independent functions, possibly representing different structures. Suppose functions ${\function_1, \function_2}$ are drawn from independent \gp{} priors, ${\function_1 \dist \GP(\mu_1, \kernel_1)}$, ${\function_2 \dist \GP(\mu_2, \kernel_2)}$. Then ${\function := \function_1 + \function_2 \dist \GP(\mu_1 + \mu_2, \kernel_1 + \kernel_2)}$. In time series models, sums of kernels can express superposition of different processes, possibly operating at different scales. In multiple dimensions, summing kernels gives additive structure over different dimensions, similar to generalized additive models~\citep{hastie1990generalized}. These two kinds of structure are demonstrated in rows 2 and 4 of figure~\ref{fig:kernels}, respectively.
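The closure of positive semidefinite kernels under addition and multiplication can be checked numerically with a small sketch (plain NumPy, not the authors' code; the SE and periodic forms below are the standard parametrizations and the hyperparameters are arbitrary):

\begin{verbatim}
import numpy as np

# One-dimensional base kernels (standard forms, arbitrary hyperparameters).
def k_se(x1, x2, ell=1.0, sf=1.0):
    return sf**2 * np.exp(-0.5 * (x1 - x2)**2 / ell**2)

def k_per(x1, x2, ell=1.0, p=1.0, sf=1.0):
    return sf**2 * np.exp(-2.0 * np.sin(np.pi * abs(x1 - x2) / p)**2 / ell**2)

def gram(k, xs):
    return np.array([[k(a, b) for b in xs] for a in xs])

xs = np.linspace(0.0, 3.0, 25)
K_sum = gram(k_se, xs) + gram(k_per, xs)    # SE + Per
K_prod = gram(k_se, xs) * gram(k_per, xs)   # SE x Per (elementwise product)

# Both composite Gram matrices stay positive semidefinite (up to round-off),
# so the sum and the product are again valid covariance functions.
for K in (K_sum, K_prod):
    assert np.linalg.eigvalsh(K).min() > -1e-9
\end{verbatim}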
\paragraph{Multiplication} Multiplying kernels allows us to account for interactions between different input dimensions or different notions of similarity. For instance, in multidimensional data, the multiplicative kernel $\SE_1 \times \SE_3$ represents a smoothly varying function of dimensions 1 and 3 which is not constrained to be additive. In univariate data, multiplying a kernel by \kSE{} gives a way of converting global structure to local structure. For example, $\Per$ corresponds to globally periodic structure, whereas $\Per \times \SE$ corresponds to locally periodic structure, as shown in row 1 of figure~\ref{fig:kernels}. Many architectures for learning complex functions, such as convolutional networks \cite{lecun1989backpropagation} and sum-product networks \cite{poon2011sum}, include units which compute AND-like and OR-like operations. Composite kernels can be viewed in this way too. A sum of kernels can be understood as an OR-like operation: two points are considered similar if either kernel has a high value. Similarly, multiplying kernels is an AND-like operation, since two points are considered similar only if both kernels have high values. Since we are applying these operations to the similarity functions rather than the regression functions themselves, compositions of even a few base kernels are able to capture complex relationships in data which do not have a simple parametric form. \paragraph{Example expressions} In addition to the examples given in Figure~\ref{fig:kernels}, many common motifs of supervised learning can be captured using sums and products of one-dimensional base kernels: \begin{tabular}{l|l} Bayesian linear regression & $\Lin$ \\ Bayesian polynomial regression & $\Lin \times \Lin \times \ldots$\\ Generalized Fourier decomposition & $\Per + \Per + \ldots$ \\ Generalized additive models & $\sum_{d=1}^D \SE_d$ \\ Automatic relevance determination & $\prod_{d=1}^D \SE_d$ \\ Linear trend with local deviations & $\Lin + \SE$ \\ Linearly growing amplitude & $\Lin \times \SE$ \end{tabular} We use the term `generalized Fourier decomposition' to express that the periodic functions expressible by a \gp{} with a periodic kernel are not limited to sinusoids. \section{Searching over structures} \label{sec:Search} As discussed above, we can construct a wide variety of kernel structures compositionally by adding and multiplying a small number of base kernels. In particular, we consider the four base kernel families discussed in Section \ref{sec:Structure}: \kSE, \kPer, \kLin, and \kRQ. Any algebraic expression combining these kernels using the operations $+$ and $\times$ defines a kernel family, whose parameters are the concatenation of the parameters for the base kernel families. Our search procedure begins by proposing all base kernel families applied to all input dimensions. We allow the following search operators over our set of expressions: \begin{itemize} \item[(1)] Any subexpression $\subexpr$ can be replaced with $\subexpr + \baseker$, where $\baseker$ is any base kernel family. \item[(2)] Any subexpression $\subexpr$ can be replaced with $\subexpr \times \baseker$, where $\baseker$ is any base kernel family. \item[(3)] Any base kernel $\baseker$ may be replaced with any other base kernel family $\baseker^\prime$. \end{itemize} These operators can generate all possible algebraic expressions. 
To see this, observe that if we restricted the $+$ and $\times$ rules only to apply to base kernel families, we would obtain a context-free grammar (CFG) which generates the set of algebraic expressions. However, the more general versions of these rules allow more flexibility in the search procedure, which is useful because the CFG derivation may not be the most straightforward way to arrive at a kernel family. Our algorithm searches over this space using a greedy search: at each stage, we choose the highest scoring kernel and expand it by applying all possible operators. Our search operators are motivated by strategies researchers often use to construct kernels. In particular, \begin{itemize} \item One can look for structure, \eg periodicity, in the residuals of a model, and then extend the model to capture that structure. This corresponds to applying rule (1). \item One can start with structure, \eg linearity, which is assumed to hold globally, but find that it only holds locally. This corresponds to applying rule (2) to obtain the structure shown in rows 1 and 3 of figure~\ref{fig:kernels}. \item One can add features incrementally, analogous to algorithms like boosting, backfitting, or forward selection. This corresponds to applying rules (1) or (2) to dimensions not yet included in the model. \end{itemize} \paragraph{Scoring kernel families} Choosing kernel structures requires a criterion for evaluating structures. We choose marginal likelihood as our criterion, since it balances the fit and complexity of a model \citep{rasmussen2001occam}. Conditioned on kernel parameters, the marginal likelihood of a \gp{} can be computed analytically. However, to evaluate a kernel family we must integrate over kernel parameters. We approximate this intractable integral with the Bayesian information criterion \citep{schwarz1978estimating} after first optimizing to find the maximum-likelihood kernel parameters. Unfortunately, optimizing over parameters is not a convex optimization problem, and the space can have many local optima. For example, in data with periodic structure, integer multiples of the true period (\ie harmonics) are often local optima. To alleviate this difficulty, we take advantage of our search procedure to provide reasonable initializations: all of the parameters which were part of the previous kernel are initialized to their previous values. All parameters are then optimized using conjugate gradients, randomly restarting the newly introduced parameters. This procedure is not guaranteed to find the global optimum, but it implements the commonly used heuristic of iteratively modeling residuals. \section{Related Work} \label{sec:related_work} \paragraph{Nonparametric regression in high dimensions} Nonparametric regression methods such as splines, locally weighted regression, and \gp{} regression are popular because they are capable of learning arbitrary smooth functions of the data. Unfortunately, they suffer from the curse of dimensionality: it is very difficult for the basic versions of these methods to generalize well in more than a few dimensions. Applying nonparametric methods in high-dimensional spaces can require imposing additional structure on the model. One such structure is additivity. Generalized additive models (GAM) assume the regression function is a transformed sum of functions defined on the individual dimensions: $\expect[f(\vx)] = g\inv(\sum_{d=1}^D f_d(x_d))$. These models have a limited compositional form, but one which is interpretable and often generalizes well. 
In our grammar, we can capture analogous structure through sums of base kernels along different dimensions. It is possible to add more flexibility to additive models by considering higher-order interactions between different dimensions. Additive Gaussian processes \cite{duvenaud2011additive11} are a \gp{} model whose kernel implicitly sums over all possible products of one-dimensional base kernels. \citet{plate1999accuracy} constructs a \gp{} with a composite kernel, summing an \kSE{} kernel along each dimension, with an SE-ARD kernel (\ie a product of \kSE{} over all dimensions). Both of these models can be expressed in our grammar. A closely related procedure is smoothing-splines ANOVA \cite{wahba1990spline, gu2002smoothing}. This model is a linear combinations of splines along each dimension, all pairs of dimensions, and possibly higher-order combinations. Because the number of terms to consider grows exponentially in the order, in practice, only terms of first and second order are usually considered. Semiparametric regression \citep[e.g.][]{ruppert2003semiparametric} attempts to combine interpretability with flexibility by building a composite model out of an interpretable, parametric part (such as linear regression) and a `catch-all' nonparametric part (such as a \gp{} with an SE kernel). In our approach, this can be represented as a sum of \kSE{} and \kLin{}. \paragraph{Kernel learning} There is a large body of work attempting to construct a rich kernel through a weighted sum of base kernels \citep[e.g.][]{christoudias2009bayesian, Bach_HKL}. While these approaches find the optimal solution in polynomial time, speed comes at a cost: the component kernels, as well as their hyperparameters, must be specified in advance. Another approach to kernel learning is to learn an embedding of the data points. \citet{lawrence2005probabilistic} learns an embedding of the data into a low-dimensional space, and constructs a fixed kernel structure over that space. This model is typically used in unsupervised tasks and requires an expensive integration or optimisation over potential embeddings when generalizing to test points. \citet{salakhutdinov2008using} use a deep neural network to learn an embedding; this is a flexible approach to kernel learning but relies upon finding structure in the input density, p(\inputVar). Instead we focus on domains where most of the interesting structure is in \function(\inputVar). \citet{WilAda13} derive kernels of the form ${\SE \times \cos(x-x')}$, forming a basis for stationary kernels. These kernels share similarities with ${\kSE \times \kPer}$ but can express negative prior correlation, and could usefully be included in our grammar. \citet{diosan2007evolving} and \citet{bing2010gp} learn composite kernels for support vector machines and relevance vector machines, using genetic search algorithms. Our work employs a Bayesian search criterion, and goes beyond this prior work by demonstrating the interpretability of the structure implied by composite kernels, and how such structure allows for extrapolation. \paragraph{Structure discovery} There have been several attempts to uncover the structural form of a dataset by searching over a grammar of structures. For example, \cite{schmidt2009distilling}, \cite{todorovski1997declarative} and \cite{washio1999discovering} attempt to learn parametric forms of equations to describe time series, or relations between quantities. 
Because we learn expressions describing the covariance structure rather than the functions themselves, we are able to capture structure which does not have a simple parametric form. \citet{kemp2008discovery} learned the structural form of a graph used to model human similarity judgments. Examples of graphs included planes, trees, and cylinders. Some of their discrete graph structures have continuous analogues in our own space; \eg $\SE_1 \times \SE_2$ and $\SE_1 \times \Per_2$ can be seen as mapping the data to a plane and a cylinder, respectively. \citet{grosse2012exploiting} performed a greedy search over a compositional model class for unsupervised learning, using a grammar and a search procedure which parallel our own. This model class contained a large number of existing unsupervised models as special cases and was able to discover such structure automatically from data. Our work is tackling a similar problem, but in a supervised setting. \section{Structure discovery in time series} \label{sec:time_series} To investigate our method's ability to discover structure, we ran the kernel search on several time series. As discussed in Section 2, a \gp{} whose kernel is a sum of kernels can be viewed as a sum of functions drawn from component \gp{}s. This provides another method of visualizing the learned structures. In particular, all kernels in our search space can be equivalently written as sums of products of base kernels by applying distributivity. For example, \[{\SE \times (\RQ + \Lin) = \SE \times \RQ + \SE \times \Lin}.\] We visualize the decompositions into sums of components using the formulae given in the appendix. The search was run to depth 10, using the base kernels from Section \ref{sec:Structure}. \label{sec:extrapolation} \paragraph{Mauna Loa atmospheric CO$\mathbf{_{2}}$} Using our method, we analyzed records of carbon dioxide levels recorded at the Mauna Loa observatory. Since this dataset was analyzed in detail by \citet{rasmussen38gaussian}, we can compare the kernel chosen by our method to a kernel constructed by human experts. \input{tables/mauna_growth.tex} \input{tables/mauna_decomp.tex} Figure \ref{fig:mauna_grow} shows the posterior mean and variance on this dataset as the search depth increases. While the data can be smoothly interpolated by a single base kernel model, the extrapolations improve dramatically as the increased search depth allows more structure to be included. Figure \ref{fig:mauna_decomp} shows the final model chosen by our method, together with its decomposition into additive components. The final model exhibits both plausible extrapolation and interpretable components: a long-term trend, annual periodicity and medium-term deviations; the same components chosen by \citet{rasmussen38gaussian}. We also plot the residuals, observing that there is little obvious structure left in the data. \paragraph{Airline passenger data} Figure \ref{fig:airline_decomp} shows the decomposition produced by applying our method to monthly totals of international airline passengers~\citep{box2011time}. We observe similar components to the previous dataset: a long-term trend, annual periodicity and medium-term deviations. In addition, the composite kernel captures the near-linearity of the long-term trend, and the linearly growing amplitude of the annual oscillations. \paragraph{Solar irradiance data} Finally, we analyzed annual solar irradiance data from 1610 to 2011 \citep{lean1995reconstruction}.
\input{tables/solar_decomp.tex} The posterior and residuals of the learned kernel are shown in figure \ref{fig:solar_decomp}. \input{tables/airline_decomp.tex} None of the models in our search space are capable of parsimoniously representing the lack of variation from 1645 to 1715. Despite this, our approach fails gracefully: the learned kernel still captures the periodic structure, and the quickly growing posterior variance demonstrates that the model is uncertain about long term structure. \section{Validation on synthetic data} \label{sec:synthetic} \input{tables/synthetic_results_extended_v2.tex} We validated our method's ability to recover known structure on a set of synthetic datasets. For several composite kernel expressions, we constructed synthetic data by first sampling 300 points uniformly at random, then sampling function values at those points from a \gp{} prior. We then added \iid Gaussian noise to the functions, at various signal-to-noise ratios (SNR). Table~\ref{tbl:synthetic} lists the true kernels we used to generate the data. Subscripts indicate which dimension each kernel was applied to. Subsequent columns show the dimensionality $D$ of the input space, and the kernels chosen by our search for different SNRs. Dashes - indicate that no kernel had a higher marginal likelihood than modeling the data as \iid Gaussian noise. For the highest SNR, the method finds all relevant structure in all but one test. The reported additional linear structure is explainable by the fact that functions sampled from \kSE{} kernels with long length scales occasionally have near-linear trends. As the noise increases, our method generally backs off to simpler structures. \input{tables/regression_results_combined.tex} \section{Quantitative evaluation} \label{sec:quantitative} In addition to the qualitative evaluation in section \ref{sec:time_series}, we investigated quantitatively how our method performs on both extrapolation and interpolation tasks. \subsection{Extrapolation} \input{tables/extrapolation.tex} We compared the extrapolation capabilities of our model against standard baselines\footnotemark. Dividing the airline dataset into contiguous training and test sets, we computed the predictive mean-squared-error (MSE) of each method. We varied the size of the training set from the first 10\% to the first 90\% of the data. Figure \ref{fig:extrapolation} shows the learning curves of linear regression, a variety of fixed kernel family \gp{} models, and our method. \gp{} models with only \kSE{} and \kPer{} kernels did not capture the long-term trends, since the best parameter values in terms of \gp{} marginal likelihood only capture short term structure. Linear regression approximately captured the long-term trend, but quickly plateaued in predictive performance. The more richly structured \gp{} models (${\kSE + \kPer}$ and ${\kSE \times \kPer}$) eventually captured more structure and performed better, but the full structures discovered by our search outperformed the other approaches in terms of predictive performance for all data amounts. \footnotetext{ In one dimension, the predictive means of all baseline methods in table \ref{tbl:Regression Mean Squared Error} are identical to that of a \gp{} with an $\kSE{}$ kernel.} \subsection{High-dimensional prediction} To evaluate the predictive accuracy of our method in a high-dimensional setting, we extended the comparison of \cite{duvenaud2011additive11} to include our method. 
We performed 10 fold cross validation on 5 datasets \footnote{The data sets had dimensionalities ranging from 4 to 13, and the number of data points ranged from 150 to 450.} comparing 5 methods in terms of MSE and predictive likelihood. Our structure search was run up to depth 10, using the \SE{} and \RQ{} base kernel families. The comparison included three methods with fixed kernel families: Additive \gp{}s, Generalized Additive Models (GAM), and a \gp{} with a standard \kSE{} kernel using Automatic Relevance Determination (\gp{} \kSE{}-ARD). Also included was the related kernel-search method of Hierarchical Kernel Learning (HKL). Results are presented in table \ref{tbl:Regression Mean Squared Error}. Our method outperformed the next-best method in each test, although not substantially. All \gp{} hyperparameter tuning was performed by automated calls to the GPML toolbox\footnote{Available at \href{http://www.gaussianprocess.org/gpml/code/} {\texttt{www.gaussianprocess.org/gpml/code/}} }; Python code to perform all experiments is available on github\footnote{ \href{http://www.github.com/jamesrobertlloyd/gp-structure-search} {\texttt{github.com/jamesrobertlloyd/gp-structure-search}} }. \section{Discussion} \begin{quotation} ``It would be very nice to have a formal apparatus that gives us some `optimal' way of recognizing unusual phenomena and inventing new classes of hypotheses that are most likely to contain the true one; but this remains an art for the creative human mind.'' \defcitealias{Jaynes85highlyinformative}{E. T. Jaynes, 1985} \hspace*{\fill}\citetalias{Jaynes85highlyinformative} \end{quotation} Towards the goal of automating the choice of kernel family, we introduced a space of composite kernels defined compositionally as sums and products of a small number of base kernels. The set of models included in this space includes many standard regression models. We proposed a search procedure for this space of kernels which parallels the process of scientific discovery. We found that the learned structures are often capable of accurate extrapolation in complex time-series datasets, and are competitive with widely used kernel classes and kernel combination methods on a variety of prediction tasks. The learned kernels often yield decompositions of a signal into diverse and interpretable components, enabling model-checking by humans. We believe that a data-driven approach to choosing kernel structures automatically can help make nonparametric regression and classification methods accessible to non-experts. \subsubsection*{Acknowledgements} We thank Carl Rasmussen and Andrew G. Wilson for helpful discussions. This work was funded in part by NSERC, EPSRC grant EP/I036575/1, and Google. \subsection{Real datasets}
\section{Introduction} Let $X$ be a complex projective K3 surface. A recent result by Chen--Gounelas--Liedtke \cite{chen2019curves} completed the proof of the conjecture that there are infinitely many rational curves on $X$. Their method also provides information on the classes of these curves in the Picard group if the Picard rank is small. Unfortunately, the folklore conjecture that for every ample class $H\in \Pic X$ there are infinitely many rational curves in $\bigcup_m |mH|$ still remains unknown even for small ranks greater or equal to $2$. In {\itshape loc.\ cit.}\ the following weaker question is posed. \begin{question*} Does every projective K3 surface $X$ admit rational curves $R_i\subset X$ such that $\lim_i R_i^2 = \infty$? \end{question*} As there are infinitely many rational curves the question has a positive answer as long as $|\mathrm{Aut}(X)|<\infty$: For a fixed even natural number $2d \in 2\NN$ there are only finitely many orbits of classes $[C]\in \Pic X$ with $C$ an irreducible curve and $C^2= 2d$ under the action of the automorphism group. Moreover the techniques of {\itshape loc.\ cit.}\ prove the question for Picard ranks $1$ and $2$ as well. In this paper we will answer the question positively in the case of elliptic K3 surfaces, too. \begin{theorem} \label{thm:MainTheorem1} Let $X\to\PP^1$ be an elliptic K3 surface. Then there are rational curves $R_i \subset X$ such that $R_i^2 \to \infty$. \end{theorem} In other words, the only missing cases are non-elliptic K3 surfaces of Picard rank $3$ or $4$ with infinite automorphism group. The method of the proof of Theorem \ref{thm:MainTheorem1} builds on the techniques by Bogomolov--Tschinkel \cite{bogomolov1999density} and Hassett \cite{hassett2003potential} who constructed infinitely many rational curves on a complex elliptic K3 surface. Their results have since also been extended to characteristic $p>3$ by Tayou in \cite{tayou2018rational}. The main idea is to start with a rational curve $R$ and look at its image under certain rational maps between elliptic K3 surfaces. As it turns out the main problem faced in these papers is that the initial rational curve $R$ might be torsion, which prevents the images from giving \emph{new} curves. Here, torsion means that for any two points in $R\cap X_t$ of a smooth fiber their difference in $\mathrm{Jac}^0(X_t)$ is torsion. In our case we look at the same construction and examine when the image of the curve $R$ will have more singularities. What prevents the images from doing so is very similar to being a torsion section which leads to our main definition of \emph{quasi-torsion sections}, see Section \ref{sec:QuasiTorsion}. The existence of rational non-quasi-torsion curves will then be carried out in Section \ref{sec:ExistenceRNQT} which will then be needed to produce the rational curves with an unbounded number of singularities in Section \ref{sec:ProducingSingularities}. In Section \ref{sec:JetSpace} we will apply the methods to examine lifts of rational curves in the first jet space $\PP(\Omega_X)$ by which we mean the space of one-dimensional quotients of $\Omega_X$. Recall the construction of such lifts: For every curve $C\subset X$ and its normalization $f\colon \tilde{C}\to C\hookrightarrow X$ the usual short exact sequence of cotangent bundles gives a map \begin{equation*} f^*\Omega_X\to \Omega_{\tilde{C}}^1. \end{equation*} Denote its torsion free image by $L$ which is automatically a line bundle. 
Then the surjective map $f^*\Omega_X\twoheadrightarrow L$ gives rise to a lift $\tilde{C}\to \PP(\Omega_X)$. If $C$ is rational then $\deg L <0$ and the lift is negative with respect to $\oO_{\PP(\Omega_X)}(1)$. It turns out that these pathological curves form a dense subset. \begin{theorem} \label{thm:MainTheorem2} Let $X\to\PP^1$ be an elliptic K3 surface. Then the union of lifts of rational curves to the jetspace $\PP(\Omega_X)$ is Zariski-dense. \end{theorem} In Section \ref{sec:Applications} we give some easy consequences of these results. For example the above mentioned density yields a short proof of Kobayashi's theorem in the elliptic case, see Theorem \ref{thm:Kobayashi}. In \cite{chen2013density} Chen-Lewis were concerned with the conjecture that the union of rational curves on $X$ is dense in the \emph{usual} topology. For elliptic K3 surfaces they proved this as long as there exists a rational multisection on $X$ that is not torsion. As a by-product of our theorems we see that the elliptic structure can be changed in such a way that there exists such a multisection and hence density of rational curves holds for \emph{every} elliptic K3 surface, see Corollary \ref{cor:DensityUsual}.\\ \textbf{Notations: }Let $p\colon X\to B$ be an elliptic fibration and $U\subset B$ be the subset on which the fibration is smooth. By $(\;)_U$ we mean the restriction to $p^{-1}(U)$. If the fibration is moreover Jacobian, i.e., it admits a section, then we denote the closure of the $m$-torsion of the fibers by $X[m]$. The upper halfplane in $\CC$ is denoted by $\HH$.\\ \textbf{Acknowledgements: }The author would like to thank his adviser Frank Gounelas for many hours of helpful discussions and his advice on several draft versions. \section{Background on Elliptic K3 surfaces and Jacobians} \label{sec:background} We start by collecting facts on elliptic K3 surfaces, which we always assume to be projective. For a detailed discussion, see \cite[Chapter 11]{huybrechts2016lectures}. Let $X\to\PP^1$ be an elliptic K3 surface. Its index $d_0\in \NN$ is defined as \begin{equation*} d_0 = \min\{0< c_1(L).X_t \eft L\in\Pic X\} = \min\{0\neq C.X_t \eft C\subset X \;\text{a curve}\}, \end{equation*} where the last equation follows as $c_1(L)+nX_t$ becomes effective for $n\gg 0$. \subsection{Compactified Jacobians} \label{sub:compactifiedJacobian} Denote by $\mathrm{Jac}^d (X/\PP^1)\to\PP^1$ the relative Jacobian of the elliptic fibration. Then we can define the compactified Jacobian $J^d(X)\to\PP^1$ as the unique relatively minimal smooth model of $\mathrm{Jac}^d(X/\PP^1)\to\PP^1$. Therefore over the smooth fibers one recovers $J^d(X)_t \cong \mathrm{Jac}^d(X_t)$, where the latter is the usual Jacobian of a curve. By \cite[Prop. 11.4.5]{huybrechts2016lectures} all compactified Jacobians are K3 surfaces as well and moreover for every $n\in \NN$ we can find another elliptic K3 surface $Y\to\PP^1$ such that there is an isomorphism $J^n(Y) \cong X$ as elliptic surfaces. Moreover the index of $Y$ is exactly $nd_0$, where $d_0$ is the index of $X$. Furthermore Jacobians give rise to rational maps between elliptic K3 surfaces as follows: For a smooth fiber we have a canonical morphism \begin{equation*} \mathrm{Jac}^m(X_t) \times \mathrm{Jac}^n(X_t)\to \mathrm{Jac}^{m+n}(X_t), \end{equation*} which is given by the tensor product of line bundles. 
This globalizes to give a rational map \begin{equation*} J^m(X) \times_{\PP^1} J^n(X)\dashrightarrow J^{m+n}(X), \end{equation*} which is defined over the smooth locus $U\subset\PP^1$. Using the diagonal morphism we can construct a multiplication map $J^1(X)\dashrightarrow J^n(X)$ for every $n\in \NN$ by mapping \begin{equation*} J^1(X)\to J^1(X)\times_{\PP^1} \ldots \times_{\PP^1} J^1(X) \dashrightarrow J^{n}(X), \end{equation*} where the first map is the diagonal map into the $n$-fold fiberproduct. To relate these rational maps to the K3 surface $X$ we mention that the canonical isomorphism $X_t\cong \mathrm{Jac}^1(X_t)$ gives an \emph{isomorphism} $X\to J^1(X)$ respecting the fibration. Moreover choosing a line bundle $M\in \Pic X$ of degree $d_0$ we get another isomorphism \begin{equation*} J^n(X) \to J^{n+d_0}(X), \end{equation*} which fiberwise is given by the tensor product with $M$, i.e., \begin{equation*} L \mapsto L\otimes M|_{X_t} \end{equation*} for a line bundle $L\in \mathrm{Jac}^n(X_t)$. \subsection{Framed elliptic curves} \label{sub:framedElliptic} We recall some standard facts on elliptic curves, see e.g., \cite{hain2014lectures}. \begin{definition} A framed elliptic curve is a triple $(E,a,b)$ of a complex elliptic curve $E$ and two elements $a,b\in \hh_1(E,\ZZ)$ such that their intersection is $a\cdot b = 1$. Isomorphisms of framed elliptic curves are isomorphisms of elliptic curves that respect the frame. A framed lattice is a triple $(\Lambda, \lambda_1, \lambda_2)$ such that $\Lambda\subset \CC$ is a rank two lattice and $\lambda_1,\lambda_2\in\Lambda$ is a $\ZZ$-basis of $\Lambda$ with $\Im (\lambda_1/\lambda_2) > 0$. Two framed lattices are isomorphic if the lattice and the frame coincide up to a complex multiple. \end{definition} For example every family of elliptic curves $F\to B$ over a simply connected base $B$ can be simultaneously framed, i.e., there is a tuple $(a,b)$ in $\hh_1(F,\ZZ)$ such that the pushforward of the frames of every fiber coincide with $(a,b)$. There is a one-to-one bijection \begin{equation*} \HH \leftrightarrow \left\{\begin{gathered} \textit{isomorphism classes} \\ \textit{of framed lattices} \end{gathered}\right\} \leftrightarrow\left\{\begin{gathered} \textit{isomorphism classes of} \\ \textit{framed elliptic curves} \end{gathered}\right\} \end{equation*} which sends some $\tau \in \HH$ to $\Lambda_\tau = \ZZ\tau+\ZZ$ and a lattice $\Lambda$ to $\CC/\Lambda$. Moreover the upper half plane $\HH$ is a fine moduli space for framed elliptic curves with universal curve given by \begin{equation*} \eE = \CC\times \HH / \{(\ZZ\tau+\ZZ, \tau)\eft \tau \in \HH\}. \end{equation*} For a chosen frame $(\Lambda,\lambda_1,\lambda_2)$ there is a natural choice of coordinate function \begin{align*} \RR^2 &\to \CC/\Lambda\\ (x,y)&\to x\lambda_1+y\lambda_2, \end{align*} which induces a homeomorphism $\RR^2/\ZZ^2 \cong \CC/\Lambda$. If we change the frame of $\Lambda$ by an element $\gamma\in \SL(2,\ZZ)$, the corresponding coordinates for $p = x\lambda_1+y\lambda_2$ in the new frame are given by $\gamma^T\cdot \left(\begin{smallmatrix} x\\y \end{smallmatrix}\right)$, where $\gamma^T$ is the transposed matrix. \subsection{Singular Fibers} \label{sub:singularFibers} The singular fibers of elliptic fibrations can be completly understood by means of their local monodromy group, for details see \cite[Lecture IV]{miranda1989basic}. The latter is defined as follows. 
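For example, for $\tau = i$ the associated framed elliptic curve is $E = \CC/(\ZZ i+\ZZ)$ with the frame induced by the lattice basis $(\lambda_1,\lambda_2) = (i,1)$, and the coordinate function sends $(x,y)$ to $xi+y$. Under this identification the $m$-torsion points of $E$ are exactly the classes with coordinates $x,y\in \tfrac{1}{m}\ZZ$; for instance the $2$-torsion consists of the four classes with $(x,y)\in\{0,\tfrac{1}{2}\}^2$. This description of torsion in terms of real coordinates is the one we will generalize in Section \ref{sec:QuasiTorsion}.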
Pick a small disc $\Delta\subset \PP^1$ such that over the punctured disc the map $X_{\Delta^*}\to\Delta^*$ is smooth and fix a fiber $X_t\cong \CC/(\ZZ+\tau\ZZ)$. Then the usual monodromy action of $\ZZ \cong \pi_1(\Delta^*, t)$ on the first integral cohomology of $X_t$ gives rise to a subgroup $\Gamma\subset \SL(2,\ZZ)$ which is called the \emph{local monodromy group}. We just recall the facts that are important to our case, for a complete classification see \cite[Diagram 11.1.3]{huybrechts2016lectures}. It turns out that the local monodromy is infinite precisely for the fibers of type $I_n, I_n^*\;(n>0)$, which occur on a K3 surface if and only if the fibration is non-isotrivial. In this case the local monodromy can be generated by the following elements \begin{equation*} I_n:\;\begin{pmatrix} 1&n\\0&1 \end{pmatrix}\qquad\qquad I_n^*:\;-\begin{pmatrix} 1&n\\0&1 \end{pmatrix}. \end{equation*} \section{Quasi-torsion sections} \label{sec:QuasiTorsion} In the following we will introduce the main definition of this paper, which is a generalization of torsion multisections. Recall the definition of the latter from \cite{bogomolov1999density}. \begin{definition} Let $X\to\PP^1$ be an elliptic K3 surface. A multisection $M\subset X$ is called torsion if for any two points $x,y\in M\cap X_t$ in every smooth fiber $X_t$ their difference $x-y\in \mathrm{Jac}^0(X_t)$ is torsion. \end{definition} Throughout this section we will work in the analytic category unless otherwise stated. Let $p\colon X\to\Delta$ a smooth elliptic Jacobian fibration between complex manifolds over a simply connected base $\Delta$. Then a choice of frame for the family yields a holomorphic $\tau\colon \Delta\to \HH= \{z\in \CC\eft \Im z > 0\}$ such that \begin{equation*} \label{eq:LocalElliptic} X = \CC\times \Delta / (\ZZ\tau(t)+\ZZ,t) \end{equation*} and the section is given by $\{0\}\times \Delta$. We call such a choice a \emph{standard model}. The branches of the $m$-torsion $X[m]$ are of the form $\{(a\tau(t)+b, t)\eft t\in \Delta\}$ for some $a,b\in \frac{1}{m}\ZZ\subset\QQ$. We generalize these multisections in the following way. \begin{definition} Let $X\to B$ be an elliptic Jacobian fibration between two complex manifolds such that the base $B$ is $1$-dimensional. A holomorphic curve $C\subset X$ is called \emph{elementary quasi-torsion} if $C_U\to U$ is \'{e}tale and the branches over every simply connected $\Delta\subset U$ and some choice of standard model $X_\Delta = \CC\times \Delta / (\ZZ\tau(t)+\ZZ, t)$ are given by \begin{equation*} \{(a\tau(t)+b,t)\eft t\in\Delta\} \subset X_\Delta \end{equation*} for some $a,b\in \RR$ which may depend on $\Delta$ and the chosen standard model. \end{definition} \begin{remark} The above definition is independent of the choice of standard model: If we have two standard models over $\Delta$ given by $\tau,\tau':\Delta\to \HH$, then $\tau' = \gamma \cdot \tau$ with $\gamma\in \SL(2,\ZZ)$. If we denote $\left(\begin{smallmatrix} a^\prime\\b^\prime \end{smallmatrix}\right) = (\gamma^{T})^{-1}\cdot \left(\begin{smallmatrix} a\\b \end{smallmatrix}\right)$ then the two curves \begin{align*} \{(a\tau(t)+b,t)\eft t\in\Delta\} &\subset \CC\times \Delta / (\ZZ\tau(t)+\ZZ,t)\\ \{(a'\tau'(t)+b',t)\eft t\in\Delta\} &\subset \CC\times \Delta / (\ZZ\tau'(t)+\ZZ,t) \end{align*} coincide in $X_\Delta$. Moreover by the same reasoning it suffices to check the conditions only on an open cover of $U$. 
\end{remark} \begin{example} \label{ex:EQTIsotrivial} Let $p\colon X\to \PP^1$ be an isotrivial Jacobian elliptic projective surface with general fiber isomorphic to a fixed elliptic curve $E$. Then there exists a projective curve $C$ and a finite rational morphism \begin{equation*} \label{eq:RationalMapIsotrival} C\times E \dashrightarrow X \end{equation*} that respects the section and the elliptic structure. The closure of the image of $C\times \{pt\}$ under the rational map above defines an elementary quasi-torsion curve. In fact this is an example of an \emph{algebraic} elementary quasi-torsion curve. \end{example} \begin{lemma} Let $X\to\PP^1$ be a Jacobian elliptic fibration and $x\in X_U$ a point. Then there exists a unique holomorphic connected elementary quasi-torsion curve inside $X_U$ that contains $x$. \end{lemma} \begin{proof} Let $\Delta\subset U$ be a simply connected subset such that $x\in X_\Delta$ and let $X_\Delta\cong \CC\times \Delta/(\ZZ\tau(t)+\ZZ,t)$ be a standard model. Then we can choose $(a,b)\in\RR^2$ such that $x = (a\tau(t_0)+b,t_0)$. Such a choice is unique up to $\ZZ^2$ and hence any branch of an elementary quasi-torsion curve that contains $x\in X_U$ is equal to \begin{equation*} \{(a\tau(t)+b)\eft t\in\Delta\}\subset X_\Delta. \end{equation*} Thus, the uniqueness follows from the curve being \'{e}tale and connected. To construct the curve we denote by $U'\to U$ the universal cover and by $X'$ the pullback of $X_U\to U$ to $U'$. If we choose a standard model \begin{equation*} \CC\times U'/(\ZZ\tau(t)+\ZZ,t) \cong X' \end{equation*} we may choose a point $x'\in X'$ that lies over $x$ via the map $p\colon X' \to X_U$. Then we may choose $(a,b)\in \RR^2$ such that $x'$ lies in \begin{equation*} T'[x'] := \{(a\tau(t)+b, t)\eft t\in\Delta\}. \end{equation*} We then denote \begin{equation*} T[x] := p(T'[x'])\subset X_U, \end{equation*} which is a connected elementary quasi-torsion curve containing $x\in X_U$. \end{proof} \begin{definition} Let $X\to \PP^1$ be a Jacobian elliptic fibration. For any point $x\in X_U$ the unique holomorphic elementary quasi-torsion curve that contains $x$ is denoted $T[x]$. \end{definition} As we have seen in Example~\ref{ex:EQTIsotrivial} in the isotrivial case every $T[x]$ is algebraic and hence extends to a curve on $X$. But as the construction above is very analytic in nature this is not guaranteed in any case. We will see that for non-isotrivial fibrations quite the opposite is true: only those $T[x]$ contained in $X[m]$ for some $m\in \NN$ extend to the whole of $X$. \begin{remark} Let $x = a\tau+b \in X_{t_0} = \CC/(\ZZ\tau+\ZZ)$ be an element in a smooth fiber of $X\to \PP^1$. As $T[x]$ is \'{e}tale over $U$ there is a well defined action of $\pi_1(U, t_0)$ on $X_{t_0}$. This action factors through the monodromy group $\Gamma\subset \operatorname{SL}(2,\ZZ)$ by acting on the tuple $(a,b)$ by the right action induced by the transposed matrix. \end{remark} \begin{proposition} \label{prop:NoQuasiTorsion} Let $X\to \PP^1$ be a non-isotrivial elliptic projective Jacobian surface. Then for some $x\in X_U$ the holomorphic curve $T[x]\subset X_U$ extends to an algebraic curve on $X$ if and only if $T[x] \subset X[m]$ is torsion for some $m\in \NN$. \end{proposition} \noindent The main idea of the proof is to show that $|T[x]\cap X_t|=\infty$ for non-torsion points $x\in X_U$. 
This can be seen as an analogue of the fact that the torsion $X[p]$ without the zero-section is irreducible for $p$ a large prime and $X[p].X_t = p^2$, see e.g., \cite[Theorem 8.3]{hassett2003potential} To deduce the above statement we make use of the monodromy action, which can be characterized by the following lemma. \begin{lemma}[{Hassett \cite[Lemma 8.4, Lemma 8.5]{hassett2003potential}}] Let $X\to \PP^1$ be a projective non-isotrivial Jacobian elliptic surface. Then the reduction $\Gamma\subset \SL(2,\ZZ) \to \SL(2,\ZZ/p\ZZ)$ of the monodromy group is surjective for primes $p\gg 0$. \end{lemma} \begin{proof}[Proof of Proposition \ref{prop:NoQuasiTorsion}] Suppose $T[x]$ extends on $X$, i.e., it is algebraic. In particular $|T[x]\cap X_t|$ is finite. As $X$ is non-isotrivial there is a degenerate fiber of type $I_N$ or $I_N^*$. By fixing an appropriate smooth fiber $X_t = \CC/(\ZZ\tau+\ZZ)$ we can assume that \begin{equation*} \gamma_n =\begin{pmatrix} 1&2nN\\0&1 \end{pmatrix} \in \Gamma \end{equation*} is contained in the monodromy group $\Gamma$ for every $n\in \NN$. Let $x = a\tau+b \in X_t\cap T[x]$. Then applying $\gamma_n$ yields \begin{equation*} \gamma_n^T.\begin{pmatrix} a\\b \end{pmatrix} = \begin{pmatrix} a\\2anN+b \end{pmatrix}. \end{equation*} As the intersection of $T[x]$ with $X_t$ is finite $2anN+b = b\in \RR/\ZZ$ for some $n\in\NN_{>0}$. Therefore we have that $a\in \QQ$ is rational. On the other hand choose $p\gg 0$ such that the previous lemma is fulfilled. Then the matrix \begin{equation*} \begin{pmatrix} pw& 1+px\\ -1+py& pz \end{pmatrix}\in \Gamma \end{equation*} is contained in the monodromy group for some $w,x,y,z\in \ZZ$. This yields \begin{equation*} \begin{pmatrix} pwa+(-1+py)b\\ (1+px)a+pzb \end{pmatrix}\in T[x]\cap X_t, \end{equation*} which then implies that $pwa+(-1+py)b\in\QQ$ is rational as above and hence $b\in\QQ$ is rational as well. \end{proof} We will now give a local criterion for a holomorphic curve to be elementary quasi-torsion. \begin{proposition} \label{prop:CapTorsionEmpty} Let $X\to\Delta$ be a standard model and let $I\subset \NN$ be an infinite multiplicatively closed subset. Assume that a section $C\subset X$ of $X\to\Delta$ satisfies \begin{equation*} C\cap \bigcup_{n\in I} X[n] = \emptyset. \end{equation*} Then $C$ is elementary quasi-torsion. \end{proposition} \begin{proof} Let \begin{equation*} X' = \CC\times \Delta \to (\CC\times\Delta)/(\ZZ\tau(t)+\ZZ,t)= X \end{equation*} be the universal cover of the standard model. As $\Delta$ is simply connected the section $C$ lifts to a section $C'$ of $X'\to \Delta$. By assumption \begin{equation*} C' \subset (\CC\times \Delta) \setminus \bigcup_{n\in I} (\tfrac{1}{n}\ZZ\tau(t)+\tfrac{1}{n}\ZZ, t) \end{equation*} for the infinite multiplicatively closed set $I$. Denote by $f\colon \Delta\to \CC$ a function that induces a chart for the curve $C'\subset \CC\times \Delta$, i.e., $C' = \{(f(t), t)\eft t\in\Delta\}$. Then $f(t) = a(t)\tau(t)+b(t)$ for some continuous real valued functions $a,b\colon \Delta\to \RR$. We will now use the fact that $\bigcup_{n\in I} X[n]$ is dense in $X$ to show that $a(t)$ and $b(t)$ are constant. By contradiction assume that this is not the case, i.e., without loss of generality $b$ is non-constant and therefore there is some $t_0$ such that $b_0 = b(t_0)\in \tfrac{1}{n}\ZZ$ for some $n\in I$. 
Then the function \begin{equation*} F\colon \CC\times \Delta \to \CC,\; F(z,t) = f(t)- z\tau(t)-b_0 \end{equation*} has a zero at $(a(t_0), t_0)$ and a Jacobian of maximal rank. The implicit function theorem gives an open $t_0\in U\subset \Delta$ and a holomorphic function $g\colon U\to \CC$ such that $f(t) - g(t)\tau(t)-b_0 = 0$ for all $t\in U$. If $g$ is constant we are done, so otherwise the image is open. As $a(t_0)\in \RR$ is contained in the image of $g$ there is an $a_0 = g(t')\in \tfrac{1}{m}\ZZ$ also contained in the image for $m\in I$ large enough. Therefore the point \begin{equation*} (f(t'),t')=(a_0\tau(t') + b_0,t') \in X[nm]\cap C \end{equation*} is torsion, a contradiction. \end{proof} \begin{corollary} \label{cor:TorsionDense} Let $p\colon X\to \PP^1$ be a Jacobian elliptic fibration and $C\subset X$ an irreducible holomorphic curve that is not elementary quasi-torsion. Then the set \begin{equation*} C\cap \bigcup_{n\in I} X[n] \subset C \end{equation*} is dense. \end{corollary} \begin{proof} Let $V\subset C$ be an open set. By shrinking we may assume it to be simply connected and open. If $V\cap \bigcup_{n\in I} X[n] = \emptyset$ then for $\Delta = p(V)$ the set $V$ is an elementary quasi-torsion curve in $X_\Delta\to\Delta$ by the previous proposition. Hence, $C$ agrees with $T[x]$ on the open set $V$ for some $x\in X_U$ and thus they are equal everywhere. \end{proof} We now come to the main definition, which is a generalization of torsion multisections. Let $X\to \PP^1$ be a (not necessarily Jacobian) elliptic K3 surface. Then there is the rational difference map to the compactified Jacobian $J^0(X)$: \begin{equation*} \label{eq:DifferenceMap} d\colon X_U\times_{U} X_U \cong J^1(X)_U \times_{U} J^1(X)_U \to J^0(X), \end{equation*} where the last arrow maps two line bundles $L,L'\in \Pic X_t$ to $L^{-1}\otimes L'$. \begin{definition} Let $C\subset X$ be an irreducible holomorphic curve not contained in a fiber. We define $D(C) = d(C_U\times_{U} C_U)$ and say that $C$ is a \emph{quasi-torsion} multisection if \begin{equation*} D(C) = \bigcup_{x\in S} T[x] \end{equation*} for some finite subset $S\subset X_U$. Otherwise it is \emph{non-quasi-torsion}. \end{definition} \begin{remark} Every torsion multisection is quasi-torsion as well as all elementary quasi-torsion curves. \end{remark} \section{Existence of rational non-quasi-torsion curves} \label{sec:ExistenceRNQT} In this section we will prove that there are rational non-quasi-torsion curves on elliptic K3 surfaces, as long as we allow a change of the fibration. The proof will be split into two parts as we have to take care of the isotrivial case seperately. We will introduce some notation which is taken from \cite[Corollary 9.5]{hassett2003potential} applied to the isotrivial case. If $X\to\PP^1$ is an isotrivial K3 surface with $n_0$ (resp. $n_2, n_3$ and $n_4$) fibers of type $I_0^*$ (resp. type $II, II^*$, type $III$,$III^*$ and type $IV$,$IV^*$), we denote \begin{equation*} c(X\to\PP^1) = \tfrac{1}{2}n_0 + \tfrac{5}{6}n_2 + \tfrac{3}{4}n_3 + \tfrac{2}{3}n_4 -2. \end{equation*} The goal of this section is to prove the following theorem. \begin{theorem} \label{thm:NQTCurvesExist} Let $p\colon X\to\PP^1$ be an elliptic K3 surface. If $X\xrightarrow{p}\PP^1$ is non-isotrivial or isotrivial with $c(X\xrightarrow{p}\PP^1)>0$, then there is a non-quasi-torsion rational curve on $X$. 
If $p\colon X\to\PP^1$ is isotrivial with $c(X\xrightarrow{p}\PP^1)\le0$ there is another elliptic fibration $p'\colon X\to\PP^1$ such that the previous conditions hold. \end{theorem} We start with the latter reduction step by using a similar technique as in \cite{oguiso1989jacobian}, where all Jacobian elliptic pencils on some special elliptic Kummer surfaces are constructed. \begin{lemma} \label{lem:WLOGC>0} Let $p\colon X\to\PP^1$ be an isotrivial elliptic K3 surface with $c(X\xrightarrow{p}\PP^1)\le 0$. Then there is another fibration $p'\colon X\to\PP^1$ that is non-isotrivial or isotrivial with $c(X\xrightarrow{p'}\PP^1)>0$. \end{lemma} \begin{proof} By \cite[Proposition 9.6]{hassett2003potential} we have $\operatorname{rk} \Pic X\ge 16$. Hence we can replace $p\colon X\to \PP^1$ by a Jacobian fibration $p'\colon X\to\PP^1$ with a section $S$. If it is isotrivial with $c(X\xrightarrow{p'}\PP^1)\le 0$ then by \cite[Proposition 9.6]{hassett2003potential} the only singular fibers that can occur are as in Figure \ref{diag:DynkinKummerType}. We pick two degenerate fibers $F_1,F_2$ and denote the components as indicated in Figure~\ref{diag:DynkinKummerType}, where $\alpha_1$ denotes the component meeting the section $S$. \begin{figure}[H] \begin{subfigure}[b]{\textwidth} \begin{minipage}{0.4\textwidth} \center \dynkin[Kac,extended,labels*={1,1,2,1,1}, labels={\alpha_1, \alpha_3,\alpha_2, \alpha_4}] D4 \end{minipage} \hfill \begin{minipage}{0.4\textwidth} \center \dynkin[Kac,extended,labels*={1,1,2,3,2,1,2}, labels={,\alpha_1,\alpha_2,\alpha_3, \alpha_4,,\alpha_5}] E6 \end{minipage} \hfill \end{subfigure} \bigskip \begin{subfigure}[b]{\textwidth} \begin{minipage}{0.4\textwidth} \center \dynkin[Kac,extended,labels*={1,2,3,4,3,2,1,2}, labels={\alpha_1,\alpha_2,\alpha_3, \alpha_4,\alpha_5,,,\alpha_6}] E7 \end{minipage} \hfill \begin{minipage}{0.4\textwidth} \center \dynkin[Kac,extended,labels*={1,2,3,4,5,6,4,2,3}, labels={\alpha_1,\alpha_2,\alpha_3, \alpha_4,\alpha_5,\alpha_6, \alpha_7, ,\alpha_8}] E8 \end{minipage} \end{subfigure} \caption{Fibers occuring in isotrivial fibrations with $c(X)\le 0$.} \label{diag:DynkinKummerType} \end{figure} \begin{figure}[ht] \begin{minipage}{0.4\textwidth} \center \dynkin[Kac,extended,labels*={1,1,2,2,2,2,1,1}, ] D{} \end{minipage} \caption{Fiber of type $I_n^*$} \end{figure} For both fibers $F_i\;(i=1,2)$ let $A_i= 2\sum_{j=1}^{k-2} \alpha_j +\alpha_{k-1}+ \alpha_k\in \Pic(X)$, where the components $\alpha_j$ are the components of the respective fiber as indicated above in Figure~\ref{diag:DynkinKummerType} and $k$ is the highest occuring index. The effective divisor $E = 2S+A_1+A_2$ defines a nef primitive class with $E.E = 0$. Then $|\oO(E)|$ induces an elliptic fibration by \cite[Proposition 2.3.10]{huybrechts2016lectures} and by construction the fibration has a fiber of type $I_n^*$ with $n>0$. We conclude that this fibration is non-isotrivial. \end{proof} \subsection*{The non-isotrivial case} The non-isotrivial case is particularly simple as there is the following theorem: \begin{theorem}[{\cite[Theorem 8.3]{hassett2003potential}}] Let $X\to\PP^1$ be a non-isotrivial K3 surface. Then there exist non-torsion rational multisections on $X$. \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:NQTCurvesExist} in the non-isotrivial case.] Let $R\subset X$ be a non-torsion rational multisection coming from \cite{hassett2003potential}. 
The difference $D(R)\subset J^0(X)$ yields an algebraic subset not all of whose irreducible components are contained in some $X[m]\; (m\in \NN)$. But if $R$ were quasi-torsion, all components would be contained in some $X[m]$ by Proposition \ref{prop:NoQuasiTorsion}, a contradiction. \end{proof} \subsection*{The isotrivial case with \texorpdfstring{$c(X\to\PP^1)>0$}{c(X, P1)>0}} We proceed by imitating the genus calculation from \cite{hassett2003potential} in the case of quasi-torsion multisections by investigating the local monodromy. From this we will see that the genus of a quasi-torsion curve $C$ grows with its fiber degree $C.X_t$. We will need the following preparatory lemma: \begin{lemma} Let $\operatorname{id} \neq \gamma\in \SL(2,\ZZ)$ be an element of finite order and $d<\ord(\gamma)$. Then there is a natural number $\kappa$ such that there exists an $x\in \RR^2\setminus \bigcup_{i=1}^\kappa \tfrac{1}{i}\ZZ^2$ with \begin{equation*} \sum_{i=0}^d \gamma^i x = 0 \mod \ZZ^2 \end{equation*} if and only if $d = \ord(\gamma)-1$. Moreover in this case $\sum_{i=0}^{\ord(\gamma)-1} \gamma^i = 0$. \end{lemma} \begin{proof} As $\sum_{i=0}^{\ord(\gamma)-1} \gamma^i = 0$ holds, one direction is obvious. For the other direction let $d< \ord(\gamma)-1$ and set $n = d+1$, so that $0<n< \ord(\gamma)$. Then $\id - \gamma^n$ is invertible over $\QQ$ as $\gamma^n\neq\id$ has no eigenvalue $1$. Let $A$ be its inverse. Then $B = A\cdot (\id-\gamma)$ is an inverse for $C = \sum_{i=0}^{d}\gamma^i$, since $(\id - \gamma)\cdot C = \id-\gamma^n$. Therefore $Cx \in \ZZ^2$ implies \begin{equation*} x = BCx \in \frac{1}{|\det (\id - \gamma^n)|}\ZZ^2. \end{equation*} Then $\kappa = \max_{0<n<\ord(\gamma)} |\det (\id - \gamma^n)|$ yields the result. \end{proof} The geometric meaning of the lemma is as follows. Recall that an element $\gamma\in\SL(2,\ZZ)$ acts on the points $x=a\tau+b\in \CC/(\ZZ\tau+\ZZ)$ of an elliptic curve $E= \CC/(\ZZ\tau+\ZZ)$ by acting on the tuple $(a,b)$ via the transposed matrix. \begin{corollary} \label{cor:OrderOfPointInElliptic} Let $\operatorname{id} \neq \gamma\in \SL(2,\ZZ)$ be an element of finite order, $d<\ord(\gamma)$ and $E= \CC/(\ZZ\tau+\ZZ)$ an elliptic curve. There is a natural number $\kappa$ such that for any element $x\in E$ that is not torsion of order less than or equal to $\kappa$ we have \begin{equation*} \sum_{i=0}^d \gamma^i x = 0 \in E \end{equation*} if and only if $d = \ord(\gamma)-1$. \end{corollary} \begin{definition} Let $X\to\PP^1$ be an elliptic Jacobian isotrivial K3 surface. Then the minimal $\kappa$ fulfilling the conditions of the previous corollary for all $\gamma\in \SL(2,\ZZ)$ that occur in the local monodromy of a singular fiber of $X\to\PP^1$ is denoted by $\kappa_X$. \end{definition} \begin{proposition} Let $X\to\PP^1$ be an isotrivial K3 surface. Let $C\subset X$ be a quasi-torsion curve such that $D(C)$ contains no component that is torsion of order up to $\kappa_X$. Then the geometric genus $g(C)$ satisfies \begin{equation*} g(C) \ge (C.X_t-1)c(X\to\PP^1) -2. \end{equation*} \end{proposition} \begin{proof} We follow the idea of \cite{hassett2003potential} by calculating the ramification occurring at the singular fibers and then applying the Hurwitz formula. Let $\Delta\subset \PP^1$ be a small disc around a singular fiber such that $C$ is smooth over the punctured disc $\Delta^*$. Pick a local branch $B$ of $C$. Then there are two cases:\bigskip\\ \underline{\textit{Case 1: $B$ is not a section:}} Fix a fiber $X_t$ and a point $p \in B\cap X_t$. Moreover let $\gamma\in\SL(2,\ZZ)$ be a generator of the local monodromy group. By construction the point $q = \gamma.p - p\in J^0(X_t)$ is not zero.
Applying $\gamma$ repeatedly yields $\gamma^{i-1}.q = \gamma^i.p - \gamma^{i-1}.p$ and therefore \begin{equation*} \gamma^i.p - p = \sum_{j=0}^{i-1} \gamma^j.q. \end{equation*} By Corollary \ref{cor:OrderOfPointInElliptic} the smallest $i>0$ such that $\gamma^i.p = p$ is equal to $\ord(\gamma)$, so the branch has ramification index $\ord(\gamma)$ and its ramification contribution is $\ord(\gamma)-1$. \bigskip\\ \underline{\textit{Case 2: $B$ is a section:}} Suppose there is another branch that is also a local section. This in turn would yield a local section of $D(C)$ as well and thus we have $\gamma.p = p$ for some $p\in J^0(X_t)$. But this is a contradiction to the previous corollary and the assumption that $D(C)$ contains no torsion of order up to $\kappa_X$. Hence there is at most one branch that is a local section. \bigskip\\ To conclude, for one fixed degenerate fiber with local monodromy generated by $\gamma\in \SL(2,\ZZ)$, the ramification contribution $e_i$ of this fiber is greater than or equal to $(X_t.C-1)\cdot \tfrac{\ord(\gamma)-1}{\ord(\gamma)}$. Hence, by the Hurwitz formula we get \begin{align*} 2g(C)-2 &\ge (X_t.C)\cdot (2g(\PP^1)-2) + \sum_i e_i \\ &\ge-2(X_t.C)+ (X_t.C-1)\cdot (\tfrac{1}{2}n_0 + \tfrac{5}{6}n_2 + \tfrac{3}{4}n_3 + \tfrac{2}{3}n_4) \\ &= (X_t.C-1)c(X\to\PP^1)-2.\qedhere \end{align*} \end{proof} Now we are finally able to prove the last remaining part of Theorem \ref{thm:NQTCurvesExist}. \begin{proof}[Proof of Theorem \ref{thm:NQTCurvesExist} in the remaining case.] Let $d_0$ be the index of $X$ and let $p\gg0$ be a prime. By \cite[Chapter 11.5]{huybrechts2016lectures} we can choose a $p$-twist $Y\to \PP^1$ of $X\to\PP^1$, i.e., $J^p(Y)\cong X$ as elliptic surfaces. Then the index of $Y$ is $d_0p$. By \cite[Lemma 3.5]{bogomolov1999density} we can choose a rational curve $R\subset Y$ with $R.Y_t = d_0p$. Suppose that this curve is quasi-torsion and denote $k = (\kappa_X!)^n$ for some $n\in\NN$. Recall from Section \ref{sub:compactifiedJacobian} that there is a multiplication map \begin{equation*} g_k: J^1(Y) \dashrightarrow J^k(Y). \end{equation*} Then taking the image of $R$ under this map yields that $R' = g_{k}(R)$ is a rational curve in $Y' = J^{k}(Y)$. Moreover, as $\gcd(p,k) = 1$ we know that $R'.Y'_t \ge p$. For $n\in\NN$ big enough we can assume that $D(R')$ does not contain non-trivial torsion of order up to $\kappa_X$. Then the previous proposition shows that $R'$ (and hence $R$) cannot be rational for $p\gg 0$, a contradiction. Therefore $R$ is not quasi-torsion and $g_p(R)\subset J^p(Y) \cong X$ gives the desired curve. \end{proof} \section{Producing curves with many singularities} \label{sec:ProducingSingularities} In this section we will prove Theorem~\ref{thm:MainTheorem1}. The idea is to examine what happens to rational curves under self-rational maps. The latter are constructed as follows. We define the map $g_n\colon X\dashrightarrow J^n(X)$ as the composition of the identification $X\cong J^1(X)$ and the multiplication map $J^1(X)\dashrightarrow J^n(X)$: \begin{equation*} g_n\colon X\to J^1(X) \dashrightarrow J^n(X). \end{equation*} We will then show that given a non-quasi-torsion curve $C\subset X$ the rational maps $g_n$ produce new curves $C'=g_n(C)\subset X$ such that $C'$ has many singularities. \begin{proposition} \label{prop:ProductionNewCurves} Let $X\to \PP^1$ be an elliptic K3 surface and $C$ be a curve with $C.X_t > 1$ that is non-quasi-torsion and such that $g_n|_C$ is a birational map to its image for all $n\equiv 1\mod d_0$.
Then for every $n\in \NN$ there are curves $C_i\subset J^n(X)$ with a rational map $C\dashrightarrow C_i$ such that $C_i^2 \to \infty$. \end{proposition} \begin{proof} Let some open $V\subset U\subset \PP^1$ be given. First we will show that there is some $m\equiv 1 \mod d_0$ such that $D(C)_V$ has a component with an isolated torsion point of order $m$. Suppose the contrary, i.e., $D(C)_V$ does not contain a component with an isolated torsion point $p_0$ of order $m \equiv 1 \mod d_0$ for some $m$. By shrinking $V$ we may assume that $D(C)_V$ is \'{e}tale over $V$, $V$ is simply connected and $J^0(X)_V\to V$ is given by a standard model. Applying Proposition \ref{prop:CapTorsionEmpty} to all branches of $D(C)_V$ we get that the branches of $D(C)_V$ - and hence all components of $D(C)$ - are quasi-torsion, which is a contradiction. Let $k\in \NN$ be given and choose $k$ disjoint analytically open sets $V_1,\ldots,V_k\subset U$. Then by the above there are $m_1,\ldots, m_k\equiv 1\mod d_0$ such that $D(C)$ has an isolated torsion point of order $m_i$ over some $t_i\in V_i$. Denote $m = \prod m_i$. Then by assumption the map $C\dashrightarrow g_m(C)$ is birational. Therefore $g_m(C)$ has a singularity over $t_i$ for all $i$ as $g_m|C$ maps two points of $C$ over $t_i$ to the same point in $g_m(C)$ by construction, giving a locally reducible singularity. For the last statement let $n\in\NN$ be given. We observe that $nm \equiv n\mod d_0$. Then $g_{nm}(C)$ also has at least $k$ singularities and the isomorphism $J^{nm}(X)\cong J^n(X)$ gives the result. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:MainTheorem1}] Let $R\subset X$ be a non-quasi-torsion rational curve as constructed in Theorem \ref{thm:NQTCurvesExist}. As $R$ is non-torsion, the set \begin{equation*} \{J^k(X)_t.g_k(R)\eft k\equiv 1\mod d_0\} \end{equation*} attains a minimum greater than $1$ for some $k_0$ as otherwise $R$ would be torsion. Now replace $R$ with $g_{k_0}(R)$ via the isomorphism $J^k(X)\cong J^1(X)$. Then the previous Proposition~\ref{prop:ProductionNewCurves} applies: If $R\dashrightarrow g_k(R)$ is not birational for some $k$, then $J^k(X)_t.g_k(R)< X_t.R$, a contradiction. \end{proof} \section{Density of lifted rational curves in \texorpdfstring{$\PP(\Omega_X)$}{P(OmX)}} \label{sec:JetSpace} Let $X\to \PP^1$ be an elliptic K3 surface. In this section we will examine the density in the jet space $\PP(\Omega_X)$ for lifts of curves $C$ that are constructed similarly to those in the last section. Recall that the lift $j\colon\tilde{C}\to \PP(\Omega_X) = P(\mathcal{T}_X)$ is analytically given by the pushforward of the tangent vectors. Moreover by construction we get \begin{equation*} c_1(\oO_{\PP(\Omega_X)}(1)).j_*(\tilde{C}) \le 2g(C)-2. \end{equation*} Now we will investigate the behaviour of lifts of (rational) curves in the jetspace of an elliptic K3 surface $X\to \PP^1$. Denote its index by $d_0$ and fix a line bundle $\mM\in\Pic X$ of degree $d_0$. Furthermore let $C\subset X$ be a non-quasi-torsion curve coming from Section \ref{sec:ExistenceRNQT}. For $n\in I = \{n\in\NN\eft n\equiv 1\mod d_0\}$ denote by $G_n\colon J^1(X)\dashrightarrow J^n(X)\to J^1(X)$ the multiplication map $J^1(X)\dashrightarrow J^n(X)$ composed with the isomorphism $J^n(X)\to J^1(X)$ induced by the line bundle $\mM$, i.e., fiberwise a line bundle $L\in \mathrm{Jac}^1 (X_t)$ gets mapped to \begin{equation*} L\mapsto L^{\otimes n} \mapsto L^{\otimes n} \otimes \mM|_{X_t}^{\otimes-(n-1)/d_0}. 
\end{equation*} \begin{lemma} Let $X\to\PP^1$ be an elliptic K3 surface and $\Delta\subset \PP^1$ simply connected such that \begin{equation*} \CC\times \Delta / (\ZZ\tau(t)+\ZZ, t) \cong J^0(X)_\Delta \to\Delta \end{equation*} is a standard model. Then we may choose an isomorphism $J^1(X)_\Delta \to J^0(X)_\Delta$ such that under this identification $G_n$ is given by \begin{equation*} (z,t)\mapsto (nz,t). \end{equation*} \end{lemma} \begin{proof} The line bundle $\mM$ induces a section $S\subset J^{d_0}(X)$ and we denote by $H$ the preimage of $S_\Delta$ under the smooth multiplication map $J^1(X)_\Delta\to J^{d_0}(X)_\Delta$. Then $H$ decomposes into a disjoint union of $d_0^2$ branches and picking one branch $h$ induces an isomorphism $J^1(X)_\Delta \to J^0(X)_\Delta$: Every point $h_t$ of $h$ over $t\in \Delta$ corresponds to a line bundle $L$ on $X_t$ of degree $1$ such that $L^{\otimes d_0} = \mM|_{X_t}$ and substracting this line bundle fiberwise yields the desired map. Viewing $G_n$ as a map $J^0(X)_\Delta\to J^0(X)_\Delta$ via this isomorphism a line bundle $L'$ on $J^0(X_t)$ gets mapped to \begin{equation*} L' \mapsto (L'\otimes L)^{\otimes n} \otimes \mM^{\otimes (n-1)/d_0} \otimes L^{-1}= L'^{\otimes n}, \end{equation*} and we are done. \end{proof} \begin{remark} \label{rem:TangentSpace} Let $X=(\CC\times\Delta)/(\ZZ\tau(t)+\ZZ,t)\to\Delta$ be a standard model and $p = (x\tau(t)+y,t)\in X$ a point. Then we can naturally choose an isomorphism of the tangent spaces \begin{equation*} T_pX \cong T_{(x\tau(t)+y,t)}\CC\times \Delta \cong \CC\times \CC. \end{equation*} For a given deck transformation $(z,t)\mapsto(z+a\tau(t)+b,t)$ the induced isomorphism on $T_pX \cong \CC\times\CC$ is given by \begin{equation*} (z,t) \mapsto (z+a\partial_t\tau(t), t). \end{equation*} \end{remark} The multiplication map $G_n$ is very similar to the maps $g_n$ from the last section. The difference becomes necessary as we really need to consider \emph{self}-rational maps of K3 surfaces in the following. We will show that the union of curves $G_n(C)$ lifted to the jet space $\PP(\Omega_X)$ are Zariski-dense. In particular if we take any rational non-quasi-torsion rational multisection from Section \ref{sec:ExistenceRNQT} the following proves Theorem \ref{thm:MainTheorem2}. \begin{theorem} Let $X\to\PP^1$ be an elliptic K3 surface of index $d_0$ and $C$ be a non-quasi-torsion curve. Then the curves $G_n(C)\; (n\equiv 1\mod d_0)$ lifted to $\PP(\Omega_X)$ form a dense subset in the Zariski topology. \end{theorem} \begin{proof} Denote the projection by $\operatorname{pr}\colon \PP(\Omega_X)\to X$. It suffices to show that given any open subset $V\subset U\subset \PP^1$ there is a point $p\in C_V$ such that $\operatorname{pr}^{-1}(p)$ intersects the union of the lifts of the $G_n(C)$ at infinitely many points. By shrinking $V$ we may assume by the previous lemma that $X$ is given by a standard model \begin{equation*} X \cong (\CC\times\Delta)/(\ZZ\tau(t)+\ZZ,t), \end{equation*} the map $G_n$ is given by $(z,t)\mapsto (nz,t)$, and $C$ is smooth and locally given by $(f(t),t)$ for some holomorphic function $f\colon \Delta\to \CC$. As $C$ is non-quasi-torsion by assumption the curve $G_{d_0}(C)$ is non-quasi-torsion as well and we can apply that its torsion points are dense, see Corollary \ref{cor:TorsionDense}. 
This means that there exist $t_j\in V$ and $n_j\in I = \{n\in \NN\eft n\equiv 1\mod d_0\}$ such that \begin{equation} \label{eq:TosionOfF} (n_j-1)f(t_j) = a_j\tau(t_j)+b_j \end{equation} for some $a_j,b_j\in\ZZ$. Then for every $k\in \NN$ the $n_j^kf(t_j)$ satisfy \begin{equation*} n_j^k f(t_j) = n_j^{k-1}(a_j\tau(t_j)+b_j) + n_j^{k-1} f(t_j) = f(t_j) + (a_j\tau(t_j)+b_j)\tfrac{1-n_j^{k}}{1-n_j}, \end{equation*} where the last equality follows by induction. Therefore for $p,q>0$ the curve $G_{n_j^p}(C)$ intersects $G_{n_j^q}(C)$ over $t_j\in V$. Assume that for almost all indices $j$ there exist $p>q>0$ such that the tangent directions of $G_{n_j^p}(C)$ and $G_{n_j^q}(C)$ are the same over $t_j$. Then by Remark~\ref{rem:TangentSpace} this is equivalent to \begin{equation* n_j^q\partial_t f(t_j)-a_j\tfrac{n_j^q-1}{n_j-1}\partial_t \tau(t_j) = n_j^p\partial_t f(t_j)- a_j\tfrac{n_j^p-1}{n_j-1}\partial_t \tau(t_j). \end{equation*} In other words \begin{equation*} \partial_t f(t_j) = \tfrac{a_j}{n_j-1}\partial_t\tau(t_j) \end{equation*} independently of $p,q$. In the isotrivial case this means that $f$ is constant as $\partial_t \tau = 0$. If this was the case for all branches of $C$ over $V$ then the curve would be quasi-torsion, a contradiction. In the non-isotrivial case the holomorphic function $\frac{\partial_t f}{\partial_t \tau}$ maps to $\RR$. Therefore it is constant as well and $a = \tfrac{a_j}{n_j-1}$ does not depend on $j$. Then on the other hand equation \eqref{eq:TosionOfF} yields that the holomorphic function $f-a\tau$ also maps to $\RR$ and therefore $b= \tfrac{b_j}{n_j-1}$ is independent of $j$ as well. But this in turn yields that $C$ is quasi-torsion if this was the case for every branch over $V$, and hence we are done. \end{proof} \section{Applications} \label{sec:Applications} \noindent As we will see the last section provides a simple tool to prove Kobayashi's Theorem in the special case of elliptic K3 surfaces. \begin{corollary}[Kobayashi's Theorem] \label{thm:Kobayashi} Let $X$ be an elliptic K3 surface. Then \begin{equation*} \hh^0(X,\operatorname{Sym}^n \Omega_X) = 0 \end{equation*} for all $n>0$. \end{corollary} \begin{proof} Let $\PP(\Omega_X)$ be the first jet-space of $X$. Then we have the equality \begin{equation*} \hh^0(\PP(\Omega_X), \oO(n)) = \hh^0(X,\operatorname{Sym}^n \Omega_X). \end{equation*} But we know from the last section that there are rational curves $R_i\subset X$ such that the union of their lifts is Zariski-dense in the jet space. But by construction $c_1(\oO(n)).R_i< 0$ and hence $\oO(n)$ is not effective. \end{proof} \noindent We would also like to mention the following corollary on the density of rational curves for all elliptic K3 surfaces in the usual topology. For a Baire-general K3 surface this was achieved in \cite{chen2013density}. Moreover in {\itshape loc.\ cit.}\ the following theorem has been proven: \begin{theorem}[{\cite[Theorem 1.6]{chen2013density}}] Let $X\to\PP^1$ be an elliptic K3 surface. If there is a non-torsion rational multisection then the union of rational curves is dense in the usual topology. \end{theorem} \noindent Using Theorem~\ref{thm:NQTCurvesExist} we directly get the following stronger result: \begin{corollary} \label{cor:DensityUsual} Let $X\to\PP^1$ be an arbitrary projective elliptic K3 surface. Then the union of rational curves is dense in the usual topology. \end{corollary}
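\noindent Indeed, by Theorem~\ref{thm:NQTCurvesExist} there is, possibly after replacing the given fibration by another elliptic fibration $p'\colon X\to\PP^1$, a rational non-quasi-torsion curve on $X$. Since every torsion multisection is quasi-torsion, this curve is a non-torsion rational multisection for $p'$, so the theorem of Chen--Lewis above applies to $p'\colon X\to\PP^1$ and yields the density of the union of rational curves on $X$ in the usual topology.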
\section{Introduction} \blfootnote{ % % % % % \hspace{-0.65cm} This work is licensed under a Creative Commons Attribution 4.0 International License. License details: \url{http://creativecommons.org/licenses/by/4.0/}. } Distributed representation learned with neural networks has shown to be effective in modeling natural language at different granularities. Learning representation for words~\cite{DBLP:conf/nips/BengioDV00,DBLP:journals/corr/abs-1301-3781,DBLP:conf/emnlp/PenningtonSM14}, for example, has achieved notable success. Much remains to be done to model larger spans of text such as sentences or documents. The approaches to computing sentence embedding generally fall into two categories. The first consists of learning sentence embedding with unsupervised learning, e.g., auto-encoder-based models~\cite{DBLP:conf/emnlp/SocherPHNM11}, Paragraph Vector~\cite{DBLP:conf/icml/LeM14}, SkipThought vectors~\cite{DBLP:conf/nips/KirosZSZUTF15}, FastSent~\cite{DBLP:conf/naacl/HillCK16}, among others. The second category consists of models trained with supervised learning, such as convolution neural networks (CNN)~\cite{DBLP:conf/emnlp/Kim14,DBLP:conf/acl/KalchbrennerGB14}, recurrent neural networks (RNN)~\cite{DBLP:conf/emnlp/ConneauKSBB17,DBLP:conf/emnlp/BowmanAPM15}, and tree-structure recursive networks~\cite{D13-1170,DBLP:conf/icml/ZhuSG15,DBLP:conf/acl/TaiSM15}, just to name a few. Pooling is an essential component of a wide variety of sentence representation and embedding models. For example, in recurrent-neural-network-based models, pooling is often used to aggregate hidden states at different time steps (i.e., words in a sentence) to obtain sentence embedding. Convolutional neural networks (CNN) also often uses max or mean pooling to obtain a fixed-size sentence embedding. In this paper we explore generalized pooling methods to enhance sentence embedding. Specifically, by extending scalar self-attention models such as those proposed in~\newcite{DBLP:journals/corr/LinFSYXZB17}, we propose vector-based multi-head attention, which includes the widely used max pooling, mean pooling, and scalar self-attention itself as special cases. On one hand, the proposed method allows for extracting different aspects of the sentence into multiple vector representations through the multi-head mechanism. On the other, it allows the models to focus on one of many possible interpretations of the words encoded in the context vector through the vector-based attention mechanism. In the proposed model we design penalization terms to reduce redundancy in multi-head attention. We evaluate the proposed model on three different tasks: natural language inference, author profiling, and sentiment classification. The experiments show that the proposed model achieves significant improvement over strong sentence-encoding-based methods, resulting in state-of-the-art performances on four datasets. The proposed approach can be easily implemented for more problems than we discuss in this paper. \section{Related Work} There exist in the literature much previous work for sentence embedding with supervised learning, which mostly use RNN and CNN as building blocks. For example,~\newcite{DBLP:conf/emnlp/BowmanAPM15} used BiLSTMs as sentence embedding for natural language inference task.~\newcite{DBLP:conf/emnlp/Kim14} used CNN with max pooling for sentence classification. More complicated neural networks were also proposed for sentence embedding. 
For example,~\newcite{D13-1170} introduced Recursive Neural Tensor Network (RNTN) over parse trees to compute sentence embedding for sentiment analysis.~\newcite{DBLP:conf/icml/ZhuSG15} and~\newcite{DBLP:conf/acl/TaiSM15} proposed tree-LSTM.~\newcite{DBLP:conf/eacl/YuM17a} proposed a memory augmented neural networks, called Neural Semantic Encoder (NSE), as sentence embedding for natural language understanding tasks. Some recent research began to explore inner/self-sentence attention mechanism for sentence embedding, which can be classified into two categories: self-attention network and self-attention pooling.~\newcite{DBLP:conf/emnlp/0001DL16} proposed an intra-sentence level attention mechanism on the base of LSTM, called LSTMN. For each step in LSTMN, it calculated the attention between a certain word and its previous words.~\newcite{DBLP:conf/nips/VaswaniSPUJGKP17} proposed a self-attention network for the neural machine translation task. The self-attention network uses multi-head scaled dot-product attention to represent each word by weighted summation of all word in the sentence.~\newcite{DBLP:journals/corr/abs-1709-04696} proposed DiSAN, which is composed of a directional self-attention with temporal order encoded.~\newcite{DBLP:journals/corr/abs-1801-10296} proposed reinforced self-attention network (ReSAN), which integrate both soft and hard attention into one context fusion with reinforced learning. Self-attention pooling has also been studied in previous work.~\newcite{DBLP:journals/corr/LiuSLW16} proposed inner-sentence attention based pooling methods for sentence embedding. They calculate scalar attention between the LSTM states and the mean pooling using multi-layer perceptron (MLP) to obtain the vector representation for a sentence.~\newcite{DBLP:journals/corr/LinFSYXZB17} proposed a scalar structure/multi-head self-attention method for sentence embedding. The multi-head self-attention is calculated by a MLP with only LSTM states as input. There are two main differences from our proposed method; i.e., (1) they used scalar attention instead of vectorial attention, (2) we propose different penalization terms which is suitable for vector-based multi-head self-attention, while their penalization term on attention matrix is only designed for scalar multi-head self-attention.~\newcite{choi2018fine} proposed a fine-grained attention mechanism for neural machine translation, which also extend scalar attention to vectorial attention.~\newcite{DBLP:journals/corr/abs-1709-04696} proposes multi-dimensional/vectorial self-attention pooling on the top of self-attention network instead of BiLSTM. However, both of them didn't consider multi-head self-attention. \section{The Model} In this section we describe the proposed models that enhance sentence embedding with generalized pooling approaches. The pooling layer is built on a state-of-the-art sequence encoder layer. Below, we first discuss the sequence encoder, which, when enhanced with the proposed generalized pooling, achieves state-of-the-art performance on three different tasks on four datasets. \subsection{Sequence Encoder} The sequence encoder in our model takes into $T$ word tokens of a sentence ${\vect S} = (w_1, w_2, \dots, w_T)$. Each word $w_i$ is from the vocabulary ${\vect V}$. For each word we concatenate pre-trained word embedding and embedding learned from characters. The character composition model feeds all characters of the word into a convolution neural network (CNN) with max pooling~\cite{DBLP:conf/emnlp/Kim14}. 
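As an illustration only (and not the Theano code used in our experiments), the character composition step can be sketched in NumPy as follows; the filter widths (1, 3, 5), the 15-dimensional character embeddings and the 100 feature maps per filter match the setup in Section~\ref{sec:setup}, while the tanh non-linearity, the random initialization and all variable names are placeholders.
\begin{verbatim}
import numpy as np

def char_cnn_word_embedding(char_embs, filter_banks):
    """char_embs: (num_chars, d_char) character embeddings of one word.
    filter_banks: dict mapping filter width k to weights of shape
    (num_filters, k, d_char). Returns the concatenated max-pooled features."""
    outputs = []
    for width, W in filter_banks.items():
        pad = max(0, width - char_embs.shape[0])
        padded = np.pad(char_embs, ((0, pad), (0, 0)))   # zero-pad short words
        windows = np.stack([padded[i:i + width]
                            for i in range(padded.shape[0] - width + 1)])
        feats = np.einsum('tkd,fkd->tf', windows, W)     # 1-D convolution
        outputs.append(np.tanh(feats).max(axis=0))       # max pooling over time
    return np.concatenate(outputs)

# toy usage: a 4-character word, widths 1/3/5, 100 feature maps each
rng = np.random.default_rng(0)
chars = rng.normal(size=(4, 15))
banks = {k: rng.normal(scale=0.1, size=(100, k, 15)) for k in (1, 3, 5)}
print(char_cnn_word_embedding(chars, banks).shape)       # (300,)
\end{verbatim}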
The detailed experiment setup will be discussed in Section \ref{sec:setup}. The sentence ${\vect S}$ is represented as a word embedding sequence: ${\vect X} = ({\vect e}_1, {\vect e}_2, \dots, {\vect e}_T) \in \mathbb{R}^{T \times d_e}$, where $d_e$ is the dimension of word embedding which concatenates embedding obtained from character composition and pretrained word embedding. To represent words and their context in sentences, the sentences are fed into stacked bidirectional LSTMs (BiLSTMs). Shortcut connections are applied, which concatenate word embeddings and input hidden states at each layer in the stacked BiLSTM except for the first (bottom) layer. The formulae are as follows: \begin{align} \overrightarrow{\vect h}^l_t &= \text{LSTM}([{\vect e}_t;\overrightarrow{\vect h}^{l-1}_t], \overrightarrow{\vect h}_{t-1}^l) \,,\\ \overleftarrow{\vect h}^l_t &= \text{LSTM}([{\vect e}_t;\overleftarrow{\vect h}^{l-1}_t], \overleftarrow{\vect h}_{t+1}^l) \,,\\ {\vect h}_t^l &= [\overrightarrow{\vect h}_t^l;\overleftarrow{\vect h}_t^l]\,. \end{align} \noindent where hidden states ${\vect h}_t^l$ in layer $l$ concatenate two directional hidden states of LSTM at time $t$. Then the sequence is represented as the hidden states in the top layer $L$: ${\mat H}^{L} = ({\vect h}_1^L, {\vect h}_2^L,\dots, {\vect h}_T^L) \in \mathbb{R}^{T \times 2d}$. For simplicity, we ignore the superscript $L$ in the remainder of the paper. \subsection{Generalized Pooling} \subsubsection{Vector-based Multi-head Attention} To transform a variable length sentence into a fixed size vector representation, we propose a generalized pooling method. We achieve that by using a weighted summation of the $T$ LSTM hidden vectors, and the weights are vectors rather than scalars, which can control every element in all hidden vectors: \begin{align} {\mat A} = \text{softmax}({\mat W}_2\text{ReLU}({\mat W}_1 {\mat H}^\mathrm{T} + {\vect b_1} ) + {\vect b_2})^\mathrm{T} \,, \end{align} \noindent where ${\mat W}_1 \in \mathbb{R}^{d_a \times 2d}$ and ${\mat W}_2 \in \mathbb{R}^{2d \times d_a}$ are weight matrices; ${\vect b_1} \in \mathbb{R}^{d_a}$ and ${\vect b_2} \in \mathbb{R}^{2d}$ are bias, where $d_a$ is the dimension of attention network and $d$ is the dimension of LSTMs. ${\mat H} \in \mathbb{R}^{T \times 2d}$ and ${\mat A} \in \mathbb{R}^{T \times 2d}$ are the hidden vectors at the top layer and weight matrices, respectively. The softmax ensures that $({\mat A}_1, {\mat A}_2, \dots, {\mat A}_T)$ are non-negative and sum up to 1 for every element in vectors. Then we sum up the LSTM hidden states ${\mat H}$ according to the weight vectors provided by ${\mat A}$ to get a vector representation ${\vect v}$ of the input sentence. However, the vector representation usually focuses on a specific component of the sentence, like a special set of related words or phrases. We extend pooling method to a multi-head way: \begin{align} \label{equ:att} {\mat A}^i &= \text{softmax}({\mat W}^i_2\text{ReLU}({\mat W}^i_1 {\mat H}^\mathrm{T} + {\vect b^i_1} ) + {\vect b^i_2})^\mathrm{T} \,, \forall i \in {1,\dots, I} \,, \\ {\vect v}^i &= \sum_{t=1}^{T}{\vect a}^i_t \odot {\vect h^i_t} \,, \forall i \in {1,\dots, I} \,, \end{align} \noindent where ${\vect a}^i_t$ indicates the vectorial attention from ${\mat A}^i$ for the $t$-th token in $i$-th head and $\odot$ is the element-wise product (also called the Hadamard product). 
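To make the pooling operation concrete, the following NumPy sketch computes the per-head summaries ${\vect v}^1,\dots,{\vect v}^I$ for a single sentence. The parameter shapes follow Equation~\ref{equ:att}, whereas the toy dimensions, the random initialization and the variable names are purely illustrative and do not correspond to our Theano implementation.
\begin{verbatim}
import numpy as np

def softmax_over_time(scores):
    # scores has shape (2d, T); normalize along the T axis for every element
    scores = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

def multi_head_vector_pooling(H, params):
    """H: (T, 2d) top-layer BiLSTM states; params: one dict per head with
    W1 (d_a, 2d), b1 (d_a,), W2 (2d, d_a), b2 (2d,). Returns [v^1, ..., v^I]."""
    heads = []
    for p in params:
        pre = p['W2'] @ np.maximum(p['W1'] @ H.T + p['b1'][:, None], 0.0) \
              + p['b2'][:, None]
        A = softmax_over_time(pre).T          # (T, 2d), weights sum to 1 over time
        heads.append((A * H).sum(axis=0))     # element-wise weights, summed over t
    return heads

# toy usage: T = 7 steps, 2d = 8, d_a = 6, I = 5 heads
rng = np.random.default_rng(0)
H = rng.normal(size=(7, 8))
params = [dict(W1=rng.normal(size=(6, 8)), b1=np.zeros(6),
               W2=rng.normal(size=(8, 6)), b2=np.zeros(8)) for _ in range(5)]
print(len(multi_head_vector_pooling(H, params)))   # 5 heads, each of dimension 8
\end{verbatim}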
Thus the final representation is a concatenated vector ${\vect v} = [{\vect v}^1;{\vect v}^2;\dots;{\vect v}^I]$, where each ${\vect v}^i$ captures different aspects of the sentence. For example, some heads of vectors may represent the predicate of sentence and other heads of vectors represent argument of the sentence, which enhances representation of sentences obtained in single-head attention. \subsubsection{Penalization Terms} To reduce the redundancy of multi-head attention, we design penalization terms for vector-based multi-head attention in order to encourage the diversity of summation weight across different heads of attention. We propose three types of penalization terms. \paragraph{Penalization Term on Parameter Matrices} The first penalization term is applied to parameter matrix ${\mat W}^i_1$ in Equation~\ref{equ:att}, as shown in the following formula: \begin{align} P &= \mu \sum_{i=1}^{I}\sum_{j=i+1}^{I}\max(\lambda -\lVert {\mat W}^i_1 - {\mat W}^j_1 \rVert_\mathrm{F}^2, 0) \,. \end{align} Intuitively, we encourage different heads to have different parameters. We maximum the \textit{Frobenius} norm of the differences between two parameter matrices, resulting in encouraging the diversity of different heads. It has no further bonus when the \textit{Frobenius} norm of the difference of two matrices exceeds the a threshold $\lambda$. Similar to adding an L2 regularization term on neural networks, the penalization term $P$ will be added to the original loss with a weight of $\mu$. Hyper-parameters $\lambda$ and $\mu$ need to be tuned on a development set. We can also add constrains on ${\mat W}^i_2$ in a similar way, but we did not observe further improvement in our experiments. \paragraph{Penalization Term on Attention Matrices} The second penalization term is added on attention matrices. Instead of using $\lVert{\mat A} {\mat A}^\mathrm{T} - {\mat I}\rVert_\mathrm{F}^2$ to encourage the diversity for scalar attention matrix as in~\newcite{DBLP:journals/corr/LinFSYXZB17}, we propose the following formula to encourage the diversity for vectorial attention matrices. The penalization term on attention matrices is \begin{align} P = \mu \sum_{i=1}^{I}\sum_{j=i+1}^{I}\max(\lambda -\lVert {\mat A}^i - {\mat A}^j \rVert_\mathrm{F}^2, 0) \,, \end{align} \noindent where $\lambda$ and $\mu$ are hyper-parameters which need to be tuned based on a development set. Intuitively, we try to encourage the diversity of any two different ${\mat A}^i$ under the threshold $\lambda$. \paragraph{Penalization Term on Sentence Embeddings} In addition, we propose to add a penalization term on multi-head sentence embedding ${\vect v}^i$ directly as follows: \begin{align} P = \mu \sum_{i=1}^{I}\sum_{j=i+1}^{I}\max(\lambda - \lVert {\vect v}^i - {\vect v}^j \rVert_2^2, 0) \,, \end{align} \noindent where $\lambda$ and $\mu$ are hyper-parameters. Here we try to maximize the $l^2$-norm of any two different heads of sentence embeddings under the threshold $\lambda$. \subsection{Top-layer Classifiers} The output of pooling is fed to a top-layer classifier to solve different problems. In this paper we evaluate our sentence embedding models on three different tasks: natural language inference (NLI), author profiling, and sentiment classification, on four datasets. The evaluation covers two typical types of problems. The author profiling and sentiment tasks classify individual sentences into different categories and the two NLI tasks classify sentence pairs. 
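Before detailing the two classifier settings, we note that all three penalization terms above share the same pairwise hinge structure; the following NumPy sketch (again illustrative only, not our Theano implementation) makes this explicit. The input list can hold the parameter matrices ${\mat W}^i_1$, the attention matrices ${\mat A}^i$, or the sentence embeddings ${\vect v}^i$, and $\lambda$ and $\mu$ are the hyper-parameters tuned in Section~\ref{sec:setup}.
\begin{verbatim}
import numpy as np
from itertools import combinations

def diversity_penalty(per_head_items, lam=1.0, mu=1e-3):
    """per_head_items: list of arrays, one per head (W1^i, A^i or v^i).
    Returns mu * sum over head pairs of max(lam - ||x_i - x_j||^2, 0),
    where ||.|| is the Frobenius norm (the l2 norm for vectors)."""
    total = 0.0
    for x, y in combinations(per_head_items, 2):
        total += max(lam - float(np.sum((x - y) ** 2)), 0.0)
    return mu * total

# toy usage with 5 head embeddings of dimension 8
rng = np.random.default_rng(1)
vs = [rng.normal(size=8) for _ in range(5)]
print(diversity_penalty(vs))   # added to the cross-entropy loss during training
\end{verbatim}
As described above, this quantity is simply added to the training loss, in the same way an L2 regularization term would be.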
For the NLI tasks, to enhance the relationship between sentence pairs, we concatenate the embeddings of two sentences with their absolute difference and element-wise product~\cite{DBLP:conf/acl/MouMLX0YJ16} as the input to the multilayer perceptron (MLP) classifier: \begin{equation} {\vect v} = [{\vect v}_a;{\vect v}_b;\lvert{\vect v}_a - {\vect v}_b\rvert; {\vect v}_a \odot {\vect v}_b] \,, \end{equation} \noindent where $\odot$ is the element-wise product. The MLP has two hidden layers with \textit{ReLU} activation with shortcut connections and a \textit{softmax} output layer. The entire model is trained end-to-end through minimizing the cross-entropy loss. Note that for the two classification tasks on individual sentences (i.e., the author profiling and sentiment classification task), we use the same MLP classifiers described above for sentence pair classification. But instead of concatenating two sentences, we directly feed a sentence embedding into MLP. \section{Experimental Setup} \label{sec:setup} \subsection{Data} \paragraph{SNLI} The SNLI~\cite{DBLP:conf/emnlp/BowmanAPM15} is a large dataset for natural language inference. The task detects three relationships between a premise and a hypothesis sentence: the premise entails the hypothesis (\textit{entailment}), they contradict each other (\textit{contradiction}), or they have a neutral relation (\textit{neutral}). We use the same data split as in~\newcite{DBLP:conf/emnlp/BowmanAPM15}, i.e., 549.367 samples for training, 9,842 samples for development and 9,824 samples for testing. \paragraph{MultiNLI} MultiNLI~\cite{DBLP:journals/corr/WilliamsNB17} is another natural language inference dataset. The data are collected from a broader range of genres such as fiction, letters, telephone speech, and 9/11 reports. Half of these 10 genres are used in training while the rest are not, resulting in-domain and cross-domain development and test sets used to test NLI systems. We use the same data split as in~\newcite{DBLP:journals/corr/WilliamsNB17}, i.e., 392,702 samples for training, 9,815/9,832 samples for in-domain/cross-domain development, and 9,796/9,847 samples for in-domain/cross-domain testing. Note that, we do not use SNLI as an additional training/development set in our experiments. \paragraph{Age Dataset} To compare our models with that of~\newcite{DBLP:journals/corr/LinFSYXZB17}, we use the same Age dataset in our experiment here, which is an Author Profiling dataset. The dataset are extracted from the Author Profiling dataset\footnote{http://pan.webis.de/clef16/pan16-web/author-profiling.html}, which consists of tweets from English Twitter. The task is to predict the age range of authors of input tweets. The age range are split into 5 classes: 18-24, 25-34, 35-49, 50-64, 65+. We use the same data split as in~\newcite{DBLP:journals/corr/LinFSYXZB17}, i.e., 68,485 samples for training, 4,000 for development, and 4,000 for testing. \paragraph{Yelp Dataset} The Yelp dataset\footnote{https://www.yelp.com/dataset/challenge} is a sentiment analysis task, which takes reviews as input and predicts the level of sentiment in terms of the number of stars, from 1 to 5 stars, where 5-star means the most positive. We use the same data split as in~\newcite{DBLP:journals/corr/LinFSYXZB17}, i.e., 500,000 samples for training, 2,000 for development, and 2,000 for testing. \subsection{Training Details} We implement our algorithm with Theano~\cite{2016arXiv160502688short} framework. 
We use the development set (in-domain development set for MultiNLI) to select models for testing. To help replicate our results, we publish our code\footnote{https://github.com/lukecq1231/generalized-pooling}, which is developed from our codebase for multiple tasks~\cite{DBLP:conf/acl/ChenZLIW18,DBLP:conf/acl/ChenZLWJI17,DBLP:conf/ijcai/ChenZLWJ16,Zhang:qa:2017}. Specifically, we use Adam~\cite{DBLP:journals/corr/KingmaB14} for optimization. The initial learning rate is 4e-4 for SNLI and MultiNLI, 2e-3 for Age dataset, 1e-3 for Yelp dataset. For SNLI and MultiNLI dataset, stacked BiLSTMs have 3 layers. For Age and Yelp dataset, stacked BiLSTMs have 1 layer. The hidden states of BiLSTMs for each direction and MLP are 300 dimension, except for SNLI whose dimensions are 600. We clip the norm of gradients to make it smaller than 10 for SNLI and MultiNLI, and 0.5 for Age and Yelp dataset. The character embedding has 15 dimensions, and 1D-CNN filters lengths are 1, 3 and 5, respectively. Each filter has 100 feature maps, resulting in 300 dimensions for character-composition embedding. We initialize word-level embedding with pre-trained \textit{GloVe-840B-300D} embeddings~\cite{DBLP:conf/emnlp/PenningtonSM14} and initialize out-of-vocabulary words randomly with a Gaussian distribution. The word-level embedding is fixed during training for SNLI and MultiNLI dataset, but updated during training for Age and Yelp dataset, which is determined by the performance on development sets. The mini-batch size is 128 for SNLI and 32 for the rest. We use 5 heads generalized pooling for all tasks. And $d_a$ is 600 for SNLI and 300 for the other datasets. For the penalization term, we choose $\lambda = 1$; the penalization weight $\mu$ is selected from [1,1e-1,1e-2,1e-3,1e-4] based on performances on the development sets. \section{Experimental Results} \subsection{Overall Performance} For the NLI tasks, there are many ways to add cross-sentence~\cite{DBLP:journals/corr/RocktaschelGHKB15,DBLP:conf/emnlp/ParikhT0U16,DBLP:conf/acl/ChenZLWJI17} level attention. To ensure the comparison is fair, we only compare methods that use sentence-encoding-based models; i.e., cross-sentence attention is not allowed. Note that this follows the setup in the RepEval-2017 Shared Task. Table~\ref{tab:snli} shows the results of different models for NLI, consisting of results of previous work on sentence-encoding-based models, plus the performance of our baselines and that of the model proposed in this paper. We have three additional baseline models: the first uses max pooling on top of BiLSTM, which achieves an accuracy of 85.3\%; the second uses mean pooling on top of BiLSTM, which achieves an accuracy of 85.0\%; the third uses last pooling, i.e., concatenating the last hidden states of forward and backward LSTMs, which achieves an accuracy of 84.9\%. Instead of using heuristic pooling methods, the proposed sentence-encoding-based model with generalized pooling achieves a new state-of-the-art accuracy of 86.6\% on the SNLI dataset; the improvement over the baseline with max pooling is statistically significant under the one-tailed paired t-test at the 99.999\% significance level. The previous state-of-the-art model ReSAN~\cite{DBLP:journals/corr/abs-1801-10296} used a hybrid of hard and soft attention model with reinforced learning achieved an accuracy of 86.3\%. \begin{table}[t!] 
\renewcommand{\arraystretch}{0.9} \begin{center} \scalebox{1}{ \begin{tabular}{|l|r|} \hline
\bf Model & \bf Test \\ \hline
100D LSTM~\cite{DBLP:conf/emnlp/BowmanAPM15} & 77.6 \\
300D LSTM~\cite{DBLP:conf/acl/BowmanGRGMP16} & 80.6 \\
1024D GRU~\cite{DBLP:journals/corr/VendrovKFU15} & 81.4 \\
300D Tree CNN~\cite{DBLP:conf/acl/MouMLX0YJ16} & 82.1 \\
600D SPINN-PI~\cite{DBLP:conf/acl/BowmanGRGMP16} & 83.3 \\
600D BiLSTM~\cite{DBLP:journals/corr/LiuSLW16} & 83.3 \\
300D NTI-SLSTM-LSTM~\cite{DBLP:conf/eacl/YuM17} & 83.4 \\
600D BiLSTM intra-attention~\cite{DBLP:journals/corr/LiuSLW16} & 84.2 \\
600D BiLSTM self-attention~\cite{DBLP:journals/corr/LinFSYXZB17} & 84.4 \\
4096D BiLSTM max pooling~\cite{DBLP:conf/emnlp/ConneauKSBB17} & 84.5 \\
300D NSE~\cite{DBLP:conf/eacl/YuM17a} & 84.6 \\
600D BiLSTM gated-pooling~\cite{DBLP:conf/repeval/ChenZLWJI17} & 85.5 \\
300D DiSAN~\cite{DBLP:journals/corr/abs-1709-04696} & 85.6 \\
300D Gumbel TreeLSTM~\cite{DBLP:journals/corr/ChoiYL17} & 85.6 \\
600D Residual stacked BiLSTM~\cite{DBLP:conf/repeval/NieB17} & 85.7 \\
300D CAFE~\cite{DBLP:journals/corr/abs-1801-00102} & 85.9 \\
600D Gumbel TreeLSTM~\cite{DBLP:journals/corr/ChoiYL17} & 86.0 \\
1200D Residual stacked BiLSTM~\cite{DBLP:conf/repeval/NieB17} & 86.0 \\
300D ReSAN~\cite{DBLP:journals/corr/abs-1801-10296} & \textbf{86.3} \\ \hline
1200D BiLSTM max pooling & \underline{85.3} \\
1200D BiLSTM mean pooling & 85.0 \\
1200D BiLSTM last pooling & 84.9 \\
1200D BiLSTM \textbf{generalized pooling} & \textbf{86.6} \\ \hline
\end{tabular} } \end{center} \caption{Accuracies of the models on the SNLI dataset. } \label{tab:snli} \end{table}
Table~\ref{tab:multinli} shows the results of different models on the MultiNLI dataset. The first group shows the results of previous sentence-encoding-based models. The proposed model with generalized pooling achieves an accuracy of 73.8\% on the in-domain test set and 74.0\% on the cross-domain test set; both improve over the baselines using max pooling, mean pooling, and last pooling. In addition, the result on the cross-domain test set yields a new state of the art with an accuracy of 74.0\%, which is better than the 73.6\% of the shortcut-stacked BiLSTM~\cite{DBLP:conf/repeval/NieB17}.
\begin{table}[t!]
\centering \begin{tabular}{|l|c|c|} \hline
\textbf{Model} & \textbf{Yelp} & \textbf{Age} \\ \hline
BiLSTM max pooling~\cite{DBLP:journals/corr/LinFSYXZB17} & 61.99 & 77.30 \\
CNN max pooling~\cite{DBLP:journals/corr/LinFSYXZB17} & 62.05 & 78.15 \\
BiLSTM self-attention~\cite{DBLP:journals/corr/LinFSYXZB17} & \textbf{64.21} & \textbf{80.45} \\ \hline
BiLSTM max pooling & 65.00 & \underline{82.30}\\
BiLSTM mean pooling & \underline{65.30} & 81.78 \\
BiLSTM last pooling & 64.95 & 81.10\\
BiLSTM \textbf{generalized pooling} & \textbf{66.55} & \textbf{82.63}\\ \hline
\end{tabular} \caption{Accuracies of the models on the Yelp and Age dataset. } \label{tab:yelp_age} \end{table}
Table~\ref{tab:yelp_age} shows the results of different models for the Yelp and the Age datasets. The BiLSTM with self-attention proposed by~\newcite{DBLP:journals/corr/LinFSYXZB17} achieves better results than the CNN and the BiLSTM with max pooling. One of our baseline models, using max pooling on BiLSTM, achieves accuracies of 65.00\% and 82.30\% on the Yelp and the Age dataset respectively, which are already better than those of the self-attention model proposed by~\newcite{DBLP:journals/corr/LinFSYXZB17}. We also show the results of the baselines with mean pooling and last pooling; among the three baselines, mean pooling achieves the best result on the Yelp dataset and max pooling the best on the Age dataset. Our proposed generalized pooling method obtains further improvement on these already strong baselines, achieving 66.55\% on the Yelp dataset and 82.63\% on the Age dataset (statistically significant with $p < 0.00001$ against the best baselines), which are also new state-of-the-art performances on these two datasets.
\subsection{Detailed Analysis} \paragraph{Effect of Multiple Vectors/Scalars}
To compare the difference between vector-based attention and scalar attention, we draw the learning curves of different models using different heads on the SNLI development dataset without penalization terms, as shown in Figure~\ref{fig:curve}. The green lines indicate scalar self-attention pooling added on top of the BiLSTMs, the same as in~\newcite{DBLP:journals/corr/LinFSYXZB17}, and the blue lines indicate the vector-based attention used in our generalized pooling method. The vector-based attention achieves a clear improvement over scalar attention. Different line styles indicate self-attention with different numbers of heads (1, 3, 5, 7, and 9). For vector-based attention, the 9-head model achieves the best accuracy of 86.8\% on the development set. For scalar attention, the 7-head model achieves the best accuracy of 86.4\% on the development set.
\begin{figure}[!htb] \centering \includegraphics[width=0.5\linewidth]{curve} \caption{The effect of the number of heads and vectors/scalars in sentence embedding. The vertical axis indicates the development-set accuracy and the horizontal axis indicates training epochs. Numbers in the legend are the number of heads.} \label{fig:curve} \end{figure}
\paragraph{Effect of Penalization Terms} To analyze the effect of penalization terms, we show the results with/without penalization terms on the four datasets in Table~\ref{tab:penalization}. Without using any penalization terms, the proposed generalized pooling achieves an accuracy of 86.4\% on the SNLI dataset, which is already slightly better than previous models (compared to the accuracy of 86.3\% in~\newcite{DBLP:journals/corr/abs-1801-10296}).
When we use penalization on parameter matrices, the proposed model achieves a further improvement with an accuracy of 86.6\%. In addition, we observe a significant improvement on the MultiNLI, Yelp, and Age datasets after using the penalization terms. For the MultiNLI dataset, the proposed model with penalization on parameter matrices achieves accuracies of 73.8\% and 74.0\% on the in-domain and the cross-domain test set, respectively, which outperform the accuracies of 73.7\% and 73.4\% of the model without penalization. For the Yelp dataset, the proposed model with penalization on parameter matrices achieves the best results among the three penalization methods, improving the accuracy from 65.25\% without penalization to 66.55\%. For the Age dataset, the proposed model with penalization on attention matrices achieves the best accuracy of 82.63\%, compared to the 82.18\% accuracy of the model without penalization. In general, the penalization on parameter matrices is the most effective on most of these tasks, except for the Age dataset. To verify whether the penalization term $P$ discourages the redundancy in the sentence embedding, we visualize the vectorial multi-head attention accordingly. We compare two models with the same hyper-parameters, except that one is trained with penalization on attention matrices and the other without penalization. We pick a sentence from the development set of the Age data: \textit{Martin Luther King ``I was not afraid of the words of the violent, but of the silence of the honest'' }, with the gold label being the 65+ category. We plot all 5 heads of attention matrices as in Figure~\ref{fig:view}. From the figure we can tell that the model trained without the penalization term has much more redundancy between different heads of attention (Figure 3b), putting significant focus on the word ``Martin'' in the 1st, 3rd, and 5th heads, and on the word ``violent'' in the 2nd and 4th heads. However, in Figure 3a, the model with penalization shows much more variation across different heads.
\begin{table*}[t!] \centering \scalebox{1}{ \begin{tabular}{|l|c|c|c|c|c|} \hline
\textbf{Model} & \textbf{SNLI} & \multicolumn{2}{c|}{\textbf{MultiNLI}} & \textbf{Yelp}& \textbf{Age} \\
& & \textbf{IN} & \textbf{Cross} & & \\ \hline
w/ Penalization on Parameter Matrices & \textbf{86.6} & \textbf{73.8} & \textbf{74.0} & \textbf{66.55} & 82.45\\
w/ Penalization on Attention Matrices & 86.2 & 73.6 & 73.8 & 66.15 & \textbf{82.63} \\
w/ Penalization on Sentence Embeddings & 86.1 & 73.5 & 73.6 & 65.75 & 82.15 \\
w/o Penalization & 86.4 & 73.7 & 73.4 & 65.25 & 82.18\\ \hline
\end{tabular} } \caption{Performance with/without the penalization term. The penalization weight is selected from [1,1e-1,1e-2,1e-3,1e-4] on the development sets.} \label{tab:penalization} \end{table*}
\begin{figure}[!htb] \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\linewidth]{visualization_1349} \caption{Age dataset with penalization} \end{subfigure} \\ \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\linewidth]{visualization_1349_base} \caption{Age dataset without penalization} \end{subfigure} \caption{Visualization of vectorial multi-head attention. The vertical and horizontal axes indicate the source word tokens and the 600 dimensions of the attention ${\mat A}^i$ for different heads.
} \label{fig:view} \end{figure}
\section{Conclusions} In this paper, we propose a generalized pooling method for sentence embedding through vector-based multi-head attention, which includes the widely used max pooling, mean pooling, and scalar self-attention as its special cases. Specifically, the proposed model uses vectors to enrich the expressiveness of the attention mechanism and leverages proper penalty terms to reduce redundancy in multi-head attention. We evaluate the proposed approach on three different tasks: natural language inference, author profiling, and sentiment classification. The experiments show that the proposed model achieves significant improvements over strong sentence-encoding-based methods, resulting in state-of-the-art performances on four datasets. The proposed approach can be easily applied to more problems than those discussed in this paper. Our future work includes exploring more effective MLPs that exploit the structure of the multi-head vectors, inspired by the idea from~\newcite{DBLP:journals/corr/LinFSYXZB17}. Leveraging structure information from syntactic and semantic parses is another direction of interest to us.
\section*{Acknowledgments} This work was partially funded by the National Natural Science Foundation of China (Grant No. U1636201) and the Key Science and Technology Project of Anhui Province (Grant No. 17030901005). \clearpage \bibliographystyle{acl}
\section{Introduction} \label{Section:Intro} In the outermost 30\% of the solar radius, the transfer of energy towards the surface occurs via turbulent convection. To date, a comprehensive theory of solar turbulent convection from small up to global scales has not yet been formulated. In the last decades, MHD simulations \citep[see, e.g.,][]{1997ASSL..225...79N, 1998ApJ...499..914S, 2001ApJ...546..585S, 2012A&A...539A.121B} have been extensively used to mimic the uppermost convection zone, as they match very well the observations of the solar photosphere. However, only tiny regions of the Sun can be realistically simulated, because of the wide range of temporal and spatial convective scales and the limits of current computer power. A complementary approach to investigate the properties of convection on the solar quiet photosphere consists in the study of the interaction between convective flows and the small-scale magnetic fields (hereafter magnetic elements) in the interior of supergranular cells. These internetwork magnetic elements can be reasonably regarded as passive objects advected by the underlying flow, as the drag force due to the plasma kinetic energy is greater than the magnetic force they exert on the surroundings. Under this assumption (discussed in Sect.~\ref{Section:Results}), the dynamics of magnetic elements describes that of the plasma \citep[see, e.g., ][]{2011ApJ...727L..30Y, 2013SoPh..282..379B}. Tracking magnetic elements also allows us to study the onset and amplification of magnetic fields in the quiet Sun, the scales on which they organize, and the rate of interaction between fields \citep[see, e.g., ][]{2012ApJ...752...48C, 2014ApJ...787...87V}. This information is important in order to get insights, for example, on the mechanisms that contribute to heating the solar corona, such as magnetic reconnections \citep[see, e.g.,][]{1983ApJ...264..642P, 2006ApJ...652.1734V} and buffeting induced MHD waves \citep[see, e.g.,][]{Stangalini2013, 2013A&A...559A..88S}. Previous studies have tracked G-band magnetic bright points and magnetic elements from magnetograms (we refer to both of them as magnetic features), regarding them as Lagrangian probes. Under this assumption, the mean square displacement of such single magnetic features, namely $\langle\Delta l^2\rangle$, has been measured and shown to follow a power law $\langle\Delta l^2(\tau)\rangle\propto\tau^\gamma$, where time $\tau$ is defined as starting from the first detection of the magnetic feature. In particular, a spectral index $\gamma=1$ is associated with a normal diffusion (also known as random walk) with constant diffusivity $K\propto\langle\Delta l^2\rangle/\tau$. In this case, $\langle\Delta l^2\rangle$ corresponds to the variance of a Gaussian function describing the distribution of displacements. It is well known that the combined effect of the velocity field and a superposed diffusion can lead to a very large diffusion coefficient, the so-called eddy-diffusivity, which is the only relevant parameter needed to predict the long-time, large-scale diffusion properties in many applied cases \citep{1983Moffatt, Bo90, Cr91, Bi95}. On the other hand, when there is anomalous diffusion (i.e., $\gamma\neq1$), the diffusivity depends on both spatial and temporal scales, and super-diffusive ($\gamma>1$) or sub-diffusive ($\gamma<1$) regimes can arise.
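As an illustration of how such scaling exponents are estimated in practice, a minimal sketch (schematic Python with hypothetical array names and synthetic input; not the tracking pipeline used in this work) that fits $\langle\Delta l^2(\tau)\rangle\propto\tau^\gamma$ on a log-log scale is the following:
{\small
\begin{verbatim}
# Minimal sketch: estimate gamma from the mean square
# displacement of tracked features. 'tracks' is a
# hypothetical list of (n_i, 2) position arrays sampled
# at cadence dt, starting at each first detection.
import numpy as np

def spectral_index(tracks, dt, n_steps=20):
    taus, msd = [], []
    for k in range(1, n_steps + 1):
        d2 = [np.sum((tr[k] - tr[0])**2)
              for tr in tracks if len(tr) > k]
        if d2:
            taus.append(k * dt)
            msd.append(np.mean(d2))
    # <Dl^2> ~ tau^gamma: slope of a log-log fit
    gamma, _ = np.polyfit(np.log(taus), np.log(msd), 1)
    return gamma

# toy usage: random walks should give gamma ~ 1
rng = np.random.default_rng(0)
tracks = [np.cumsum(rng.normal(size=(200, 2)), axis=0)
          for _ in range(500)]
print(spectral_index(tracks, dt=90.0))
\end{verbatim}
}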
The most recent works in the literature agree on the presence of a super-diffusive regime in the quiet Sun \citep[see, e.g.,][]{2011ApJ...743..133A, 2013ApJ...770L..36G, 2014ApJ...788..137G, 2014A&A...563A.101J}. This implies that the effective diffusivity decreases when the temporal (spatial) scale is reduced \citep{2011ApJ...743..133A}, thus allowing magnetic fields to be enhanced on the very short (small) scales. Diffusive (normal or anomalous) regimes are typically well defined only in the asymptotic limit of large time and large distance, something that is very difficult to achieve in our case. In many geophysical and astrophysical situations only the transient behavior is observable and/or relevant, hence the interest in discussing in a more quantitative way the separation of pairs of magnetic elements in our observational set-up. While the diffusion of single Lagrangian probes is dominated by large scale motions, pair separation should be more universal, being affected only by the relative velocity fields on scales on the order of the distance between the magnetic elements. If we identify with $r_\tau = |\mathbf{X}^{(1)}_\tau-\mathbf{X}^{(2)}_\tau| $ the distance between two magnetic elements in our ensemble at a given time $\tau$, we will be interested in quantifying the probability distribution function (PDF), $p(r,\tau|r_0, \tau_0)$, of observing a given separation $r$ at time $\tau$ starting from an initial distance $r_0$ at time $\tau_0$. In general, we obtain \begin{equation} \label{eq:diff1} \langle\Delta r^2\rangle\equiv\langle |r_\tau-r_0|^2\rangle= \int_0^\tau \langle \delta_{r_{t}} v \delta_{r_0} v \rangle dt, \end{equation} where $\delta_r v $ is the velocity difference among the magnetic elements, the average is taken over all the considered pairs in our ensemble (see Sect.~\ref{Section:Observations}), and we have assumed that the initial position is uncorrelated with the underlying velocity field \citep{So99}. In many cases, the integral in Eq.~\ref{eq:diff1} is well behaved and in the limit of large time converges to $\propto \langle v^2\rangle \tau$, i.e., we have asymptotically a normal diffusion process with diffusivity given by the one-point velocity fluctuations along the trajectories of the magnetic elements. Nevertheless, there are many cases where this asymptotic regime is never reached, or it lies just outside the spatial and temporal limits of observation. For example, it is well known that in the presence of multi-scale non trivial statistical properties for the underlying velocity field, anomalous diffusion might develop. This is for instance the case of the celebrated Richardson diffusion \citep{Richardson} for homogeneous and isotropic turbulent flows in the range of spatial scales where the velocity obeys a Kolmogorov 1941 turbulent cascade \citep{Fr95}. In this case, a super-diffusion with $\gamma=3$ and a self-similar Richardson-like PDF is predicted and observed \citep{Ju99, Fa01, Ye04, Mo07, Sa09, Be10}. Similarly, it is known that in the presence of a spatially smooth velocity field with very long temporal correlation, there might be an anomalous (super- or sub-) diffusion \citep[see, e.g., the pioneering works of ][]{Ge85, Av92, Za93}. Finally, there might also be the possibility of observing {\it strong} anomalous diffusion, i.e., a PDF for pair separation that is not self-similar \citep{Ca99}, something that has been recently detected in turbulent flows using a very high-statistics dataset \citep{Sc12}.
In all cases, we are in the presence of a sort of strong or weak failure of the central limit theorem, i.e., the right-hand side of Eq.~\ref{eq:diff1} cannot be treated as the sum of many uncorrelated variables, either because we are exploring spatial and temporal scales too small compared with the characteristic variations of the underlying velocity, or because the velocity field itself possesses non-trivial spatial and temporal asymptotic multi-scale properties. A {\it local} (in temporal and spatial scale) effective diffusion coefficient can be defined as \begin{equation} \frac{d}{d\tau} \langle\Delta r^2\rangle=K(r_0,\tau, r), \end{equation} which gives us the typical separation speed of two magnetic elements found at separation $r$ after a time $\tau$ and with initial separation $r_0$. When the central limit theorem holds, we have $K \sim const.$, independently of the original separation $r_0$. Otherwise, different anomalous regimes arise. In particular, here we are interested in studying the effects of $r_0$ on pair separation for the case of magnetic elements in the quiet Sun, where the observational and intrinsic physical limitations do not allow us to extend the observing regimes to pairs with separations much larger than $r_0$, and therefore where only the pre-asymptotic regime is observable. For instance, for fully developed turbulent flows, it is known that the pair separation is only ballistic, namely $\langle\Delta r^2\rangle \sim t^2$, up to a maximum time, the so-called Batchelor time \citep[i.e., $t_B$, ][]{Bo06}, which depends on the initial separation and follows the law $t_B \propto r_0^{2/3}$. To our knowledge, only \citet{1998ApJ...506..439B} and \citet{2012ApJ...759L..17L} have applied pair dispersion analysis to magnetic features in the photosphere. \citet{1998ApJ...506..439B} tracked $622$ G-band bright point pairs acquired at the SVST by observing a region $29"\times29"$ wide for $70$ minutes, with a cadence of $25$ s and a spatial resolution of $\sim0".2$. They measured a spectral index consistent with $\gamma\simeq1.3$. \citet{2012ApJ...759L..17L} used NST images \citep{2010AN....331..620G} to track a maximum of $7912$ magnetic bright points in a quiet Sun region, a coronal hole, and a plage region. They measured a spectral index $\gamma\simeq1.5$ everywhere in the temporal range $10\lesssim\tau\lesssim400$ s. They interpreted this difference from known scaling as due to the imperfectly passive nature of bright points. The limited statistics prevented the aforementioned authors from carrying out a pair dispersion analysis taking into account the initial separation $r_0$. In this work, for the first time the pair separation approach is used to investigate the dynamic behavior of magnetic elements in the quiet Sun for different initial pair separations. We believe this is an important point, since it is difficult to imagine that pair separation is not affected by the initial condition for those temporal and spatial scales accessible on the Sun. We present the results obtained with $68,490$ tracked magnetic pairs.
\section{Observations and data analysis} \label{Section:Observations}
\begin{figure}[ht!] \centering \resizebox{\hsize}{!}{\includegraphics {Mean_Mag_ROI.eps}} \caption{Time-averaged magnetogram saturated at $25$ G. Only the magnetic elements inside the ROI (limited by the green circle) are considered for the analysis. A few trajectories of magnetic elements are shown in the ROI.
The asterisks mark the first detection positions; the plus signs mark the last detection positions.} \label{Fig:Mag} \end{figure}
\begin{figure*}[ht!] \centering \includegraphics{16-172_newnew_3.eps} \caption{Mean square pair separation as a function of time since the first appearance. Only the data points up to $\sim5500$ s have been used to make the fit. The error on $\gamma$ was computed as the standard deviation of the values obtained after a random subsampling of magnetic pairs. The results described in \citet{1998ApJ...506..439B} (dotted line) and \citet{2012ApJ...759L..17L} (dashed line) are superposed for comparison.} \label{Fig:SepSpectrum} \end{figure*}
\begin{figure}[ht!] \centering \resizebox{\hsize}{!}{\includegraphics {sep_range_counter_plot_N_2.eps}} \caption{ Number of magnetic pairs as a function of the initial separation. Vertical bars represent errors (see the text). } \label{Fig:InitialSep} \end{figure}
\begin{figure*}[ht] \centering \resizebox{\hsize}{!}{\includegraphics {range3_more_4.eps}} \begin{picture}(0,0) \put(40,85){\includegraphics[height=4.cm]{range3_resc_4.eps}} \end{picture} \caption{Mean square separation $\langle\Delta r^2(\tau,r_0)\rangle$ for seven different and equally spaced values of $r_0$. The black solid line corresponds to the fitting curve $\bar{y}$ of Figure \ref{Fig:SepSpectrum}. In the inset the compensated mean square separation $\langle\Delta r^2(\tau,r_0)\rangle/\bar{y}$ is shown. The errors (vertical bars) are shown only for a few data points.} \label{Fig:SepR0} \end{figure*}
\begin{figure*}[ht] \centering \subfigure[]{\includegraphics [width=7cm]{Real_pdf_separations_64_67_6_log.eps}} \subfigure[]{\includegraphics [width=7cm]{Real_pdf_separations_88_91_6_log.eps}} \subfigure[]{\includegraphics [width=7cm]{Real_pdf_separations_64_67_5_log.eps}} \subfigure[]{\includegraphics [width=7cm]{Real_pdf_separations_88_91_5_log.eps}} \caption{Time-dependent PDF for pair separation of magnetic elements (a, b), and the same rescaled to unity rms (c, d) for initial separations in the range $7.42<r_0<7.77$ Mm (a, c) and $10.21<r_0<10.56$ Mm (b, d). The shaded areas in panels (a) and (b) cover the initial separation bins. All PDFs are normalized to unit area.} \label{Fig:PDFs} \end{figure*}
\noindent The data set used in this work was described in \citet{Milan} and analyzed by \citet{2013ApJ...770L..36G, 2014ApJ...788..137G} to study the diffusion of single magnetic elements up to supergranular scales. It consists of $995$ Hinode-NFI magnetograms \citep{2007SoPh..243....3K, 2008SoPh..249..167T} with a spatial resolution of $0".3$ and a noise of $\sigma_B=6$ G for single frames. The magnetograms were co-aligned, trimmed to the same field of view (FoV, which is $\sim50$ Mm sized), and filtered to remove oscillations at $3.3$ mHz \citep{Milan}. As a consequence, the present analysis is free from effects of acoustic oscillations and atmospheric seeing, and is aimed at magnetic elements rather than magnetic proxies such as G-band bright points. The large FoV, which encloses an entire supergranule, and the high spatial resolution enabled us to investigate a wide range of spatial scales and observe a large number of magnetic elements. The series, acquired on November 2, 2010, covers 25 hours without interruption, with a cadence of $90$ s. This allowed us to study the dynamics of magnetic elements on a wide range of temporal scales (from a minute to a day). In Figure \ref{Fig:Mag}, we show the $25$ hr time-averaged magnetogram of the FoV saturated at $25$ G.
We focus on the region of interest (ROI) inside the green circle, which is centered on the center of the supergranule and has a radius of $\sim10$ Mm, such that the ROI is completely enclosed within the supergranule itself, i.e., in the internetwork region \citep{2014ApJ...788..137G}. We applied the tracking algorithm described in \citet{2004A&A...428.1007D}. The algorithm uses a variable threshold in order to overcome the loss of weak fields and resolve the clustered peaks of the largest magnetic features \citep[e.g., ][]{2005SoPh..228...81B}. We discarded all the magnetic elements with speed $v>7$ km s$^{-1}$, which is roughly the speed of sound in the photosphere. We also discarded the magnetic elements passing close to the boundary of the ROI (at a distance $\lesssim1.86$ Mm). A total of $68,490$ tracked magnetic pairs originating in the ROI have been detected. In Figure \ref{Fig:Mag} we also show, for the sake of visualization, the evolution of a few magnetic elements forming a subset of pairs.
\section{Results and discussion} \label{Section:Results}
\noindent The detected pairs have been used to investigate the nature of turbulent convection under the hypothesis of magnetic elements passively transported by the supergranular flows in the ROI. By analyzing the same data set, \citet{2013ApJ...770L..36G} found that the equipartition magnetic flux density is $B_e\simeq255$ G. Fewer than $4\%$ of the magnetic elements have an average flux greater than that value, and they are all located in the network. Therefore, the condition of passive magnetic elements is reasonably fulfilled in the internetwork regions, as demonstrated by the spectro-polarimetric studies performed by \citet{2007ApJ...670L..61O, 2012ApJ...751....2O} and \citet{2012ApJ...757...19B}. As in previous works in the literature \citep[see, e.g.,][]{2012ApJ...759L..17L}, the separation spectrum $\langle\Delta r^2\rangle$ was first computed for all the pairs of magnetic elements, regardless of their initial separation. We obtained the results shown in Figure \ref{Fig:SepSpectrum} (black diamonds). We found that the separation spectrum is best fitted by a power law $\bar{y}\propto\tau^\gamma$ with spectral index $\gamma=1.55\pm0.05$ (the red line in the same figure), in agreement with the results of \citet{2012ApJ...759L..17L}, but here extended to supergranular scales. The error on $\gamma$ was computed as the standard deviation of the values obtained after a random subsampling of the magnetic pairs. As mentioned in Sect.~\ref{Section:Intro}, the high number of magnetic elements tracked allowed us to perform for the first time the pair separation analysis for different values of the initial pair separation. By looking in greater detail at the distribution of initial separations, one easily recognizes that it is in general very broad. In Figure \ref{Fig:InitialSep} we show, for each initial separation $r_0$, the number of magnetic pairs found in the range $r_0-0.174<\Delta r<r_0+0.174$ Mm, which is $3$ Hinode-NFI pixels wide. Vertical bars in the graph represent the errors, which are mainly due to the Poissonian contribution. The peak of the curve lies between $8$ Mm and $9$ Mm ($r_{0,peak}$). By comparing Figs.
\ref{Fig:SepSpectrum} and \ref{Fig:InitialSep} one can see that the global mean displacement with respect to the initial separation, averaged over all pairs, is of the same order of magnitude as the spread in the initial distribution of $r_0$ in our sample, i.e., we have not reached any asymptotic long-time regime. Hence, it is natural to ask the question: How robust is the observed super-diffusive behavior as a function of $r_0$? As magnetic elements are not point-like, but have diameters up to $d_{min}\simeq1.86$ Mm, it is not possible to choose arbitrarily small mutual distances. Therefore, the minimum pair separation is set to $d_{min}$. Moreover, the maximum achievable separation is given by the diameter of the ROI, which is $d_{ROI}\sim20$ Mm. Thus, $r_0$ must satisfy $d_{min}\le r_0\le d_{ROI}$. We choose bins of $r_0$ large enough to collect a sufficiently high number of magnetic pairs within them, and small enough to be able to study the variation of $\langle\Delta r^2\rangle$ with $r_0$. For this purpose, we set the bin size at $348$ km (i.e., $3$ Hinode-NFI pixels), and computed $\langle\Delta r^2(\tau,r_0)\rangle$ for each bin. In Figure \ref{Fig:SepR0} we plot seven of the computed $\langle\Delta r^2(\tau,r_0)\rangle$ curves. For comparison, we also over-plot the power law behavior $\bar{y}(\tau)$ (corresponding to $\gamma=1.55$). In order to emphasize the deviations from such a law, in the inset of the same figure we plot a compensated pair separation spectrum $\langle\Delta r^2(\tau,r_0)\rangle/\bar{y}$. The errors on the data points were computed as the standard deviation of the values obtained after a random subsampling of the magnetic pairs. From these two plots we can deduce that 1) the smaller the $r_0$, the smaller the effective eddy diffusivity $\langle\Delta r^2\rangle/\tau$ for any $\tau$, and 2) the smaller the $r_0$, the smaller the value of the {\it effective} slope $\gamma$; in addition, there is a clear change in the trend for initial separations crossing the value $r_0\sim10$ Mm, which roughly corresponds to the radius of the ROI. From an observational point of view, 1) and 2) could be interpreted by taking into account the recent results in \citet{2012ApJ...758L..38O} and \citet{2014ApJ...788..137G}. In those works, the authors showed that the horizontal velocity field within a supergranule is mostly radial and directed from the center to the boundaries. Following this picture, we expect that magnetic elements starting close to each other will, on average, separate more slowly than magnetic elements starting farther away from one another. In fact, magnetic elements with a larger initial separation are more likely to be dragged along very different directions, thus separating faster. This effect naturally introduces a dependence of $\langle\Delta r^2\rangle$ on $r_0$. In particular, the systematic increase of the effective $\gamma$ from the granular to the supergranular scale suggests that the pre-asymptotic diffusion is a function of the probed spatial scale. By comparing Figures \ref{Fig:InitialSep} and \ref{Fig:SepR0}, we note that for initial separations smaller than $r_{0,peak}$ the slopes in the pair separation spectrum decrease at longer times, while for larger initial separations the slopes increase. This trend is significant, as can be seen from the errors on the data points shown in the inset in Figure \ref{Fig:SepR0}.
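A minimal sketch of this binned measurement (schematic Python with hypothetical array names and synthetic input; not the actual analysis pipeline) is the following:
{\small
\begin{verbatim}
# Minimal sketch: <Dr^2(tau, r0)> for pairs grouped by
# initial separation r0. 'pairs' is a hypothetical list
# of 1-D arrays of mutual distances r(tau) (in Mm),
# sampled at cadence dt, with r[0] = r0 for each pair.
import numpy as np

def binned_separation(pairs, dt, bin_edges, n_steps=40):
    taus = dt * np.arange(1, n_steps + 1)
    spectra = {}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        group = [r for r in pairs if lo <= r[0] < hi]
        msd = np.full(n_steps, np.nan)
        for k in range(1, n_steps + 1):
            d2 = [(r[k] - r[0])**2
                  for r in group if len(r) > k]
            if d2:
                # errors: repeat on random subsamples
                msd[k - 1] = np.mean(d2)
        spectra[(lo, hi)] = msd
    return taus, spectra

# synthetic pairs random-walking around r0 ~ 8 Mm
rng = np.random.default_rng(1)
pairs = [np.abs(8.0 + np.cumsum(
             rng.normal(scale=0.05, size=300)))
         for _ in range(2000)]
# r0 bins 0.348 Mm (3 pixels) wide, d_min to d_ROI
taus, spectra = binned_separation(
    pairs, dt=90.0,
    bin_edges=np.arange(1.86, 20.0, 0.348))
\end{verbatim}
}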
The pairs of magnetic elements with initial separation around $r_{0,peak}$ (from $\sim6.5$ to $\sim11.5$ Mm), which amount to about half of the entire population of pairs, are characterized by a separation spectrum with $\gamma$ around $\simeq1.55$. This explains why in the separation spectrum in Figure \ref{Fig:SepSpectrum}, which was retrieved by considering all the pairs of magnetic elements in the ROI (with any initial separation), there is an effective trend consistent with $\gamma\simeq1.55$ even at longer temporal scales. To further investigate the effects of $r_0$ on the pair separation, we computed the time-dependent PDF (normalized to unit area) of observing a given separation starting from the initial values of $7.42<r_0<7.77$ Mm and $10.21<r_0<10.56$ Mm, at which the change in the trend shown in Figure \ref{Fig:SepR0} is observed. The results are shown in panels (a) and (b) of Figure \ref{Fig:PDFs}. In that figure, the initial separation range is depicted as a shaded area. As we can see, the PDF broadens with time, its rms being $\sigma(\tau)$, and the peak moves to gradually increasing separations. At $\tau=900$ s the tails begin to be important, and substantially affect the pair separation spectrum, indicating that the largest separations begin to dominate. We rescaled the time-dependent PDF so that it is centered at zero, and its rms $\sigma(\tau)$ is unity \citep[see, e.g.,][]{Ju99}. To this end, we introduced the rescaled separation $q=(\Delta r-\langle\Delta r\rangle)/\sigma$, where $\langle\Delta r\rangle$ is the mean separation value, and computed PDF($q$) at each time. The correct re-normalization requires considering $\sigma$PDF($q$) instead of PDF($q$). The results are shown in panels (c) and (d) of Figure \ref{Fig:PDFs}. As we can see, the curves seem to collapse on each other, especially in the proximity of $\langle\Delta r\rangle$ (which corresponds to $q=0$). This means that the small deviations from the mean values seem to be {\it self-similar}. When it holds, self-similarity indicates that there is a single underlying distribution governing the process at any time \citep{Ju99}. However, in our case the statistics are still too limited to come to a conclusion in this sense. More data points are needed to better sample the tails of the time-dependent PDF, where any possible breaking of self-similarity can be detected.
\section{Conclusions} \label{Section:Conclusions} Dynamic processes in the solar photosphere can be studied at spatial and temporal scales from granular to supergranular by measuring the pair separation rate of quiet Sun magnetic elements. By taking advantage of uninterrupted $25$ hr magnetograms acquired by Hinode at high resolution and imaging a whole supergranule, we computed the separation spectrum of $68,490$ pairs of magnetic elements. When considering all the pairs within the supergranule, we found a spectral index $\gamma=1.55\pm0.05$, in agreement with the most recent literature, but extended to unprecedented spatial and temporal scales. Such a super-diffusive regime can be interpreted as being due to an underlying velocity field with either characteristic spatial (temporal) scales larger (longer) than the scales of observation, or non-trivial asymptotic multi-scale properties. For the first time we investigated the separation spectrum for different values of the initial pair separation of magnetic elements, $r_0$. The main conclusion is that the rate of pair separation depends on the spatial scale under consideration.
The possibility that the pre-asymptotic diffusive behavior detected here possesses non-trivial multi-scaling properties remains to be investigated; in other words, whether higher order moments do not scale proportionally to the second order moment, $\langle (\Delta r)^{2p} \rangle \neq \langle (\Delta r)^{2} \rangle^p $. This would indicate the presence of {\it strong} anomalous diffusion \citep{Ca99}, possibly connected to the presence of intermittent properties of the advecting velocity field. This ambitious goal surely represents a great challenge for future research since it can only be achieved by extending by at least one order of magnitude the statistical ensemble and the temporal window of the observation. \begin{acknowledgements} This work was supported by a PhD grant at the University of Rome, Tor Vergata. Part of this work was done while F.G. was a Visiting Scientist at Instituto de Astrof\'isica de Andaluc\'ia (CSIC). Financial support by the Spanish MEC through project AYA2012-39636-C06-05 (including European FEDER funds) is gratefully acknowledged. This work has also benefited from discussions in the Flux Emergence meetings held at ISSI, Bern in December 2011 and June 2012. The data used here were acquired in the framework of {\itshape Hinode} Operation Plan 151, entitled {\itshape Flux replacement in the solar network and internetwork}.\\ L.B. acknowledges partial funding from the European Research Council under the European Community’s Seventh Framework Programme, ERC Grant Agreement No 339032.\\ {\itshape Hinode} is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (U.K.), NASA, ESA, and NSC (Norway). \end{acknowledgements}
\section{Introduction \label{INTROD-SEC}} Distance-redshift surveys of distant Type Ia supernovae indicate that the universe has recently entered a phase of accelerating expansion \cite{Rie98}, \cite{Per99}. The origin of this acceleration is not known, and it is generally attributed to a dark energy component with negative pressure, inducing repulsive gravity and thus causing accelerated expansion \cite{Cop04}. The simplest candidate for this dark energy is the cosmological constant, $\Lambda$, which is expected to correspond to the zero-point fluctuations of the quantum fields in QFT, giving rise to a vacuum energy density $\rho_{vac}$ \cite{Sah00}. The cosmological constant enters as an additional source in the Einstein-Hilbert action, with no (or just weak) coupling to the other fields. Large scale observations put an upper bound on the $\Lambda$-term: $|\Lambda|<10^{-56}~{\rm cm^{-2}}$ or $|\rho_\Lambda | < 10^{-47}~{\rm GeV}^4$ \cite{Car92}. By contrast, theoretical estimates of various contributions to the vacuum energy density in QFT exceed this observational bound by many orders of magnitude \cite{Dol07}. In the ${\rm \Lambda}$CDM model, the equation of state parameter is a constant, $w=-1$, while in alternative dynamical models (e.g., quintessence, phantom field) this parameter is a function of time. Naturally, measurements of deviations from $w=-1$ would be decisive for a real breakthrough in cosmology. At the same time, as the Universe was once decelerating and is now accelerating, it is also useful to consider the third derivative of $a(t)$, the jerk parameter $j(t)$, to probe deviations from $w=-1$. An interesting way to use this parameter is the approach introduced by \cite{Bla04}. In that work, flat ${\rm \Lambda}$CDM models have a constant jerk, $j(a)=1$; thus, any deviation from $j=1$ measures a departure from ${\rm \Lambda}$CDM in a kinematical framework using a minimum of prior information, independent of any particular gravity theory. The jerk parameter has been used to discriminate models of dark energy and modified gravity \cite{Sah03}, \cite{Ala07}; however, its observational status is presently quite poor. A critical discussion can be found in Ref. \cite{Boc13}, where it is argued that the value of the jerk parameter cannot be converged to high precision with current data. Yet, it is clear that a thorough investigation of the possible cosmological imprints of this observable is fundamental for discriminating dark energy models with future, higher-resolution experimental data. Another approach, which has been increasingly recognized as a potential tool for constraining dynamical dark energy models, comes from probing the uniformity and constancy of fundamental couplings \cite{Mar15b}, such as the fine structure constant, $\alpha$. In this scenario, the dark energy and electromagnetic sectors are coupled through a dimensionless parameter, $\zeta$, which can be independently constrained. Recently \cite{Mar15}, a constraint at the two-sigma (95.4\%) confidence level has been obtained for this coupling, namely: $|\zeta | < 5 \times 10^{-6}$, assuming a dark energy with a constant equation of state. By relaxing this assumption, other constraints were found for a series of dynamical models \cite{Mar15b}, with bounds at the 2$\sigma$ level ranging from $|\zeta | < 5.6 \times 10^{-6}$ to $|\zeta | < (0.8 \pm 1.2) \times 10^{-4}$, depending on the model.
In this report, we investigate the predictions for the jerk parameter and possible variations of the fine structure constant, both as a function of the redshift, $z$, based on the stochastically generated, dynamical models of dark energy proposed by Chongchitnan and Efstathiou \cite{Cho07} (hereafter CE07). In this framework, viable models are selected out of solutions evolving from a quintessence-dominated regime at high z's, but which satisfy the observational constraints inferred at low z's ($z < 1$). We specifically analyse whether $j(z)$, derived from these models, differs significantly from the pure $\Lambda$CDM model ($j_{\Lambda{\rm CDM}} \equiv 1$). We also obtain constraints on possible variations of $\alpha$ based on an ``extreme'' model (a ``late time'' viable model, as explained in detail in the next sections), parameterized by the coupling $\zeta$. This paper is organized as follows: in Sec. \ref{THEORY-SEC} we briefly review the Friedmann equations cast into energy phase-space variables, define the jerk parameter as a function of the scalar field, and describe the coupling between the dark energy and electromagnetic sectors. We also summarize the method by CE07 for obtaining viable stochastic quintessence models. In Sec. \ref{RES-SEC}, we present a family of viable solutions and qualitatively analyse their behavior in the energy phase-space, classify the trajectories of selected models, and analyse the derived quantities of interest. In Sec. \ref{CONC-SEC}, we summarize our results.
\section{Theoretical framework for the stochastic models \label{THEORY-SEC}} \subsection{The jerk parameter as a function of the scalar field \label{JERK-SEC}} Starting from the Friedmann equations for a flat universe, namely, \begin{equation} \left({\dot{a}\over a}\right)^2={\kappa^2\over 3}(\rho_m + \rho_\phi), ~~ \left({\ddot{a}\over a}\right)= - {\kappa^2\over 6}(\rho_m + \rho_\phi + 3p_\phi), \label{Friedmann_Eqs} \end{equation} \noindent where $\kappa^2=m_{\rm Planck}^{-2} = 8\pi G$, and from the jerk parameter definition, \begin{equation} j(t)=+{1\over a}{d^3 a\over dt^3}\left[{1\over a}{da\over dt}\right]^{-3}, \end{equation} \noindent one can find: \begin{equation} {dp_\phi\over dt}={2H^3\over 8\pi G}(1-j), \end{equation} \noindent where $H(t)=\dot{a}/a$. For a scalar field in the FRW background, \begin{equation} \rho_\phi={1\over 2} \dot{\phi}^2 + V(\phi), ~~ p_\phi={1\over 2} \dot{\phi}^2 - V(\phi), \end{equation} \noindent where $V$ is the scalar field potential, and thus, in the slow roll approximation, we obtain: \begin{equation} j (\phi)\simeq 1 - {\kappa^2\over 2H^4}[V^\prime(\phi)]^2, \label{Jerk_Eq} \end{equation} \noindent where the prime indicates the derivative with respect to $\phi$. Note that one can follow $j$ as a function of $\phi$ once the potential is chosen, and $H\rightarrow H(\phi)$. Also note that, by using the slow-roll approximation in Eq. \ref{Jerk_Eq}, the following constraint must be satisfied (e.g., \cite{Wei08}): \begin{equation} \epsilon_V \equiv {1 \over 16 \pi G} \left ( {V^{\prime} \over V} \right ) ^2 \ll 1. \label{EPS-V} \end{equation}
\subsection{Brief review of the dynamics in the energy phase-space \label{DYN-SEC}} The phase-space approach was introduced by Copeland, Liddle and Wands (1998), and here we only state the relevant equations for completeness and to fix our notation, which follows identically the presentation of CE07. The reader is referred to these papers for further details.
We here also ignore any couplings between matter and the quintessence field, and the universe is assumed to be spatially flat as well. The canonical pair of variables in the energy phase-space is defined as: \begin{equation} x \equiv {\kappa \dot{\phi} \over \sqrt{6}H}, ~~ y \equiv {\kappa \sqrt{V} \over \sqrt{3} H}, \label{XY_Eq} \end{equation} \noindent with $x^2 + y^2 = 1 - \Omega_m -\Omega_r$, where $\Omega_m$ and $\Omega_r$ are the matter and radiation density parameters, respectively. Using the Friedmann equations (Eqs. \ref{Friedmann_Eqs}), one finds the following system of coupled differential equations in terms of the redshift ($z$): \begin{eqnarray} {dx \over dz} & = & -{1 \over 1 + z } \left [ -3x + \sqrt{3 \over 2} \lambda y^2 + 3 x^3+ {3 \over 2} x (1 +w_b ) (1 - x^2-y^2) \right ] \nonumber \\ {dy \over dz} & = & -{1 \over 1 + z } \left [ -\sqrt{3 \over 2} \lambda x y + 3 x^2 y + {3 \over 2}y(1+w_b)(1 - x^2-y^2) \right ], \label{Phase_Eqs} \end{eqnarray} \noindent where $w_b$ is the matter-radiation equation of state parameter ($w_b = 0$ in the matter-dominated era, and $w_b = 1/3$ in the radiation-dominated era), and $\lambda$ is the ``roll'' parameter, defined as: \begin{equation} \lambda \equiv -{V^{\prime} \over \kappa V}. \label{Roll_Eq} \end{equation} In these variables, the dark energy equation of state parameter is: \begin{equation} w_q = {x^2 - y^2 \over x^2 + y^2}, \end{equation} \noindent and the dark energy density parameter is: \begin{equation} \Omega_q = x^2 + y^2. \end{equation}
\subsection{The coupling between the electromagnetic and dark energy sectors \label{COUP-SEC}} The stochastic quintessence model represents a dynamical scalar field scenario, and therefore a complete theoretical description requires the specification of how it couples to all other fields of the theory. We assume a coupling of the quintessence field with the electromagnetic sector only (e.g., \cite{Cal11}, \cite{Mar15}), given by the Lagrangian: \begin{equation} \mathcal{L}_{\rm coup} = -{1 \over 4} B_{\rm F} (\phi) F_{\mu \nu} F^{\mu \nu}, \label{EQ-LAG} \end{equation} \noindent where $F_{\mu \nu}$ is the electromagnetic field tensor, and $B_{\rm F} (\phi)$ is the coupling function, for which we assume, in the present work \cite{Mar05}: \begin{equation} B_{\rm F} (\phi) = \left \{ 1 - \zeta_{\rm PC} [\Phi(z)]^q \right \} \exp [-\zeta_{\rm EC} \Phi(z)], \label{EQ-BF} \end{equation} \noindent with $q \in \mathbb{N}_+$, and the coupling parameters, $\{ \zeta_{\rm PC}, \zeta_{\rm EC}\} \in \mathbb{R}$ (dimensionless), are subject to independent observational constraints. The function $\Phi(z)$ is specified below (Eq. \ref{DA-FORM}). We then have the following possibilities: \begin{itemize} \item {Linear coupling (LC): $\{ q, \zeta_{\rm EC} \} = \{1, 0 \}$; } \item {Polynomial coupling (PC): $\{ q, \zeta_{\rm EC} \} = \{\ge 2, 0 \}$; } \item {Exponential coupling (EC): $\zeta_{\rm PC}=0$; } \item {``Mixed'' coupling (MC): $\{\zeta_{\rm PC}, \zeta_{\rm EC} \} \neq 0$. } \end{itemize} To simplify notation, we will adopt $\zeta \equiv \zeta_{\rm PC}$ (in the LC and PC cases) or $\zeta \equiv \zeta_{\rm EC}$ (in the EC case), whenever no confusion may arise. In the MC case, we always specify each $\zeta_{\rm PC}, \zeta_{\rm EC}$. Note that for small values of $\zeta_{\rm EC}$ ($e^{-\zeta \Phi} \approx 1 - \zeta \Phi$), the EC case becomes LC-like. Large values of $\zeta_{\rm EC}$ may violate observational constraints \cite{Mar05}, so we will not consider them here.
Therefore, we omit the purely EC case and consider the MC case with small values of $\zeta_{\rm EC}$ only. These couplings lead to a possible variation of the fine structure constant $\alpha$ as a function of cosmic evolution \cite{Cal11}, \cite{Mar15}: \begin{equation} {\Delta \alpha \over \alpha}(z) \equiv {\alpha(z) -\alpha(z=0) \over \alpha(z=0)} \equiv 1 - B_{\rm F}. \label{EQ-DAA} \end{equation} \noindent In terms of cosmological parameters, we have: \begin{equation} \Phi(z) \equiv \kappa (\phi -\phi_0) = \int_0^z \sqrt{3\Omega_q(z)(1+w_q(z))} {d z^{\prime} \over 1+ z^{\prime}}. \label{DA-FORM} \end{equation}
\subsection{Viable models of quintessence from stochastic models \label{VIAB-SEC}} In essence, the method of CE07 assigns stochastic values to the ``roll'' parameter (Eq. \ref{Roll_Eq}) by uniformly binning the trajectory in the energy phase-space (a solution of Eq. \ref{Phase_Eqs}) in redshift, that is, $\lambda = \lambda(\Delta z)$, where $0 \leq \lambda \leq 10$ and $\lambda$ is a random variable. Therefore, in the process of integrating the coupled equations (Eqs. \ref{Phase_Eqs}), the stochastic values assigned to $\lambda$ are used in each step. The reader is referred to their paper for details of the binning method, which includes a bin refinement for low redshifts, as we strictly follow their procedure. The second important step in their method is to select only those solutions which are compatible with tight observational constraints, namely: \begin{itemize} \item{$-1 \leq w_q \leq -0.99$, at $z=z_{\rm obs}=0.3$;} \item{$0.73 \leq \Omega_q \leq 0.74$, at $z=0$.} \end{itemize} As a third important step, we mention the derivation of the quintessence potential from the energy variables. One integrates Eq. \ref{XY_Eq} in order to find the quintessence field value as a function of $z$: \begin{equation} \phi(z) = -\sqrt{6} \int_0^z {x(\bar{z}) \over 1 + \bar{z}} d \bar{z}, \end{equation} \noindent where the field variable is in units of $m_{\rm Planck}$ and is set to zero at $z=0$. Then, Eq. \ref{Roll_Eq} is integrated in order to find $V(z)$: \begin{equation} V(z) = V_0 \exp \left ( \sqrt{6} \int_0^z {\lambda (\bar{z}) x(\bar{z}) \over 1 + \bar{z}} d \bar{z} \right ), \label{VZ_Eq} \end{equation} \noindent where $V_0$ is $\Omega_q (z=0)$. Hence, to find $V(\phi)$, one just combines the results of the two previous integrations. In order to clarify how the dynamical variables should be read in the energy phase-space, according to the exposition of Sec. \ref{DYN-SEC}, we present in Fig. \ref{Fig-Illustr} an illustration indicating the main features of the energy phase-space and the general characteristics of the stochastic trajectories of the CE07 proposal. This illustration should serve as a guide to the models actually obtained, to be discussed in Sec. \ref{ENS-SEC}, since at first the various solution trajectories will be presented collectively in the same phase-space, which tends to produce an overloaded picture. The models will subsequently be analysed in terms of different sub-classes. \clearpage \onecolumngrid \begin{figure} [tbhp] \includegraphics[width=1\columnwidth]{FIG1.pdf} \caption{(Color online). Illustration of the main features of the energy phase-space and general characteristics of the stochastic trajectories of the CE07 proposal.
\label{Fig-Illustr}} \end{figure} \twocolumngrid
\section{Models and Results \label{RES-SEC}} \subsection{Ensembles of stochastic quintessence models \label{ENS-SEC}} We consider dynamical models with solutions evolving from a quintessence-dominated regime at high redshifts and decaying towards a behavior that mimics the cosmological constant at low redshifts, in agreement with the observational constraints mentioned in the previous section. In order to produce these models, the ``roll'' parameter, $\lambda$ (c.f. Eq. \ref{Roll_Eq}), must sharply decay to sufficiently small values at low redshifts. As explained in CE07, the corresponding potential presents a transition from a steep to a shallow slope at some characteristic field value. In order to produce the above-mentioned potential decay, we imposed amplitudes of up to $\lambda \leq 10$ at higher redshifts and up to $\lambda \leq 0.1$ at lower redshifts. The latter amplitude is somewhat arbitrary but must be small enough to produce compatible models. An inspection of Fig. 2 (second panel) of CE07 indicates that $\lambda \leq 0.1$ would be an adequate setting. Indeed, after preliminary tests looking for solutions around that value, viable models could be found for that choice, so it was subsequently fixed in our analysis. Note, however, that the choice of the upper range of $\lambda$ at low redshifts has an important consequence for the resulting models, particularly for the jerk parameter. The approximation given by Eq. \ref{Jerk_Eq} depends on the derivative of the potential ($V^{\prime}$). On the other hand, Eq. \ref{VZ_Eq} states that the potential may fluctuate around some level, depending on fluctuations in $\lambda(z)$, which are convoluted with $x(z)$ in the integral. This fact may leave an imprint on the derivative $V^{\prime}$, and consequently, on the jerk parameter. This will be discussed in the next section. \clearpage \onecolumngrid \begin{figure} [tbhp] \includegraphics[width=1\columnwidth]{FIG2.pdf} \caption{(Color online). Trajectories of viable stochastic quintessence models in the energy phase-space. Left panel: the ($z_{\rm trans}=2.5$)-ensemble models, obtained from different sets of initial conditions and $\lambda(z)$ distributions (for each of the $20$ sets of the ensemble). Right panel: the ($z_{\rm trans}=1.5$)-ensemble models, obtained from a fixed set of initial conditions but different $\lambda(z)$ distributions (for each of the $5$ sets of the ensemble). Different trajectory colors indicate different sets (or, equivalently, different $\lambda(z)$ distributions). The ``+'' symbols in the phase space of the ($z_{\rm trans}=1.5$)-ensemble models (right panel) indicate the (fixed) initial conditions taken from a set of the ($z_{\rm trans}=2.5$)-ensemble (see text). \label{Phase-ALL-COMB-prep}} \end{figure} \twocolumngrid
We generated two different ``ensembles'' of solutions, according to two high-to-low-$\lambda$ transition redshifts, namely $z_{\rm trans}=\{1.5, 2.5\}$. These values basically cover the transitions used in CE07 (see their Fig. 2, second panel). We considered the ($z_{\rm trans}=2.5$)-ensemble as the ``fiducial'' one, and the ($z_{\rm trans}=1.5$)-ensemble was used for comparison purposes, as we explain later.
These ensembles differ in more than the choice of $z_{\rm trans}$, so we discuss them briefly: \begin{itemize} \item {For the ($z_{\rm trans}=2.5$)-ensemble, $20$ sets of solutions were produced, tentatively starting from $100$ random initial conditions $(x_0,y_0)$ on the phase-space, {\it generated anew for each set}. A stochastic $\lambda(z)$ distribution was also {\it generated anew for each set}. Therefore, each set contains a varying number of viable solutions (or trajectories in phase-space). This ensemble has a total number of $321$ solutions. } \item {For the ($z_{\rm trans}=1.5$)-ensemble, $5$ sets of solutions were obtained, {\it but all of them starting from the same tentative initial conditions}. Otherwise, just as for the ($z_{\rm trans}=2.5$)-ensemble, the $\lambda(z)$ distributions were also {\it generated anew} for each set. The initial conditions chosen correspond to those used in a specific set of the ($z_{\rm trans}=2.5$)-ensemble. The reason for that choice is given in Sec. \ref{TYPES-SEC}. Each set also produced a varying number of viable solutions. This ensemble has a total number of $101$ solutions.} \end{itemize} In summary, each set is a label for a distinct $\lambda(z)$ distribution. The ($z_{\rm trans}=2.5$)-ensemble carries a new initial condition for each of its $20$ sets, while the ($z_{\rm trans}=1.5$)-ensemble uses a fixed set of initial conditions for its $5$ sets. Notice that a given set may have produced, say, $n$ viable trajectories, with $3\leq n \leq 100$ (we have discarded sets producing $n\leq 2$ solutions). This means that, out of the $100$ tentative initial conditions for that set, ($100-n$) were not ``successful'' in generating viable trajectories. Hence, each set happens to have a varying number of solutions, with no predictable pattern due to the nonlinear nature of the equations and the stochastic character of the $\lambda(z)$ distribution. Evidently, while producing those sets, not every integration run could produce viable solutions for a given tentative set of initial conditions and $\lambda(z)$ distribution, so viable solutions were searched for by trial and error. The sets of solutions reported above are therefore the end result of that process. In Fig. \ref{Phase-ALL-COMB-prep}, all the obtained solutions (trajectories) on the phase-space are shown for each of the two ensembles ($z_{\rm trans}=\{1.5, 2.5\}$). All these trajectories represent viable stochastic quintessence solutions obeying the observational constraints at low redshifts. A comparison of these figures with Fig. 2 (top panel) of CE07 shows good agreement. As pointed out in CE07, there are basically two main categories of models: (1) those obtained from flat potentials, like static ($\lambda(z) \sim 0, \forall z$) and ``skater'' models ($x\sim 0$), and (2) dynamical models (with trajectories evolving more broadly in phase-space); both classes can be reproduced by stochastic quintessence models, leading to viable solutions. One should also note that these are locally (``time sliced'') exponential potential models (c.f. Eq. \ref{VZ_Eq}) and their behavior can be inferred from the attractor dynamics in energy phase-space for the purely exponential potential (see Ref. \cite{Cop98} for a complete analysis). Our results show (Fig. \ref{Phase-ALL-COMB-prep}) that the models move towards/around the known late-time attractor scaling solution at $x=y=\sqrt{1/6}$ (dust-dominated model with fixed $\lambda=3$, c.f. Ref.
\cite{Cop98}), and subsequently decay towards the scalar field dominated solution in lower redshifts (after the given transition redshift; $z_{\rm trans}=\{1.5, 2.5\}$). One should recall that the ($z_{\rm trans}=2.5$)-ensemble models use, for each set, a large variety of different initial conditions and $\lambda(z)$ distributions, whereas the ($z_{\rm trans}=1.5$)-ensemble models are all drawn from the same initial conditions, but with different $\lambda(z)$ distributions (for each set). Therefore, the latter ensemble shows that a variety of trajectories are possible for the same initial conditions, with the dynamics being indeed dictated by the $\lambda(z)$'s. At this level, it is difficult to infer significant differences in the behavior of the solutions between both ensembles. In order to advance this understanding, particularly with respect to potential observational imprints, we have selected solutions according to finer criteria. \subsection{Early and Late Models \label{EL-SEC}} For each set (or, alternatively, for each $\lambda(z)$ distribution), we asked how early/late a trajectory meets the observational constraint of $w_q=-0.99$, by selecting, in this sense, the ``earliest'' and ``latest'' trajectories in each set. In addition, we imposed a redshift cutoff (solutions not meeting the criteria below were discarded): \begin{itemize} \item{EARLY MODELS: the ``earliest'' solution for each set, reaching $w_q=-0.99$ {\it earlier than} $z_w = 0.55$.} \item{LATE MODELS: the ``latest'' solution for each set, reaching $w_q=-0.99$ {\it later than} $z_w = 0.45$.} \end{itemize} Notice that most of the original models do not meet the above criteria and were therefore left out of the subsequent analysis [only $26$ models met the criteria, with $21$ from the ($z_{\rm trans}=2.5$)-ensemble]. In Fig. \ref{Phase-EARLY-LATE-COMB-prep}, we present the phase-space of the selected models. The trajectories shown in this figure form a subset of those in Fig. \ref{Phase-ALL-COMB-prep}. \begin{figure} [bhtp] \includegraphics[width=1\columnwidth]{FIG3.pdf} \caption{(Color online). Trajectories of viable stochastic quintessence models in the energy phase-space, for the selected models: EARLY (left panels) vs. LATE (right panels), from the ($z_{\rm trans}=2.5$)-ensemble (top panels) and the ($z_{\rm trans}=1.5$)-ensemble (bottom panels). \label{Phase-EARLY-LATE-COMB-prep}} \end{figure} As mentioned, the attractor dynamics in the energy phase-space for the fixed exponential potential has been thoroughly studied \cite{Cop98} and is not the main focus of the present investigation, but we mention two qualitative points: (1) the transition redshift does {\it not} seem to constrain the early/late classification, at least at the level here imposed, in the sense of, e.g., expecting EARLY models to be preferentially found in ($z_{\rm trans}=2.5$)-ensemble models and/or LATE models to be preferentially found in ($z_{\rm trans}=1.5$)-ensemble models. (2) These ``extreme'' models, however, seem to evolve preferentially (in higher redshifts) through the ``horizontal branch'' ($y^{\prime} \sim 0$), along paths satisfying $y<0.2$, before decaying towards the scalar field dominated solution. Since this characteristic seems to be a common feature of these ``extreme'' models, we focus on a qualitative analysis of their subsequent evolution in the next section.
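In practice, this selection reduces to locating, for every viable trajectory, the redshift at which $w_q$ first reaches $-0.99$ and then keeping the extreme trajectory of each set, subject to the cutoffs above. A minimal sketch of this bookkeeping is given below (the array layout and function names are our own illustrative choices):
\begin{verbatim}
import numpy as np

def crossing_redshift(z, w_q, w_target=-0.99):
    """Redshift at which w_q first reaches w_target while evolving
    forward in time (z stored in decreasing order); None if never."""
    idx = np.where(w_q <= w_target)[0]
    return z[idx[0]] if idx.size else None

def select_early_late(trajectories, z_early=0.55, z_late=0.45):
    """Pick the 'earliest' and 'latest' trajectories of one set,
    applying the redshift cutoffs z_w = 0.55 and z_w = 0.45."""
    zc = [crossing_redshift(z, w) for z, w in trajectories]
    valid = [(c, t) for c, t in zip(zc, trajectories) if c is not None]
    if not valid:
        return None, None
    c_max, early = max(valid, key=lambda p: p[0])  # highest crossing z
    c_min, late = min(valid, key=lambda p: p[0])   # lowest crossing z
    early = early if c_max > z_early else None     # EARLY criterion
    late = late if c_min < z_late else None        # LATE criterion
    return early, late
\end{verbatim}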
\subsection{Types of Evolution \label{TYPES-SEC}} In this section we identify the main types of possible evolution paths in phase space from redshifts preceding the transition redshift. Given that the ($z_{\rm trans}=1.5$)-ensemble selections (EARLY and LATE models) are composed of only a few representatives, they display more clearly the behavior mentioned in point (2) of the previous section, on which we focus here. These models allow us to clearly identify different intermediate evolutions, {\it after moving along the ``horizontal branch''} ($y^{\prime} \sim 0$). We label these paths according to the subsequent evolution as the following types: \begin{itemize} \item{{\it type-(i)}-- the path never exceeds $w_q \sim -0.7$.} \item{{\it type-(ii)}-- the path circulates around a region nearby the late-time attractor scaling solution.} \item{{\it type-(iii)}-- the path is ``peaked'' at some point nearby the late-time attractor scaling solution.} \end{itemize} These types of paths can be clearly identified by examining Fig. \ref{W-COMB}, where we present the behavior of cosmological parameters of the selected solutions as a function of redshift: the dark energy density parameter $\Omega_q(z)$ and the dark energy equation of state parameter $w_q(z)$ (compare them with the corresponding panels in Fig. 2 of CE07). The type-(i)--(iii) behaviors give distinct forms to $w_q(z)$, which are illustrated in Fig. \ref{TYPES}. \begin{figure} [tbhp] \centering \includegraphics[width=1\columnwidth, height=1.1\columnwidth]{FIG4.pdf} \caption{(Color online). The behavior of cosmological parameters of the selected solutions as a function of redshift: the dark energy equation of state parameter $w_q(z)$ (top panel in each figure) and the dark energy density parameter $\Omega_q(z)$ (bottom panel in each figure). Figures are ordered as in Fig. \ref{Phase-EARLY-LATE-COMB-prep}: EARLY (left panels) vs. LATE (right panels), from the ($z_{\rm trans}=2.5$)-ensemble (top figures) and the ($z_{\rm trans}=1.5$)-ensemble (bottom figures). \label{W-COMB}} \end{figure} \begin{figure} [hbtp] \centering \includegraphics[width=1\columnwidth]{FIG5.pdf} \caption{(Color online). Representative curves of type-(i)--(iii) behaviors in the evolution of the dark energy equation of state parameter $w_q(z)$. The ``latest'' model obtained from the ($z_{\rm trans}=2.5$)-ensemble is indicated by the ``IC'' curve, and its initial conditions were fixed for all the sets generated by the ($z_{\rm trans}=1.5$)-ensemble. \label{TYPES}} \end{figure} Type-(i) paths correspond to a $w_q(z)$ with a ``step function'' behavior, with a small ``bump'' before the transition redshift. This agrees with the observation that these models do not ``move around'' the late-time attractor scaling solution, as types-(ii) and -(iii) do. Type-(ii) trajectories show a ``noisy'', ``periodic''-like behavior in $w_q(z)$ between redshift $100$ and the transition redshift, corresponding to the ``circulatory'' behavior in phase-space. Type-(iii) solutions show a very similar behavior to the type-(i) ones at high redshifts, but show a larger, ``Gaussian''-like profile before the transition redshift, corresponding to the ``peaked'' behavior of the path in phase-space. Types-(i) and -(ii) are favored in EARLY models, whereas type-(iii) is favored in LATE models, regardless of the transition redshift used, and none of these properties is sensitive to the initial conditions (c.f. Fig. \ref{Phase-ALL-COMB-prep}).
Indeed, given the stochastic evolution of the models, the lack of sensitivity to initial conditions is somewhat expected. This was tested by setting the same initial conditions for all sets in the ($z_{\rm trans}=1.5$)-ensemble. The initial conditions used were taken from a set of the ($z_{\rm trans}=2.5$)-ensemble, indicated in Fig. \ref{TYPES} as the ``IC'' curve. It was the ``latest'' model obtained in the latter ensemble, clearly seen as a type-(iii) solution. Yet, the ($z_{\rm trans}=1.5$)-ensemble models exhibit all types of curves for these same initial conditions. It is interesting to note that type-(i) trajectories can mimic the cosmological constant at redshifts higher than $10$, but present a small deviation from the pure $\Lambda$CDM model around the transition redshift, with a width of about $\Delta z \sim 3$. If observed, such a deviation could be a signature favoring a stochastic quintessence model of this type. \subsection{The behavior of $j(z)$ \label{JERKDEZ-SEC}} In Fig. \ref{VdePhi-INSET-COMB-prep} we present the obtained potential $V(\phi)$ of the selected models as well as the corresponding derivatives $dV/d\phi$ (insets). The reader should compare these curves with Fig. 4 of CE07, which shows a similar, ``exponential-like'' behavior. Note also that these curves are in qualitative agreement with the condition given by Eq. \ref{EPS-V}, in the late-time regime (after $z_{\rm trans}$), so that the ``slow-roll'' approximation, given by Eq. \ref{Jerk_Eq}, can be used at very low redshifts. \begin{figure} [hbtp] \includegraphics[width=1\columnwidth,height=1.02\columnwidth]{FIG6.pdf} \caption{(Color online). The potential of the models and the corresponding derivatives $dV/d\phi$ (insets). Figures are ordered as in Fig. \ref{Phase-EARLY-LATE-COMB-prep}: EARLY (left panels) vs. LATE (right panels), from the ($z_{\rm trans}=2.5$)-ensemble (top figures) and the ($z_{\rm trans}=1.5$)-ensemble (bottom figures).} \label{VdePhi-INSET-COMB-prep} \end{figure} \begin{figure} [hbtp] \includegraphics[width=1.0\columnwidth,height=1.0\columnwidth]{FIG7.pdf} \caption{(Color online). The jerk parameter of the models as compared to $\Lambda$CDM. The horizontal line at $|j(z)-1| = 0$ signals a $\Lambda$CDM behavior ($\equiv j_{\Lambda{\rm CDM}}$). Figures are ordered as in Fig. \ref{Phase-EARLY-LATE-COMB-prep}: EARLY (left panels) vs. LATE (right panels), from the ($z_{\rm trans}=2.5$)-ensemble (top figures) and the ($z_{\rm trans}=1.5$)-ensemble (bottom figures). The ``latest model'' (labeled as ``IC'' in Fig. \ref{TYPES}) is indicated in the top right panel, as a thick line. \label{Jerk_VS-LCDM-COMB-prep}} \end{figure} The very steep parts of the potential curves are found for larger values of $|\Delta \phi |$ in the selected ($z_{\rm trans}=2.5$)-ensemble models, regardless of being EARLY or LATE models, with the exception of one EARLY model, showing the steep ascent starting at $|\Delta \phi | \sim 0.3$. This model is identified as a type-(i) path. The selected ($z_{\rm trans}=1.5$)-ensemble models show steep curves starting at smaller values of $|\Delta \phi |$, and they seem to ascend more gradually than the former models, except for an EARLY model, also identified as a type-(i) path, which ascends abruptly at $|\Delta \phi | \sim 0.2$. It is clear from the insets of Fig.
\ref{VdePhi-INSET-COMB-prep} that the derivatives of the potential are noisy (we disregard the steep parts of the potential), and this can be traced back to the amplitude of the $\lambda(z)$ distribution after the transition redshift. Even though the amplitude of the latter is small, the impact on $d V/ d\phi$ can be significant. In order to obtain the behavior of the jerk parameter (c.f. Eq. \ref{Jerk_Eq}) for the ``extreme'' models defined in the previous section, we must calculate the derivatives of the potential, together with the Hubble parameter, $H(z)$, which is obtained directly from the solutions (c.f. Eq. \ref{XY_Eq}). In Fig. \ref{Jerk_VS-LCDM-COMB-prep}, we present a comparison of the obtained jerk parameter of the selected models relative to a pure $\Lambda$CDM model, namely the absolute deviations $|j(z)-1|$ as a function of redshift. Due to the nature of the derivatives of the potential, the derived noisy $j(z)$ distributions were smoothed out using B\'ezier curves linking the jerk solution points, for better visualization. The $j(z)$ generally diverges before the transition redshift, a regime where Eq. \ref{EPS-V} is no longer valid. However, for lower redshifts the models offer an upper bound to potential observational fluctuations in $j(z)$. Our results indicate that the jerk parameter may fluctuate by up to $5\%$ from the pure $\Lambda$CDM model, and still comply with all the observational constraints in low redshifts. \clearpage \onecolumngrid \begin{figure} [bthp] \includegraphics[width=1\columnwidth]{FIG8.pdf} \caption{(Color online). Stochastic quintessence predictions for the variation $\Delta \alpha/\alpha (z)$ for polynomial couplings (PC). Curves are parametrized by the coupling factor $|\zeta | \equiv |\zeta_{\rm PC} |$. Linear coupling (LC) models correspond to $q=1$ in Eq. \ref{EQ-BF}. In each figure, the lower panel is a zoom-in of the upper panel, showing with more clarity the corresponding weak-coupling regime. {\it Upper left panel:} LC curves, using the ``latest model'' (labeled as ``IC'' in Figs. \ref{TYPES} and \ref{Jerk_VS-LCDM-COMB-prep}) as a reference for the calculations. {\it Upper right panel:} LC curves, for the type-(i) model (c.f. Fig. \ref{TYPES}). {\it Lower left panel:} PC curves (quadratic coupling, $q=2$) for the ``IC model''. {\it Lower right panel:} PC curves ($q=6$) for the ``IC model''.\label{Da}} \end{figure} \twocolumngrid \subsection{Constraints for variations to the fine-structure constant \label{DADZ-SEC}} In this section, we show the stochastic quintessence prediction curves for possible variations in the fine-structure constant, $\Delta \alpha/\alpha (z)$, in the nearby Universe ($z<3$), according to the various coupling forms discussed in Sec. \ref{COUP-SEC}. Fig. \ref{Da} presents the predictions for polynomial couplings (PC), with curves parametrized by the coupling factor $|\zeta | \equiv |\zeta_{\rm PC} |$. The linear coupling (LC, upper panels) models correspond to $q=1$ (c.f. Eq. \ref{EQ-BF}). The curves in Fig. \ref{Da} were obtained directly from Eq. \ref{DA-FORM}, using the results of $\Omega_q(z)$ and $w_q(z)$ for these models, parametrized by $|\zeta |$, spanning $\sim 3$ orders of magnitude. The observational data on $\Delta \alpha(z)/\alpha$ are based on cosmological data (type Ia supernova and Hubble parameter measurements, compiled in \cite{Mar15}) and astrophysical data (high-resolution spectroscopy of QSOs \cite{Mur03}).
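Up to the precise form of Eq. \ref{DA-FORM}, these linear-coupling curves amount to a numerical quadrature over the solution histories $\Omega_q(z)$ and $w_q(z)$. The sketch below assumes the commonly used linear-coupling relation $\Delta\alpha/\alpha(z)=\zeta\int_0^z\sqrt{3\,\Omega_q(1+w_q)}\,dz'/(1+z')$ (see, e.g., Ref. \cite{Mar15}); this is our assumption for illustration only, and Eq. \ref{DA-FORM} should be substituted for an exact reproduction of Fig. \ref{Da}:
\begin{verbatim}
import numpy as np

def delta_alpha_over_alpha(z, omega_q, w_q, zeta):
    """Cumulative Delta alpha / alpha on an increasing redshift grid z,
    assuming the standard linear-coupling integral (sign absorbed in
    zeta); replace with the paper's Eq. (DA-FORM) for exact results."""
    integrand = np.sqrt(np.clip(3.0 * omega_q * (1.0 + w_q), 0.0, None))
    integrand /= (1.0 + z)
    dz = np.diff(z)
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dz)))
    return zeta * cum

# Curves parametrized by |zeta| spanning ~3 orders of magnitude:
# for zeta in (1e-6, 1e-5, 1e-4, 1e-3):
#     delta_alpha_over_alpha(z, omega_q, w_q, zeta)
\end{verbatim}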
For the LC models, we compared the predictions for the ``latest model'' (labeled as ``IC'' in Figs. \ref{TYPES} and \ref{Jerk_VS-LCDM-COMB-prep}) and the type-(i) model (c.f. Fig. \ref{TYPES}). We chose these reference models due to their very distinct behaviors in low redshifts. The IC model (which is an ``extreme'' type-(iii) model) is the ``latest'' viable solution generated from the ensembles, therefore representing the best generator of potential observational signatures at low redshifts. The type-(i) model, on the other hand, presents a peculiar trajectory in phase-space, as noted previously. The type-(ii) models have an intermediate behavior in that redshift range and are not discussed here. For all other predictions, we have fixed the calculations to the ``IC'' model only, given that its curves show a steeper behavior in the linear coupling case. The resulting curves have a ``trumpet''-like form (considering negative and positive values of $\zeta$ as the lower and upper limits, respectively) and are smooth, given that $\Omega_q(z)$ and $w_q(z)$ are smooth in this range (see Fig. \ref{W-COMB}). The ``trumpet''-like form of these models should be compared with the ones analysed in Ref. \cite{Mar15b}: the former have a ``convex''-like form (tending to flatten for $z \gtrsim 2.5$), whereas the latter generally have a ``concave''-like one (for fixed $w_q$). From Eq. \ref{DA-FORM}, the variability in $\alpha$ is directly proportional to the integrated contributions of $\Omega_q$ and $w_q$, with the proportionality factor given by the coupling $\zeta$. In the type-(i) model, the integral is very small, since $w_q(z) \sim -1$ for this model in that redshift range. This gives a weak dependence of $\Delta \alpha/\alpha$ on redshift, even for relatively high couplings. For the IC model, on the other hand, this weak dependence requires lower coupling values. Yet, it is interesting to note that the equation of state parameter for the IC model varies significantly in that redshift range (i.e., from $w_q=0$ at $z\sim 3$ to $w_q=-1$ at $z=0$; c.f. Fig. \ref{TYPES}), but the impact of such a large variation only becomes noticeable for higher coupling values. For the polynomial cases (lower panels of Fig. \ref{Da}), the quadratic ($q=2$) and higher ($q=6$) powers clearly have the effect of changing the ``trumpet''-like shape of the curves, indicating an earlier and steeper onset of the evolution in $\Delta \alpha/\alpha (z)$. The curves can still accommodate the data with weak enough couplings $|\zeta | \lesssim 1 \times 10^{-4}$. Mixed coupling (MC) cases are shown in Fig. \ref{Da2}. For this analysis, we have adopted $q=3$, and used the ``latest model'' (labeled as ``IC'' in Figs. \ref{TYPES} and \ref{Jerk_VS-LCDM-COMB-prep}) as a reference for the calculations. In order to facilitate the comparison with the purely PC counterparts (that is, those with the same coupling factors $|\zeta_{\rm PC} |$, but without the exponential term), we have included the results for these cases as well, using lighter grey curves in all panels of Fig. \ref{Da2}. The results show a general trend: the higher the values of $\zeta_{\rm EC}$ (say, above $\zeta_{\rm EC} \gtrsim 1 \times 10^{-4}$), the stronger the deviation from the purely PC curves, ultimately leading to a conflict with the experimental data. \begin{figure} [tbhp] \includegraphics[width=1\columnwidth]{FIG9.pdf} \caption{(Color online).
Stochastic quintessence predictions for the variation $\Delta \alpha/\alpha (z)$ for mixed couplings (MC), adopting $q=3$, and using the ``latest model'' (labeled as ``IC'' in Figs. \ref{TYPES} and \ref{Jerk_VS-LCDM-COMB-prep}) as a reference for the calculations. Curves are parametrized by the coupling factors $|\zeta_{\rm PC} |$ and $\zeta_{\rm EC}$. In each figure, the lower panel is a zoom-in of the upper panel, showing with more clarity the corresponding weak-coupling regime. Lighter grey curves in all panels correspond to the PC curves with the same coupling factors $|\zeta_{\rm PC} |$ but without the exponential term. {\it Upper panel:} Small deviations from the purely PC curves, for $\zeta_{\rm EC} = 1 \times 10^{-4}$. {\it Lower panel:} Stronger deviations from the purely PC curves, for $\zeta_{\rm EC} = 5 \times 10^{-4}$.\label{Da2}} \end{figure} \section{Conclusion \label{CONC-SEC}} We found that stochastically generated, dynamical models of dark energy, discussed by Chongchitnan and Efstathiou \cite{Cho07}, which behave as quintessence models in high redshifts but shift to a cosmological constant behavior at low redshifts, admit solutions with peculiar properties for the observables $j(z)$ and $\Delta \alpha(z)/\alpha$ in low redshifts. Despite the dynamical richness of the stochastic solutions, they locally (in a ``time sliced'' sense) follow the purely exponential potential models and their behavior can be inferred from known attractor dynamics \cite{Cop98}. In this work, we presented a simple classification of trajectories in phase-space and qualitatively analysed their behavior in terms of the observables $j(z)$ and $\Delta \alpha(z)/\alpha$. Using a slow-roll approximation to express the jerk parameter as a function of the field variable, we showed that $j(z)$ deviates (fluctuates) from a pure $\Lambda$CDM model up to a $\sim 5\%$ level in the nearby Universe. However, considering the analysis given by Bochner {\it et al.} \cite{Boc13}, where the jerk parameter can currently be constrained only to the large interval $j \sim [-9.2, 9.8]$ in the nearby Universe, potential imprints of the fluctuations found in the present models are, at this stage, unlikely to be detected and must await future data (e.g. \cite{Abb05}). By assuming a linear coupling between the dark energy and the electromagnetic sectors, we showed that stochastic quintessence models are fully compatible with current observational limits ($z \lesssim 3$) on $\Delta \alpha(z)/\alpha$ and on $\zeta$ (note that much more stringent constraints exist, e.g., \cite{Mar15}, \cite{Mar15b}). These models offer distinctive observational imprints for $\Delta \alpha(z)/\alpha$ only at high couplings. The IC model fits the data for any coupling of the order $|\zeta| \lesssim 10^{-4}$, and the type-(i) model, for $|\zeta| \lesssim 5 \times 10^{-4}$ (and they represent two opposite ``extremes'' of all solutions from the ensembles obtained in this work). However, these models illustrate that variabilities in $\alpha$ are weakly dependent on redshift, for couplings of the order $|\zeta| \sim 10^{-4}$, even for large variations in the equation of state parameter at relatively low redshifts. For the case of nonlinear couplings, the ``trumpet''-like shape of the curves is changed relative to the linear case, indicating an earlier and steeper onset of the evolution in $\Delta \alpha/\alpha (z)$.
In particular, the mixed coupling cases show that the higher the values of $\zeta_{\rm EC}$ ($\gtrsim 1 \times 10^{-4}$), the stronger the deviation from the purely polynomial curves, ultimately leading to a conflict with the experimental data. All curves can still accommodate the data with weak enough couplings. A further systematic analysis of viable stochastic quintessence models, exploring more ``extreme'' regions of the parameter space ($z_{\rm trans}$, $\lambda(z)$ amplitudes, etc.), is left for future work; it would be very interesting to test such models against a larger volume of high-precision cosmological data, e.g. Ref. \cite{Sar15}. \begin{acknowledgments} We thank the anonymous referee for valuable suggestions and comments. A.L.B.R. thanks the support of CNPq, under grant 309255/2013-9, and the financial support from the project Casadinho PROCAD – CNPq/CAPES number 552236/2011-0. \end{acknowledgments} \thebibliography{0} \bibitem{Abb05} Abbott, T. {\it et al.} (The DES Collaboration), {\it preprint} arXiv:astro-ph/0510346. \bibitem{Ala07} Alam, U., Sahni, V. and Starobinsky, A., J. Cosmol. Astropart. Phys. 02, 011 (2007). \bibitem{Bar00} Barreiro, T., Copeland, E. and Nunes, N.J., Phys. Rev. D 61, 127301 (2000). \bibitem{Bla04} Blandford, R., Amin, M., Baltz, E., Mandel, K. and Marshall, P., in {\it ASP Conf. Ser. 339, Observing Dark Energy}, edited by Wolff, S.C., Lauer, T.R. (Astron. Soc. Pac., San Francisco, p. 27, 2004). \bibitem{Boc13} Bochner, B., Pappas, D., and Dong, M., ApJ 814, 7 (2015). \bibitem{Cal11} Calabrese, E. {\it et al.}, Phys. Rev. D 84, 023518 (2011). \bibitem{Car92} Carroll, S., Press, W. and Turner, E., ARA\&A, 30, 499 (1992). \bibitem{Cho07} Chongchitnan, S. and Efstathiou, G., Phys. Rev. D 76, 043508 (2007). \bibitem{Cop98} Copeland, E., Liddle, A. and Wands, D., Phys. Rev. D 57, 4686 (1998). \bibitem{Cop04} Copeland, E. {\it et al.}, Phys. Rev. D, 49, 6410 (2004). \bibitem{Cop06} Copeland, E., Sami, M. and Tsujikawa, S., Int. J. Mod. Phys. D15, 1753 (2006). \bibitem{Dol07} Dolgov, A.D., in {\it Cosmology and Gravitation: XIIth Brazilian School of Cosmology and Gravitation, AIP Conf. Proceedings} (Vol. 910, p. 3, 2007). \bibitem{Fer98} Ferreira, P.G. and Joyce, M., Phys. Rev. D 58, 023503 (1998). \bibitem{Mar05} Marra, V. and Rosati, F., JCAP 0505:011 (2005). \bibitem{Mar15} Martins, C.J.A.P. and Pinho, A.M.M., Phys. Rev. D 91, 103501 (2015). \bibitem{Mar15b} Martins, C.J.A.P., Pinho, A.M.M., Alves, R.F.C., Pino, M., Rocha, C.I.S.A. and von Wietersheim, M., {\it preprint}, arXiv:1508.06157. \bibitem{Mur03} Murphy, M. T., Webb, J. K. and Flambaum, V. V., MNRAS 345, 609-638 (2003). \bibitem{Pad02} Padmanabhan, T., Phys. Rev. D 66, 021301 (2002). \bibitem{Per99} Perlmutter, S. {\it et al.}, ApJ, 517, 565 (1999). \bibitem{Rie98} Riess, A. {\it et al.}, AJ, 116, 1009 (1998). \bibitem{Sah00} Sahni, V. and Starobinsky, A., Int. J. Mod. Phys. D9, 373 (2000). \bibitem{Sah03} Sahni, V., Saini, T.D., Starobinsky, A. and Alam, U., JETP Lett. 77, 201 (2003). \bibitem{Sar15} Sartoris, B., Biviano, A., Fedeli, C., Bartlett, J. G., Borgani, S., Costanzi, M., Giocoli, C., Moscardini, L., Weller, J., Ascaso, B., Bardelli, S., Maurogordato, S., and Viana, P. T. P., {\it preprint} arXiv:1505.02165. \bibitem{Wei08} Weinberg, S., {\it Cosmology} (Oxford University Press, 2008). \end{document}
\section{Introduction} Millimeter Wave (mmWave) systems are considered as one of the key technologies in next-generation wireless communication systems. Massive MIMO systems with mmWave technologies combine the advantages of leveraging the spatial resources of MIMO with the high data rates offered by the large bandwidth available at millimeter wave frequency bands. However, excessive pathloss and penetration loss incurred during wave propagation severely affect the range of mmWave MIMO systems. Directional transmission and antenna beamforming have been proposed as a solution for compensating for the losses incurred during wave propagation \cite{kutty2015beamforming}. Directional beamforming has traditionally been achieved through beam sweeping methods \cite{nitsche2014ieee}, which involve a brute-force search through the possible steering directions. However, the brute-force search method is both time-consuming and energy-intensive. To reduce the complexity of the full-scale search, hierarchical search strategies have been proposed (see \cite{zhang2017codebook} and references therein). Blind beam steering relying on accurate location information has been proposed as a low complexity solution for beamforming \cite{nitsche2015steering}, but comes with the overhead of acquiring additional location information. In recent work, the authors of \cite{liu2019ekf} developed a method based on the Extended Kalman Filter to track the mmWave beam in a mobile scenario with moving UEs. In \cite{zhang2019exploring}, a temporal correlation aided beam alignment scheme is presented. By describing the transition probabilities of the Angle-of-Arrival (AoA) and Angle-of-Departure (AoD), the authors developed a semi-exhaustive search algorithm for beam alignment. \cite{yan2020fast} presents a solution to improve the Initial Access (IA) process of beam alignment in High-Speed Railways (HSR). By exploiting the periodicity and regularity in the train trajectory and using historical beam training results, the authors proposed a method to reduce the IA overhead. A genetic algorithm based search method for Initial Access is proposed in \cite{souto2019novel}. In the context of 5G millimeter wave communication, the authors showed that the proposed method can achieve the same capacity as exhaustive search algorithms under small parameter settings. However, even with better IA schemes, there are chances of beam misalignment, and precise tracking may be a costly alternative. The work in \cite{pradhan2019beam} developed a beam misalignment aware hybrid precoding scheme to deal with the mismatch in estimation. By using GPS location data from the UE and querying a multipath fingerprint database of historical data, \cite{va2017inverse} introduced a beam alignment method for vehicular UEs. The progress in deep learning in the areas of computer vision and speech signal processing \cite{lecun2015deep} has also triggered an interest in applying those techniques to complex wireless communication problems \cite{zhang2019deep}. Deep learning has been successfully used in channel estimation \cite{ye2018power,athreya2020beyond}, end-to-end communication systems \cite{o2017introduction,raj2018backpropagating,raj2020design}, OFDM systems \cite{gao2018comnet}, etc. The application of developments in deep learning research has also improved the solutions for mmWave communication systems.
Some of the fruitful applications include beamforming design for weighted sum-rate maximization \cite{huang2018unsupervised}, using an autoencoder deep learning model to improve hybrid precoding \cite{huang2019deep}, replacing hybrid precoding with a deep learning model to predict the best transmit/receive beam pairs from the observed channel \cite{li2019deep}, and leveraging deep reinforcement learning for beamforming \cite{wang2020precodernet}. In the scenario of hybrid beamforming, \cite{elbir2019joint} proposed a Convolutional Neural Network based method for joint antenna selection and beamforming using twin networks. It can be observed from these works that mmWave MIMO systems can greatly benefit from the application of learning techniques to their core components. In this work, we consider the problem of blind mmWave beam alignment in a downlink channel using only the RF signature of the UEs present in the cellular system. Specifically, we consider a multi-base station (BS) scenario with multiple mobile users (UE). A simple depiction of the scenario is given in Fig. \ref{fig:city_block}. \begin{figure}[h] \centering \input{./figs/city_block.tex} \caption{A depiction of the problem scenario} \label{fig:city_block} \end{figure} We consider a scenario where multiple mmWave micro base stations ($\mu$BS) exist and there is a central base station (not depicted in Fig. \ref{fig:city_block}) which coordinates all the transmissions through the $\mu$BSs\footnote{ The learning algorithm presented in this work runs at the central base station, which in turn decides the UE-$\mu$BS assignment and also the beam alignment directions based on the assignment. Extending to distributed learning through Federated Learning can be an interesting direction for future work.}. Due to the path loss and penetration loss properties of mmWave, each $\mu$BS has only a limited coverage area, usually within hundreds of meters. As the mobile users move in the environment, we need to select the best $\mu$BS to serve each user based on the channel characteristics between the $\mu$BSs and the users. These user equipments can be any mobile transceivers that employ the mmWave spectrum for communication. Examples of such user terminals include mobile phones, connected vehicles, autonomous cars, delivery robots, unmanned aerial vehicles (UAVs), etc. Specifically, our aim is to select one $\mu$BS out of the available base stations to serve each user and also to find the best beam alignment angles for transmission without any brute-force beam sweep methods. Rather than relying on location information for beam alignment, as considered in previous works, the proposed method uses the radio frequency (RF) signatures already available in the system about the presence of the UEs and leverages the advancements in deep reinforcement learning to achieve blind beam alignment from BS to UE for downlink communication. \textbf{Related Works.} Deep learning-based solutions for beam alignment with no location information have been proposed in multiple scenarios. In \cite{alkhateeb2018deep}, a coordinated beamforming solution using deep learning to enable high mobility and high data rates is proposed. Based on the uplink pilot signal received at the terminal BSs, a deep neural network is trained to predict the best beamforming vectors. The method is applied to the scenario where multiple BSs serve a single UE.
Extending this method to support multiple UEs may not control the interference between the UEs, as the existence of multiple UEs is not known to the trained deep learning model. Another approach, which takes into account the varying traffic patterns and their impact on the mmWave channel, is proposed in \cite{satyanarayana2019deep}. By taking the location, traffic parameters and RSSI thresholds of each UE, the proposed method suggests a set of beamforming vectors based on an estimated RF fingerprint, which can then be used to conduct beam training. By deploying a deep learning based method to shortlist the possible best beamforming vectors, this method can reduce the beam training time for initiating mmWave communication. However, to train this model, an exhaustive dataset of possible RF fingerprints across multiple traffic patterns is required. On the other hand, once trained, the model can continue to operate as long as there is no major change in the environment. In an alternative take on the problem, \cite{klautau2019lidar} proposed a solution in which the BSs broadcast their locations. Here, each UE (a connected vehicle with a LIDAR) fuses the information from its LIDAR with the received BS location information to shortlist a possible set of beamforming vectors. This also helps in reducing the delay due to brute-force beam search. The work in \cite{dias2019position} proposed to use the LIDAR in connected smart vehicles, such as autonomous driving vehicles, to estimate the location of the BS and then aid beamforming. Instead of the BS broadcasting its location, this method allows smart vehicles with LIDAR to detect the location of the BS through their LIDAR scans and complement this data with deep learning to reduce the search space for beamforming vectors. All the works mentioned above rely on additional resources, such as GPS hardware to acquire location information \cite{nitsche2015steering,va2017inverse} or LIDAR hardware to acquire contextual information \cite{klautau2019lidar,dias2019position}, for beam alignment. Even though all these solutions achieve improved performance, the additional hardware/resource requirements are a hindrance to their widespread adoption. Further, the human-aided acquisition of labeled datasets for training deep learning models severely limits the scalability of these solutions. \textbf{Contributions.} In this work, we propose a deep reinforcement learning \cite{sutton2018reinforcement, mnih2015human} based technique for blind beam alignment that does not need any additional hardware/resources and does not require any labeled dataset for training. The proposed method is designed to work in a scenario with multiple base stations as well as multiple UEs. By using the RF fingerprint of each UE produced by its omnidirectional beacon transmission\footnote{Even though explicit beacon signaling is assumed in this work, the beacon signal from the UE can be any of the existing control signals required to maintain connectivity between the UE and the BS, as the method only requires the received signal strength of the beacon signal and not the specific signal itself.}, the system learns to predict the best BS to serve each UE as well as the beam alignment directions for each transmission. This also makes the proposed solution ideal for situations where the UEs and the environment are non-stationary, since in such cases any contextual information obtained about the location alone may not be useful for beamforming.
The numerical experiments performed in the paper show that the proposed method not only increases the average sum rate of the UEs four times when compared to the traditional beam-sweeping method in the case of small antenna arrays, but also reaches the same rate as an Oracle that has full information about the environment. Also, the average sum rate of the proposed method is remarkably better than the corresponding rate values of traditional methods like BS-sweep or DRL based methods like vanilla DDPG, even in the case of larger antenna arrays. As the proposed method does not depend on explicit channel modeling, it can be used for both ground-based as well as aerial communication networks. \textbf{Notations.} Bold face lower-case letters (e.g. $\bm{x}$) denote column vectors and upper-case letters (e.g. $\bm{M}$) denote matrices. Script face letters (e.g. $\mathcal{S}$) denote sets, and $|\mathcal{S}|$ denotes the cardinality of the set $\mathcal{S}$. $f(\textbf{x};\bm{\theta})$ represents a function which takes in a vector $\textbf{x}$ and has parameters $\bm{\theta}$. A distribution with parameters $\theta$ is represented as $p_\theta(\cdot)$. $\mathbb{E}_p$ is the expectation operator with respect to distribution $p$. \section{Blind mmWave Beam Alignment} We consider a mmWave downlink scenario with multiple BSs trying to serve multiple UEs. The aim is to select the best BS to serve each UE as well as the best set of beam alignment parameters for efficient beamforming. This is achieved by the central base station selecting one $\mu$BS which can best serve the UE, based on the current information about the UE (the received signal strength of the beacon signal from the UEs), and then selecting the right beam alignment directions. Each base station has a small cell radius (hundreds of meters) within which it can serve, and there exists a central base station (BS) that coordinates the cellular system. We assume that a reliable link exists between the micro base stations ($\mu$BS) and the central BS which can ensure a robust exchange of data. For any additional signaling, we assume the existence of a dedicated control channel (CC). Also, we consider that all UEs use the same carrier frequency and hence the interference should also be reduced while selecting the beamforming parameters. \subsection{Channel Model} Let $N_{BS}$ represent the number of $\mu$BSs and $N_{UE}$ represent the number of UEs. We consider a MISO system with $N_T$ transmit antennas at each $\mu$BS and $N_R = 1$ receive antenna at each UE. With a single antenna, the UEs have omni-directional transmission and reception. We consider a Uniform Planar Array (UPA) antenna at each $\mu$BS. The channel between the $\mu$BS and the UE (of dimension $N_R \times N_T$) is modeled based on the popular Saleh-Valenzuela channel model \cite{el2014spatially} for mmWave systems. The channel between the transmitter and receiver is given by \begin{align} \bm{H} = \sqrt{\frac{N_T N_R}{\kappa}} \sum \limits_{l=1}^{\kappa} \alpha_l \bm{a}_r(\phi_l^r,\theta_l^r) \bm{a}^*_t(\phi_l^t,\theta_l^t), \end{align} where $\kappa$ is the number of propagation paths and $\alpha_l$ is the complex gain associated with the $l^{th}$ path. $(\phi_l^t,\theta_l^t)$ are the azimuth and elevation angles of departure of the $l^{th}$ ray at the $\mu$BS, respectively. Similarly, $(\phi_l^r, \theta_l^r)$ are the angles of arrival of the $l^{th}$ ray at the UE. We measure $\theta$ from the \textit{+z-axis} and $\phi$ from the \textit{+x-axis}.
We assume the $\mu$BS UPA antenna is in the \textit{yz}-plane with $N_{t,h}$ and $N_{t,v}$ elements along the $y$ and $z$ axes, respectively, and $N_T = N_{t,h} \times N_{t,v}$. The array response vector of the transmitter, $\bm{a}_{t}(\phi,\theta)$ (of dimension $N_T$), is given by \ifCLASSOPTIONonecolumn \begin{align} \bm{a}_{t}(\phi,\theta) = \frac{1}{\sqrt{N_T}} \left[ 1, \ldots, e^{j \frac{2\pi}{\lambda} d \left(m \sin\phi \sin\theta + n \cos \theta \right)}, \ldots, e^{j \frac{2\pi}{\lambda} d \left((N_{t,h}-1) \sin\phi \sin\theta + (N_{t,v}-1) \cos \theta \right)} \label{eqn:resp_vec} \right]^T \end{align} \else \begin{align} \bm{a}_{t}(\phi,\theta) = \frac{1}{\sqrt{N_T}} \left[ 1, \ldots, e^{j \frac{2\pi}{\lambda} d \left(m \sin\phi \sin\theta + n \cos \theta \right)}, \ldots, \right. \nonumber \\ \left. e^{j \frac{2\pi}{\lambda} d \left((N_{t,h}-1) \sin\phi \sin\theta + (N_{t,v}-1) \cos \theta \right)} \label{eqn:resp_vec} \right]^T \end{align} \fi where $0 \leq m \leq N_{t,h} - 1$ and $0 \leq n \leq N_{t,v} - 1$ index the antenna elements and $d$ is the inter-element spacing. Since we assume omni-directional reception with a single antenna at the receiver, $\bm{a}_{r}(\phi,\theta) = 1$. The path loss (in $dB$) of mmWave propagation is modeled as \cite{5gmodel2016}, \ifCLASSOPTIONonecolumn \begin{align} PL(f,d)_{dB} &= 20 \log_{10} \left( \frac{4\pi f}{c} \right) + 10 n \left(1 + b \left(\frac{f-f_0}{f_0}\right) \right) \log_{10} \left(d\right) + X_{\sigma dB}, \label{eqn:pathloss} \end{align} \else \begin{align} PL(f,d)_{dB} &= 20 \log_{10} \left( \frac{4\pi f}{c} \right) \nonumber \\ & \qquad + 10 n \left(1 + b \left(\frac{f-f_0}{f_0}\right) \right) \log_{10} \left(d\right) \nonumber \\ & \qquad + X_{\sigma dB}, \label{eqn:pathloss} \end{align} \fi where $n$ is the path loss exponent, $f_0$ is the fixed reference frequency, $b$ captures the frequency dependency of the path loss exponent and $X_{\sigma dB}$ is the shadow fading term in $dB$. The Signal to Interference plus Noise Ratio (SINR) at the $i^{th}$ UE, which is served by the $j^{th}$ $\mu$BS, is given by \begin{align} \zeta^{i} = \frac{P_{TX,j}|\bm{H}_{i,j} \bm{f}_j|^2} {\sum \limits_{k=1, k \neq j}^{N_{BS}} P_{TX,k}|\bm{H}_{i,k} \bm{f}_k|^2 + \sigma^2}, \label{eqn:ue_sinr} \end{align} where $\bm{H}_{i,j}$ is the channel between the $i^{th}$ UE and the $j^{th}$ BS, $\bm{f}_j$ is the transmit codeword used by the $j^{th}$ $\mu$BS, $\sigma^2$ is the noise power and $P_{TX,k}$ is the transmit power of the $k^{th}$ $\mu$BS. The transmit codeword $\bm{f}_j$ is computed from the beamforming angles $(\phi_{ij}, \theta_{ij})$ as $\bm{f}_j = \bm{a}_t(\phi_{ij}, \theta_{ij})$, as given in (\ref{eqn:resp_vec}). \textbf{RF signature of UE.} Each UE transmits a uniquely identifiable beacon signal using omnidirectional transmission during the downlink period. All $\mu$BSs in the system receive a faded/corrupted copy of this uniquely identifiable signal. Since there may not exist direct pathways from the UE to the BS for these beacon signals, the received power at each BS can have a complex relationship with the UE position that is dictated by the scenario. The final RF signature used by the learning system is a vector of $N_{BS}$ dimensions, which is the received signal strength of the UE beacon signal at each of the $N_{BS}$ micro base stations. The learning problem is now to use the RF signatures along with the reported SINR values of each UE to predict which $\mu$BS should serve which UE and with what values of $\phi$ and $\theta$. An outline of the entire procedure is given below.
\begin{enumerate} \item Each UE sends out a uniquely identifiable beacon signal using omnidirectional transmission. \item Each $\mu$BS receives the beacon signals transmitted by all the UEs. The received powers of the beacon signals at each $\mu$BS constitute the RF signatures of the UEs. \item The central base station receives the RF signature collected by the $\mu$BSs about each UE. The central base station also obtains the SINR from each UE for the ongoing downlink transmission. \item Based on the RF signature and the SINR of each UE, the central base station runs the proposed algorithm and selects the best $\mu$BS for each UE and the beam alignment angles. \item The central base station commands the selected $\mu$BS to use the predicted angle parameters and serve the particular UE. \item The corresponding $\mu$BS performs beam alignment based on the received command from the central base station and uses it for transmission until the next update. \end{enumerate} This process can be repeated periodically or in an event-triggered fashion to update the beam alignment for each UE. Compared to the standard procedure of beam sweeping, which performs a brute-force search over all available directions for the best channel response, the proposed approach can obtain the beam alignment parameters in a single shot rather than searching over multiple combinations. Further, this method also enables the central base station to choose the best $\mu$BS for each UE based on the instantaneous SINR feedback and hence improves the initial access problem as well. In the following section, we model the problem of selecting the $\mu$BS and the beam alignment direction as a single Markov Decision Process (MDP) with the objective of improving the effective SINR experienced by each of the UEs. We then introduce Reinforcement Learning as a viable solution to solve the MDP problem and then proceed with the specific neural architecture as well as the learning algorithm for the problem. \section{Learning based beam alignment} We model the problem of $\mu$BS selection and beam alignment direction prediction as a Markov Decision Process (MDP). An MDP comprises a state space $\mathcal{S}$, an action space $\mathcal{A}$, an initial state distribution $p(s_1)$, a stationary state transition distribution which obeys the Markov property $p(s_{t+1}|s_{t},a_{t}) = p(s_{t+1}|s_{t},a_{t}, \ldots, s_{1},a_{1})$, and a reward function $r: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$. An overview of the MDP formulation for the problem of blind beam alignment is given in Fig. \ref{fig:mmbeamer}. \begin{figure}[h] \centering \input{./figs/mmbeamer.tex} \caption{Blind beam alignment using DRL.} \label{fig:mmbeamer} \end{figure} The input to the learning agent is an $N_{UE} \cdot (N_{BS} + 1)$-dimensional vector with real-number entries, which is a concatenation of the $(N_{BS} + 1)$-dimensional feature vectors of the $N_{UE}$ UEs in the network. The feature vector of dimension $N_{BS} + 1$ of each UE consists of (a) the RF fingerprint ($N_{BS}$ elements) seen at the $\mu$BSs for that UE from the received beacon signal and (b) the SINR experienced by that UE with the current beam alignment configuration and the $\mu$BS assigned to it. At each time step $t$, this constitutes the state described in the MDP, i.e., $s_t \in \mathbb{R}^{N_{UE} \cdot (N_{BS} + 1)}$. The action to be taken by the agent has to include a discrete value referring to the index of the $\mu$BS to serve each UE and also the $(\phi,\theta)$ pair for the transmission.
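To make the interface between the agent and the environment concrete, the sketch below shows how a candidate action (a $\mu$BS index and a $(\phi,\theta)$ pair per UE) is evaluated through Eqs. (\ref{eqn:resp_vec}) and (\ref{eqn:ue_sinr}); the channel vectors $\bm{H}_{i,k}$ are assumed to be produced by a separate Saleh-Valenzuela channel generator, all function and variable names are illustrative, and the mean rate returned at the end is precisely the reward defined next:
\begin{verbatim}
import numpy as np

def upa_response(phi, theta, n_h, n_v, d_over_lambda=0.5):
    """UPA array response of Eq. (resp_vec) for azimuth phi, elevation theta."""
    m = np.repeat(np.arange(n_h), n_v)   # horizontal element index
    n = np.tile(np.arange(n_v), n_h)     # vertical element index
    phase = 2.0 * np.pi * d_over_lambda * (
        m * np.sin(phi) * np.sin(theta) + n * np.cos(theta))
    return np.exp(1j * phase) / np.sqrt(n_h * n_v)

def evaluate_action(H, assign, angles, p_tx, noise_power, n_h, n_v):
    """H[i][k]: length-N_T channel from micro-BS k to UE i; assign[i]: serving
    BS index; angles[i]: (phi, theta) it uses.  Mirrors Eq. (ue_sinr) under
    the assumption that each micro-BS serves at most one UE."""
    n_ue = len(H)
    f_of_bs = {assign[i]: upa_response(*angles[i], n_h, n_v)
               for i in range(n_ue)}
    sinr = np.zeros(n_ue)
    for i in range(n_ue):
        j = assign[i]
        signal = p_tx[j] * np.abs(H[i][j] @ f_of_bs[j]) ** 2
        interference = sum(p_tx[k] * np.abs(H[i][k] @ fk) ** 2
                           for k, fk in f_of_bs.items() if k != j)
        sinr[i] = signal / (interference + noise_power)
    return sinr, float(np.mean(np.log2(1.0 + sinr)))  # per-UE SINR, mean rate
\end{verbatim}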
The reward at each time instant $t$, $r_t$, is taken as the mean rate achieved by the UEs at that instant and is defined as \begin{align} r_t &= \frac{1}{N_{UE}} \sum \limits_{i=1}^{N_{UE}} \log_2 \left( 1 + \zeta_t^i \right), \label{eqn:reward} \end{align} where $\zeta_t^i$ is the instantaneous SINR at the $i^{th}$ UE as defined in (\ref{eqn:ue_sinr}). \subsection{Reinforcement Learning} Reinforcement Learning (RL) \cite{sutton2018reinforcement} is a sub-class of artificial intelligence (AI) where the learning algorithm \textit{interacts} with the \textit{environment} to learn \textit{optimal actions} which maximize a formulated \textit{reward}. It differs from the Machine Learning (ML) paradigm in the way samples are obtained for the learning process. Machine learning relies on labeled data fed into the algorithm for the learning process, whereas reinforcement learning works by the learning agent itself acquiring the samples needed to improve its knowledge. Machine learning is ideal for scenarios where labeled data is available, such as classification and regression, and where the prediction itself is not going to affect the future observations. Reinforcement learning is used in scenarios where the learning agent is in control of the system, whose output can change based on the agent's predictions. This difference also creates an \textit{explore-exploit} trade-off in RL algorithms, where the agent is also required to explore unknown/less-known actions to acquire new samples. In reinforcement learning, an agent is trained to optimize a policy $\bm{\pi}$ to increase the return $\mathbbm{r}_t(\gamma)$. A policy $\bm{\pi}$ maps observed states to actions, $\pi : \mathcal{S} \rightarrow \mathcal{A}$, and in turn obtains a reward $r_t(s_t, a_t)$. The return $\mathbbm{r}_t(\gamma)$ is defined as the total discounted reward from the timestep $t$ and can be expressed as $ \mathbbm{r}_t(\gamma) = \sum \limits_{t'=t}^{\infty} \gamma^{t'-t} r(s_{t'},a_{t'}), \label{eqn:return} $ where $\gamma \in [0,1]$. The discounting factor $\gamma$ is used to capture the importance of future rewards in the current value estimate. With $\gamma \rightarrow 0$, the policy becomes myopic and only considers the current reward. With $\gamma \rightarrow 1$, the policy learns to seek long-term high reward. The objective of the agent is to find a policy $\bm{\pi}$ which maximizes the expected cumulative discounted return $J(\bm{\pi}) = \mathbb{E}[\mathbbm{r}_1(\gamma) |\bm{\pi}]$. An agent computes the value function of a policy $\bm{\pi}$ for each state as the expected return from that state when following $\bm{\pi}$, i.e., $V^\pi(s) = \mathbb{E}\left[\mathbbm{r}_1(\gamma)| S_1 = s; \bm{\pi} \right]$. The value of a state indicates how favorable that state is for the agent to be in. The quality of an action at each state is computed using the Q-function as $Q^\pi(s,a) = \mathbb{E}\left[\mathbbm{r}_1(\gamma)| S_1 = s, A_1 = a; \bm{\pi} \right]$ and indicates how rewarding each action is when taken from state $s$. At each timestep, the agent takes the action which maximizes the Q-value. The policy $\bm{\pi}$ can be created either by a tabular approach or by a function approximation approach \cite{sutton2018reinforcement}. In the case where the states and actions are discrete, tabular methods can be used to capture the value of $Q^\pi(s,a)$, which subsequently decides the action taken by the policy $\bm{\pi}$.
On the other hand, when states/actions can take continuous values or there is a very large number of states/actions, which can make maintaining a table of Q-values infeasible, function approximation methods are used. These methods define a family of functions that operate on $(s,a)$ and learn the parameters of the function which maximizes the return of the policy $\bm{\pi}$; their success therefore depends on the expressive power of the candidate functions. Multiple function approximation techniques, including linear functions, radial basis functions, Fourier bases, neural networks, etc., have been proposed \cite{konidaris2011value}. \subsection{Deep Reinforcement Learning based Beam Alignment} Deep Reinforcement Learning (DRL) \cite{mnih2015human} is the technique where an RL agent employs deep neural networks to facilitate the learning procedure. The function approximation capabilities of neural networks are leveraged in DRL to map the observations to optimal actions. A neural network can be seen as a chain of functions that transforms its input to a set of outputs through non-linear transformations. In DRL, at each time step, after observing the state $\bm{s}$, the agent uses a neural network policy $\bm{\pi}$ to take an action $\bm{a}$. The problem of beam alignment as formulated above is a challenging task for reinforcement learning, as the action space $\mathcal{A}$ is a mixture of continuous (the values of the angles) as well as discrete (the index of the $\mu$BS) dimensions. The Deep Deterministic Policy Gradient (DDPG) algorithm \cite{lillicrap2015continuous} is a recent actor-critic method for training deep reinforcement learning agents on continuous action domains. In this work, we use DDPG as the learning algorithm to train the DRL agent\footnote{Even though we use DDPG in this work, any DRL method with continuous action space can be used with the proposed neural action predictor for the purpose of blind beam alignment}. As our problem has an action space which is a mix of discrete base station selection and continuous beam alignment angle selection, we propose a novel neural network architecture that can handle pseudo-discrete and pure-continuous action spaces simultaneously for predicting actions. \subsubsection{\textbf{Training the agent with DDPG}} DDPG belongs to the family of policy gradient algorithms, in which the parameters of the policy are changed in the direction of improving return. It is a two-step iterative process in which the quality of the policy with the current set of parameters is first evaluated and then a policy improvement step updates the parameters in the direction of increasing return. DDPG has two neural networks: an actor network $\mathbbm{A}$, parameterized by $\bm{\omega}^a$, which predicts the action $a_t$ based on the current state $s_t$, and a critic network $\mathbbm{C}$, parameterized by $\bm{\omega}^c$, which computes the Q-value for the predicted action, $Q(s_t, a_t)$. The critic network $\mathbbm{C}$ predicts the quality of the action taken by the actor network $\mathbbm{A}$ and hence encourages the actor network to take \textit{better} actions through its feedback. The critic network $\mathbbm{C}$, meanwhile, trains itself for better prediction by observing the reward after each action, computing the Q-value target, and taking a gradient step on its prediction error.
To get stable, uncorrelated gradients for policy improvement, DDPG maintains a replay buffer of finite size $\tau$ and samples observations from the buffer in minibatches to update the parameters. DDPG also uses target networks with parameters $\bar{\bm{\omega}}^a$ and $\bar{\bm{\omega}}^c$ to avoid divergence in the value estimation. This helps the learning agent to update the parameters of the active network based on the values from the target network, hence giving the learning agent a stable error value to learn from. At each timestep, the state $s_i$ and the action taken $a_i$, along with the reward obtained $r_i$ and the next state $s_{i+1}$, are stored as an experience $(s_i, a_i, r_i, s_{i+1})$ in the buffer $\mathcal{B}$. For training the actor and critic networks, $N$ samples are taken from $\mathcal{B}$ and these are used to compute the gradients. For the critic network $\mathbbm{C}(\bm{\omega}^c)$ to compute the Q-value for each state-action pair, an estimate of the return for state $s_i$ in each sample is computed as \begin{align} y_i = r_i + \gamma \mathbbm{C}(s_{i+1}, \mathbbm{A}(s_{i+1}| \bar{\bm{\omega}}^a)|\bar{\bm{\omega}}^c). \label{eqn:critic_return} \end{align} Based on the estimate for the return, the Mean Squared Bellman Error (MSBE) is computed as \begin{align} \mathcal{L} = \frac{1}{N} \sum \limits_{i} \left( y_i - \mathbbm{C}(s_i, a_i|\bm{\omega}^c) \right)^2. \label{eqn:critic_msbe} \end{align} Then, the critic network parameters are updated as \begin{align} \bm{\omega}^c \gets \bm{\omega}^c - \eta_c \nabla_{\bm{\omega}^c} \mathcal{L}, \label{exp:critic_update} \end{align} where $\eta_c \ll 1$ is the stepsize of the stochastic update. For the actor network, the update depends both on the gradient of the action and on the improvement in the Q-value. The final update for the parameters of the actor network $\bm{\omega}^a$ is given by \begin{align} \bm{\omega}^a \gets \bm{\omega}^a + \eta_a \frac{1}{N} \sum \limits_{i} \left( \nabla_{\bm{\omega}^a} \mathbbm{A}(s) \nabla_a \mathbbm{C}(s,a) | _{a=\mathbbm{A}(s)} \right), \label{exp:actor_update} \end{align} where $\eta_a \ll 1$ is the update stepsize. Finally, the target network parameters are updated at every timestep to provide stable value estimates using an exponentially weighted update as \begin{align} \bar{\bm{\omega}}^c &\gets \lambda \bm{\omega}^c + (1 - \lambda) \bar{\bm{\omega}}^c; \: \bar{\bm{\omega}}^a \gets \lambda \bm{\omega}^a + (1 - \lambda) \bar{\bm{\omega}}^a, \label{exp:target_update} \end{align} with $\lambda \ll 1$. Interested readers are directed to \cite{silver2014deterministic,lillicrap2015continuous} for more information. \subsubsection{\textbf{Architecture of Neural Action predictor of the proposed method}} DDPG is originally proposed for continuous action spaces. Since the problem of $\mu$BS selection is discrete and selecting $(\theta,\phi)$ is continuous, a direct application of DDPG to the problem will result in an inefficient learning procedure, as no information about the discrete part of the action is taken into account\footnote{We provide the results for applying vanilla DDPG to our problem in the Results section.}. Hence, we propose a novel architecture for the neural function approximators that can be used for both discrete and continuous action spaces. The purpose of the critic network $\mathbbm{C}$ is to estimate the Q-value for each state-action pair. As the Q-value is continuous, a traditional feedforward neural network with a scalar output can be used as $\mathbbm{C}$, as in DDPG.
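A minimal sketch of one such training step, implementing Eqs. (\ref{eqn:critic_return})--(\ref{exp:target_update}) with PyTorch-style automatic differentiation, is given below; the network classes, the optimizers and the replay buffer are assumed to be defined elsewhere, and the specific actor architecture used in this work is described next:
\begin{verbatim}
import torch

def ddpg_step(batch, actor, critic, target_actor, target_critic,
              actor_opt, critic_opt, gamma=0.99, lam=0.005):
    """One DDPG update from a minibatch of (s, a, r, s_next) tensors.
    Sketch only: actor/critic are assumed torch.nn.Module instances."""
    s, a, r, s_next = batch

    # Critic target, Eq. (critic_return), and MSBE loss, Eq. (critic_msbe)
    with torch.no_grad():
        y = r + gamma * target_critic(s_next, target_actor(s_next))
    critic_loss = torch.mean((y - critic(s, a)) ** 2)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()                        # Eq. (critic_update)

    # Actor update, Eq. (actor_update): ascend the critic's Q(s, A(s))
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft update of the target networks, Eq. (target_update)
    for net, target in ((critic, target_critic), (actor, target_actor)):
        for p, p_bar in zip(net.parameters(), target.parameters()):
            p_bar.data.mul_(1.0 - lam).add_(lam * p.data)
\end{verbatim}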
However, the actor network is responsible for predicting the best action $a_t$ given a state $s_t$, and the neural function approximator for $\mathbbm{A}$ needs to handle both discrete and continuous spaces. In the proposed architecture for the actor network, we split the predictions for each UE through a sub-network at the output. All sub-networks share a common feature extractor which operates on the input to provide each UE sub-network with a set of features that can be used to select the action corresponding to that UE. The architecture of the proposed Neural Action Predictor is given in Fig. \ref{fig:nn_arch}. \begin{figure}[h] \centering \input{./figs/neural_arch.tex} \caption{Architecture of Proposed Neural Action Predictor.} \label{fig:nn_arch} \end{figure} At timestep $t$, let $\bm{x}_0 = s_t$ be the input to the common feature extractor network. We have $s_t \in \mathbb{R}^{N_{UE}\cdot(N_{BS}+1)}$. The first $L$ layers of the actor network constitute the common feature extractor network. At each layer, a linear combination of the features from the previous layer is created and is then passed through a non-linear activation function. Let $\bm{W}_l \in \mathbb{R}^{d_{l,o} \times d_{l,i}}$ and $\bm{b}_l \in \mathbb{R}^{d_{l,o}}$ be the weight and bias associated with layer $l$, and $d_{l,i}$ and $d_{l,o}$ be the input and output dimensions of the $l^{th}$ layer. The output $\bm{x}_l$ of the $l^{th}$ feature extractor layer is then computed as $ \bm{x}_{l} = g\left( \bm{W}_l \bm{x}_{l-1} + \bm{b}_l \right), \text{ for } l = 1, \ldots, L$ where $g(\cdot)$ is a non-linear activation function. The final set of extracted features $\bm{x}_L \in \mathbb{R}^{d_{L,o}}$ is then fed to each of the sub-nets for the action predictions of each UE. The actor sub-net for each UE uses a single layer for base station selection, as most of the processing for $\mu$BS selection from the input has already been done by the common feature extractor network. The actor sub-net for the $i^{th}$ UE predicts a normalized score over all the $\mu$BS indices using a softmax layer as \begin{align} \bm{a}^{(i)}_{bs} = \text{softmax}(\bm{W}_{i,bs} \bm{x}_{L} + \bm{b}_{i,bs}), \label{eqn:actor_bs_sel} \end{align} where $\bm{W}_{i,bs} \in \mathbb{R}^{N_{BS} \times d_{L,o}}$ and $\bm{b}_{i,bs} \in \mathbb{R}^{N_{BS}}$. Then, the $\mu$BS to serve the $i^{th}$ UE is selected as the base station with the highest normalized score in $\bm{a}^{(i)}_{bs}$. For a better convergence to the best $\mu$BS for a particular UE, the proposed algorithm is made to explore the space of $\mu$BSs with an $\epsilon$-greedy-like mechanism. In the $\epsilon$-greedy method, an agent explores a random discrete action with probability $\epsilon$ and takes the action suggested by the network with probability $(1-\epsilon)$. In the current set-up, the action space is pseudo-discrete, i.e., the output regarding the selection of the $\mu$BS is a softmax vector (not discrete). These pseudo-discrete values are further used to compute the continuous-valued angles. To translate the $\epsilon$-greedy-like exploration to our setup and still keep the network differentiable for backpropagation, we propose the following strategy: instead of taking a completely different discrete action, with a probability of $\epsilon_{bs}$, a standard Gaussian noise (of high variance relative to the normalized scores) is added to the pseudo-discrete normalized score $\bm{a}^{(i)}_{bs}$. However, with a probability of $(1-\epsilon_{bs})$, the base station with the highest normalized score in $\bm{a}^{(i)}_{bs}$ is exploited.
Therefore, the modified output for $\mu BS$ selection from the actor sub-net of each UE is \begin{align} \Tilde{\bm{a}}^{(i)}_{bs} = \begin{cases} \bm{a}^{(i)}_{bs} & \text{w.p. } (1-\epsilon_{bs}) \\ \bm{a}^{(i)}_{bs}+n & \text{w.p. } \epsilon_{bs} \text{ and } n \sim \mathcal{N}(0,1). \end{cases} \label{eqn:actor_bs_sel_noisy} \end{align} The degree of exploration reduces over time, i.e., $\epsilon_{bs}$ is an exponentially decaying function with decay rate $10^{-6}$. \begin{algorithm}[!t] \caption{Proposed algorithm for $\mu$BS selection and beam alignment} \begin{algorithmic}[1] \State \textbf{Parameters:} Set discounting factor $\gamma$, replay buffer size $\tau$, number of episodes $M$, target update period $U$, update parameter $\lambda$, learning rates $\eta_a$ and $\eta_c$. \State Initialize the actor $\mathbbm{A}(s|\bm{\omega}^a)$ and the critic $\mathbbm{C}(s,a|\bm{\omega}^c)$ networks with random weights $\bm{\omega}^a$ and $\bm{\omega}^c$ respectively. \State Initialize target networks with weights as $\bar{\bm{\omega}}^a \gets \bm{\omega}^a$ and $\bar{\bm{\omega}}^c \gets \bm{\omega}^c$. \State Create an empty replay buffer $\mathcal{B} \gets \{\}$ with size $\tau$. \For {episode $= 1 \ldots M$} \State Select a random valid action for each UE. \State Observe the SINR as well as the RF signature of each UE as state $s_t$. \For {$t = 1 \ldots T$} \State Set the $\mu$BS and beam alignment angles for each UE based on the output predicted by the network according to (\ref{eqn:actor_bs_sel_noisy}) and (\ref{eqn:compute_angle_noisy}). \State Get new state observation $s_{t+1}$. \State Extract the individual SINR of each UE from $s_{t+1}$ and compute the average rate as reward $r_t$. \State Update the replay buffer with the experience as $\mathcal{B} \gets \mathcal{B} \cup (s_t, a_t, r_t, s_{t+1})$. \If {$|\mathcal{B}| \geq \tau$} \State Delete the oldest experience from $\mathcal{B}$. \EndIf \State Sample $N$ experiences $(s_i, a_i, r_i, s_{i+1})$ from $\mathcal{B}$. \State Compute the return $y_i$ for each experience. \State Compute the Mean Squared Bellman Error $\mathcal{L}$. \State Update $(\bm{\omega}^c,\bm{\omega}^a)$ and $(\bar{\bm{\omega}}^c, \bar{\bm{\omega}}^a)$. \EndFor \EndFor \end{algorithmic} \label{alg:proposed_beamer} \end{algorithm} Noting that the elevation and azimuth angles for beam alignment need to depend both on information about the UE positions (available through $\bm{x}_L$) and on the selected $\mu BS$ for the UE (available through $\bm{a}^{(i)}_{bs}$), we need a layer that can fuse this information together. For this, we first create a concatenated feature vector for each UE as $\bm{z}_{i} = [\bm{x}_L, \bm{a}^{(i)}_{bs}] \in \mathbb{R}^{d_{L,o}+N_{BS}}$. Then, the action corresponding to the beam alignment angles for the $i^{th}$ UE is computed as \begin{align} \bm{a}^{(i)}_{\Theta} = \text{tanh}(\bm{W}_{i,\Theta} \bm{z}_i + \bm{b}_{i,\Theta}), \label{eqn:compute_angle} \end{align} where $\bm{W}_{i,\Theta} \in \mathbb{R}^{2 \times (d_{L,o}+N_{BS})}$ and $\bm{b}_{i,\Theta} \in \mathbb{R}^2$. Note that the tanh($\cdot$) activation function outputs values in the range $[-1, +1]$ and hence $\bm{a}^{(i)}_{\Theta} \in [-1, +1]^2$. To explore the space of elevation and azimuth angles, Gaussian noise $\mathcal{N}(0,\sigma_{\Theta})$ is added to $\bm{a}^{(i)}_{\Theta}$, where the variance $\sigma_{\Theta}$ decays linearly with time.
The modified output from the actor sub-net regarding the elevation and azimuth angles is given by \begin{align} \Tilde{\bm{a}}^{(i)}_{\Theta} = \bm{a}^{(i)}_{\Theta}+n, \label{eqn:compute_angle_noisy} \end{align} where $n \sim \mathcal{N}(0,\sigma_{\Theta})$. As the overall performance is more sensitive to the selection of the elevation and azimuth angles than to the choice of $\mu BS$, a linearly decaying $\sigma_{\Theta}$ (higher exploration) is used for the selection of angles, whereas an exponentially decaying $\epsilon_{bs}$ (comparatively less exploration) is used for $\mu BS$ selection, as discussed before. Finally, the elevation and azimuth angles for beam alignment are computed (in radians) as $ \theta_i = \frac{3\pi}{4} + \bm{a}^{(i)}_{\Theta}[1] \times \frac{\pi}{4}, \label{eqn:compute_theta} \text{ and } \phi_i = \bm{a}^{(i)}_{\Theta}[2] \times \frac{\pi}{2}. \label{eqn:compute_phi} $ The computation of the elevation angle is based on the assumption that all UEs are below the height of the $\mu BS$ and hence $\theta_i \in [\pi/2, \pi]$; similarly, it is assumed that $\phi_i \in [-\pi/2, +\pi/2]$. The proposed algorithm to train the DRL agent for base station selection and beam alignment is presented in Alg. \ref{alg:proposed_beamer}. To appreciate the power of the proposed method, we also perform two benchmark experiments named \textit{BSOracle} and \textit{AngleOracle}. In the former, the agent has exact knowledge of the best $\mu BS$ for each UE, and in the latter, the locations of the UEs, and hence the elevation and azimuth angles, are known exactly to the agent. The main differences between the proposed method and the BSOracle/AngleOracle methods are listed below. \subsubsection{\textbf{Architecture of Neural action predictor of BSOracle method}} The proposed method takes $s_t\in\mathbb{R}^{N_{UE}\cdot(N_{BS}+1)}$ as the input to the learning agent at time $t$; this input contains only the RF signatures from the $N_{BS}$ $\mu BS$s and the SINR of each of the $N_{UE}$ UEs. In contrast, the input to the learning agent in the case of BSOracle is an $N_{UE}\cdot(N_{BS}+N_{BS}+1)$-dimensional vector (the observations of the $N_{UE}$ UEs are concatenated), which is fed to the common feature extractor network. For each UE, the first $N_{BS}$ elements form a one-hot encoded vector indicating the best $\mu BS$ for that UE, the next $N_{BS}$ elements are the RF fingerprints, and the last element is the SINR of that UE for the current beam alignment configuration. The output of the $l^{th}$ feature extractor layer is computed as $ \bm{x}_{l} = g\left( \bm{W}_l \bm{x}_{l-1} + \bm{b}_l \right), \text{ for } l = 1, \ldots, L, $ where $g(\cdot)$ is the non-linear activation function. The output of the feature network, $\bm{x}_{L}$, is fed to each of the UE sub-nets, which are used to predict \textit{just the elevation and azimuth angles}, because the best $\mu BS$ is already known to the agent via the input features. The action corresponding to the beam alignment angles for the $i^{th}$ UE is computed as \begin{align} \bm{a}^{(i)}_{\Theta} = \text{tanh}(\bm{W}_{i,\Theta} \bm{x}_{L} + \bm{b}_{i,\Theta}), \label{eqn:compute_angle_BSOracle} \end{align} where $\bm{W}_{i,\Theta} \in \mathbb{R}^{2 \times d_{L,o}}$ and $\bm{b}_{i,\Theta} \in \mathbb{R}^2$. As in the proposed method, Gaussian noise with decaying variance is used to better explore the space of angles. Finally, the angles are converted to radians in the same way as in the proposed method discussed before.
The final action is the concatenation of the vector referring the index of the best $\mu BS$ for each UE and the elevation and azimuth angles. \subsubsection{\textbf{Architecture of Neural action predictor of AngleOracle method}} Similar to the proposed method, the input $s_t$ to the learning agent at time $t$ is a $N_{UE}.(N_{BS}+1)$ dimensional vector. The rest of the architecture is very similar to the proposed method except that each UE sub-net predicts the pseudo discrete output that represents the index of the best $\mu BS$. Using this information the genie predicts the correct elevation and azimuth angles. We reiterate that both AngleOracle and BSOracle only serve as possible benchmarks against which we can compare the proposed approach and these by themselves cannot be implemented in a practical scenario since one will never know either the best $\mu BS$ for each UE or the UE locations. \section{Results} \ifCLASSOPTIONtwocolumn \begin{figure*}[!t] \centering \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 4 \times 4$} \label{fig:rateevo_03ue_bs04x04} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 8 \times 8$} \label{fig:rateevo_03ue_bs08x08} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_03UE_BS16x16_rateEvov1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 16 \times 16$} \label{fig:rateevo_03ue_bs16x16} \end{subfigure}% \caption{Rate Evolution during learning for $N_{UE} = 3$.} \label{fig:rateevo_03ue} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = 
comma]{./data/00_Nancy/data_05UE_BS04x04_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 4 \times 4$} \label{fig:rateevo_05ue_bs04x04} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_05UE_BS08x08_rateEvov1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 8 \times 8$} \label{fig:rateevo_05ue_bs08x08} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 16 \times 16$} \label{fig:rateevo_05ue_bs16x16} \end{subfigure}% \caption{Rate Evolution during learning for $N_{UE} = 5$.} \label{fig:rateevo_05ue} \end{figure*} \fi In this section, we provide simulation results for a four-junction scenario similar to \cite{alkhateeb2018deep} (extended to four roads) with $N_{BS} = 10$ and an intercell radius of approximately $100m$. A carrier frequency of $f_c = 28GHz$ is assumed and bandwidth of $5MHz$ is taken. All $\mu BS$ are assumed to have UPA antenna of square dimensions with $N_t = 4 \times 4$ and with $d = \lambda/2$, where is $\lambda$ is the wavelength associated with frequency $f_c$. Following the Street Canyon configuration \cite{5gmodel2016}, we used $n = 1.98$, $\sigma=3.1$, $b = 0$ and $f_0 = 10^{9}$ as the path loss parameters in (\ref{eqn:pathloss}). We provide results for $N_{UE} = 3, \text{ and } 5$ with $\kappa = 1$. The performance of the proposed method and the baseline competitor methods are compared below: \begin{enumerate} \item \textbf{Random}: A blind agent that does not receive any inputs about the UEs, but tries to assign a $\mu BS$ and a set of beam alignment angles for each UE. As this algorithm does not have any input/feedback, the rate obtained by this method is the minimum expected rate that can be obtained by any \textit{intelligent} agent. \item \textbf{Oracle}: This agent assumes that the exact knowledge about the location of UEs as well as the exact channel is known at the BS. 
Equipped with this information, \textit{Oracle} picks the best $\mu$BS-UE assignment as well as the beam alignment angles. The resulting rate is the maximum expected rate that any algorithm can achieve. \item \textbf{BS-Sweep}: This method is similar to the one proposed in \cite{nitsche2014ieee}. However, as there are multiple $\mu$BSs, all BSs simultaneously perform a brute-force beam search with a beam every $\nu$ degrees. Since some of the available time is spent on the UE discovery process, the reported metrics are based on the remaining transmit time. Also, UE discovery needs to be performed at every instant as the UEs are mobile. We considered a frame period of $10$~ms and a beam scan period of $200~\mu$s per beam. While a larger number of beams increases the resolution of UE discovery, it also adds overhead time. We provide results for $\nu = 5 \deg$. \item \textbf{Vanilla DDPG}: This is an RL agent that uses the feedforward neural networks proposed in \cite{lillicrap2015continuous} in the context of game-playing agents. It is provided to quantify the improvement due to the proposed neural network architecture. The network considered has $L = 2$ and each hidden layer has $128$ nodes. The difference from the proposed method is that the UE sub-nets are absent. We provide results averaged over $5$ agents, each trained for $7000$ episodes. Each episode is $1000$ timesteps long and the UE positions are reset at the end of each episode. \item \textbf{Proposed}: The proposed deep reinforcement learning based agent has a feedforward critic network and a UE sub-net augmented actor network, with $L = 2$ and $128$ nodes in each hidden layer. All other conditions, including the training environment and the optimizer parameters, are the same as for \textit{Vanilla DDPG} above. \item \textbf{BSOracle}: This agent predicts the elevation and azimuth angles given knowledge of the best $\mu BS$ for each UE. Its architecture is the same as that of the proposed method, except that each sub-net has only a single layer that predicts the angles. \item \textbf{AngleOracle}: Here the agent predicts only the index of the best $\mu BS$ for each UE and a genie provides the elevation and azimuth angles based on this. The architecture is the same as that of the proposed method, except that each sub-net has only a single layer that predicts the index of the best $\mu BS$. \end{enumerate} In the BSOracle method, the best $\mu BS$ for each UE is given by the genie, based on which the agent predicts the angles, whereas in the AngleOracle method, the agent predicts only the best $\mu BS$ and the best elevation and azimuth angles for each UE are evaluated by the genie. Needless to say, both BSOracle and AngleOracle perform better than the proposed method. These two experiments serve to assess the performance of the proposed method, which finds both the best $\mu BS$ and the elevation and azimuth angles for each UE entirely by learning.
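To make the structure of the proposed actor concrete, a minimal PyTorch sketch of its forward pass is given below, with $L=2$ shared layers of $128$ nodes as in Table \ref{tab:drl_params}. The softmax $\mu$BS head and the tanh angle head of each UE sub-net follow (\ref{eqn:actor_bs_sel}) and (\ref{eqn:compute_angle}); the ReLU activation and the class and variable names are illustrative assumptions rather than the exact implementation, and the exploration noise of (\ref{eqn:actor_bs_sel_noisy}) and (\ref{eqn:compute_angle_noisy}) is omitted here for brevity.

\begin{verbatim}
# Sketch of the proposed actor (shared feature extractor + per-UE sub-nets);
# layer sizes follow the learning-parameter table, activation/names are
# illustrative only.
import math
import torch
import torch.nn as nn

class ProposedActor(nn.Module):
    def __init__(self, n_ue, n_bs, hidden=128):
        super().__init__()
        in_dim = n_ue * (n_bs + 1)              # RF signatures + SINR per UE
        self.features = nn.Sequential(          # common feature extractor, L = 2
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.bs_heads = nn.ModuleList(          # per-UE micro-BS selection heads
            [nn.Linear(hidden, n_bs) for _ in range(n_ue)])
        self.angle_heads = nn.ModuleList(       # per-UE (theta, phi) heads
            [nn.Linear(hidden + n_bs, 2) for _ in range(n_ue)])

    def forward(self, state):
        x = self.features(state)
        per_ue_actions = []
        for bs_head, angle_head in zip(self.bs_heads, self.angle_heads):
            a_bs = torch.softmax(bs_head(x), dim=-1)   # normalized BS scores
            z = torch.cat([x, a_bs], dim=-1)           # fuse features + scores
            a_th = torch.tanh(angle_head(z))           # values in [-1, +1]
            theta = 3 * math.pi / 4 + a_th[..., :1] * math.pi / 4
            phi = a_th[..., 1:] * math.pi / 2
            per_ue_actions.append(torch.cat([a_bs, theta, phi], dim=-1))
        return torch.cat(per_ue_actions, dim=-1)
\end{verbatim}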
\begin{table}[!h] \centering \caption{Parameter for learning algorithms} \label{tab:drl_params} \begin{tabular}{|l|c|} \hline \textbf{Parameter} & \textbf{Value} \\ \hline \hline Number of Hidden Layers, $L$ & $2$ \\ \hline Hidden Nodes in layer & $[ 128, 128]$ \\ \hline Buffer Size, $\tau$ & $100000$ \\ \hline Discounting factor, $\gamma$ & $0.60$ \\ \hline $\lambda$ & $0.001$ \\ \hline Actor learning rate, $\eta_a$ & $0.0001$ \\ \hline Critic learning rate, $\eta_c$ & $0.001$ \\ \hline Number of episodes & $1000$ \\ \hline Steps per episode & $1000$ \\ \hline \end{tabular} \end{table} The parameters used for training deep reinforcement learning agents are given in Table \ref{tab:drl_params}. The values are found after sufficient hyperparameter tuning through manual search. For a fair comparison, we keep all the parameters the same across Vanilla DDPG, BSOracle, AngleOracle, and the Proposed architecture. \ifCLASSOPTIONtwocolumn \begin{figure*}[!t] \centering \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 4 \times 4$} \label{fig:ratecdf_03ue_bs04x04} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 8 \times 8$} \label{fig:ratecdf_03ue_bs08x08} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_03UE_BS16x16_rateCdfv1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 16 \times 16$} \label{fig:ratecdf_03ue_bs16x16} \end{subfigure}% \caption{CDF of observed rates for $N_{UE} = 3$.} \label{fig:ratecdf_03ue} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ 
\pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_05UE_BS04x04_rateCdfv1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 4 \times 4$} \label{fig:ratecdf_05ue_bs04x04} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_05UE_BS08x08_rateCdfv1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 8 \times 8$} \label{fig:ratecdf_05ue_bs08x08} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_05UE_BS16x16_rateCdfv1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 16 \times 16$} \label{fig:ratecdf_05ue_bs16x16} \end{subfigure}% \caption{CDF of observed rates for $N_{UE} = 5$.} \label{fig:ratecdf_05ue} \end{figure*} \fi \ifCLASSOPTIONonecolumn \begin{figure*}[t] \centering \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 4 \times 4$} \label{fig:rateevo_03ue_bs04x04} \end{subfigure}% \begin{subfigure}{.33\linewidth} 
\resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 8 \times 8$} \label{fig:rateevo_03ue_bs08x08} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_03UE_BS16x16_rateEvov1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 16 \times 16$} \label{fig:rateevo_03ue_bs16x16} \end{subfigure}% \caption{Rate Evolution during learning for $N_{UE} = 3$.} \label{fig:rateevo_03ue} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 4 \times 4$} \label{fig:rateevo_05ue_bs04x04} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_05UE_BS08x08_rateEvov1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 8 \times 8$} \label{fig:rateevo_05ue_bs08x08} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateEvo_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateEvoNANZ-BS-noisyaction21.csv}\datatablenancysigmaone \pgfplotstableread[col sep = 
comma]{./data/00_Nancy/data_05UE_BS16x16_rateEvoNANZ-BSKnown.csv}\datatablenancybsknownrateevo \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateEvoNANZ-AngleKnown.csv}\datatablenancyangleknownrateevo \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [] \input{./figs/rateevo_template.tex} } \caption{$N_t = 16 \times 16$} \label{fig:rateevo_05ue_bs16x16} \end{subfigure}% \caption{Rate Evolution during learning for $N_{UE} = 5$.} \label{fig:rateevo_05ue} \end{figure*} \fi The evolution of the average sum rate during the learning phase of the proposed algorithm is given in Fig. \ref{fig:rateevo_03ue} (for $3$ UEs) and Fig. \ref{fig:rateevo_05ue} (for $5$ UEs) for different numbers of transmit antenna elements. With more antenna elements, the beam produced by the UPA becomes narrower, thus delivering most of the power towards the target direction. With fewer transmit antennas, the beams become broader, and this can cause additional interference to neighboring UEs even with good spatial separation. This is evident from the trend of the average sum rate across the different numbers of transmit elements. As the number of antenna elements increases, the SINR improves and we see an improvement in the rate of the Oracle. As the beams become narrower, the rate of the BS-Sweep method also improves. The learning phase of all the DRL algorithms, i.e., Vanilla DDPG, BSOracle, AngleOracle, and the proposed method, is evident from Fig. \ref{fig:rateevo_03ue} and Fig. \ref{fig:rateevo_05ue} (legends are shared across all the figures). The performances of all these algorithms are upper bounded by that of the Oracle method, where the agent has all the information regarding the best $\mu BS$ and the UE locations. Realizing an Oracle, or even BSOracle/AngleOracle agents, in a practical scenario is not possible; the aim, however, is to get as close to them as possible. Initially, both the Vanilla DDPG and the proposed learning agents start with a performance similar to that of the \emph{Random} agent. This is expected, since at the beginning the DRL agents are initialized with random weights and hence the actions taken by them are also random. The BSOracle agent knows the best $\mu BS$ for each UE and needs to predict only the beam alignment angles. Therefore, the performance of BSOracle starts at a comparatively higher sum rate than Vanilla DDPG or the proposed method. However, the proposed method slowly catches up with the performance of BSOracle. The AngleOracle agent predicts the best $\mu BS$ for each UE and uses the corresponding angles provided by the oracle. As the average sum rate is more sensitive to the UE locations than to the knowledge of the best $\mu BS$, the AngleOracle agent starts from a better average sum rate than the BSOracle method and also reaches a higher average sum rate than all other algorithms (except Oracle). We reiterate that the proposed algorithm does not take any help from the Oracle and thus learns both the best $\mu BS$ and the angles by itself. In contrast, the BSOracle/AngleOracle methods know part of the information (either the $\mu BS$ or the UE locations) and can thus be thought of as \emph{partially-Oracle}. As training progresses, the performance of all the DRL methods increases. The rate evolution clearly shows the advantage of the proposed neural architecture and of the action-space exploration for both the $\mu BS$ and the elevation/azimuth angles.
Since Vanilla DDPG is not able to handle the mixed actions well, the learning curve flattens soon after the start of the training and also at levels at or below that of the existing beam sweeping methods. However, the proposed architecture for action selection can overcome this difficulty and the learning progresses at a much faster pace than Vanilla DDPG. Further, the evolution of rate is slower because of the exploration in both the action spaces. After approximately $500$ episodes of training itself, the proposed method can perform better than beam sweeping methods. In all of the antenna configurations and UE configurations, at the end of the rate evolution process, the proposed method performs similar to the BSOracle method. This implies that the proposed method can find the best $\mu BS$ for each UE efficiently. With less number of UEs (3 UE case), the performance of the proposed method is very close to the Oracle. For a particular antenna dimension, the gap between the BSOracle/AngleOracle method and the Oracle increases with more number of UEs because the dimension of the action for the agent itself increases, hence increasing the problem complexity. \ifCLASSOPTIONonecolumn \begin{figure*}[t] \centering \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS04x04_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 4 \times 4$} \label{fig:ratecdf_03ue_bs04x04} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS08x08_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 8 \times 8$} \label{fig:ratecdf_03ue_bs08x08} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_03UE_BS16x16_rateCdfv1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_03UE_BS16x16_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 16 
\times 16$} \label{fig:ratecdf_03ue_bs16x16} \end{subfigure}% \caption{CDF of observed rates for $N_{UE} = 3$.} \label{fig:ratecdf_03ue} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_05UE_BS04x04_rateCdfv1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS04x04_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 4 \times 4$} \label{fig:ratecdf_05ue_bs04x04} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_05UE_BS08x08_rateCdfv1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS08x08_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 8 \times 8$} \label{fig:ratecdf_05ue_bs08x08} \end{subfigure}% \begin{subfigure}{.33\linewidth} \resizebox{\linewidth}{!}{ \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdf_Baselines_NANZ.csv}\datatable \pgfplotstableread[col sep = comma]{./data/02_psuedo_act/data_05UE_BS16x16_rateCdfv1.csv}\tabpsuedoactvone \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdfNANZ-BS-noisyaction21.csv}\datatablenancyvarsigma \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdfNANZ-BSKnown.csv}\datatablenancybsknownratecdf \pgfplotstableread[col sep = comma]{./data/00_Nancy/data_05UE_BS16x16_rateCdfNANZ-AngleKnown.csv}\datatablenancyangleknownratecdf \pgfplotsset{ignore legend} \tikzstyle{mark_style} = [mark size={3.0}, mark repeat=20, mark phase=1] \input{./figs/ratecdf_template.tex} } \caption{$N_t = 16 \times 16$} \label{fig:ratecdf_05ue_bs16x16} \end{subfigure}% \caption{CDF of observed rates for $N_{UE} = 5$.} \label{fig:ratecdf_05ue} \end{figure*} \fi The cumulative distribution of rate achieved by each of the methods is given in Fig. \ref{fig:ratecdf_03ue} and Fig. \ref{fig:ratecdf_05ue}. For each of the methods, we simulated $10000$ observations ($10$ episodes), and the data aggregated based on these observations are plotted. Note that the trained DRL-agents are used to get the rate distribution. Even though the DRL methods can give high rates compared to the baseline methods, as the dimensions of the problem increases, the gap between the proposed approach and the oracle also increases. 
For example, with smaller antenna dimensions, the proposed method attains a rate close to that of the AngleOracle/BSOracle methods. In higher-dimensional problems with more antennas and UEs, even though the rate is better than that of all other DRL and non-DRL based methods, the gap between the proposed method and the Oracle/partially-Oracle methods leaves room for improvement and is open for further research. In all cases, the proposed DRL-based blind beam alignment method provides a better data rate than the beam sweeping method. When the number of transmit antennas at the base stations is small, the proposed method performs close to the Oracle. However, as the number of transmit antennas increases, the gap between the proposed method and the Oracle widens. It should be noted that as the number of transmit antennas increases, the beam that can be formed gets narrower and, for a given transmit power, a better SINR is delivered at the receiver. Even though the proposed neural network can predict the approximate location of a UE from the observed RSSI values, these predictions are susceptible to noise in the measured RSSI. When $N_t$ is small, errors in location estimation do not affect the SINR much because of the broader beamwidth: with a broader beam, the $\mu BS$ covers a larger angular area and the requirement for precise beam alignment angles is relaxed. As the number of antenna elements in a $\mu BS$ increases, the beamwidth gets narrower and covers less area in the angular domain, so precise knowledge of the location helps to improve the beam alignment accuracy. This is the reason why the gap between the Oracle and the proposed method widens as the number of antenna elements increases; the gap between the proposed method and the AngleOracle method grows with the number of antennas for the same reason. Nevertheless, the proposed method outperforms the other popular traditional as well as machine learning based methods by a large margin. It should also be noted that the proposed approach does not make any assumptions on either the UE-BS placement or the channel properties. This makes the proposed method equally applicable to networks of ground-based as well as aerial UEs/BSs. \section{Concluding Remarks} In this work, a deep reinforcement learning based method is proposed for blind beamforming in mmWave communication systems in a multi-BS multi-UE scenario. The numerical experiments show that, for small antenna arrays, the proposed method not only increases the average sum rate of the UEs fourfold compared to the traditional beam-sweeping technique but also reaches the same rate as the Oracle. Furthermore, even for larger antenna arrays, the average sum rate of the proposed method is significantly better than that of traditional methods like BS-Sweep or DRL-based methods like Vanilla DDPG. This improved performance is achieved with almost no overhead to the system, as existing signals in the cellular network can be used as the beacon signals for feature extraction. The proposed neural network architecture for handling action spaces that mix discrete and continuous actions is the key to the improvement in performance.
Even though we showed results only with DDPG, the proposed neural architecture is agnostic to the policy gradient method and can be used with any other actor-critic method whose actions mix continuous and discrete domains. Furthermore, its utility is not limited to the beamforming problem: it can be used in any problem where the action space is a mix of continuous and discrete actions.
\section{Introduction}\label{sec:intro} Averaged quantities are still commonly used for the description of many complex processes in physics, chemistry or astrophysics. This kind of approach is imprinted in our minds by the unquestionable success of thermodynamics, in which fluctuations around the thermodynamic value are small in large systems. It is our experience that gross measures in, {\it e.g.}, particle reaction data are not sufficient to discriminate among models unless supplemented with more fine-grained information, especially fluctuations and correlations of various kinds. The recent resurgence of interest in fluctuations in strong interaction physics is due to the experimental possibility to measure, event by event, many strongly fluctuating quantities, {\it e.g.}, the multiplicity of produced particles in ultrarelativistic collisions of leptons, hadrons, nuclei, or the charges (masses) of fragmentation products of a highly excited heavy-ion residue. The immediate questions are then: what information is contained in fluctuations and, in particular, in the order parameter fluctuations; how do these fluctuations scale with the system size; and what is the relation between criticality and the probability distribution of order parameter fluctuations. \section{Order parameter fluctuations}\label{sec:order} Let us suppose that the thermodynamic free energy of a finite system depends on three parameters : $\eta$ (the intensive order parameter), $\epsilon$~ (the distance to the critical point) and $N$ (the size of the system). Widom \cite{Widom}~ has proposed that close to the critical point, the free energy density in the thermodynamic limit scales as : \begin{eqnarray} \label{wid1} f_o (\lambda^{\beta} \eta, \lambda \epsilon) \sim \lambda^{2-\alpha} f_o (\eta, \epsilon) ~~~ \ , \end{eqnarray} where $\alpha, \beta $ are the usual critical exponents. There is no critical behavior in a finite system. However, a finite system behaves like an infinite one if the correlation length $\xi $ becomes comparable to the typical length $L$~ of the system. This is basically the argument of Fisher and Barber \cite{FisherBarber}~ leading to the finite-size scaling analysis of critical systems. The pseudocritical point for a finite system appears at a distance $\epsilon \sim c N^{-1/{\nu d}}$~ from the critical point\cite{FF}~, where $c$~ is some dimensionless constant which can be either positive or negative\cite{comment}. One can then deduce the scaling of the critical free energy density at this point : \begin{eqnarray} \label{Widomfinite} f_c(\eta ,N) \sim \eta^{\frac{2-\alpha }{\beta }} \phi(\eta N^{\frac{\beta }{\nu d}})~~~ \ . \end{eqnarray} Assuming the hyperscaling relation : $2-\alpha = \nu d$~, and using the Rushbrooke relation between critical exponents : $\alpha + 2 \beta + \gamma = 2$, one can write the total free energy $F(\eta , \epsilon, N) = N f_o (\eta , \epsilon )$~ at the pseudocritical point as follows : \begin{eqnarray} \label{psi} F_c(\eta , N) \sim f_o (\eta N^{\frac{\beta }{\gamma + 2 \beta }},c)~~~ \ . \end{eqnarray} The canonical probability density $P[\eta ]$ to get some value of the order parameter $\eta $~ is given by\cite{Mayer}~: \begin{eqnarray} \label{proz} P[\eta ] = {Z_N}^{-1} \exp(-{\beta}_T F(\eta , \epsilon , N)) ~~~ \ , \end{eqnarray} where ${\beta}_T$~ is the inverse temperature. Using Eq. (\ref{psi}), one obtains the partition function : \begin{eqnarray} \label{part} Z_N \sim N^{-\frac{\beta }{\gamma + 2 \beta}} \sim <|\eta |> ~~~\ .
\end{eqnarray} It is then easy to see that the probability density $P[\eta ]$~ obeys the first scaling law : \begin{eqnarray} \label{first} <|\eta |> P[\eta ] & = & \Phi (z_{(1)}) = \Phi \left( \frac{\eta - <| \eta |>}{<| \eta |>} \right) \nonumber \\ & = & a({\beta}_T ) \exp \left( - {\beta}_T f_o \left( \frac{\eta}{ <| \eta |>}, c \right) \right) ~~~ \ , \end{eqnarray} with the constant coefficient ${\beta}_T$~ independent of $\eta $~, and the scaling function $\Phi (z_{(1)})$~ depending on a single scaled variable : $z_{(1)} = (\eta - <| \eta |>)/<| \eta | >$~. The scaling limit is defined by the asymptotic behaviour of $P[ \eta ]$ when $\eta \rightarrow \infty$, $<| \eta |> \rightarrow \infty$, but $({\eta}/{<| \eta |>})$~ has a finite value. The temperature-dependent factor $a({\beta}_T)$ is determined by the normalization of $P[\eta ]$~. One may notice that the logarithm of scaling function (\ref{first}) corresponds to the non-critical free energy density at the renormalized distance $\epsilon =c$~ from the critical point. If the order parameter corresponds to the cluster multiplicity, like in the fragmentation - inactivation binary (FIB) process\cite{sing1,sing11}, then (\ref{first}) can be written in an equivalent form to the KNO scaling\cite{koba}~, proposed some time ago as the ultimate symmetry of $S$ - matrix in the relativistic field theory\cite{comment1}~. Relation between the KNO scaling and the phase transition in Feynman-Wilson gas as well as the criticality of self-similar FIB process was studied as well\cite{antoniou,kno}~. Defining the anomalous dimension for an extensive quantity $N \eta$~ as : \begin{eqnarray} \label{anomdim} g = {\lim}_{N \rightarrow \infty} g_N = {\lim}_{N \rightarrow \infty} \frac{d}{d\ln N} \left( \ln <N|\eta |> \right) ~~~\ , \end{eqnarray} one can see that due to (\ref{part}), the scaling (\ref{first}) holds when $g=(\gamma + \beta )/(\gamma + 2\beta )$~. Consequently, $g$~ is contained between 1/2 and 1. Whenever the cluster-size can be reasonably defined for the second-order transition, like it is the case in percolation, Ising model or Fisher droplet model, the exponent $\tau$~ of the power-law cluster-size distribution : $n(k) \sim k^{-\tau }$~, satisfies additional relations\cite{Stauffer} : $\gamma + \beta =1/\sigma$ and $\gamma + 2\beta = (\tau -1)/\sigma$ ~, which yield : $g \equiv 1/(\tau -1)$~. This means that $\tau$ has to be contained between 2 and 3 when (\ref{first}) holds. What may happen if the order parameter is not known exactly? To illustrate this point, let us consider a quantity : $m=N^{\kappa } - N \eta$~, where $\eta $~ is the true order parameter and $\kappa $~ is larger than the anomalous dimension $g$~. For large $N$~, $|m|$ is of order $N^{\kappa}$~. Writing (\ref{first}) with $m$ instead of $\eta$, and taking into account : $P[\eta ] d\eta =P[m]dm$~, one finds the delta - scaling : \begin{eqnarray} \label{delta} <|m|>^{\delta }P[m] = \Phi (z_{({\delta})}) \equiv \Phi \left( \frac{m-<|m|>}{<|m|>^{\delta}} \right) ~~\ , ~~~~~ \delta = \frac{g}{\kappa} < 1 ~~~\ , \end{eqnarray} with the scaling function ${\Phi} (z_{(\delta )})$~ depending only on the scaled variable : $z_{(\delta )} = (m-<|m|>)/<|m|>^{\delta}$~. According to (\ref{psi}) and (\ref{first}), the logarithm of scaling function : \begin{eqnarray} \label{logar} \ln \Phi (z_{(\delta )}) = -{\beta}_T f_o(z_{(\delta )}, c) ~~~ \ , \end{eqnarray} is directly related to the non-critical free energy , in either ordered ($c>0$) or disordered $(c<0)$ phase. 
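As a practical remark, the scaling laws (\ref{first}) and (\ref{delta}) can be tested directly on simulation or event-by-event data. The short Python sketch below builds the scaled distribution $<|m|>^{\delta}P[m]$~ as a function of $z_{(\delta )}$~ from samples of an extensive observable $m$~ recorded at a given system size; overlaying the curves obtained for different sizes, a collapse onto a single function ${\Phi}$~ for some $\delta$~ identifies the scaling regime ($\delta =1$ corresponding to (\ref{first})). The data source in the usage example is a placeholder.

\begin{verbatim}
# Sketch of the delta-scaling analysis: the density of the scaled variable
# z = (m - <|m|>)/<|m|>**delta equals <|m|>**delta * P[m], i.e. the candidate
# scaling function Phi(z).  The sample arrays are placeholders.
import numpy as np

def scaled_distribution(samples, delta, bins=60):
    m = np.asarray(samples, dtype=float)
    mean = np.abs(m).mean()                  # <|m|>
    z = (m - mean) / mean**delta             # z_(delta)
    phi, edges = np.histogram(z, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, phi

# usage: samples_by_size = {1000: m_small, 2744: m_large}   # e.g. S_max data
# curves = {N: scaled_distribution(m, delta=1.0)
#           for N, m in samples_by_size.items()}
# a collapse of the curves for different N signals the first scaling (delta=1).
\end{verbatim}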
The relation $\delta = g/\kappa $~ in (\ref{delta}) singularizes importance of an extensive variable : $m=N(1-\eta )\equiv N{\hat {\eta}}$~. ${\hat {\eta}}$~ can be useful in phenomenological applications and plays an important role in the percolation studies\cite{Stauffer}~. One finds for this choice : $<m> \sim N$~, with algebraic finite-size corrections, and the delta-scaling (\ref{delta}) with $\delta = g$. Hence, $P[N{\hat {\eta}}]$~ allows to determine the anomalous dimension $g$~ and, consequently, the ratio of critical exponents $\beta$~ and $\gamma$~. Let us suppose now that the extensive parameter $m $ is not critical , {\it i.e.}, either the system is in a critical state but the parameter $m$ is not critical, or the system is outside of critical region. The value of $m$ at the equilibrium is obtained by minimizing the free energy. Let us suppose that the free energy $F$ is analytical in the variable $m$ close to its most probable value $m^{*}$, {\it i.e.}, : \begin{eqnarray} \label{freenot} F \sim N^{-\phi} (m-m^{*})^{{\phi}+1} ~~~ \ . \end{eqnarray} In most cases ${\phi}=1$~, though, in general, ${\phi}$~ can take any positive integer value. Using (\ref{freenot}) one obtains : $<|m|> \sim \mu^{*} N$~, where $\mu^{*}$ is a positive (finite) number independent of $N$~, and : \begin{eqnarray} \label{zkonc} Z_N \sim N^{\frac{{\phi}}{{\phi}+1}} \sim ~<|m|>^{\frac{{\phi}}{{\phi}+1}} ~~~ \ . \end{eqnarray} The probability density $P[m]$ verifies the generalized scaling law (\ref{delta}) : \begin{eqnarray} \label{second} <|m|>^{\delta} P[m] = \Phi (z_{(\delta )}) = \exp \left( -{\beta}_T \mu^{*{\phi}} \left( \frac{(m-<|m|>)}{<|m|>^{\frac{{\phi}}{{\phi}+1}}} \right)^{{\phi}+1} \right) ~~~ \ , \end{eqnarray} but now $\delta $ ($= {\phi}/({\phi}+1) < 1$) is constrained by the value of ${\phi}$~. In the generic case ${\phi}=1$~, $\delta $~ equals 1/2 and the scaling function is Gaussian\cite{comment2}. The second scaling (\ref{second}) holds for $<m> \sim N$ but now with the exponential finite-size corrections. \section{Results}\label{sec:results} The above results apply to any {\it second} order transition, and , in particular, they are not limited to the Landau-Ginzburg theory of phase transitions. \begin{figure} \vspace{8cm} \special{psfile=dubna_fig1.ps voffset=350 hoffset=-20 hscale=45 vscale=55 angle=-90} \label{fig1} \caption{ The order parameter fluctuations in the bond percolation model in 3D are calculated in the lattices : $N = 10^3 $ and $N=14^3$~, and plotted in the scaling variables of delta-scaling (\ref{delta}). {\bf (a)} (the upper left plot) : for the bond activation probability : $p_{cr}=0.2482$~; {\bf (b)} (the upper right plot) : for $p=0.245 \simeq p_{cr}$~; {\bf (c)} (the lower left plot) : for $p=0.35 > p_{cr}$~; {\bf (d)} (the lower right plot) : fluctuations of the quantity : $m = M_1 - S_{max}$~, where $M_1 = \sum_k kn(k)$~ is the first moment of the fragment-size probability distribution, at the percolation threshold $p_{cr}$ are plotted in the scaling variables of delta-scaling (\ref{delta}) with $\delta \simeq 0.80$~. For more details, see the description in the text.} \end{figure} As an illustration, let us look at the bond percolation model ( Fig. 1). The intensive order parameter is the normalized mass of largest cluster : $\eta = S_{max}/N$~. Fig. 
1a (the upper left plot) shows the scaling function $\Phi (z_{(1 )})$~ in $3D$-percolation at the value of the bond activation probability ($p_{cr}=0.2482$) corresponding to the critical activation in the infinite system. Results for different lattice sizes superpose well in the scaled variables (\ref{delta}) for $\delta = 1$~. One should recall that ${\Phi}(z_{(1)})$~ is also a 'portrait' of the free energy density at a renormalized distance from the critical point. The probability distribution $P[N\eta ]$~ changes rapidly, as can be seen in Figs. 1a and 1b (the upper right plot). The latter plot is obtained for the bond activation probability $p=0.245$~, very close to $p_{cr}$. Results in Fig. 1b are plotted in the scaling variables (\ref{delta}) for $\delta = 1$~, even though slight deviations between different calculations can be seen in the shoulder region and in the tail for positive $z_{(1)}$~. Fig. 1c (the lower left plot) shows the order parameter distribution in the 'liquid phase' ($p=0.35 > p_{cr}$)~. One finds the second scaling (\ref{second})~, in agreement with the analytical derivation. Finally, Fig. 1d (the lower right plot) shows what happens at the percolation threshold ($p = p_{cr}$) if, instead of $P[N\eta ]$~, one plots the probability distribution of $m = M_1 - S_{max}$~, where $M_1 = \sum_k kn(k)$~ is the first moment of the fragment-size probability distribution. $m$~ ($\equiv N{\hat {\eta}}$~) is related in a non-trivial way to the order parameter and, in particular, it conserves the singularity of $S_{max}$~. $P[m]$~ obeys the delta-scaling (\ref{delta}) for $\delta \simeq 0.80$~, which should be compared with $1/(\tau -1) = 0.84$~, given by the analytical argument leading to the delta-scaling (\ref{delta}). This signature of the phase transition disappears for $p \neq p_{cr}$~, {\it i.e.}, $\delta $~ becomes equal to 1/2. \begin{figure} \vspace{8cm} \special{psfile=dubna_fig2.ps voffset=350 hoffset=-20 hscale=45 vscale=55 angle=-90} \label{fig2} \caption{The order parameter fluctuations in the FIB model calculated for two initial sizes : $N=2^{10}$ and $N=2^{14}$~, and plotted in the scaling variables of delta-scaling (\ref{delta}). {\bf (a)} (the upper left plot) : the critical FIB process for $p_F=0.875$~, $a=0$~ corresponding to the anomalous dimension $g=0.75$~; {\bf (b)} (the upper right plot) : fluctuations of the quantity : $m = N - M_0$~ are plotted in the scaling variables of delta-scaling (\ref{delta}) with $\delta \simeq 0.73$~; {\bf (c)} (the lower left plot) : the critical FIB process for $p_F=0.7$~, $a=0$~ corresponding to the anomalous dimension $g=0.4$~; {\bf (d)} (the lower right plot) : shattered phase for $a=0,~ b=-1,~ \tau =4$~. For more details, see the description in the text.} \end{figure} As a second example, let us consider the FIB model \cite{sing1,sing11}~ which exhibits the second-order, shattering phase transition \cite{Ziff}. In this off-equilibrium case, the analytical derivation of (\ref{first}) and (\ref{delta}), in principle, does not apply. In the FIB model one deals with clusters characterized by a conserved quantity, called the cluster mass. The ancestor cluster of mass $N$~ fragments via an ordered and irreversible sequence of steps until either the cutoff scale of monomers is reached or all clusters are inactive.
Each step in this cascade is either a binary fragmentation of an active cluster $(k) \rightarrow (j)~+~(k-j)$~ with a fragmentation rate $\sim F_{j,k-j}$~ (a mass partition probability), or its inactivation $(k) \rightarrow (k)^{*}$~ with an inactivation rate $\sim I_k$~. The order parameter here is the reduced cluster multiplicity $M_0/N$~ or the reduced monomer multiplicity, the two being closely interrelated. The total fragmentation probability $p_F$~ at each step of the FIB cascade is : \begin{eqnarray} \label{pfdef} p_F(k) = \sum_{j=1}^{k-1} F_{j,k-j}~( I_k + \sum_{j=1}^{k-1} F_{j,k-j} )^{-1} ~~~ \ . \end{eqnarray} The independence of $p_F(k)$~ of the cluster mass at any step down to the cutoff scale characterizes the {\it critical transition region}. The FIB process is self-similar in this regime. For $p_F > 1/2$~, the anomalous dimension (\ref{anomdim}) (now $N| \eta | \equiv M_0$)~ varies from 0 to 1, which is different from the limits on $g$~ in equilibrium systems. The average multiplicity of inactive clusters is \cite{kno}~: $<M_0> \sim N^{\tau - 1}$~ ($1 < \tau < 2$)~, and $g=\tau -1$~. In the {\it shattered phase}, the average multiplicity is : $<M_0>~ \sim N$~, the cluster-size distribution is a power-law with $\tau > 2$~ and the anomalous dimension is $g = 1$~. In this phase, $p_F$~ is an increasing function of the cluster mass $k$~ and, hence, the FIB cascade is not self-similar. Most of the interesting physical applications correspond to homogeneous fragmentation functions \cite{sing11} : $F_{\lambda j, \lambda (N-j)} = {\lambda }^{2a} F_{j,N-j}$~, {\it e.g.}, $F_{j,N-j} \sim [j(N-j)]^{a}$. For the homogeneous inactivation rate function : $I_k \sim k^{b}$, the critical transition region in the FIB model corresponds to the line : $b = 2a + 1$~, and the shattered phase corresponds to : $b < 2a + 1$~. Such homogeneous rates, $F_{j,k-j}$~ and $I_k$~, will be used in the examples shown in Fig. 2. Fig. 2a (the upper left plot) shows the scaling function $\Phi (z_{(1 )})$~ of the critical FIB process for $p_F=0.875$~, $a=0$~, which yields $g=0.75$~. The asymmetry of $\Phi (z_{(1)})$~ about $z_{(1)}=0$~ is rather common in the critical sector of the FIB model \cite{kno}. This sector and its characteristic first scaling (\ref{first}) extend to the domain $0 < g < 1/2$~\cite{kno}~, excluded in equilibrium models. In this domain ($g<1/2$)~, the most probable value of ${\Phi}(z_{(1)})$ is at $z_{(1)}=-1$~, whereas for $g>1/2$~ it takes a value close to $z_{(1)}=0$~\cite{kno}~. This can also be seen in Fig. 2c (the lower left plot), which exhibits the scaling function $\Phi (z_{(1 )})$~ of the critical FIB process for $p_F=0.7$~, $a=0$~, for which $g=0.4$~. Fig. 2b (the upper right plot) shows what happens if, instead of $P[M_0] \equiv P[N\eta ]$~, one plots $P[N-M_0] \equiv P[N{\hat {\eta}}]$~. Analogously as in percolation (see Fig. 1b), $P[N{\hat {\eta}}]$~ obeys the scaling (\ref{delta}) with the non-trivial exponent $\delta \simeq 0.73$~, which is close to the value $\delta = g = 0.75$~ obtained from the analytical arguments leading to the delta-scaling (\ref{delta}). The order parameter distribution in the shattered phase ($a=0,~ b=-1,~ \tau =4$)~ is shown in Fig. 2d (the lower right plot) in the variables of the second scaling (\ref{second})~ for $\delta = 1/2$~. Again, one finds an analogy to the situation in the 'liquid' phase of percolation (Fig. 1c).
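The FIB cascade described above is also straightforward to simulate. The Python sketch below generates one realization with the homogeneous rates $F_{j,k-j} \sim [j(k-j)]^{a}$~ and $I_k \sim k^{b}$~, choosing fragmentation with the probability $p_F(k)$~ of (\ref{pfdef}) at every step; the overall normalization $c$~ of the fragmentation rate, which tunes the value of $p_F$~, and the function names are illustrative assumptions. Histograms of the multiplicity $M_0$~ of inactive clusters accumulated over many such cascades are the input of the scaling analysis shown in Fig. 2.

\begin{verbatim}
# Sketch of one FIB cascade with homogeneous rates F ~ c*[j(k-j)]**a, I ~ k**b
# (c and the names are illustrative); returns the sizes of inactive clusters.
import numpy as np

rng = np.random.default_rng(0)

def fib_cascade(N, a=0.0, b=1.0, c=1.0):
    active, inactive = [N], []
    while active:
        k = active.pop()
        if k == 1:                       # cutoff scale: monomers are inactive
            inactive.append(1)
            continue
        j = np.arange(1, k)
        frag_rates = c * (j * (k - j)) ** a     # F_{j,k-j}
        inact_rate = float(k) ** b              # I_k
        total_frag = frag_rates.sum()
        if rng.random() < total_frag / (total_frag + inact_rate):   # p_F(k)
            split = int(rng.choice(j, p=frag_rates / total_frag))
            active += [split, k - split]        # (k) -> (j) + (k-j)
        else:
            inactive.append(k)                  # (k) -> (k)*
    return inactive

# e.g. multiplicities of inactive clusters, M_0, over many cascades:
# m0 = [len(fib_cascade(2**10, a=0.0, b=1.0)) for _ in range(5000)]
\end{verbatim}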
\section{Outlook} The off-equilibrium FIB model shows essentially the same relation between criticality and scaling of the order parameter fluctuations as that derived analytically for second-order equilibrium transitions (Eqs. (\ref{first}), (\ref{delta}), (\ref{second})). This is related to the underlying self-similarity which is common to both equilibrium and off-equilibrium realizations of the second-order transition. In that sense, the scaling laws (Eqs. (\ref{first}), (\ref{delta}), (\ref{second})) are the salient features of {\it any} system exhibiting a second-order transition, and the function ${\Phi}(z_{(\delta )})$~ is a fingerprint of the system and its transition. It is therefore important to study the implications of the scaling (\ref{first}) for the statistical properties of the system. Let us first postulate the definition of the pseudo-free energy: \begin{eqnarray} \label{pseudo} {\cal F} = - {{\tilde {\beta}_T}}^{-1} \ln(<| \eta |>P[ \eta ]) ~~~ \ , \end{eqnarray} with a coefficient ${\tilde {\beta}_T}$ which is independent of $\eta $~ and characterizes the homogeneous system. If we suppose that the first scaling (\ref{first}) holds and employ the asymptotic definition of the anomalous dimension (\ref{anomdim})~, we find: \begin{eqnarray} \label{pseudoscal} {\cal F}/N \sim {\eta}^{1/(1-g)} \phi (\eta N^{1-g}) ~~~ \ , \end{eqnarray} which formally corresponds to (\ref{Widomfinite}) with $(1-g)$ instead of both $\beta/(2-\alpha ) $~ and $\beta /(\nu \delta )$. ${\cal F}$ appears to be a pseudo-free energy for {\it constrained} $\eta $~ and {\it fixed} $N$ and ${\tilde {\beta}_T}$. This derivation does not suppose that the system is close to the second-order transition in thermodynamical equilibrium. In particular, (\ref{first}) and (\ref{anomdim}) are expected to hold for any critical behaviour: at thermodynamic equilibrium, as we have shown rigorously, but also for non-thermodynamic equilibrium as in percolation, for the off-equilibrium final state as in the FIB model, or for self-organized criticality \cite{SOC}. The basic quantity is ${\cal F}$~, which contains thermodynamically relevant information about the analogue of the inverse temperature. The scaling law (\ref{pseudoscal}) of ${\cal F}$~ gives in turn information about the critical behavior of the system. Finally, let us recall that formula (\ref{pseudoscal}) contains only one exponent, $g$. To gain access to other critical exponents, one needs to vary the field conjugate to the order parameter. To summarize, we have found a new approach to studying critical phenomena in both equilibrium and off-equilibrium systems. This approach is based on the existence of universal scaling laws in the probability distribution of both the order parameter and its complement in second-order phase transitions. The precise relation between the scaling functions ${\Phi}(z_{(\delta)})$~, the nature of the order parameter, and the critical exponents yields a new tool for determining combinations of critical exponents even in small systems and for learning about the nature of the critical phenomenon. We hope this approach will be useful in many phenomenological applications in strong-interaction physics and in condensed-matter physics. \section*{References}
\section{Introduction} We consider the problems posed by modern retail payments in the context of the perceived need for compromise between regulatory compliance and consumer protections. Retail payments increasingly rely on digital technology, including both e-commerce transactions via the Internet and in-person electronic payments leveraging payment networks at the point of sale. With cash, customers pass physical objects that are in their possession to merchants. In contrast, electronic payments are generally conducted by proxy: Customers instruct their banks to debit their accounts and remit the funds to the bank accounts of their counterparties. For this reason, non-cash retail payments expose customers to a variety of costs and risks, including profiling, discrimination, and value extraction by the custodians of their assets. A good central bank digital currency (CBDC) would empower individuals to make payments using digital objects in their possession rather than accounts that are linked to their identities, affording them verifiable privacy and control over their digital payments. However, many existing CBDC proposals require either a centralised system operator or a global ledger. Centralised systems entail risks both for the users of the system as well as for the system operators, and global ledgers present performance bottlenecks as well as an economically inefficient allocation of transaction costs. We present a system architecture for retail payments that allows transactions to take place within a local context, avoiding the problems associated with performance bottlenecks and centralised system operators. We show how assets that represent obligations of central banks can be created and exchanged, without requiring a central system operator to process and adjudicate all of the transactions, and without undermining the portability of money throughout the system or the ability for regulators to ensure compliance. Although our proposal takes a decentralised approach to processing transactions, money within our system intrinsically relies upon a trusted \textit{issuer}. This could be the central bank itself, but it could also be a co-regulated federation, such as a national payment network or the operators of a real time gross settlement system. Specifically, the issuer is trusted to oversee the processing of redemptions, wherein CBDC assets are accepted as valid by their recipients. Our proposal is fully compatible with the function of existing private-sector banks. The architecture provides an effective solution for a variety of different use cases, including those that are sensitive to regulatory compliance requirements, transactional efficiency concerns, or consumer affordances such as privacy and control. We begin with an examination of the properties required to support such use cases. The remainder of this article is organised as follows. Chapter~\ref{s:desiderata} identifies the properties that a payment system should have as a foundation for a robust set of technical requirements, Chapter~\ref{s:architecture} specifies the design of our proposed architecture, Chapter~\ref{s:operational} offers a model for how to deploy and manage a central bank digital currency (CBDC) system using our architecture, Chapter~\ref{s:use-cases} describes several use cases that demonstrate the special capabilities of our proposed design, Chapter~\ref{s:analysis} compares our design to other payment systems, and Chapter~\ref{s:conclusion} provides a summary. 
\section{Payment system desiderata} \label{s:desiderata} To be broadly useful for making payments, and particularly to satisfy the requirements of central bank digital currency, a payment system must have the properties necessary to meet the demands of its use cases. We describe these properties and use cases, and show that they are indeed required. \newcounter{enumTemp} \subsection{Asset-level desiderata} \label{ss:asset} \begin{itemize} \item\cz{Integrity.} We say that an asset has \textit{integrity} if it has a single, verifiable history. Actors in possession of the asset must be able to confirm that the asset is genuine and unique; specifically, any two assets that share any common history must be the same asset. Desired characteristics of integrity include: \begin{enumerate} \item\textbf{Durability} \label{des:durability} Short of stealing the private key of an issuer or breaking the cryptographic assumptions upon which the system infrastructure depends, it shall not be possible to create a counterfeit token, it shall not be possible for the party in possession of a token to spend it more than once, and it shall not be possible for an issuer to create two identical tokens. In addition, it shall not be possible for any actor to mutate the token, once issued. \item\textbf{Self-contained assets} \label{des:self-contained} The asset shall be \textit{self-validating}, which is to say that it shall support a mechanism that allows it to furnish its own proof of integrity, as part of a process of verifying its authenticity to a recipient or other interested party. The purpose of self-validation is to maximise the flexibility of how assets are used and how risks related to asset ownership and state can be managed. In particular, the issuer shall not be required to track the owner or status of the assets that it has created, and payers shall not concern themselves with what happens to an asset once it is spent. \setcounter{enumTemp}{\theenumi} \end{enumerate} \item\cz{Control.} An actor has \textit{control} of an asset if that actor and no other actor possesses the means to specify legitimate changes to the asset, including features that identify its owner. Note that \textit{control} implies the ability to modify the asset in a way that determines the legitimacy of changes made to the asset by its possessor. Desired characteristics of control include: \begin{enumerate} \setcounter{enumi}{\theenumTemp} \item\textbf{Mechanical control} \label{des:mechanical} The ability to create a valid transaction is vested in the owner. No one but the owner can update the state of a specific asset. \item\textbf{Delegation} \label{des:delegation} The asset owner must be able to retain control of the asset when transferring the responsibility of possession to a custodian. That custodian is then unable to exercise control over that asset, for instance by creating a legitimate update to the asset. The owner chooses who can exercise control, and owners can delegate possession without delegating control. \setcounter{enumTemp}{\theenumi} \end{enumerate} \item\cz{Possession.} An actor has \textit{possession} of an asset if that actor and no other actor can effect changes to the asset or reassign possession of the asset to another actor. \textit{Possession} implies the ability to deny possession to others, including the legitimate owner of the asset, on an incidental or permanent basis (this does not include the possibility for forced legal enforcement to relinquish or return an asset). 
In principle, the balance among costs and risks related to the possession of an asset, including the ability to store assets safely, can be independently chosen by various actors in the system. Desired characteristics of possession include: \begin{enumerate} \setcounter{enumi}{\theenumTemp} \item\textbf{Choice of custodian} \label{des:custodian} Asset owners must be able to choose the custodian entrusted with the possession of their asset. This contrasts with traditional ledger-based approaches in which the ledger is the fixed source of truth about an asset and for which an asset is inextricably bound to that ledger (i.e., moving the asset to another ledger would involve redemption in the first ledger and a new issuance in the next ledger). This property is an essential interoperability feature for any national currency system. To mitigate risks such as custodial compromise or service disruption for sensitive payment systems, asset owners must be able to choose to have the possession of their assets spread across multiple custodians (``multiplexing''), such that they require only some portion of them to respond in order to update the state of their assets. This should be able to occur in a way that is opaque to the custodians (``oblivious multiplexing''), where each is concerned only with the portion of assets over which it provides custody and is unaware that other custodians are involved. \item\textbf{Choice to have no custodian} \label{des:no-custodian} The owner of an asset must be able to serve as his or her own custodian. Specialised custodians are good for mitigating risks, but they always introduce costs (transaction fees, account fees, latency, and so on) and risks (for example, intentional or accidental service disruption). To address use cases that are sensitive to those costs and risks, it is necessary to allow non-specialised actors to also provide custodianship of assets, and in particular to allow a human owner of an asset to store the asset personally, using his or her own devices. \setcounter{enumTemp}{\theenumi} \end{enumerate} \item\cz{Independence.} Asset owners shall be free to conduct transactions in the future, with confidence that they will be able to use the assets for the use cases they want. \begin{enumerate} \setcounter{enumi}{\theenumTemp} \item\textbf{Fungibility} \label{des:fungible} Each unit is mutually substitutable for each other unit of the same issuer, denomination, and vintage, and can be exchanged for cash or central bank reserves. This is enabled by privacy by design and required for self-determination. \item\textbf{Efficient lifecycle} \label{des:efficient-lifecycle} Transactions must be similar in speed to those of traditional payment systems and capable of near-instant acceptance. It must be possible for the recipient of an asset to verify that a transaction is valid and final without the need to involve a commercial bank at the time of the transaction, and without forcibly incurring additional costs, risks, or technical or institutional requirements. Assets must not expire within an unreasonably short timeframe. \setcounter{enumTemp}{\theenumi} \end{enumerate} \end{itemize} \subsection{System-level desiderata} \label{ss:system} \begin{itemize} \item\cz{Autonomy.} We say that an actor has \textit{autonomy} with respect to an asset if the actor has both possession and control of the asset and can modify the asset without creating metadata that can be used to link the actor to the asset or any specific transaction involving that asset. 
The term \textit{autonomy} is chosen because it reflects the risk that a data subject might lose the ability to act as an independent moral agent if such records are maintained~\cite{shaw2017}. Desired characteristics of autonomy include: \begin{enumerate} \setcounter{enumi}{\theenumTemp} \item\textbf{Privacy by design} \label{des:privacy-by-design} The approach must allow users to withdraw money from a regulated entity, such as a bank or money services business, and then use that money to make payments without revealing information that can be used to identify the user or the source of the money. The assets themselves, and the transactions in which they are involved, must be untraceable both to their owners and to other transactions. The system must be designed to allow all users to have a sufficiently large anonymity set that they would not have reason to fear profiling on the part of powerful actors with access to aggregated data. \item\textbf{Self-determination for asset owners} \label{des:self-determination} Asset owners shall be able to control what they do with assets. No recipient can use the system to discriminate against asset owners or impose restrictions on what a particular owner can do. Transactions using an asset shall not be blocked or otherwise flagged by recipients based upon targeting the owner of an asset or targeting a set of assets associated with some particular transaction history. \setcounter{enumTemp}{\theenumi} \end{enumerate} \item\cz{Utility.} The system must be generally useful to the public as a means to conduct most, and perhaps substantially all, retail payments. Desired characteristics of utility include: \begin{enumerate} \setcounter{enumi}{\theenumTemp} \item\textbf{Local transactions} \label{des:local} It shall be possible to achieve efficient transactions where participants are able to rely upon local custodians to facilitate acceptance of remittances. The system shall not rely upon global consensus to determine or verify the disposition of an asset and shall allow transacting parties to choose an authority or context that they mutually trust, for example to trust a local authority in exchange for faster settlement or when access to a wider network is not possible, without requiring additional trust between counterparties. \item\textbf{Time-shifted offline transactions} \label{des:time-shifted} It shall be possible for a payer to ``time-shift'' third-party trust to achieve a form of offline payment by first prospectively paying a recipient and then later, in an offline context, choosing whether to consummate the payment by selectively revealing additional information. Time-shifted offline transactions are akin to purchasing a ticket online and, later, spending it offline. \item\textbf{Accessibility} \label{des:accessibility} The protocol employed by the system must be accessible and open to all users. The system must not impose vendor-specific hardware compatibility requirements and must not require manufacturers of compatible hardware to register with a central database or seek approval from an authority. The functionality of the system must not depend upon trusted computing, secure enclaves, or secure elements that impose restrictions upon what users can do with their devices. The system must not require a user to register before acquiring and using a device, and the possession and use of a physical device must not depend upon a long-term relationship with a trusted authority, registered business, or asset custodian. 
\setcounter{enumTemp}{\theenumi} \end{enumerate} \item\cz{Policy.} The system must support the establishment of institutional policies to benefit the public and the national economy. Desired characteristics of policy include: \begin{enumerate} \setcounter{enumi}{\theenumTemp} \item\textbf{Monetary sovereignty} \label{des:monetary} Monetary sovereignty entails the ability of a central bank and government to control the use of the sovereign legal currency within its borders and the mechanisms through which it is used. In support of this end, financial remittances facilitated by the system shall involve direct obligations of the central bank of the applicable jurisdiction. \item\textbf{Regulatory compliance} \label{des:compliance} The system shall be operated by regulated financial intermediaries that can establish and enforce rules for their customers. The system shall provide a mechanism that would permit financial intermediaries to prove that they have enforced those rules completely and in every case. By extension, the system would allow for the establishment of regulatory requirements for its operators to support reasonable monitoring by tax authorities for the purpose of establishing or verifying the income tax obligations of their clients. Subject to the limitation that both counterparties to a transaction would not generally be known, the system would permit system operators to perform analytics on their customers, for example, by learning the times and sizes of asset deposits or withdrawals. Ideally, the system would also provide a counter-fraud mechanism by which consumers can verify the validity of merchants. \end{enumerate} \end{itemize} \subsection{Technical requirements} Next, we translate the asset-level and system-level desiderata into specific technical and institutional capabilities that are necessary to support a suitable payment system. We begin by identifying the technical requirements for an institutionally supportable digital currency that supports verifiable privacy for consumers, wherein consumers are not forced to rely upon promises by trusted actors: \begin{itemize} \item\cz{Blind signatures.} Consumer agents must implement \textit{blinding} and \textit{unblinding} with semantics similar to the blind signatures proposed by Chaum in his original article~\cite{chaum1982} and further elaborated in his more recent work with the Swiss National Bank~\cite{chaum2021}. Specifically, it must be possible for users to furnish a block of data to an issuer, ask the issuer to sign it, and then transform the response into a valid signature on a new block of data that the issuer has never seen before and cannot link to the original block of data. This allows transactions that do not link the identity of the sender to the identity of the recipient, as a way to achieve privacy by design (\ref{des:privacy-by-design}). \item\cz{Distributed ledger.} Participants in a clearing network overseen by a central bank must have access to a suitable distributed ledger technology (DLT) system~\cite{iso22739} that enables them to collectively maintain an immutable record that can be updated with sufficient frequency to provide transaction finality that is at least as fast as domestic bank wires. This helps ensure both durability of assets (\ref{des:durability}) and self-determination for users (\ref{des:self-determination}) as described in Section~\ref{s:desiderata}. 
\item\cz{Open architecture.} The system must fully support the semantics for digital currency specified by Goodell, Nakib, and Tasca~\cite{goodell2021}. Specifically, we assume that retail users of digital currency have access to non-custodial wallets that satisfy certain privacy and accessibility requirements described in Section~\ref{ss:system}, specifically requirements (\ref{des:no-custodian}), (\ref{des:privacy-by-design}), (\ref{des:self-determination}), and (\ref{des:accessibility}). \item\cz{Fungible tokens.} The digital currency tokens themselves must satisfy the fungibility requirement (\ref{des:fungible}) described in Section~\ref{ss:asset}. \item\cz{Institutional controls.} System operators must possess capabilities that support the policy requirements described in Section~\ref{ss:system}, specifically requirements (\ref{des:monetary}) and (\ref{des:compliance}). \end{itemize} Moving to a digital form of currency brings a variety of potential benefits when compared to paper currency, including cryptographic signatures, cryptographic shielding, flexible semantics, reduced management costs, and the ability to efficiently transfer units of currency over large distances. However, it is also important to recapture some of the benefits of physical currency. In order to have \textit{self-contained assets} with \textit{custodial choice}, we need a representation for our assets that is unforgeable, stateful, and oblivious: \begin{itemize} \item\cz{Unforgeable.} Every asset must be unique, and it can only be created once. No set of adversarial actors can repeat the process of creating an asset that has already been created. Note that this requirement is different from a ``globally unique identifier'', which is merely unlikely to be reused by an honest actor, but which any adversarial actor can reuse for any other asset. True unforgeability requires that once an asset is created, it is impossible to reuse its identifier for any other asset. This property is required for durability (\ref{des:durability}), custodial choice (\ref{des:custodian}), the choice to have no custodian (\ref{des:no-custodian}), local transactions (\ref{des:local}), and time-shifted offline transactions (\ref{des:time-shifted}). \item\cz{Stateful.} Every asset has its own independent state, and as the state of an asset changes over time, the asset remains unique and unforgeable. No set of adversarial actors, including non-issuer owners, can create a second version of the asset with a different state. Note that this requirement precludes using any kind of ``access control token'', such as an HMAC, signed attestation, or even a blinded signature scheme asset, which cannot accumulate state over time and must be returned in precisely the same form as created. The requirements of self-contained assets (\ref{des:self-contained}), mechanical control (\ref{des:mechanical}), and delegation (\ref{des:delegation}) necessitate that assets maintain their own state. \item\cz{Oblivious.} Once finality is achieved following the transfer of an asset to a new owner, all of the previous owners, including the issuer, have no obligation to know any aspect of its future state changes and transfers. There is no residual risk to the new owner that the transaction will be undone by either a previous owner or the system itself. Note that encryption does not suffice: there must be no requirement to inform previous owners that state changes have occurred, and previous owners must not be required to do any extra work to accommodate those changes. 
Otherwise, the self-determination (\ref{des:self-determination}) and efficient lifecycle (\ref{des:efficient-lifecycle}) requirements would be compromised. Paper bank notes are a good example of obliviousness. No entity knows where every bank note is, or what everyone's billfolds hold. If anyone, including the mint, were guaranteed to know this information, then it would prevent paper money from being useful in many of its required use cases. Although obliviousness and privacy are closely related, obliviousness is really about efficiency: It is acceptable for the mint to know where some bills are and the contents of some billfolds. \end{itemize} These qualities combine to provide assets, referred to as \textit{USO assets} in this document, that have qualities very similar to those of paper currency. While assets embodying these qualities are not readily available at this time, this is an area of active study with promising results. Given such assets in combination with the technologies mentioned above, our architecture is able to fulfill the complete list of requirements for a payment system. In particular, CBDC created using our architecture can meet the use case demands of paper currency as well as the demands of electronic payment systems in a single architecture, without requiring trusted hardware or heavyweight consensus systems. The requirement for a USO asset to be stateful means it must be able to prove that its state has finality. The requirement for a USO asset to be oblivious means that the asset must carry a \textit{proof of provenance} (POP) that allows it to demonstrate its validity on its own, as no other part of the system is required to have it. The requirement for a USO asset to be unforgeable means that this proof carries the same weight as if it came directly from the issuer itself, so the issuer acts as the \textit{integrity provider} of the POP. Obliviousness implies that there can be other systems between the asset owner and the integrity provider. These systems serve as \textit{relays} in the creation of the POP. Relays are common carriers, like network carriers. In fact, a relay knows considerably less than a network carrier: it accepts hashes, emits hashes of those hashes, and by design is completely oblivious to everything else. \section{An efficient, general-purpose architecture for CBDC} \label{s:architecture} \noindent In this section we propose a method for creating a retail central bank digital currency (CBDC) that supports private payments wherein the owner maintains custody of her digital assets. It achieves the necessary properties for a general-purpose payment system described in the previous section by extending the approach proposed by Goodell, Nakib, and Tasca~\cite{goodell2021} with a new asset model that eliminates the need for global consensus with regard to every transaction. While our new approach requires the central bank to operate some real-time infrastructure, we show that this requirement can be addressed with a lightweight, scalable mechanism that mitigates the risk to resilience and operational security. Suppose that a user, Alice, wants to withdraw retail CBDC for her general-purpose use in making retail payments. We assume that the recipient of any payment that Alice makes will require one or more valid tokens from a trusted issuer $I$ containing content $k$ that has been signed using signature function $s(k, I)$. 
We further assume, following the arguments made in earlier proposals for privacy-preserving retail CBDC~\cite{goodell2021,chaum2021}, that she will be able to use a \textit{blinding} function $b$, known only to Alice, to request a blind signature on $b(k)$ to which she can apply an \textit{unblinding} function $b^{-1}$, also known only to Alice, to reveal the required signature: \begin{equation} b^{-1}(s(b(k), I))=s(k, I) \end{equation} The signature $s(k, I)$ appearing at the beginning of a USO asset's history shows that it was generated correctly by the CBDC's issuer or by one of its delegates, which we shall call \textit{minters}. Minters are subject to a \textit{minting invariant} wherein every time a minter satisfies a request for a set of signatures of a particular value, it must also cancel a corresponding set of CBDC assets of equal value, and vice-versa. The function of a minter, therefore, is to \textit{recycle} CBDC, and not to issue or destroy it. The proof of provenance of a USO asset allows its recipient to verify that it has the same integrity as if it were in the issuer's database. These proofs of provenance are a powerful enabling feature for a retail CBDC, since assets can be transacted without the need to maintain accounts. Additionally, the expected cost of operating the issuer's infrastructure is much smaller at scale than the costs associated with operating traditional distributed ledger infrastructures in which the record of each transaction is maintained in a global ledger. However, unlike transferring blinded assets in a classical ledger system, whether distributed~\cite{goodell2021} or not~\cite{chaum2021}, transferring USO assets from one party to another explicitly leaves behind an audit trail that can be used by the bearers of an asset to recognise the asset when it is inspected, transacted, or seen in the future. A USO asset's proof of provenance is permanently updated each time it is transferred to a new recipient. If the same asset were to be associated with multiple transactions, then a single party to any of the transactions would be able to recognise the asset across all of its transactions, which could potentially compromise the privacy of the other parties. 
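To make the blinding relation above concrete, the following toy sketch implements a Chaum-style blind signature using textbook RSA. It is intended only to make the algebra $b^{-1}(s(b(k), I)) = s(k, I)$ tangible; the tiny key, the bare hash, and the absence of padding are deliberate simplifications for exposition and are not the parameters or signature scheme prescribed by this architecture.
\begin{verbatim}
# Toy Chaum-style blind signature over textbook RSA (insecure demo parameters).
import hashlib
import secrets
from math import gcd

p, q, e = 61, 53, 17                   # classroom RSA numbers, NOT a real key
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))      # issuer's private signing exponent

def H(k):                              # map the template k to an integer mod n
    return int.from_bytes(hashlib.sha256(k).digest(), "big") % n

def blind(k):                          # Alice computes b(k); r defines b and b^-1
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            return (H(k) * pow(r, e, n)) % n, r

def sign_blinded(x):                   # issuer signs without seeing k: s(b(k), I)
    return pow(x, d, n)

def unblind(sig_b, r):                 # b^-1(s(b(k), I)) = s(k, I)
    return (sig_b * pow(r, -1, n)) % n

def verify(k, sig):                    # anyone checks against the public key (n, e)
    return pow(sig, e, n) == H(k)

k = b"F_0 asset template"              # stand-in for Alice's request
blinded, r = blind(k)
sig = unblind(sign_blinded(blinded), r)
assert verify(k, sig)                  # the issuer never saw k, yet s(k, I) verifies
\end{verbatim}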
\begin{figure}[ht] \begin{center} \hspace{-0.8em}\scalebox{1.2}{ \begin{tikzpicture}[>=latex, node distance=3cm, font={\sf \small}, auto]\ts \tikzset{>={Latex[width=4mm,length=4mm]}} \node (w1) at (0, 0) [] { \scalebox{0.08}{\includegraphics{images/wallet-vector-xp.png}} }; \node (b1) at (4, 3) [] { \scalebox{2}{\includegraphics{images/s-sender-bank.png}} }; \node (b2) at (8, 3) [] { \scalebox{0.6}{\includegraphics{images/recycle.png}} }; \node (s1) at (4, -3) [] { \scalebox{2}{\includegraphics{images/s-shop.png}} }; \node (r1) at (8, -3) [] { \scalebox{0.3}{\includegraphics{images/gears.jpg}} }; \node (t1) at (0.4, 2.7) [box, text width=1.5cm, align=center, fill=green!30] { signature request \& payment }; \node (t2) at (6, 4.4) [box, text width=1.5cm, align=center, fill=green!30] { signature request \& payment }; \node (t3) at (6, 1.8) [box, text width=1.5cm, align=center, fill=orange!30] { blind signature }; \node (t4) at (3.5, 0.5) [box, text width=1.5cm, align=center, fill=orange!30] { blind signature }; \node (t5) at (0.1, -2.5) [asset] {asset data}; \node (t6) at (6, -1.8) [box, text width=2cm, align=center, fill=magenta!30] { xfer asset to merchant }; \node (t7) at (6, -4.2) [box, text width=2cm, align=center, fill=blue!30] { proof of provenance }; \node (c1) at (-1.5, 0) [noshape, text width=2cm, align=center] {consumer wallet}; \node (c2) at (4, 4.2) [noshape, text width=1.5cm, align=center] {commercial bank}; \node (c3) at (9.3, 3) [noshape, text width=1.5cm, align=center] {minter}; \node (c4) at (4, -3.8) [noshape, text width=1.5cm, align=center] {merchant}; \node (c5) at (9.2, -3) [noshape, text width=1.5cm, align=center] {relay}; \draw[->, line width=1mm] (w1) edge[bend left=20] (b1); \draw[->, line width=1mm] (b1) edge[bend left=20] (b2); \draw[->, line width=1mm] (b2) edge[bend left=20] (b1); \draw[->, line width=1mm] (b1) edge[bend left=20] (w1); \draw[->, line width=1mm] (w1) edge[bend right=20] (s1); \draw[->, line width=1mm] (s1) edge[bend left=20, color=red] (r1); \draw[->, line width=1mm] (r1) edge[bend left=20, color=red] (s1); \end{tikzpicture}} \end{center} \caption{\cz{Schematic representation of the CBDC journey from the perspective of a consumer.}} \label{f:consumer-lifecycle} \end{figure} It follows that if Alice wants an asset that she can spend privately, she must create it herself. Alice establishes her own USO asset privately, and subsequently populates it with the signature $s(k, I)$. Having done this she can then safely transfer the asset to Bob without concern. Figure~\ref{f:consumer-lifecycle} provides a visualisation of the CBDC journey from the perspective of a consumer. Once Bob receives the asset from Alice, he has a choice. One option is to transfer it to a bank, perhaps to deposit the proceeds into his account with the bank, or to request a freshly minted CBDC asset as Alice had done earlier. If he chooses to deposit the proceeds into his account, then the bank now has a spent CBDC asset that it can exchange for central bank reserves or use to satisfy requests for new signed CBDC assets from its other account holders. Alternatively, Bob could transfer the CBDC onward without returning it to the bank, bearing in mind that Bob would not be anonymous when he does; see Section~\ref{ss:chained} for details. 
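Before describing the full lifecycle, the sketch below shows one possible in-memory representation of a USO asset and the check a recipient such as Bob might run before accepting it. The field names, the flat string encoding, and the stubbed signature check are illustrative assumptions on our part, not a specification of the asset format.
\begin{verbatim}
# Illustrative sketch of a USO asset: a foundation F_0, a chain of updates,
# and a proof of provenance (POP) of relay-acknowledged digests.
from dataclasses import dataclass, field
from hashlib import sha256

def h(data):                             # the selector/hash function h
    return sha256(data).hexdigest()

@dataclass
class Foundation:                        # F_0 = {A, G_0, s((d, I_d), I)}
    owner_key: str                       # Alice's public key A
    relay_commitment: str                # prior commitment G_0 of relay G
    denomination_cert: str               # s((d, I_d), I)
    def digest(self):
        return h("|".join([self.owner_key, self.relay_commitment,
                           self.denomination_cert]).encode())

@dataclass
class Update:                            # F_1: adds s(F_0, I_d), transfers to B
    previous: str                        # h(F_0) or the digest of a prior update
    minter_signature: str                # unblinded s(F_0, I_d)
    new_owner_key: str                   # recipient's public key, e.g. B
    owner_signature: str                 # authorisation by the current owner
    def digest(self):
        return h("|".join([self.previous, self.minter_signature,
                           self.new_owner_key, self.owner_signature]).encode())

@dataclass
class Asset:
    foundation: Foundation
    updates: list = field(default_factory=list)
    pop: set = field(default_factory=set)   # digests acknowledged by the relay

def accept(asset, verify_sig):
    # Recipient's check: one consistent history, every state present in the
    # POP, and every update properly signed (signature checks are stubbed).
    prev = asset.foundation.digest()
    if prev not in asset.pop:
        return False
    for u in asset.updates:
        if u.previous != prev or u.digest() not in asset.pop or not verify_sig(u):
            return False
        prev = u.digest()
    return True
\end{verbatim}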
We organise Alice's engagement lifecycle with the asset in a five-step process, as shown in Figure~\ref{f:aliceflow}: \begin{figure}[ht] \begin{center} \sf\begin{tikzpicture}[>=latex, node distance=3cm, font={\sf \small}, auto]\ts \tikzset{>={Latex[width=2mm,length=2mm]}} \node (terminal1) at (0,0.0) [term] {START}; \node (form1) at (0,-1.3) [form, text width=9.3em] { (1) Create asset $F_0=\{A, G_0, s((d,I_d), I)\}$ }; \node (box1) at (0,-2.9) [ffbox, text width=10em] { (2) Request signature $s(b(F_0), I_d)$ from issuer $I$ }; \node (wait2) at (0,-4.5) [wait, text width=3.4em] {Wait $dt$}; \node (form2) at (0,-6.3) [form, text width=9.4em] { (3) Create update $F_1$ to add signature $s(F_0, I_d)$ and transfer asset to $B$ }; \node (box2) at (0,-8.1) [ffbox, text width=10em] { (4) Send $F_0$ and $F_1$ to relay $G$ }; \node (box3) at (0,-9.6) [ffbox, text width=10em] { (5) Furnish proof to owner of $B$ }; \node (terminal2) at (0,-10.9) [term] {END}; \draw[->] (terminal1) -- (form1); \draw[->] (form1) -- (box1); \draw[->] (box1) -- (wait2); \draw[->] (wait2) -- (form2); \draw[->] (form2) -- (box2); \draw[->] (box2) -- (box3); \draw[->] (box3) -- (terminal2); \end{tikzpicture} \rm\vspace{-1em} \end{center} \caption{\cz{Typical consumer engagement lifecycle.} Parallelograms represent USO asset operations.} \label{f:aliceflow} \end{figure} \begin{enumerate} \item First, Alice chooses a service provider that maintains a relay $G$, and creates a new USO asset that refers to some specific prior commitment $G_0$ published by the relay. For each CBDC token that Alice wishes to obtain, she generates a new pair of keys using asymmetric cryptography and embeds the public key $A$ and $G_0$ along with the public key of the proposed digital currency issuer $I$, the denomination $d$, and a certificate $s((d,I_d), I)$ containing the key used by the issuer to sign tokens of denomination $d$ into a template for a new, unique update $F_0=\{A, G_0, s((d, I_d), I)\}$ as the foundation for a new asset $F$. Note that for Alice to ensure that her subsequent spending transactions are not linked to each other, she must repeat this step, creating a new key pair for each asset that she wants to create, and optionally choosing different values for the other parameters as well. \item Next, Alice creates $b(F_0)$ using blinding function $b$ and sends it to her bank along with a request for a blind signature from a minter using the key for the correct denomination $I_d$, which in the base case we assume to be the central bank. Alice is effectively requesting permission to validate asset $F$ as legitimate national digital currency (the sovereign legal tender within that jurisdiction), so, presumably, the bank will require Alice to provide corresponding funds, such as by providing physical cash, granting the bank permission to debit her account, or transferring digital currency that she had previously received in the past. See Figure~\ref{f:step-2}. Alice's bank shall forward her request $b(F_0)$ to the central bank along with central bank money (cash, central bank reserves, or existing CBDC assets) whose total value is equal to the value of the CBDC that Alice is requesting. The bank shall then provide Alice with the signature $s(b(F_0), I_d)$. 
\begin{figure}[ht] \begin{center} \sf\begin{tikzpicture}[>=latex, node distance=3cm, font={\sf \small}, auto]\ts \tikzset{>={Latex[width=2mm,length=2mm]}} \node (n1) at (0,0) {Alice}; \node (n2) at (5,0) {Alice's Bank}; \node (n3) at (10,0) {Central Bank}; \draw (n1) -- (0,-2.9) {}; \draw (n2) -- (5,-2.9) {}; \draw (n3) -- (10,-2.9) {}; \draw[->] (0,-0.7) -- node[above] { $b(F_0)$, $d$, payment for $d$ } (5,-0.7); \draw[->] (5,-1.2) -- node[above] { $b(F_0)$, $d$ units of CB money } (10,-1.2); \draw[->] (10,-1.9) -- node[above] { $s(b(F_0), I_d)$ } (5,-1.9); \draw[->] (5,-2.4) -- node[above] { $s(b(F_0), I_d)$ } (0,-2.4); \end{tikzpicture} \rm\vspace{-1em} \end{center} \caption{\cz{Protocol for Step 2.} The validation of $d$ units of digital currency.} \label{f:step-2} \end{figure} \item At this point, Alice can now ``unblind'' the signature received from the minter to yield $s(F_0, I_d)$, which is all that is required to create valid CBDC. To mitigate the risk of timing attacks that could be used to correlate her request for digital currency with her subsequent activities, Alice should wait for some period of time $dt$, before conducting a transaction with the valid CBDC received as well as before sharing the unblinded signature $s(F_0, I_d)$. Alice's privacy derives from the number of tokens that are ``in-flight'' (outstanding) at any given moment. If she transacts too quickly after completing her withdrawal, then her spending transaction might be traced to her withdrawal. When Alice is ready to conduct a transaction with Bob, she creates a new update $F_1$ wherein she updates the metadata of $F$ to include the signature $s(F_0, I_d)$ and transfer ownership to Bob using his public key $B$. Optionally, Alice might want to confirm that $B$ legitimately belongs to Bob's business, in which case Bob could furnish a certificate for his public key. We also imagine that regulators might impose additional requirements that would apply at this stage, which we describe in Section~\ref{ss:regulatory}. Observe that neither the asset $F_0$ nor its update $F_1$ contain any information about Alice, her wallet, or any other assets or transactions. 
\begin{figure}[ht] \begin{center} \sf\begin{tikzpicture}[>=latex, node distance=3cm, font={\sf \small}, auto]\ts \tikzset{>={Latex[width=2mm,length=2mm]}} \node (n1) at (0,0) {Alice}; \node (n2) at (5,0) {Bob}; \node (n3) at (10,0) {Provider of $G$}; \draw (n1) -- (0,-2.9) {}; \draw (n2) -- (5,-2.9) {}; \draw (n3) -- (10,-2.9) {}; \draw[->] (0,-0.7) -- node[above] { $F_0,F_1,G$ } (5,-0.7); \draw[->] (5,-1.2) -- node[above] { $h(F_0), h(F_1)$ } (10,-1.2); \draw[->] (10,-1.9) -- node[above] { POP($G_0$, $h(F_0), h(F_1)$) } (5,-1.9); \draw[->, dashed] (5,-2.4) -- node[above] { POP($G_0$, $h(F_0), h(F_1)$) } (0,-2.4); \end{tikzpicture} \rm\vspace{-1em} \end{center} \caption{\cz{Protocol for Step 4, Option 1.} Alice gives Bob possession and control, and Bob registers the update.} \label{f:step-4-1} \end{figure} \begin{figure}[ht] \begin{center} \hspace{-0.8em}\scalebox{1.2}{ \begin{tikzpicture}[>=latex, node distance=3cm, font={\sf \small}, auto]\ts \tikzset{>={Latex[width=4mm,length=4mm]}} \node (w1) at (0, 0) [] { \scalebox{0.08}{\includegraphics{images/wallet-vector-xp.png}} }; \node (s1) at (4, -3) [] { \scalebox{2}{\includegraphics{images/s-shop.png}} }; \node (r1) at (-4, -3) [] { \scalebox{0.3}{\includegraphics{images/gears.jpg}} }; \node (t5) at (-3.3, -0.1) [box, text width=2cm, align=center, fill=magenta!30] { xfer asset to merchant }; \node (t6) at (4.4, -1.0) [box, text width=2cm, align=center, fill=blue!30] { proof of provenance }; \node (t7) at (4.4, 0.1) [asset] {asset data}; \node (t7) at (-0.1, -2.5) [box, text width=2cm, align=center, fill=blue!30] { proof of provenance }; \node (c1) at (1.5, 0.3) [noshape, text width=2cm, align=center] {consumer wallet}; \node (c4) at (4, -3.8) [noshape, text width=1.5cm, align=center] {merchant}; \node (c5) at (-4, -3.8) [noshape, text width=1.5cm, align=center] {relay}; \draw[->, line width=1mm] (w1) edge[bend right=20, color=red] (r1); \draw[->, line width=1mm] (r1) edge[bend right=20, color=red] (w1); \draw[->, line width=1mm] (w1) edge[bend left=20] (s1); \end{tikzpicture}} \end{center} \caption{\cz{Schematic representation of Step 4, Option 2.}} \label{f:alt-journey} \end{figure} \begin{figure}[ht] \begin{center} \sf\begin{tikzpicture}[>=latex, node distance=3cm, font={\sf \small}, auto]\ts \tikzset{>={Latex[width=2mm,length=2mm]}} \node (n1) at (0,0) {Alice}; \node (n2) at (5,0) {Provider of $G$}; \node (n3) at (-5,0) {Bob}; \draw (n1) -- (0,-3.1) {}; \draw (n2) -- (5,-3.1) {}; \draw (n3) -- (-5,-3.1) {}; \draw[->] (0,-0.7) -- node[above] { $h(F_0), h(F_1)$ } (5,-0.7); \draw[->] (5,-1.4) -- node[above] { POP($G_0$, $h(F_0), h(F_1)$) } (0,-1.4); \draw[->] (0,-1.9) -- node[above] { $F_0, F_1, G$ } (-5,-1.9); \draw[->] (0,-2.6) -- node[above] { POP($G_0$, $h(F_0), h(F_1)$) } (-5,-2.6); \end{tikzpicture} \rm\vspace{-1em} \end{center} \caption{\cz{Protocol for Step 4, Option 2.} Alice registers the update herself, giving Bob control first and possession later.} \label{f:step-4-2} \end{figure} \item To consummate the transaction, $h(F_0)$ and $h(F_1)$ must be sent to relay $G$, wherein $h$ is a selector function that can be used to demonstrate that Alice had committed to creating the asset $F_0$ and its update $F_1$, respectively. In particular, $h$ may be a hash function. Alice has two options for how to proceed: \begin{itemize} \item\cz{(Option 1.)} Alice sends the identity of the relay $G$ along with the asset $F_0$ and its update $F_1$ to Bob (see Figure~\ref{f:step-4-1}), and Bob sends $h(F_0)$ and $h(F_1)$ to the relay. 
At this point, Bob may furnish the POP of the transaction to Alice, once he receives it, as a receipt. \item\cz{(Option 2.)} Alice sends $h(F_0)$ and $h(F_1)$ to the relay directly and subsequently furnishes the asset and its proof of provenance to Bob (see Figures~\ref{f:alt-journey} and~\ref{f:step-4-2}). \end{itemize} \item Finally, if Alice had chosen Option 2 for the previous step, then she should reveal to Bob the POP indicating that the transaction is done. If Alice had chosen Option 1 for the previous step, then Bob will be able to verify this himself. \end{enumerate} Note that once Alice has transferred the CBDC asset to Bob, nothing about the asset or its proofs of provenance can be used to link the asset to Alice, her devices, or her other transactions, regardless of what Bob does with the asset going forward. Broadly speaking, these are the same protections that Alice has when she uses cash, although we expect that regulated financial intermediaries will generally learn that Bob receives a CBDC asset when Bob receives an asset from a non-custodial wallet. Our architecture provides a general framework for specifying which assets are considered valid. Importantly, and unlike some digital currency system designs, our system allows all of the rules to be implemented at the edge rather than inside the network itself. For example, because a regulated financial intermediary has a role in every transaction, a bank accepting CBDC assets as deposits might implement a rule requiring that an asset must have been previously transacted at most once. Alice's privacy depends upon Alice not binding her identity to the transaction in some way, for example by embedding her personal information into a transaction or by linking the transaction to a wallet identifier. In all cases, we expect that only the initial consumer, Alice, enjoys the benefits of consumer protection. Subsequent recipients of an asset do not have such protections, and rules enforced by banks that receive assets can impose explicit requirements on all of the participants in a chain of transactions. Note that a point of trust is required for any fair transaction between two untrusting parties~\cite{pagnia1999}. \section{Operational considerations} \label{s:operational} Although our architecture could be applied to arbitrary digital currency applications, including digital currency and e-money issued by private-sector banks, we assume that this architecture is most useful for the implementation of central bank digital currency (CBDC), wherein central banks would be the issuers of currency for use by the general public to facilitate payments in domestic retail contexts. CBDC would represent part of the monetary base (M0), like cash and central bank reserves. In this section, we consider operational concerns for the various parties involved in a CBDC distribution, including central banks, private-sector banks, clearinghouses, merchants, and consumers. In particular, we show that the system is able to support lightweight requirements for central banks as well as for end-user devices, including both mobile wallets for consumers and merchant devices at the point of sale. \subsection{Operational model} We present a prescriptive model for how to use our architecture to implement CBDC, explicitly highlighting how CBDC would operate within the context of a modern banking system and institutions. 
We observe that money constitutes a complex system within an economy, entailing a delicate set of connected relationships among participants. Our proposed architecture avoids undermining this balance of connected relationships by aligning closely to the system architecture implicit to physical cash. In this sense, what we propose is not a radical new system design, but rather a new kind of digital cash that can exist alongside physical cash and other forms of money or money-like instruments used for payments. To support this model, we must consider the processes and institutions that support the circulation of cash and how they would be adapted to support the circulation of CBDC. We also introduce two new systems: an \textit{integrity system} comprising the set of relays, which ensures that digital assets can be safely used to transfer value, and a \textit{monitoring system} comprising the set of minters, for controlling the creation and destruction of currency tokens. Figure~\ref{f:operating-model} illustrates how this would work, and we offer the following narrative description of the lifecycle of a specific CBDC asset: \begin{itemize} \item \cz{Act I.} A unit of CBDC begins its life as a request from Alice to her commercial bank, which had previously received a set of CBDC vouchers from the central bank in exchange for reserves of equal value. CBDC vouchers are special CBDC assets that can be exchanged for signatures from minters but are not used by retail consumers. Alice's bank debits the value of the request from Alice's bank account and sends the CBDC voucher to the minter along with Alice's request. The minter then signs Alice's request, destroys the voucher, and submits a record of its work to the distributed ledger of the monitoring system, which the central bank and regulators can inspect to understand the aggregate flow of money in the system and verify that the minting invariant is maintained. The minter then sends the signed request back to Alice's bank, which forwards the signed request to Alice. Later, Alice uses the signature to create the CBDC asset, which we shall call Bill, and transfers it to Bob. Whenever a CBDC asset changes hands, either the sender or the recipient must send an update to the correct relay to consummate the transaction. Next, Bob transfers Bill to his bank. Importantly, unlike Alice, Bob can execute this transfer immediately if he chooses to do so; there is no particular value in waiting. At the same time, unlike with the system proposed by Chaum, Grothoff, and M\"oser~\cite{chaum2021}, Bob can wait as long as he likes (subject to optional conditions) before depositing the asset with the bank, since there is no requirement for the issuer or a minter to participate in the transfers. Finally, Bob's bank credits the value of the transaction to Bob's bank account. \item \cz{Act II.} Soon afterward, Charlie, another customer of Bob's bank, makes a request to withdraw CBDC. The bank sends Bill to the minter to be recycled in exchange for signing Charlie's signature request. The minter destroys Bill, signs Charlie's signature request, and returns the signature to Charlie via the bank. Later, Charlie uses the signature to create a new CBDC asset, Bill II, and transfers the asset to Dave. Dave then transfers it to his bank, as Bob had done. Dave's bank decides to bring Bill II back to the central bank in exchange for reserves, instead of recycling it, ending the lifecycle of the unit of CBDC. 
\end{itemize} Note that Dave's bank could have done what Bob's bank did and saved the CBDC to service future requests without vouchers. This recycling process is adiabatic, does not rely upon the active participation of the central bank, and can be repeated an arbitrary number of times in this manner before the ultimate destruction of the unit of CBDC. The minting invariant ensures that the minting system never increases or decreases the total amount of currency in circulation. Instead, it issues a new unit of currency only in response to collecting an old unit of equal value. The central bank is only involved when it engages with banks, specifically by issuing vouchers or accepting CBDC assets in exchange for reserves, and by overseeing the minting operation, passively accepting and analysing reports by minters. The central bank also relies upon the relay system to maintain CBDC integrity, and the DLT system underpins its ability to verify what it must trust. Note also that Alice's bank could have accepted cash or CBDC assets instead of an equal amount of value from her bank account, although legal or regulatory restrictions applicable to the acceptance of cash or CBDC assets might apply. Finally, Alice could have transferred money directly to Bob's bank account rather than to Bob. Depending upon Bob's preferences, this might be a better choice. For example, it would reduce the total number of relay requests, correspondingly reducing the operating cost to the relay system and the communication overhead for Bob. It would also allow Bob to handle the case in which Alice does not have exact change; Bob could forward Alice's signature request in the amount of her overpayment to his bank along with his deposit, and then return the blind signature for Alice's change directly to Alice. 
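The bookkeeping implied by the minting invariant is simple enough to sketch. The toy model below shows a minter that refuses to sign blinded requests unless it simultaneously cancels vouchers or spent CBDC of equal value, and that appends an aggregate record of each operation to the monitoring ledger; the interfaces and field names are our own illustrative assumptions, not a prescribed design.
\begin{verbatim}
# Toy model of the minting invariant: value signed == value cancelled, with an
# aggregate record appended to the monitoring ledger.
from dataclasses import dataclass, field

@dataclass
class Minter:
    minting_key: str
    ledger: list = field(default_factory=list)   # monitoring DLT (append-only)
    signed_total: int = 0                        # cumulative value signed
    cancelled_total: int = 0                     # cumulative value cancelled

    def recycle(self, cancelled, requests):
        """cancelled: [(voucher_or_asset_id, value)]; requests: [(blinded, value)].
        Sign the requests only if the values balance, then report aggregates."""
        v_in = sum(v for _, v in cancelled)
        v_out = sum(v for _, v in requests)
        if v_in != v_out:
            raise ValueError("minting invariant violated: values do not balance")
        self.cancelled_total += v_in
        self.signed_total += v_out
        # Only aggregate values are reported; no payer or payee identities.
        self.ledger.append({"signed": v_out, "cancelled": v_in,
                            "signed_total": self.signed_total,
                            "cancelled_total": self.cancelled_total})
        return [f"sig({blinded}, {self.minting_key})" for blinded, _ in requests]

minter = Minter("I_d")
receipt = minter.recycle(cancelled=[("voucher-1", 50)], requests=[("b(F_0)", 50)])
\end{verbatim}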
\begin{figure}[ht] \begin{center} \hspace{-0.8em}\scalebox{0.9}{ \begin{tikzpicture}[>=latex, node distance=3cm, font={\sf \small}, auto]\ts \tikzset{>={Latex[width=4mm,length=4mm]}} \node (w1) at (0, 0) [] { \scalebox{0.08}{\includegraphics{images/wallet-vector-xp.png}} }; \node (b1) at (4, 6) [] { \scalebox{2}{\includegraphics{images/s-sender-bank.png}} }; \node (b2) at (12, 6) [] { \scalebox{2}{\includegraphics{images/s-receiver-bank.png}} }; \node (s1) at (4, -6) [] { \scalebox{2}{\includegraphics{images/s-shop.png}} }; \node (r1) at (3.6, -1.8) [] { \scalebox{0.3}{\includegraphics{images/gears.jpg}} }; \node (r2) at (3.6, 1.8) [] { \scalebox{0.3}{\includegraphics{images/gears.jpg}} }; \node (r3) at (7.2, -1.8) [] { \scalebox{0.3}{\includegraphics{images/gears.jpg}} }; \node (r4) at (7.2, 1.8) [] { \scalebox{0.3}{\includegraphics{images/gears.jpg}} }; \node (c0) at (5.5,0) [ circ, draw, line width=1.2mm, color=orange, minimum height=3.5cm, minimum width=3.5cm ] {}; \node (c1) at (5.5,0) [circ, draw, thick, minimum height=3.4cm, minimum width=3.4cm] {}; \node (c2) at (5.5,0) [circ, draw, thick, minimum height=3.6cm, minimum width=3.6cm] {}; \node (p1) at (5.5,0) [align=center] {\textbf{Integrity}\\DLT System\\(relays)}; \node (c3) at (12.5,0) [ circ, draw, line width=1.2mm, color=orange, minimum height=3.5cm, minimum width=3.5cm ] {}; \node (c4) at (12.5,0) [circ, draw, thick, minimum height=3.4cm, minimum width=3.4cm] {}; \node (c5) at (12.5,0) [circ, draw, thick, minimum height=3.6cm, minimum width=3.6cm] {}; \node (p2) at (12.5,0) [align=center] {\textbf{Monitoring}\\DLT System\\(minters)}; \node (x1) at (10.6, -1.8) [] { \scalebox{0.6}{\includegraphics{images/recycle.png}} }; \node (x2) at (10.6, 1.8) [] { \scalebox{0.6}{\includegraphics{images/recycle.png}} }; \node (x3) at (14.2, -1.8) [] { \scalebox{0.6}{\includegraphics{images/recycle.png}} }; \node (x4) at (14.2, 1.8) [] { \scalebox{0.6}{\includegraphics{images/recycle.png}} }; \node (b3) at (12, -6) [] { \scalebox{2}{\includegraphics{images/s-sender-bank.png}} }; \node (i1) at (-1.5, 0) [noshape, text width=2cm, align=center] {consumer wallet}; \node (i2) at (2.4, 6) [noshape, text width=1.8cm, align=center] {commercial bank}; \node (i3) at (13.3, 6) [noshape, text width=1.5cm, align=center] {central bank}; \node (i4) at (2.6, -6) [noshape, text width=1.8cm, align=center] {merchant}; \node (i5) at (13.6, -6) [noshape, text width=1.8cm, align=center] {commercial bank}; \draw[->, line width=1mm] (b1) edge[bend right=20] node[above, sloped] {withdraw} (w1); \draw[<->, line width=1mm] (b1) edge[bend left=20] node[above, sloped] { issue, destroy } (b2); \draw[->, line width=1mm] (x2) edge[bend right=20] node[above, sloped] {recycle} (b1); \draw[->, line width=1mm] (b3) edge[bend right=20] node[above, sloped] {recycle} (x1); \draw[<->, line width=1mm] (r3) edge[bend left=20, color=red] (x1); \draw[->, line width=1mm] (w1) edge[bend right=20] node[above, sloped] {spend} (s1); \draw[<->, line width=1mm] (s1) edge[bend right=20, color=red] (r1); \draw[<->, line width=1mm] (b1) edge[bend left=20, color=red] (r2); \draw[<->, line width=1mm] (b3) edge[bend left=20, color=red] (r3); \draw[->, line width=1mm] (s1) edge[bend right=20] node[above, sloped] {deposit} (b3); \draw[->, line width=1mm] (x2) edge[bend right=20, dashed] node[above, sloped] { report } (b2); \draw[<->, line width=1mm] (x2) edge[bend left=20, color=red] (r4); \end{tikzpicture}} \end{center} \caption{\cz{Schematic representation of an operating model for a CBDC system.} 
The diagram depicts the circulation of digital assets, interaction among actors, and supporting functions.} \label{f:operating-model} \end{figure} \subsection{Managing CBDC distribution} The central bank would handle the issuance, expiry, and destruction of its CBDC, as well as managing its value through monetary policy. Meanwhile, one or more clearinghouses or banks would handle all of the real-time processing. As part of the issuance process, the central bank may allow one or more clearinghouses or banks to provide signatures on blinded templates, to be used by their customers in the final step of CBDC creation. The central bank would issue a specific quantity of some currency by explicitly allowing a clearinghouse or bank to create and distribute signatures for making that many units of CBDC. We introduce the idea of a \textit{minting-plate}, which combines a \textit{minting-key} that can be used to sign blinded templates with a set of rules that govern its use. There is a deep tension between the desire to limit the number of units that can be created with a particular minting-key, and the need to prevent specific units of currency from being connected to particular creation events. Because there is no way to connect a particular unit of currency with a particular creation event, there is also no way to tell whether a particular unit of currency was created by a legitimate user of a minting-key, as opposed to a compromised or malicious use of that minting-key. What can be done is to keep a record of how many units have been reportedly created and how many have been redeemed. Creation is reported primarily by the delegated issuers who hold minting-plates, and secondarily by the retail banks that channel requests to those delegated issuers. Redemption happens when a bank brings CBDC units back to the central bank in exchange for central bank reserves. Together, these values can reveal that a particular minting-key has been compromised, which can help limit the damage caused by such a compromise. A minting-key might be associated with a set of parameters to limit, for example, the value of currency signed by that minting-key that is in-flight at any particular moment (issuance minus redemptions), the total value of CBDC cumulatively signed by that minting-key, and the time at which signatures by that minting-key would no longer be considered valid. The size of the anonymity set, as we shall discuss later in this section, is directly impacted by the limits that can be specified for the minting-plate. As more limits are placed on a particular minting-plate, the amount of currency it can produce is reduced, making it easier for powerful entities to track the behaviour of individual users. It is important to tune those parameters so they provide good risk mitigation in the event of the compromise of a minting-key, while still maintaining a sufficiently large anonymity set. \subsection{Managing CBDC system integrity} In addition to managing the lifecycle of the individual CBDC assets, we imagine that the central bank would also take responsibility for establishing the integrity system for those USO assets. This integrity system must continue operating without equivocation, and it is possible to build it in a way that is not impacted by increases in the number of assets, users, or transactions. 
As an example, the central bank may declare that only licensed clearinghouses may operate relays that connect directly into its integrity system. Commercial and retail bank relays would connect into those clearinghouses, and relays operated by other money service providers would connect into those, along with third-party corporate relays. Because the trust requirements for operating a relay are quite low, similar to those for a network carrier, this provides a rich ecosystem on which consumers can rely with no increase in the operational overhead of the integrity provider system. Because the scaling concerns are mitigated, there is room to deploy heavyweight solutions for governing this integrity system. While it could be run from a single laptop, it is clearly better to design a system that is as resilient as possible. This means bringing all of the participants in the ecosystem together, such that not only the central bank, but also clearinghouses, commercial banks, retail banks, and so on are participating in a federated or decentralised system, so that only some proportion of them have to be operating correctly for the system to maintain the integrity of its operations. It is worth explicitly noting that the computational cost of decentralised systems generally stems from two sources: one is the gatekeeping cost of keeping out bad actors, which is the primary reason for the hashing cost of proof-of-work-based systems like Bitcoin and Ethereum; the other is the scaling cost of accommodating transactions, assets, and accounts. Our proposed architecture eliminates both of these costs. The first is eliminated by only inviting trusted parties to add their efforts to the integrity system. The second is eliminated by separating the integrity system from maintaining the state of the assets themselves, so that the scaling costs are not borne by the integrity system. Introducing good governance and transparency into the integrity of a system does not necessitate a large increase in energy usage. Our architecture demonstrates this. \subsection{Managing regulatory compliance} \label{ss:regulatory} Ensuring that regulators can perform their duties is clearly an extremely important aspect of a well-functioning economic system, and must be an explicit goal of any realistic CBDC proposal. As we show in this work, regulatory compliance does not have to come at the cost of sacrificing consumer protections. Indeed, not only are regulation and privacy compatible, but our architecture actually allows them both to be achieved more efficiently than current solutions that choose one over the other. We have two main techniques for ensuring consumer protections. The first is the use of USO assets, which allow the CBDC to be acted upon by its owners unilaterally, regardless of the disposition of the financial apparatus. This means that while the recipient can choose to reject a transaction, no one else in the system, including regulatory bodies, can block it from happening or discriminate against that user. The second is unlinking the sender from the recipient in the transaction channel. This means that even a powerful entity that knows who withdrew CBDC and knows who deposited CBDC will not be able to match senders to recipients. How is efficient regulatory compliance possible with strong consumer protections like these? There are four places where regulation applies in our CBDC architecture, and they mirror four cases in which regulation applies to the use of cash.
We argue that we can not only satisfy but actually improve upon the established compliance procedures in each case: \begin{enumerate} \item\cz{When a retail user deposits cash into a bank account.} Banks are often required, for cash deposits greater than a certain size, to request evidence from depositors that the cash to be deposited was obtained legally. From this perspective, CBDC implemented as USO assets is better than cash, because it is possible to automate not just the integrity checks but also the regulatory checks. \item\cz{When a retail merchant receives cash from a consumer.} When merchants decide to deposit cash that they have received in the course of their business activities into bank accounts, they generally have an interest in knowing that the cash they have collected will be accepted. CBDC implemented as USO assets allows such a merchant to apply the same integrity and regulatory checks that are run by their bank. For example, a regulator might want to associate each recipient of CBDC with a bank account for the purpose of implementing compliance procedures. To satisfy this requirement, we might stipulate that banks must require the recipient of CBDC to furnish a commitment in the form of its bank account details to any sender from which it might receive CBDC, and that the CBDC must include a signature of this commitment from the sender as a prerequisite for the bank to consider the CBDC to be valid. \item\cz{When a retail merchant spends cash that it has received.} Recipients of CBDC might want to spend it immediately without depositing it first. Because USO assets track their own history, the next recipient is able to know whether the CBDC has travelled around since leaving a bank. Therefore, the asset must carry the burden of proving that its travel satisfies the relevant regulatory requirements, which could be enforced by automated checks run by the bank that ultimately receives it in the form of a deposit. In this manner, a regulator might allow CBDC to travel over multiple hops, with multiple recipients of CBDC in succession, without the interactive involvement of a regulated financial institution, provided that the recipient bank account details are included and signed by the respective sender in each successive hop. Note that, although the first sender might be anonymous, the USO asset framework enables it to implicitly demonstrate its possession of the key signed by the issuer of the CBDC. Subsequent senders would be identified by their bank account information as recipient from the previous transaction. Conversely, a regulator might want to enforce a rule that recipients of CBDC can do nothing other than deposit CBDC that they receive directly into the specified bank account. To satisfy this requirement, we would stipulate that banks would enforce a rule that the USO asset must have been transacted no more than once (i.e., only one hop). The rules are implicitly dynamic. Bob's bank chooses what program to run to conduct the automated regulatory check, and Bob's software uses the same program as Bob's bank, so regulators can change their requirements at any time without needing the issuance of new CBDC. Regulators could do this by asking the banks to update their compliance procedures, and those new requirements would then be applied within the software of consumers and merchants. \item\cz{Compliance procedures within a financial institution.} A financial institution can prove that in all cases the CBDC it has accepted has met the current regulatory standards. 
Either the asset passes the automated regulatory checks, or the institution has accepted external evidence to meet the regulatory requirement. We imagine that the latter case would be extremely rare, because consumer and merchant software would automatically reject CBDC that does not meet the regulatory checks that would be carried out by their bank, but it provides an important safety valve. \end{enumerate} To achieve the desired regulatory protection, the source and sink of CBDC must be regulated entities. When Alice creates new CBDC, the signature granting it validity must come from a regulated financial entity; this is enforced by the central bank or its delegates such as minters. When Bob brings his CBDC back to a regulated financial entity such as his bank then that entity can return the CBDC to the central bank in exchange for reserves. Our architecture is compatible with a variety of additional mechanisms for enforcing regulatory requirements, although we recommend careful consideration to verify that such mechanisms are compatible with consumer protection objectives such as privacy and ownership. Note that the first transaction in which a new asset changes hands provides consumer protection, although subsequent transactions do not. In particular, although the initial consumer is protected, the merchant might decide to spend his or her CBDC asset in a second transaction rather than have a bank recycle it, but he or she does this knowing that what the second recipient does with the CBDC asset might expose sensitive information about the second transaction. Having regulated entities as the source and sink of CBDC is sufficient for a mechanism to ensure full regulatory compliance. More than this, it allows that compliance to be achieved with widespread efficiency gains: for the regulator, for the banks, for merchants, and for consumers. \subsection{Ensuring an appropriate anonymity set} In our formulation, CBDC is generally not held by retail customers in custodial accounts and, for this reason, would not earn interest. Although there are some methods available by which fiscal policy can incentivise or disincentivise spending tokens~\cite{goodell2020}, we expect that retail users would view CBDC primarily as a means of payment rather than a store of value. We stipulate that plausible deniability is essential to privacy, and a large anonymity set is a prerequisite to plausible deniability. Inexorably, a trade-off between privacy and flexibility for users lies in the relative timing of withdrawals and remittances, as the strength of the anonymity set is bounded by the number of tokens in-flight between those events. The template architecture ensures that the consumer chooses the minting-key. We assume that the set of minting-keys signed by the issuer will be available for public perusal on a distributed ledger. The fact that an issuer cannot sign multiple minting-keys without having that fact become observable forces accountability for an issuer that might want to create a covert channel that could reveal information about the consumer. Since retail users would have no particular reason to hold CBDC longer than is necessary to make their payments, just as they would have no particular reason to hold cash, it is important to consider ways to encourage users to hold CBDC long enough to ensure that the anonymity set is large enough to protect their privacy. 
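To illustrate the automated checks described in the enumeration above, the following is a minimal sketch, in Python, of how a receiving bank might validate the hop history of a CBDC asset before accepting a deposit; the data model, field names, and rule set are hypothetical simplifications rather than a specification of the USO asset format:
\begin{verbatim}
# Hypothetical sketch of a receiving bank's automated check on a CBDC asset's
# hop history.  Field names and the rule set are illustrative simplifications.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hop:
    recipient_account_commitment: bytes  # commitment to the recipient's bank account details
    sender_signature: bytes              # sender's signature over that commitment

def acceptable_for_deposit(hops: List[Hop],
                           max_hops: int,
                           verify: Callable[[bytes, bytes], bool]) -> bool:
    """True if every hop carries a recipient commitment signed by its sender
    and the asset has not travelled more hops than the regulator allows."""
    if not hops or len(hops) > max_hops:
        return False
    return all(verify(h.sender_signature, h.recipient_account_commitment)
               for h in hops)
\end{verbatim}
A regulator that wishes to prohibit onward spending entirely would simply have banks run such a check with the maximum number of hops set to one, as discussed above; relaxing that limit allows chained transactions while preserving the audit trail.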
In service of this objective, we propose some practical mechanisms that can be applied to ensure that the anonymity set is sufficiently large to protect the privacy of everyday users: \begin{itemize} \item\cz{Encourage consumers to withdraw larger amounts of money.} For example, consumers can withdraw CBDC in fixed-size lots, and then spread out the use of those over a longer time period and blend in with other consumers, thereby making a smaller number of larger-sized withdrawals from the bank. We anticipate that reducing the number of withdrawals will make it harder to link a payment to its corresponding withdrawal, potentially by one or more orders of magnitude. By reducing the number of statistically linkable withdrawal-payment pairs, users can enjoy a larger anonymity set and, as a result, better privacy. \item\cz{Incentivise consumers to use slow relays by default.} We can give users control over the extent to which it might be possible to temporally correlate a withdrawal to the proof data that is created with a payment. This can be accomplished by adjusting the requirements in Step 4 of the user engagement lifecycle (refer to Figure~\ref{f:aliceflow}) such that $F_1$ can only be accepted by relay $G$ if $F_0$ had previously been published by relay $G$. Then, relay $G$ can explicitly specify a frequency for its publication of successive updates to ensure a sufficiently large anonymity set, for example, to publish once per minute, hour, or day. The motivation is to increase the cooling off period to increase the number of unspent withdrawals from the same minting-key. The provider of relay $G$ could maintain multiple relays with different frequencies. If we accept privacy as a public good~\cite{fairfield2015} and acknowledge transaction immediacy as a threat to privacy, then the provider could charge more to consumers who demand greater immediacy, as a way of compensating for the negative externalities that would result from shorter time intervals between withdrawals and payments. Since the consumer's message to the relay requires no human interaction, CBDC software could send it after a random delay, or could send it through a remailer network such as Mixmaster~\cite{mixmaster}. \item\cz{Encourage slow transaction settlement when possible.} Not every transaction must be settled immediately; consider the case of online purchases for goods or services to be delivered in the future. For such transactions, if Alice can use Step 4, Option 1 (as shown in Figure~\ref{f:step-4-1}) to give Bob direct control and the means to acquire possession of the CBDC, and if Alice trusts Bob not to record the time at which she does so, and if Alice trusts Bob to delay his request for the proof of provenance (and thus settlement) for a sufficiently long time, then Alice can effectively pay Bob immediately. Indeed, Bob's transaction tracking and rate of transactions might influence Alice's calculations about whether this option is safe. Note that this is the same guarantee that payers rely upon to safely use physical cash without being tracked. In the digital context, procuring a strong guarantee about what Bob might do is somewhat harder, and we are pessimistic about the idea that received transactions are not being timestamped, either by Bob or by other observers. \item\cz{Have Alice explicitly give control to Bob during the withdrawal phase.} Alice can give control to Bob in the creation of $F_0$ during Step 1 of the protocol. 
Because $F_0$ is part of the blinded template, neither her bank nor other observers will be able to associate her withdrawal with her payment to Bob. As with the previous approach, this approach requires Alice to trust Bob not to record the time at which he receives the payment from Alice. However, because Bob is able to verify that the CBDC is valid and that he has exclusive control, this approach might be appropriate for immediate delivery of goods or services. Although the size of the transaction might ordinarily reveal information that could link the withdrawal to the payment, this could be obfuscated by having Alice give Bob a larger quantity of CBDC than he requires, and having Bob provide Alice the excess in the form of new CBDC, either immediately or in the future, using the same method. \end{itemize} We also suggest implementing a mechanism to monitor the number of tokens currently in-flight, to support dynamically adjusting parameters that could impact the size of the anonymity set, such as the number of minting-keys, the number of tokens to be issued by each minting-key, and the set of available denominations. Such a mechanism would support not only the management of digital currency issuance and destruction but also public oversight of the entire process. \subsection{Clearing and settlement} Ensuring that the integrity system continues to produce entries and does not equivocate about the history of its commitments is a major responsibility of a central bank that produces CBDC using this architecture. This can be done by the central bank directly, although such an approach introduces a set of risks, including the possibility that the central bank's operational servers crash or become compromised as well as the possibility that the central bank might change the rules or expectations for the system without warning. Because distributed ledgers are designed to be fault-tolerant and immutable, DLT is a useful tool for systems that require some resilience to crashes and compromise. We suggest that the central bank could take the following approach to using DLT for its integrity system: \begin{enumerate} \item The central bank enlists several highly trusted but independent institutions to run relays and requires each of them to sign off on each new entry that the central bank produces. This protects against compromise of the central bank: The adversary must also compromise all of the other institutions to cause an equivocation. \item The institutions employ a crash fault tolerance mechanism, such as Raft~\cite{ongaro2014}, to allow a few institutions to be offline without interrupting the operation of the system. \item The institutions themselves can propose new entries, perhaps via a fixed schedule or round-robin process, instead of requiring the central bank to do it. This avoids issues associated with having the central bank serve as gatekeeper to transactions and allows the central bank to step out of an operational role and focus on oversight and governance. \item The institutions make a commitment to publish every entry they sign. \end{enumerate} This arrangement is sufficient to convert the centralised integrity system into a distributed ledger overseen, but not operated, by the central bank. The scalability of this architecture can be enhanced by allowing relays to arrange themselves hierarchically.
Higher-level relays can aggregate the entries produced by lower-level relays and perform the same process, with the respective lower-level relay operators taking the place of the trusted institutions. Waiting for a higher-level relay to produce an entry might support greater assurance that the proof will be completed, but might be slower than waiting for the lower level relays, which are optimised to minimise latency. Transactions less than a specified amount might be considered final by transacting parties, and may be covered by appropriate insurance or credit for relay operators, without confirmation from the clearing network. The additional confidence provided by aggregate confirmations, therefore, might be necessary for buying high-value goods, such as a car, but probably not for buying low-value goods, such as a cup of coffee. A case can also be made for encouraging relay operators to use mechanically external DLT systems as a commitment mechanism, or public bulletin board, for publishing their entries. This practice might also enhance the confidence in those entries, as well as quicker detection of equivocation of compromised relays, because it compels relays to commit to a more unified view of their published entries rather than merely self-reporting them. \section{Use cases} \label{s:use-cases} In this section, we consider three use cases that demonstrate the power and flexibility of our design and how our proposed architecture can be used to satisfy them. These use cases offer advantages over other electronic payment methods, including modern retail payments via banks or payment platforms as well as unlinkable CBDC proposals such as the one offered by Chaum, Grothoff, and M\"oser~\cite{chaum2021}. The users of the system, including consumers and service providers, can choose which of these possibilities to enable and support. \subsection{Disconnected operation} In some environments, access to the central bank might be slow, delayed, or intermittent rather than real-time, for example where the central bank might be accessible only at certain times. We refer to such environments as ``disconnected'', and we imagine that this characteristic might apply to some remote or sparsely-populated areas with limited or unreliable connectivity, as well as categorically isolated environments such as certain remote villages, ships in the high seas, aircraft in flight, spacecraft in space, or remote military outposts. Fair exchange requires the involvement of a mutually trusted third party~\cite{pagnia1999}. However, this does not imply that all transactions must take place with global agreement. In disconnected environments we assume that there exists a local actor who is sufficiently trustworthy to act as a relay for nodes within that environment. This might be a trusted institution, a network operator, or even a distributed system made up of the nodes in that environment. As long as the recipient trusts that relay to not equivocate, then the recipient can accept a payment that has a proof of provenance that includes that relay, with confidence that it will be possible to complete the proof of provenance to include the integrity provider. Completing that proof is necessary for the payment to be accepted outside of the environment in which the relay is trusted to do its job, but inside of this environment payments can continue to be made without making external network connections. 
As long as the trusted relay does not equivocate, nothing that anyone else does, either inside the environment or outside, can adversely impact the payment. Short of equivocating, nothing the trusted relay does, including crashing or denying service, can adversely impact it either. We note that systems that require global consensus, including all centralised systems and most distributed ledger systems, lack this capability. \subsection{Offline operation via time-shifting} \label{ss:time-shifting} Some environments have no connectivity at all. This might include environments without communication equipment, or environments without a local point of trust. We refer to such environments as truly ``offline''. Since transactions require a third party~\cite{pagnia1999}, it might seem that this means that offline transactions are impossible, but that is not entirely true. The involvement of the third party could take place at a different point in time. A user can transfer CBDC to an address over which the recipient has control, but without revealing to the recipient the information needed to exercise that control. The user can then effectively spend the CBDC offline by revealing information about the transfers to the recipient. In the event that the user decides not to spend all of the CBDC with that recipient, they have the option to use a fair-exchange protocol with the recipient to redeem any CBDC that was transferred but not spent. In principle, it would be possible to transfer CBDC to a market operator in exchange for tickets (perhaps implemented using blind signatures) and then give the tickets to merchants, and the merchants could use a fair-exchange protocol to redeem value from the market operator. However, this assumes that the merchants are connected to the market operator in real-time so they can verify that such tickets are still available to claim. Similarly, it might be possible to transfer CBDC to an issuer of cash-like, counterfeit-resistant physical tickets that can be used in a local context to make offline purchases to arbitrary recipients without the need for a real-time network connection. \subsection{Chained transactions with embedded provenance} \label{ss:chained} There are several reasons why a recipient of CBDC might want to move it onward without depositing the CBDC directly into a bank account. We refer to such transactions as \textit{chained transactions}. In such cases the provenance information about successive holders of an asset can be maintained within the CBDC tokens, and chained transactions can carry their own proofs of compliance with the rules of the system. Appropriate use cases might include the following: \begin{itemize} \item Perhaps a CBDC holder has no access to a bank or access to a bank is difficult as a result of network connectivity or geographic location. Being able to make a series of transactions under such circumstances may provide an important safety net. \item Perhaps a CBDC holder is acting on behalf of a business that seeks to maintain provable records of its internal or external transfers, perhaps to streamline compliance operations, to satisfy auditing requirements, or to move assets without depositing them into a bank account and incurring a delay associated with settlement.
For example, a multinational corporation might want to preserve an audit trail of internal transactions, for example to demonstrate compliance with tax regulations concerning the applicable jurisdiction for revenue, in addition to economic efficiency for such internal moves. \end{itemize} \section{Analysis} \label{s:analysis} In this section, we compare our architecture to alternative architectures for exchanging value. We begin with a set of mechanical design choices and argue for the choices inherent to the argument that we have proposed. Then, we compare our architecture to other systems for exchanging value in terms of the asset-level requirements and system-level requirements defined in Sections~\ref{ss:asset} and~\ref{ss:system}. \subsection{Comparison to other untraceable CBDC solutions} Chaum, Grothoff, and M\"oser~\cite{chaum2021} have also proposed a system for untraceable CBDC. Our system also leverages the blind signature mechanism that is central to their design, although our system differs from theirs in several important ways. In particular, our system: \begin{itemize} \item \cz{Enforces accountability and transparency for authorities and system operators} by leveraging distributed ledger technology as described by Goodell, Nakib, and Tasca~\cite{goodell2021}, thus requiring authorities or system operators to explicitly and publicly specify changes to the protocol and system rules; \item \cz{Enables transactions without real-time involvement of the central bank or issuing authority}, by progressively, and obliviously, building proof structures with logarithmic scaling factors across the relays; and \item \cz{Enables validations without any involvement of the central bank or issuing authority}, by incorporating self-validating proofs of provenance as a fundamental part of the digital assets; and \item \cz{Avoids requiring the central bank to maintain a database} of individual tokens, balances, or specific transactions, as is done with UTXO-oriented digital currency systems. \end{itemize} \subsection{Design features} Some of the design features of our proposal distinguish it from alternative proposals available in the current literature on digital currency. We list several of the most important such features here: \begin{itemize} \item\cz{Regulatory control applies to transactions, not asset ownership.} Our proposed architecture allows regulatory compliance to be automatically enforced by regulated financial institutions that receive CBDC on behalf of their account-holders. This allows comprehensive regulation without introducing a requirement to track the ownership of every token. \item\cz{Non-custodial wallets.} People want custodial accounts because they want strong regulatory controls. Having strong regulatory control at the transaction level allows non-custodial wallets to operate within the regulatory regime, providing efficiencies that make more use cases available to the users of CBDC. This approach allows CBDC to realise the benefit of a token-based approach, while interoperating with traditional custodial accounts as desired, as cash does. \item\cz{Open architecture.} Our approach does not rely upon trusted computing, including trusted software, trusted hardware, or secure elements of any kind. Device manufacturers are third parties, just as other authorities are, and requiring any trusted authority to be part of every transaction compromises the integrity of the system. 
This is important because we do not wish to require the establishment of a set of trusted hardware vendors, or the assumption that counterparties to a transaction must trust each other's devices. If counterparties do have mutual trust in a third-party, such as an institution, they can use this mutual trust to improve the efficiency of a transaction, as described in Section~\ref{ss:effset}. \item\cz{Time-shifted transactions.} Because fair exchange always requires a third party to every transaction~\cite{pagnia1999}, we observe that there is no way for two counterparties to transact directly without access to a mutually-trusted third party or system. In cases where a mutually trusted system is inaccessible, our architecture allows a time-shifted trust in the form of prepayments, as described in Section~\ref{ss:time-shifting}. \item\cz{Decentralised transactions.} By allowing transactions to be processed in a decentralised manner, our approach avoids the costs and risks of requiring a ledger or other system component to be under the control of a single actor, who might change the rules without public oversight, discriminate against certain users, equivocate about the history of transactions, or otherwise exercise arbitrary authority. \item\cz{Energy efficiency.} By allowing transactions to be processed locally, our approach avoids the costs and risks of requiring a heavyweight, ledger-based system (distributed or not) to be in the middle of every transaction, allowing the use of the CBDC to be highly energy efficient. \item\cz{No central user database.} Our system avoids introducing centralised identity requirements, leveraging the existing decentralised procedures for identification and compliance that are already widespread among financial market participants. This avoids establishing new mechanisms to track users and aligns with global agreements about compliance requirements. \end{itemize} \subsection{Efficient settlement} \label{ss:effset} One of the most important features of cash infrastructure is the ability of counterparties to transact in real-time, with minimal involvement of third parties. To the extent that third parties are not involved in transactions, they cannot engage in rent-seeking behaviour and cannot pass the costs they incur along to transaction counterparties in the form of fees. Where third parties are involved, the involvement is generally minimal and highly local, for example to provide cash withdrawal services (e.g. ATM infrastructure) for consumers and cash deposit services to merchants, both of which are used only in aggregate over many transactions. Cash infrastructure also benefits from instant settlement: Once a payer has given cash to a payee, the transaction is settled. There is no way for a payer to unilaterally unwind (``claw back'') the transaction. With modern digital transactions, scalability interferes with the ability to transact in real-time. Transactions take place across a network, which cannot be globally synchronised. Settlement requires pairwise synchronisation between transacting institutions, which must manage risks associated with concurrency. Settlement times for domestic bank wires and direct debits are generally a matter of hours; settlement times for international wires are even longer. Payment networks generally offer short-term credit as a way to support faster settlements. Our system design provides a mechanism for two transacting parties to enjoy real-time settlements. 
Recall that, in general, a payer (Alice) must furnish a proof of provenance to a payee (Bob) before a payee will accept payment, and that Alice creates this proof by connecting to the issuer through her chosen relay. If Alice is always assumed to be directly connected to the issuer, then the system will not scale very well: the issuer would have a de facto role in every transaction, and the resulting need to serialise and batch transactions would mean that Alice might be forced to wait. However, because a payer can choose the relays, Alice has the option to choose one that both she and Bob recognise as trustworthy. Because each of these relays is a checkpoint in building the proof of provenance they can offer guarantees to Bob that Alice's transaction has been incorporated. If Bob trusts a relay that Alice has chosen, then this partial proof of provenance will suffice until Bob has received the full proof of provenance. Our architecture allows these promises to be made almost instantaneously by these relays, requiring very little computation. Additionally, various mechanisms can be used to reduce the risk that a relay would equivocate by rewriting history to nullify Alice's transaction. These include both traditional institutional and legal guarantees as well as technical mechanisms like distributed ledgers and other means of achieving immutability. If Alice knows that she is likely to make a purchase within a context in which a particular relay is trusted, then Alice can choose to use that relay for her asset, thus allowing near real-time payments within that context. We observe that this mechanism offers similar functionality to debit card transaction via a retail payment network, wherein transactions can be accepted in real-time because the retail payment network provides a guarantee to the recipient's financial institution that the transaction will succeed. Our proposed mechanism avoids some of the potential friction intrinsic to this approach by eliminating the need for financial credit, although Bob must trust the relay to fulfill its promise to incorporate the transaction. Additionally, because transactions involve direct obligations of the central bank rather than bank deposits, the requirement for a clearinghouse to resolve counterparty risk among institutions is eliminated. \subsection{The fallacy of anonymous accounts} There are two chief approaches to mitigating harmful consumer tracing and profiling. One approach is anonymous accounts, where the identity of the account holder is decoupled from the account. Anonymous accounts are akin to prepaid debit cards and have been proposed as a way to protect the rights of consumers~\cite{pboc2021}. The other approach is transactional unlinking, wherein the sender is decoupled from the receiver inside the transaction channel. These two approaches of anonymous accounts and transactional unlinking are actually orthogonal dimensions. In the absence of transactional unlinking, anonymous accounts don't provide anything useful. Bitcoin is a stark example of this: regular transactions can be trivially de-anonymised, revealing a consumer's entire history, whereas criminals can employ various heavyweight measures to conceal themselves. In the presence of transactional unlinking, anonymous accounts still don't provide anything useful: the transactional unlinking already stops unwanted tracing and profiling, and adding anonymous accounts on top of that only makes enforcing regulatory compliance much more difficult. 
Thus, we conclude that anonymous accounts are worse than useless. They do not achieve their stated goals, and they extract a high cost from systems that employ them~\cite{goodell2021a}. We also note that anonymous accounts typically contravene AML/KYC recommendations and, because they implicitly link successive transactions done by a consumer to each other, are not actually private for most legitimate retail use. We assume that the accounts referenced by our system would be subject to AML/KYC data collection and would not be anonymous. The privacy of our approach results from the use of non-custodial wallets to unlink successive transactions involving the same currency. Specifically, a user must ``withdraw'' funds from a regulated money services business into her non-custodial wallet in one transaction and then ``remit'' funds into a regulated money services business in the next. Even though the holders of the payer account and the payee account are known, the fact that money has flowed between them is not. \subsection{A comparison of payment system architectures} \begin{table}[ht] \begin{center} \sf \begin{tabular}{|L{9.3cm}|p{0.7em}p{0.7em}p{0.7em}p{0.7em}p{0.7em}p{0.7em}|}\hline & \rotatebox{90}{Cash} & \rotatebox{90}{Custodial accounts} & \rotatebox{90}{Traceable digital currency} & \rotatebox{90}{Untraceable digital currency} & \rotatebox{90}{Traceable USO digital currency\,\,\,\,} & \rotatebox{90}{Untraceable USO digital currency\,\,\,\,}\\ \hline\textbf{Integrity Considerations} & & & & & & \\ Durability & \CIRCLE & \Circle & \Circle & \Circle & \CIRCLE & \CIRCLE \\ Self-contained assets & \CIRCLE & \Circle & \Circle & \Circle & \CIRCLE & \CIRCLE \\ \hline\textbf{Control Considerations} & & & & & & \\ Mechanical control & \CIRCLE & \Circle & \Circle & \CIRCLE & \CIRCLE & \CIRCLE \\ Delegation & \Circle & \Circle & \Circle & \Circle & \CIRCLE & \CIRCLE \\ \hline\textbf{Possession Considerations} & & & & & & \\ Choice of custodian & \CIRCLE & \Circle & \Circle & \Circle & \CIRCLE & \CIRCLE \\ Choice to have no custodian & \CIRCLE & \Circle & \CIRCLE & \CIRCLE & \CIRCLE & \CIRCLE \\ \hline\textbf{Independence Considerations} & & & & & & \\ Fungibility & \CIRCLE & \Circle & \Circle & \CIRCLE & \Circle & \CIRCLE \\ Efficient lifecycle & \Circle & \Circle & \Circle & \Circle & \CIRCLE & \CIRCLE \\ \hline\end{tabular} \rm \caption{A comparison of payment system architectures by asset-level considerations.} \label{t:comparison:asset} \end{center} \end{table} \begin{table}[ht] \begin{center} \sf \begin{tabular}{|L{9.3cm}|p{0.7em}p{0.7em}p{0.7em}p{0.7em}p{0.7em}p{0.7em}|}\hline & \rotatebox{90}{Cash} & \rotatebox{90}{Custodial accounts} & \rotatebox{90}{Traceable digital currency} & \rotatebox{90}{Untraceable digital currency} & \rotatebox{90}{Traceable USO digital currency\,\,\,\,} & \rotatebox{90}{Untraceable USO digital currency\,\,\,\,}\\ \hline\textbf{Autonomy Considerations} & & & & & & \\ Privacy by design & \CIRCLE & \Circle & \Circle & \CIRCLE & \Circle & \CIRCLE \\ Self-determination for asset owners & \CIRCLE & \Circle & \Circle & \CIRCLE & \Circle & \CIRCLE \\ \hline\textbf{Utility Considerations} & & & & & & \\ Local transactions & \CIRCLE & \Circle & \Circle & \Circle & \CIRCLE & \CIRCLE \\ Time-shifted offline transactions & \CIRCLE & \Circle & \Circle & \Circle & \CIRCLE & \CIRCLE \\ Accessibility & \CIRCLE & \Circle & \CIRCLE & \CIRCLE & \CIRCLE & \CIRCLE \\ \hline\textbf{Policy Considerations} & & & & & & \\ Monetary sovereignty & \CIRCLE & \Circle & \CIRCLE & \CIRCLE & 
\CIRCLE & \CIRCLE \\ Regulatory compliance & \Circle & \CIRCLE & \CIRCLE & \CIRCLE & \CIRCLE & \CIRCLE \\ \hline\end{tabular} \rm \caption{A comparison of payment system architectures by system-level considerations.} \label{t:comparison:system} \end{center} \end{table} \noindent Tables~\ref{t:comparison:asset} and~\ref{t:comparison:system} summarise the characteristics of a selection of different payment system architectures, including our proposed architecture. The descriptions of the payment mechanisms are as follows: \begin{itemize} \item\cz{Cash.} A central bank produces physical bank notes and coins. Retail users circulate them freely, without the involvement of financial intermediaries. Cash is part of the monetary base of an economy; commercial banks can exchange cash for deposits with the central bank. Although bank notes have serial numbers, cash remains fungible because it can be freely exchanged among bearers and because retail users of cash generally do not maintain records that identify individual units of cash. \item\cz{Custodial accounts.} These are retail payments that take the form of transfers between financial institutions. This category covers both the case of private-sector banks offering accounts to retail consumers and the case of central banks offering accounts to retail consumers. Such payments might include bank wires, ACH, cheques, direct debit, and third-party transfers via payment networks including but not limited to card payment systems. \item\cz{Traceable digital currency.} Retail consumers hold tokens that are obligations of the central bank. The tokens are bearer instruments and are not held in custodial accounts, although individual tokens can be linked to the identities of their owners. Thus, the consumers are not anonymous and are therefore subject to profiling and discrimination on the basis of their transactions. The issuer must maintain a record of tokens that were spent to prevent double-spending. The record of tokens can be maintained by the issuer directly or by a distributed ledger using a decentralised consensus system. \item\cz{Untraceable digital currency.} This approach is similar to traceable digital currency, except that the central bank signs blinded tokens using a blind signature scheme of the sort elaborated by David Chaum~\cite{chaum2021}. When a user wants to spend a token, the user unblinds the token and returns it to the issuer along with the address of the recipient. Recipients could be anonymous, or not anonymous, depending upon the specifics of the architecture. Chaum's proposal for digital currency implicitly assumes that the sender is anonymous, but the recipient is not anonymous in the usual case~\cite{chaum2021}. \item\cz{Traceable USO digital currency.} This approach to digital currency uses baseline USO assets. The tokens are not blinded, and although tokens can be directly transferred between possessors without the involvement of the issuer, the chain of custody of an asset is transparent and completely traceable to its possessors. \item\cz{Untraceable USO digital currency.} This approach to digital currency is a fusion of USO assets and the Chaumian system. A user approaches an issuer with a request for a blinded token, which the issuer furnishes to the user. When the user wants to spend a token, the user unblinds the token, incorporates it into a specific previously created asset, and transfers the asset to the recipient.
It is now up to the recipient to redeem the token with the issuer, or to pass it to another recipient without the benefit of anonymity. \end{itemize} \section{Conclusion} \label{s:conclusion} In this article, we have presented an untraceable version of an architecture for a payment system based on proofs of provenance. Our architecture combines two previous lines of work to provide a solution that efficiently provides both consumer protections and regulatory compliance. Doing this allows the resulting CBDC to be used across a wide variety of use cases, including many of those currently addressed by cash. Our proposal directly addresses the dilemma of maintaining regulatory compliance while preventing abusive profiling that harms consumers. Abuses of profiling are endemic to modern payment systems, wherein not only governments but also consumer-facing businesses, service providers, and platform operators actively analyse consumer behaviour and can exploit personal information for profit or control. Ordinary consumers are forced to trust not only the practices and motives of such actors but also their security. The costs and risks of security breaches are generally borne by the consumer, and can be quite severe: in the US alone they are estimated to amount to US\$228B in the last year~\cite{maler2021}. Central banks have an opportunity to repair trust between citizens and the state by sponsoring an architecture that does not force users to trust some third party with data protection, but instead allows users to verify for themselves that their privacy is protected. Solutions that promise to end profiling generally do so either by allowing anonymous accounts or by facilitating the consummation of transactions outside of an environment wherein regulators can operate and effectively supervise activity. In contrast to such approaches, our proposal allows effective regulatory supervision while unlinking users' banking relationships from their spending habits, thus enabling consumers to enjoy fully regulated custodial accounts while avoiding the costs and risks of abusive profiling. Furthermore, this architecture addresses concerns about the transactional efficacy of other untraceable architectures by allowing the recipient to accept payment without involving the issuing bank or the sender's financial institution. It also provides a framework for strong assurance of provenance and auditability, allowing follow-on transactions to occur prior to involving the recipient's financial institution. Our proposal addresses the operational and infrastructural overhead that a central bank must incur to manage a payment system through a domestic retail digital currency. It provides an efficient path to the issuance and distribution of a currency as well as the maintenance of its integrity. The distribution and management following issuance can be mediated by existing robust payment channels, including clearinghouses, commercial banks, and payment services businesses, using existing payment mechanisms and avoiding the costs and risks associated with deploying new infrastructure for that purpose. This architecture also addresses the governance and risk mitigation concerns of issuing a domestic retail digital currency and managing a payment system by isolating the components of the system so that each can be treated independently, including the desired properties related to integrity, possession, control, and autonomy as well as the operations of issuance, distribution, and transaction management. 
Our proposal thus encourages working within the current banking system, including commercial banks and payment institutions, rather than undermining them, and provides the capacity to build a deep and resilient governance approach without compromising the efficiency and privacy of individual transactions. Cash is used in many different situations, as are other payment service solutions. We describe the properties a CBDC must have in order to be efficiently used in those situations, and we show that the technical requirements of our architecture are necessary to deliver a solution with those properties. This allows the CBDC created using our architecture to broadly meet the demands of cash as well as those of electronic payment services, and highlights exactly where other proposals fall short. It is not necessary to make unacceptable compromises between consumer protections and regulatory compliance, and it is not necessary to sacrifice operational efficiency to maintain asset integrity. Indeed, for a currency to be used like cash, it must excel in all three of those aspects. Ours does. \section*{Acknowledgements} The authors are grateful to TODAQ for sponsoring this work. We thank Professor Tomaso Aste for his continued support for our project. We also acknowledge the support of the Centre for Blockchain Technologies at University College London and the Systemic Risk Centre at the London School of Economics, and we acknowledge EPSRC and the PETRAS Research Centre EP/S035362/1 for the FIRE Project.
\section{Introduction} Electroencephalogram (EEG) remains vastly popular for recording the activity of the brain due to its relatively low cost, high time resolution and easy availability in hospitals and research institutions. Recently, researchers have used EEG as a tool for understanding the neural response to motor commands in clinically unresponsive patients \cite{claassen2019detection}, giving impetus to the research on protocols under which EEG data is acquired. The problem we address in this work is the design of an effective protocol that improves the quality of the signal recorded from the subjects. This work proposes two approaches: the use of somatosensory cues, and the enforcement of a protocol in which motor imagery of a specific limb is followed by the actual movement of the limb, in contrast to just imagining the movement, which has been the conventional approach. \section{Methods} \subsection{EEG Recording Setup} The EEG data was sampled at 1000 Hz from an ANT Neuro Eego\texttrademark Mylab amplifier using the EEGCA64-500 montage and the 10/10 electrode placement system with the ``CPz'' electrode as the reference electrode. A 64-channel cap with an electrooculogram (EOG) channel was used for acquisition. The EOG channel was not used for independent component analysis (ICA) artefact removal, since all the participants were instructed to keep their eyes closed throughout the experiment, except when they were resting. The subjects were seated on a wooden chair in a well-ventilated room. The subjects were also instructed to rest their feet on a wooden support to ensure that there was no electrical contact with the floor. \subsection{Subjects for the study} A total of seven healthy subjects participated in the experiments (5 males and 2 females, all right-handed, mean age of 22.5 years, $\sigma=3.4$). Some of the participants took part in more than one protocol. The protocol was approved by the institute human ethics committee of the Indian Institute of Science, Bangalore. The participants signed an informed consent form before taking part in the experiments. Subjects were labelled 1 through 7, and protocols were labelled A through E. Thus, a label `B4', for example, refers to subject 4 under protocol B. \subsection{Web application for controlled timing} For the purpose of maintaining consistent intervals of time for each of the trials of the experiment and to administer cues with as little human error as possible, a web application was developed. The application was programmed to display event labels: ``LEFT'', ``RIGHT'', ``REST'' and ``START''. To prevent the subjects from predicting the sequence of events, the choice between the labels ``LEFT'' and ``RIGHT'' was randomized, with ``REST''/``START'' necessarily following the event label. The duration for each event was also specifically coded in, depending on the protocol in effect, to ensure that the trials were consistent in duration throughout the experiment. The web-app was used by one of the experiment coordinators in charge of delivering cues (auditory/somatosensory) as per the protocol in effect, as detailed in Section \ref{secp}. A pause button was also made available to allow the subjects to choose the number of successive trials before taking rest. \begin{figure} \centering \includegraphics[width=9cm]{webappp_labelled.PNG} \caption{A screenshot of the web-app used for timing the auditory cues.
The web-app was used to improve the consistency of trial durations.} \label{fig1} \end{figure} \subsection{Preprocessing of the EEG data} The acquired EEG was preprocessed using EEGLAB \cite{delorme2004eeglab}. The 50 Hz line noise was removed using a notch filter applied from 49-51 Hz, and then the data was bandpass filtered from 8-30 Hz. This was performed because the changes in EEG signals due to motor imagery are more visible in the mu and beta bands \cite{mcfarland2000mu}\cite{pfurtscheller2006mu}. From the literature, it is known that motor imagery activation in the mu and beta bands is visible in the central (Rolandic) region of the brain, primarily covered by the ``C3'', ``Cz'' and ``C4'' electrodes. When an arm is moved (or its movement is imagined), the mu rhythm changes over the hemisphere contralateral to that arm \cite{ref2}. The frequency band of Rolandic beta rhythms varies from subject to subject, while falling in the broad range of 14-30 Hz \cite{ref3}. \subsection{Algorithm to identify the primary region of activation} It is known from the existing literature that for motor imagery, a minimum of 8 channels are needed to obtain optimal performance \cite{ref1}. However, increasing the number of channels introduces the curse of dimensionality. Thus, we use a spatial filtering algorithm that combines several channels into one, using weighted linear combinations, through which features are extracted. The common spatial pattern (CSP) algorithm linearly transforms the multi-channel EEG signal into a low-dimensional subspace such that the variance of the EEG signal from one class (i.e., one of the two task conditions) is maximized while that from the other class is minimized \cite{jerrinNovel}. Mathematically, the CSP algorithm extremizes the following objective function: \begin{equation} J(\mathbf{w})=\frac{\mathbf{w}^TX_1X_1^T\mathbf{w}}{\mathbf{w}^TX_2X_2^T\mathbf{w}}=\frac{\mathbf{w}^TC_1\mathbf{w}}{\mathbf{w}^TC_2\mathbf{w}} \label{eq1} \end{equation} where $T$ denotes the matrix transpose, $X_i$ is the matrix containing the EEG signals of class $i$, with data samples as columns and channels as rows, $\mathbf{w}$ is the spatial filter and $C_i$ is the spatial covariance matrix of class $i$. CSP finds spatial filters such that the variance of the filtered data is maximal for one class while simultaneously minimal for the other class. This effectively highlights those regions of the brain that contribute most to a particular action (imagery/actual movement) by comparing it to the activation as obtained during the other action. CSP is especially effective in brain-computer interface (BCI) applications that use oscillatory data like EEG, since their most important features are band-power features. The CSP algorithm is computationally efficient and easy to implement. However, CSP does have some limitations. It is not robust to noise or non-stationarities and may not generalize well to new data, especially when there is very little data available. Since we are mainly interested in the spatial regions of the brain that are associated with the imagery in consideration, we decided to use the filters produced by the Lagrangian CSP algorithm and implemented it using Fabien Lotte's RCSP-Toolbox for MATLAB \cite{rcsp}. \section{Experimental protocols explored} \label{secp} The scope of this work was to design a protocol that would return consistent results in terms of the area of activation, in line with expectations based on previous work carried out on similar motor imagery tasks.
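Before detailing the protocols, we note for concreteness that the objective in eq.~(\ref{eq1}) can be solved as a generalized eigenvalue problem. The following is a minimal illustrative sketch in Python; the analysis in this work itself used the RCSP-Toolbox for MATLAB, and the variable names here are purely for exposition:
\begin{verbatim}
# Minimal sketch of computing CSP filters by solving eq. (1) as a
# generalized eigenvalue problem (illustrative only).
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_1, trials_2, n_pairs=3):
    """trials_k: array of shape (n_trials, n_channels, n_samples) for class k,
    already band-pass filtered to 8-30 Hz.  Returns 2*n_pairs spatial filters."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # normalised covariances
        return np.mean(covs, axis=0)

    c1, c2 = mean_cov(trials_1), mean_cov(trials_2)
    # Maximising w'C1w / w'C2w is equivalent to solving C1 w = lambda (C1 + C2) w.
    eigvals, eigvecs = eigh(c1, c1 + c2)          # eigenvalues in ascending order
    picks = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
    return eigvecs[:, picks].T                    # shape (2*n_pairs, n_channels)
\end{verbatim}
The filters at the two extremes of the eigenvalue spectrum are the most discriminative for the two classes; the filtered signals are obtained as $\mathbf{w}^T X$, and log-variance features of these filtered signals are typically used for classification.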
In the literature, all the reported works present either auditory or visual cues to the subjects, following which the subjects were expected to imagine the motor action (motor imagery) corresponding to the cue given \cite{pfurtscheller2001motor,brandl2015bringing,brandl2016alternative}. The most popular datasets in use in this domain are the BCI competition datasets \cite{blankertz2004bci}. In the BCI competition III dataset, the subjects had to imagine either closing the right arm into a fist or wiggling the toes on the right foot. To validate our work, we tested our processing pipeline on the standard datasets available, and the results obtained were comparable to those reported by Lotte and Guan \cite{rcsp}. The auditory and somatosensory cues, as detailed in Table \ref{table:1}, were delivered by one of the experiment conductors seated directly in front of the subject, using the web-app for timing. After a set of 25-30 trials, the web-app was paused in order to let the subject take rest for approximately 2-3 minutes. Depending on their fatigue level, the subject could request fewer trials in the next batch. \subsection{Protocol A: RA-RF with auditory cues} To begin with, the first three subjects (A1,A2,A3) followed the protocol described for BCI competition III dataset IVa as closely as possible. This was to serve as a baseline for the work covered in this paper. The protocol included three types of events: an imagery task of the right arm (RA) or the right foot (RF), and a rest event. Subjects were given rest for 5 seconds after every imagery task, which itself lasted for 5 seconds. The order of the imagery tasks was randomized to ensure that the subjects were unable to predict the sequence of cues. The graphical representation of the protocol timings is shown in Fig. \ref{figa}. The feedback obtained from the subjects on this protocol was that it was often difficult to imagine the motor action without actually executing it and even harder to focus on the imagery for a duration as long as 5 seconds. The subjects described their imagination of the action merely as a visual image of the action, which does not strictly correspond to the motor imagery/planning that we were looking for. Taking this feedback into account, changes were made to the protocol, giving rise to the newer protocols explained below. \begin{figure} \includegraphics[width=\textwidth]{RARF-Imaginary.JPG} \caption{Graphical representation of the timings of protocol A. An auditory cue lasting less than 0.5 sec was given to ask the subject to start imagining the motor movement. After a period of 5 sec, another auditory cue was given asking the subject to stop the imagination.} \label{figa} \end{figure} \FloatBarrier \subsection{Protocol B: RA-RF with actual motion and auditory cues} EEG data of two subjects, B4 and B5, was recorded under this protocol. This protocol has four types of events: two imagery tasks of the right arm (RA) and the right foot (RF), and two actual motion tasks of the right arm and the right foot. Subjects were given the auditory cues ``arm'' and ``foot'' to perform the corresponding imagery tasks repeatedly for 4 sec. Following each imagery task, the subject was asked to actually perform the previously imagined action within 3 sec. The auditory cue for the actual action was ``start''. The graphical representation of the protocol timings is shown in Fig. \ref{figb}. The changes introduced in this protocol took into account the feedback obtained from the subjects who participated in protocol A.
As seen in section \ref{resprotb}, the introduction of actual movement after the imagery task does produce better results. \begin{figure} \includegraphics[width=\textwidth]{RARF-Actual.JPG} \caption{Graphical representation of the timings of protocol B. An auditory cue lasting less than 0.5 sec was given to ask the subject to start imagining the motor movement. After a period of 5 sec, another auditory cue was given asking the subject to actually perform the action once.} \label{figb} \end{figure} \FloatBarrier \subsection{Protocol C: LA-RA with actual motion and auditory cues} Using this protocol, the EEG data of five subjects (C1, C3, C4, C5, C6) were recorded. This protocol has 4 types of events: two imagery tasks of the left arm (LA) and the right arm (RA), and two actual motion tasks of the left arm and the right arm. Subjects were given the auditory cues ``left'' and ``right'' to perform the corresponding imagery task, which lasted for 4 s. Following each imagery task, the subject was asked to actually perform the previously imagined action once within 3 s. The auditory cue for the actual action was ``start''. Since the somatosensory cortex lies adjacent to the motor cortex in the central region of the brain, we expected that introducing localized somatosensory cues would further improve our results, which led to the next protocol. The graphical representation of the protocol timings is shown in Fig. \ref{figc}. \begin{figure} \includegraphics[width=\textwidth]{LARA-Actual.JPG} \caption{Graphical representation of the timings of protocol C. An auditory cue lasting less than 0.5 sec was given asking the subject to start imagining the motor movement. After a period of 5 sec, another auditory cue was given asking the subject to actually perform the action once.} \label{figc} \end{figure} \FloatBarrier \subsection{Protocol D: LA-RA with somatosensory cues} One subject (D2) was recorded using this protocol. This protocol has 3 types of events: two imagery tasks of the left arm (LA) and the right arm (RA), and one rest event. Somatosensory cues were given on the subject's outer wrist in the form of a gentle tap to let the subject know that he/she should perform the imagery task of the corresponding hand. The imagination was to be carried out for 4 s, and a tap on the knee of the same side of the body as the previous cue was given to let the participant stop the imagination and take rest. The graphical representation of the protocol timings is shown in Fig. \ref{figd}. This protocol was designed to test whether the introduction of somatosensory cues would affect the quality of the results obtained. As discussed in section \ref{resprotc}, there is a benefit to using somatosensory cues. \begin{figure} \includegraphics[width=\textwidth]{LARA-Somato.JPG} \caption{Graphical representation of the timings of protocol D. A tap on the wrist was the somatosensory cue for starting the imagination, and a tap on the knee was the cue for stopping the imagination and taking rest.} \label{figd} \end{figure} \FloatBarrier \subsection{Protocol E: LA-RA with actual motion and somatosensory cues} EEG data of two subjects (E1, E6) were recorded using this protocol. This protocol has 4 event types: two imagery tasks of the left arm (LA) and the right arm (RA), and two actual motion tasks of the left arm and the right arm. The cues that indicated the onset of imagination or of actual motion were somatosensory in nature and were given using the blunt side of a pen.
When a somatosensory stimulus was applied to the subject's inner wrist, the subject performed the corresponding imagery task, which lasted for 4 s. A somatosensory cue given on the palm told the subject to perform the actual action. The graphical representation of the protocol timings is shown in Fig. \ref{fige}. \begin{figure} \includegraphics[width=\textwidth]{LARA-Actual-Somato.JPG} \caption{Graphical representation of the timings of protocol E. Pressing the inner wrist with the blunt side of a pen was the somatosensory cue for starting the imagination, and pressing the palm was the cue for performing the actual action.} \label{fige} \end{figure} \FloatBarrier \begin{table}[h!] \caption{List of cues and the corresponding actions.} \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|l|c|} \hline \rowcolor[HTML]{C0C0C0} Type & Cue & Task & Protocol(s) \\ \hline Auditory & ARM & Imagine closing your right arm into a fist & A,B \\ \hline Auditory & FOOT & Imagine wiggling the toes on your right foot & A,B \\ \hline Auditory & REST & Take rest & A,B \\ \hline Auditory & LEFT & Imagine closing your left arm into a fist & C \\ \hline Auditory & RIGHT & Imagine closing your right arm into a fist & C \\ \hline Auditory & START & \begin{tabular}[c]{@{}l@{}}Perform actual action of previously \\ imagined motor action\end{tabular} & B,C \\ \hline Somatosensory & Tap on the LEFT inner wrist & Imagine closing your left arm into a fist & D \\ \hline Somatosensory & Tap on the RIGHT inner wrist & Imagine closing your right arm into a fist & D \\ \hline \multicolumn{1}{|l|}{Somatosensory} & Tap on the LEFT/RIGHT knee & Take rest & D \\ \hline \multicolumn{1}{|l|}{Somatosensory} & \begin{tabular}[c]{@{}c@{}}Poke LEFT inner wrist \\ with the blunt side of a pen\end{tabular} & Imagine closing your left arm into a fist & E \\ \hline \multicolumn{1}{|l|}{Somatosensory} & \begin{tabular}[c]{@{}c@{}}Poke RIGHT inner wrist \\ with the blunt side of a pen\end{tabular} & Imagine closing your right arm into a fist & E \\ \hline Somatosensory & \begin{tabular}[c]{@{}c@{}}Poke LEFT palm with \\ the blunt side of a pen\end{tabular} & Close your left arm into a fist (actual action) & E \\ \hline Somatosensory & \begin{tabular}[c]{@{}c@{}}Poke RIGHT palm with \\ the blunt side of a pen\end{tabular} & Close your right arm into a fist (actual action) & E \\ \hline \end{tabular}% } \label{table:1} \end{table} \section{Results} We compare the performance of the tested protocols by visually inspecting the filters produced by the CSP algorithm. The criterion of evaluation is the correctness of the regions highlighted in the filters, with reference to what is known about the expected activation of the brain for the motor imagery under consideration. It may be noted that the sign of the elements in the spatial filter vectors is not significant and only the absolute value matters. Also, all the topoplots were obtained using the EEG data corresponding to motor imagery alone, although some protocols included actual movements. There are broadly two types of imagery in the protocols tested: arm and foot. A brief sketch of the CSP computation underlying these topoplots is given below.
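The sketch assumes band-pass filtered, epoched EEG stored as arrays of shape (trials, channels, samples); the trace normalization, the \texttt{scipy} generalized eigendecomposition and all variable names are illustrative choices rather than the exact pipeline used for our recordings.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def class_covariance(epochs):
    covs = []
    for x in epochs:                      # x: (n_channels, n_samples)
        c = x @ x.T
        covs.append(c / np.trace(c))      # trace-normalized covariance
    return np.mean(covs, axis=0)

def csp(epochs_a, epochs_b, n_pairs=3):
    ca, cb = class_covariance(epochs_a), class_covariance(epochs_b)
    # generalized eigenvalue problem  ca w = lambda (ca + cb) w
    lam, w = eigh(ca, ca + cb)
    order = np.argsort(np.abs(lam - 0.5))[::-1]   # most discriminative first
    w = w[:, order]                               # columns: spatial filters
    patterns = np.linalg.pinv(w).T                # columns: spatial patterns
    return w[:, :2 * n_pairs], patterns[:, :2 * n_pairs]
\end{verbatim}
The absolute values of a selected filter (or pattern) column, interpolated over the scalp electrode positions, yield topoplots of the kind shown in the following subsections; as noted above, only the absolute values are meaningful.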
\subsection{RA-RF based protocols} According to the literature \cite{rcsp}, the CSP filter plots for clenching the right arm into a fist, when compared with wiggling the toes of the right foot (RA-RF protocol), indicate activation over the left hemisphere of the brain along electrodes ``C3'', ``C5'' and ``CP3''. The CSP filter plots for wiggling the toes of the right foot, when compared with clenching the right arm, indicate activation along the central electrode ``Cz''. \begin{figure} \centering \includegraphics[width=9cm]{elec_loc.JPG} \caption{10/10 EEG electrode placement system. The dots correspond to the positions of the electrodes on the scalp. The names of the relevant electrodes are also given.} \end{figure} \subsubsection{Protocol A: RA-RF with auditory cues} As shown in Figs. \ref{fig8} (a) and (b), the topoplots obtained highlight the regions expected for the imagery pair RA-RF. However, the regions are not very distinctly defined. This observation was common to all subjects who participated in this protocol. It could be because the subjects found it difficult to imagine the motor action without actually performing it. It is also likely affected by the fact that the subjects were unable to stay focused on the imagery for extended periods of time. Taking these into account, changes were made to the protocol. \begin{figure}[h!] \centering \subfloat[]{ \includegraphics[width=6cm]{A3_1.jpeg}} \subfloat[]{ \includegraphics[width=6cm]{A3_2.jpeg}} \caption{CSP filter obtained for subject A3 for (a) right arm and (b) right foot motor imageries under protocol A. Under this protocol, there were three events, namely, imagery of right arm movement, imagery of right foot movement and the rest state.} \label{fig8} \end{figure} \subsubsection{Protocol B: RA-RF with auditory cues, followed by actual motion}\label{resprotb} As seen in Fig. \ref{fig9}, the introduction of actual motion into the protocol, following every single trial of imagery, did yield better results. The topoplots exhibit more sharply defined regions, and the subjects also reported that it was easier to stay focused through the trials, since they were instructed to perform the actual movement. \begin{figure}[h!] \centering \subfloat[]{ \includegraphics[width=6cm]{B4_1.jpeg}} \subfloat[]{ \includegraphics[width=6cm]{B4_2.jpeg}} \caption{CSP filter obtained for subject B4 for (a) right arm and (b) right foot motor imageries, under protocol B. Under this protocol, there were four events, namely, imagery of right arm and right foot movement, and actual movement of the right arm and the right foot.} \label{fig9} \end{figure} \subsection{LA-RA based protocols} For the clenching of a hand into a fist, we expect activation predominantly centered around the central line electrodes (``C5'', ``C3'', ``C1'', ``Cz'', ``C2'', ``C4'' and ``C6''). This activation is also known to show contralateralization with respect to the left or right arm in imagery. In other words, for imagery related to the left arm, the right side of the brain is activated more, and vice-versa for the right arm. This lateral symmetry of activity for imagery of the left or right arm helps in visually evaluating the performance of the protocols in question. \subsubsection{Protocol C: LA-RA with auditory cues, followed by actual motion}\label{resprotc} For this protocol and those to follow, a reduced electrode set was considered to reduce dimensionality. This choice was based on our understanding of the regions within which the two motor imageries (left arm and right arm) are expected to lie. The topoplots of the filters (Fig. \ref{fig10}), obtained with this protocol, are very close to the ones reported in the literature. \begin{figure}[h!]
\centering \subfloat[]{ \includegraphics[width=6cm]{C6_1.jpeg}} \subfloat[]{ \includegraphics[width=6cm]{C6_2.jpeg}} \caption{CSP filter obtained for subject C6 for (a) right arm and (b) left arm motor imageries, under protocol C. Under this protocol, there were four events, namely, imagery of left arm and right arm movement, and actual movement of the left arm and the right arm. Also, only a reduced set of electrodes was considered for this protocol.} \label{fig10} \end{figure} \FloatBarrier \subsubsection{Protocol D: LA-RA with somatosensory cues} As seen in the topoplots in Fig. \ref{fig11}, the introduction of localized somatosensory cues, as detailed earlier, helps the subject stay focused during the imagery and leads to better results. \begin{figure}[h!] \centering \subfloat[]{ \includegraphics[width=6cm]{D2_1.jpeg}} \subfloat[]{ \includegraphics[width=6cm]{D2_2.jpeg}} \caption{CSP filter obtained for subject D2 for (a) right arm and (b) left arm motor imageries, under protocol D. Under this protocol, there were three events, namely, imagery of left arm and right arm movement, and the rest state. Instead of auditory cues, localized somatosensory cues were used. Also, only a reduced set of electrodes was considered for this protocol.} \label{fig11} \end{figure} \FloatBarrier \subsubsection{Protocol E: LA-RA with somatosensory cues, followed by actual motion} Considering the benefits of somatosensory cues seen in protocol D, protocol E was designed to combine protocols C and D. It was tested on 2 of the 5 subjects who participated in protocol C. The improvements do appear to compound, as seen in the topoplots for this protocol (Fig. \ref{fig12}). The activations are exactly where they are expected, and the plots themselves have sharp, well-defined regions, which are considered favorable traits. \begin{figure}[h!] \centering \subfloat[]{ \includegraphics[width=6cm]{E6_1.jpeg}} \subfloat[]{ \includegraphics[width=6cm]{E6_2.jpeg}} \caption{CSP filter obtained for subject E6 for (a) right arm and (b) left arm motor imageries, under protocol E. Under this protocol, there were four events, namely, imagery of left arm and right arm movement, and actual movement of the left arm and the right arm. Instead of auditory cues, localized somatosensory cues were used in this protocol. Also, only a reduced set of electrodes was considered for this protocol.} \label{fig12} \end{figure} \FloatBarrier \section{Discussion} Across the four different cue conditions of our protocols, we have found CSPs for motor imagery with auditory cues, with auditory cues followed by actual movement, with somatosensory cues, and with somatosensory cues followed by actual movement. It is interesting to note that with somatosensory cues, the regions activated for motor imagery are more focused, as is evident from the CSP topoplots. The regions activated are consistent with previous experiments based on EEG \cite{pfurtscheller2001motor,mcfarland2000mu} and fMRI \cite{hanakawa2003functional,sharma2008mapping} that localized motor imagery activity. In our experiment, motor imagery with auditory cues for the right foot was concentrated close to the center of the topoplot (near ``Cz''). This result is consistent with a functional near infrared spectroscopy (fNIR) study \cite{batula2017comparison} in which bilateral activations were visible for limb movement imagery. Saimpont et al. have reported that auditory cues are helpful for greater visualization of motor imagery \cite{saimpont2013motor}.
The regions activated in our study are the intraparietal sulcus (IPS), the supramarginal gyrus and the precentral sulcus, which were also activated during motor imagery in an fMRI-based experiment \cite{sharma2008mapping}. Increased activity of the medial prefrontal cortex (mPFC) (sensorimotor area) has been reported for motor imagery. Significant ipsilateral activations were seen in the brain region represented by the ``P4''/``CP4'' channels in patients recovering from motor neuron disease \cite{bundy2017contralesional}. Somatosensory cues were shown to improve motor imagery performance measured with motor evoked potentials \cite{bonassi2017provision}. Anterior and posterior Brodmann area (BA) 4 are involved during motor imagery, as reported earlier in an fMRI study on healthy subjects \cite{sharma2008mapping}. However, with 64-channel EEG acquisition systems, it is hard to report focused cluster-based activation in BA 4 due to the low spatial resolution. \section{Conclusions} The results obtained certainly show promise. Changing the protocol to include actual movement following the imagery helps improve the results. This arises from the fact that healthy subjects are unlikely to have been in a situation where their brain exercised motor planning (i.e., attempted to move a certain limb) but was unable, or did not want, to move the limb. Thus, healthy subjects find it difficult to imagine only the planning without actually performing the associated movement later. On the other hand, allowing the subjects to move after the trial ensures that the trial captures the activity of the brain during planning. The results also seem to indicate that somatosensory cues stimulate the subjects better, which results in more focused motor imagery. From an evolutionary perspective, the somatosensory and motor areas are in close association, which leads to diffuse activation in the pre- and post-central gyrus areas. This not only helps in inducing activity in the same part of the brain, but also makes it easier for the subject to localize. A subject who is touched on the hand can easily resolve where the touch occurred and can quickly imagine motor actions with the same hand. This resolution based on localization, we speculate, results in better quality motor imagery. \section{Limitations} Due to the approximation of locations in the EEG cap, signals from ``C3'' and ``C4'' reflect activity from both the motor and the somatosensory cortex. Making a distinction between motor and somatosensory activity was not feasible with EEG. \bibliographystyle{splncs04}
\section{Introduction} Single channel speech separation is a fundamental problem in speech processing, which has seen tremendous advances in recent years. The main neural architectures can be divided into two categories: (i) spectral based~\cite{wang2019deep,li2019spectral, wang2018end} and (ii) time domain based~\cite{luo2018tasnet, nachmani2020voice, zeghidour2020wavesplit}. Currently, the latter category leads with respect to the obtained accuracy. Since the order of the speakers at the output of the neural network is arbitrary, permutation invariant training is performed. Most of the neural architectures for speech separation use the permutation invariant training (PIT) loss \cite{yu2017permutation} or its extension to the utterance level (uPIT)~\cite{kolbaek2017multitalker}. Both variants have a computational complexity of $O(C!)$, where $C$ is the number of speakers. As a result, it is not feasible to run PIT on more than ten speakers. In this work, we propose a novel method to train on a large number of speakers with a lower complexity of $O(C^3)$, by using the Hungarian algorithm. The Hungarian algorithm finds the optimal permutation in terms of the minimal sum of pairwise losses, thus matching pairs of output and target signals. In order to enable the separation network to deal with a large number of speakers, we further introduce an architecture that combines two distinct approaches to separation networks, LSTMs and dilated convolutional layers. In our experiments, our method separates up to $20$ speakers, which, as far as we can ascertain, is twice the number tackled by any existing method. Moreover, we show that our method improves the previous state of the art separation results for separating $5$ and $10$ speakers. \section{Related Work} Single channel speech separation has been explored using classical approaches~\cite{martin2018single,ernst2018speech} and, more recently, using deep learning methods. In \cite{erdogan2015phase} an LSTM neural network with a phase sensitive loss function was introduced. An improvement in SDR was demonstrated on the CHiME-2 \cite{vincent2013second} dataset. In \cite{hershey2016deep} a neural separation network with a clustering-based embedding was introduced, presenting results for the separation of two speakers and introducing the WSJ-2mix dataset that was extensively used by followup work. This work was further extended in \cite{isik2016single} by extracting an embedding of spectrogram segments and estimating a mask for the separation part. Results were provided for two and three speakers, and SDR improvements of $10.3$ dB and $7.1$ dB were obtained for WSJ-2mix and WSJ-3mix, respectively. In \cite{chen2017deep}, the deep attractor network was introduced. Attractor points in the embedding space were used to obtain the time-frequency bins for each speaker. The improvement on the WSJ-mix dataset was 5.49\%. Luo et al. \cite{luo2018tasnet} introduced TasNet, which is a time domain encoder-decoder neural architecture for the single channel speech separation problem. They showed an SDR improvement of 11.1 dB over the state of the art for the WSJ-2mix dataset. Wang et al. \cite{wang2018end} proposed a neural architecture that separates the speakers in both the time and the frequency domains simultaneously. They presented an improvement of 13.2 dB SDR on the WSJ-2mix dataset.
The work of \cite{luo2018tasnet} was further improved by ConvTasNet \cite{luo2019conv}, which employed a dilated convolutional neural network and showed an SDR improvement of 15.6 dB on the WSJ-2mix dataset. Another improvement, based on an LSTM network, was introduced in \cite{luo2019dual}, where the dual-path recurrent neural network (DPRNN) architecture was employed to model extremely long sequences. It showed an SDR improvement of 18.08 dB on the WSJ-2mix dataset. In \cite{nachmani2020voice} a separation network with $MulCat$ blocks was introduced. The proposed method also removed the masking sub-network, leading to an improvement of 20.12 dB on the WSJ-2mix dataset. Furthermore, the WSJ-mix dataset was extended to include mixtures of $5$ speakers, for which the SDR improvement was 10.6 dB. \cite{shi2020toward} combined DPRNN and TasNet; for the WSJ-5mix dataset they showed an SDR improvement of 10.41 dB, and 11.14 dB SDR improvement for online remixing. Zeghidour et al. \cite{zeghidour2020wavesplit} introduced a neural separation network that infers a representation for each speaker, by performing clustering, and uses it to separate the mixture. They showed an SDR improvement of 22.2 dB on the WSJ-2mix dataset. Since our work builds upon the $MulCat$ network architecture~\cite{nachmani2020voice}, we recap its major components. The network consists of an encoder, a separation module and a decoder. The encoder and decoder are simple 1D convolutions. The separation module starts with a chunking module, which cuts the signal into chunks in time. Then, a series of doubled $MulCat$ blocks is applied. During training, a multi-scale loss is employed: after each doubled $MulCat$ block, the activations are reconstructed by the decoder into audio signals and fed into the loss function. The method also uses the Scale-Invariant Signal-to-Noise Ratio (SI-SNR) loss, which is a slight improvement over the traditional SDR loss. Another line of work for speech separation uses beamformers and introduces extensions of the minimum variance distortionless response (MVDR) beamformer~\cite{markovich2009multichannel}. A neural beamformer was introduced in \cite{sainath2015speaker} and further improved in \cite{luo2019fasnet} for the speech separation problem. A follow up work introduced the linearly constrained minimum variance (LCMV) beamformer \cite{laufer2020global}. In \cite{tachibana2020towards} the SinkPIT loss was introduced, a variant of the PIT loss based on Sinkhorn's matrix balancing algorithm. It reduces the complexity of the PIT loss from $O(C!)$ to $O(kC^2)$, where $k$ is set to $200$. It is important to note that the chosen permutation is only an approximation of the optimal permutation. In another work, a probabilistic PIT loss, which considers the output permutation as a discrete latent random variable, was introduced~\cite{yousefi2019probabilistic}. \subsection{Hungarian Algorithm} The \textit{linear sum assignment} problem (also known as the \textit{assignment problem}) is the task of assigning $C$ agents to $C$ tasks, such that each agent is assigned to exactly one task and the total cost of the agents performing the tasks is minimal.
In other words, given a $C$-by-$C$ matrix of costs, $M$, one for each agent-task pair, find a permutation $\pi$ of the agents such that the sum of the costs of the paired agents and tasks is minimal: \vspace{-3mm} \begin{equation} \pi = \underset{\pi \in \Pi_C}{\operatorname{argmin}}\sum_{i=1}^C M_{i,\pi(i)} \end{equation} A naive solution is to iterate over all $C!$ possible permutations. Fortunately, an optimal and polynomial-time algorithm that solves the assignment problem was proposed in $1955$ by Harold Kuhn \cite{kuhn1955hungarian}, reviewed in $1957$ by James Munkres \cite{munkres1957algorithms}, and is mostly known as the \textit{Hungarian Algorithm}. The initial time complexity of the algorithm was $O(C^4)$, and it was later reduced to $O(C^3)$~\cite{tomizawa1971some}. Simply put, the algorithm starts by trying to find an obvious permutation. If that fails, it goes on to make modifications to the input matrix in order to find a valid and optimal permutation. The number of modification iterations needed to find the solution is indicative of how well the permutation fits the data compared to alternative permutations. In other words, the more iterations needed until convergence, the less significant is the optimal permutation. \section{Method} \begin{figure*}[t] \centering \begin{tabular}{c@{~}c} \includegraphics[width=.725\textwidth,height=.4\textheight,keepaspectratio]{ours.png}\\ (i) \\ \includegraphics[width=.725\textwidth,height=.4\textheight,keepaspectratio]{arch.png}\\ (ii) \\ \end{tabular} \smallskip \caption{(i) The proposed Hungarian loss. A $C$-by-$C$ matrix $M$ of SI-SNR losses between output and target pairs is computed. $M$ is fed into the Hungarian algorithm, which efficiently finds the optimal permutation of target signals, $\pi$. (ii) The proposed separation network architecture. The novel components are the added Conv blocks and the Hungarian loss, which replaces the PIT loss.} \vspace{-.41cm} \label{fig:overall} \end{figure*} This work extends the work in \cite{nachmani2020voice} through a number of new contributions. First, we introduce the Hungarian loss, which replaces the PIT loss and gives an optimal solution to the permutation issue with a much lower time complexity, $O(C^3)$, which allows training separation networks for many speakers. Second, we introduce a new network architecture that uses stacked dilated convolutions before each pair of $MulCat$ blocks of \cite{nachmani2020voice}. The overall method is depicted in Figure \ref{fig:overall}. \subsection{Hungarian Loss} A single-channel speech separation network takes an audio signal that contains a mixture of $C$ speakers and outputs $C$ audio signals, each optimized to contain a separate speaker. During training, the network outputs the separated audio signals in an arbitrary order. Thus, in order to compute a meaningful loss, an alignment, i.e. a permutation, needs to be recovered between the outputs of the network and the separated target signals. One way to find the right permutation is to iterate over all possible $C!$ permutations and choose the one which gives the lowest mean loss value on the pairs (i.e. PIT). The computational cost of PIT is negligible when $C$ is small, in comparison to the other parts of the network. However, it makes training on a large number of speakers impossible (for instance, for 20 speakers PIT needs to check $20!\approx2.4\times 10^{18}$ different permutations). To address this issue, we formulate the task of finding the permutation which minimizes the loss function as a \textit{linear sum assignment} problem. Given the $C$ output signals and $C$ target signals, we calculate the pairwise loss value, $\hat{\ell}(s_i, \hat{s}_{j})$, on every pair of output ($\hat{s}_{j}$) and target ($s_i$) signals, which gives a $C$-by-$C$ matrix of losses, $M$. Next, we assign each output a unique target and vice-versa. Such an assignment is equivalent to choosing $C$ elements of the matrix, such that each chosen element is in a different row and column from all others. An optimal assignment minimizes the sum of the values of the chosen elements. The PIT loss can then be viewed as a brute-force solution to this problem, iterating over all possible solutions: \vspace{-4mm} \begin{equation} \ell(s, \hat{s}) = \min_{\pi \in \Pi_C}~\frac{1}{C}\sum_{i=1}^C \hat{\ell}(s_i, \hat{s}_{\pi(i)}) \end{equation} By running the Hungarian Algorithm on $M$, we efficiently find the optimal permutation in polynomial time instead of the brute-force, factorial-time PIT. Note that the assignment algorithm does not need to be differentiable, since we find a permutation of the targets by which we calculate the loss, meaning that the process is separate from the backward calculation of gradients.
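To make this concrete, the following minimal sketch (NumPy/SciPy; not the actual training code of this work, and the helper names are ours) computes the pairwise cost matrix from a standard zero-mean SI-SNR and recovers the optimal permutation with an off-the-shelf $O(C^3)$ assignment solver.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def si_snr(target, estimate, eps=1e-8):
    # standard scale-invariant SNR between two 1-D signals (zero-mean)
    target = target - target.mean()
    estimate = estimate - estimate.mean()
    s_target = (estimate @ target) / (target @ target + eps) * target
    e_noise = estimate - s_target
    return 10.0 * np.log10((s_target @ s_target + eps) / (e_noise @ e_noise + eps))

def hungarian_permutation(targets, estimates):
    # targets, estimates: arrays of shape (C, n_samples)
    C = len(targets)
    # cost = negative SI-SNR, so minimizing the cost maximizes the SI-SNR
    M = np.array([[-si_snr(targets[i], estimates[j]) for j in range(C)]
                  for i in range(C)])
    rows, cols = linear_sum_assignment(M)   # optimal assignment in O(C^3)
    return cols, M[rows, cols].mean()       # permutation and matched loss
\end{verbatim}
In practice the returned permutation is only used to reorder the targets before the loss and its gradients are computed, which is why, as noted above, the matching step itself need not be differentiable.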
\subsection{Model} We shift our focus to solving the task of separating mixtures of many ($C\geq10$) speakers. As $C$ increases, the task of separating the mixtures becomes more challenging. Thus, we propose a new and suitable network architecture. To this end, we modify the $MulCat$-based architecture~\cite{nachmani2020voice} by adding stacked dilated convolutions before each pair of $MulCat$ blocks. In addition, we increased some of the network's hyper-parameters to achieve the larger capacity needed for the harder tasks. The dilated convolution scheme is borrowed from \cite{luo2019conv}: we use $8$ \textit{1-D Conv block}s stacked on top of each other, with dilation factors $2^{i-1}$ for $i\in\{1,2,...,8\}$. This corresponds to a single column in the separation module of \cite{luo2019conv}. The rest of the model is in accordance with \cite{nachmani2020voice}, i.e. using the same encoder, chunking, $MulCat$ blocks, decoder and SI-SNR loss. We also adopt the multi-scale loss scheme, which applies the loss function after each double $MulCat$ block, instead of just the last one. We tuned some hyper-parameters of the architecture to accommodate the harder tasks when $C$ is large: $N$, the number of features, was increased from $128$ to $256$; $L$, the encoder's kernel size, was increased from $8$ to $16$; $H$, the number of hidden units in the LSTMs, was increased from $128$ to $256$; and finally $R$, the number of double $MulCat$ blocks, was increased from $6$ to $7$. A similar hyper-parameter adjustment was done in \cite{tachibana2020towards}. \begin{figure}[h] \centering \includegraphics[width=.4\textwidth,height=.4\textheight,keepaspectratio]{num_iters.png} \caption{The average number of iterations per example performed by the Hungarian algorithm. The average decreases as training progresses, which indicates that the separation is improving.
This also means that the runtime of the Hungarian algorithm becomes even shorter.} \label{fig:HungaryWeight} \vspace{-.41cm} \end{figure} \vspace{-2mm} \begin{figure*}[t] \centering \begin{tabular}{c} \includegraphics[width=0.9\linewidth]{gt8.png} \\ \includegraphics[width=0.9\linewidth]{sink-pit8.png} \\ \includegraphics[width=0.9\linewidth]{ours8.png} \\ \end{tabular} \caption{Separation results for a mixture of $10$ speakers. First row: mel spectrograms of the ground truth signals. Second row: mel spectrograms of the SinkPIT signals. Third row: mel spectrograms of the outputs of our method. The last row shows the SI-SDR improvement for the SinkPIT method and our method. The x-axis is sorted by the SI-SDRi in descending order.} \label{fig:spec} \vspace{-.41cm} \end{figure*} \begin{figure}[h] \centering \begin{tabular}{cc} \includegraphics[width=0.5\linewidth]{mat_Ours.png} & \includegraphics[width=0.5\linewidth]{mat_SinkPIT.png} \\ (a) & (b) \\ \end{tabular} \caption{The pairwise SI-SDR matrix $M$, sorted in descending order, for the same input sample as in Figure \ref{fig:spec}. (a) Our method. (b) SinkPIT \cite{tachibana2020towards}.} \vspace{-.41cm} \label{fig:mat} \end{figure} \section{Experiments} \noindent{\bf Comparison to state of the art\quad} We show results on datasets derived from the WSJ corpus \cite{garofolo1993csr} and LibriSpeech \cite{panayotov2015LibriSpeech}. For WSJ, we use the 5-speaker mix introduced in \cite{nachmani2020voice}, which follows the same procedure as \cite{hershey2016deep}, i.e. $30$ hours of speech from the training set si$\_$tr$\_$s were used to create the training and validation sets. The five speakers were randomly chosen and combined with random SNR values between $0$ and $5$ dB. The test set is created from si$\_$et$\_$s and si$\_$dt$\_$s with 16 speakers that differ from the speakers of the training set. For LibriSpeech, we use the LibriMix \cite{cosentino2020librimix} datasets with mixes of 5, 10, 15 and 20 speakers. LibriMix offers mixtures of 2 and 3 speakers from LibriSpeech, and we used the given scripts to create mixtures of 5, 10, 15 and 20 speakers. We used LibriMix's given parameters to obtain a sample rate of $8$ kHz and clean mixtures (no noise added), and each sample is cut in length according to the shortest sample in the mixture. We also use the augmentation process of \cite{tachibana2020towards}. A separate model is trained for each dataset, with the corresponding number of output channels. Training was done using the Adam optimizer \cite{kingma2014adam}, with batch size $32$ and a learning rate of $1e-3$, which was multiplied by $0.95$ every two epochs. During training, each sample is cut into 4-second segments. Table~\ref{tab:results} compares the results of our model to those of other models, using the SI-SDRi metric. As can be seen, our model outperforms the other methods by a large margin. For Libri-5mix and Libri-10mix, we improve the previous results by $1.89$ dB and $1.33$ dB, respectively. Interestingly, the previous state of the art result for $Libri5Mix$ is obtained with $MulCat$ \cite{nachmani2020voice}, whereas for $Libri10Mix$ it is obtained by \cite{tachibana2020towards}, which uses SinkPIT. This is due to the fact that the PIT loss in $MulCat$ is prohibitive for 10 speakers. For WSJ-5mix, the SDR of our method improves by more than $2$ dB over the previous method. We are the first to present results for the $Libri15Mix$ and $Libri20Mix$ datasets, and running previous work on them is not practical.
Sample results are shared online at \url{https://shakeddovrat.github.io/hungarian/}. \noindent{\bf Hungarian loss vs. alternative losses\quad} In order to show the benefits of using the Hungarian algorithm as opposed to PIT, we report the training duration of an epoch for each dataset in Table \ref{tab:timing}. As shown, on a $5$ speaker mix both methods take about the same time. However, for $C=10$, the Hungarian method is about $9$ times faster on our model. For $C\geq15$, the Hungarian method is still fast, while PIT is unusable, as it failed to complete a single epoch after days of running. To our knowledge, SinkPIT can run on $20$ speakers but only finds an approximation of the optimal permutation, whereas our method is both optimal and fast (the source code was not published). In Figure~\ref{fig:HungaryWeight}, we present the average number of iterations per example that were performed by the Hungarian algorithm for the various datasets. We can observe two phenomena: (i) as the training proceeds and the neural network improves the separation performance, the average number of iterations decreases; (ii) the average number of iterations is lower when the number of speakers in the mixture is lower. Both of these observations stem from the fact that the Hungarian algorithm needs more iterations to converge when the outputs of the network are still mixed or noisy. In Figure~\ref{fig:spec}, we plot the mel spectrograms of typical samples, as in \cite{tachibana2020towards}. This is shown for a mixture of $10$ speakers from the $Libri10Mix$ dataset. As can be seen for speakers $4$ and $8$, our method provides a cleaner mel spectrogram. Furthermore, the SI-SDRi difference is 4.9 dB and 2.4 dB for speakers $4$ and $8$, respectively (there is a significant improvement for all speakers). In Figure \ref{fig:mat}, we plot the pairwise negative SI-SDR matrix $M$, sorted in descending order, in comparison to the results of the SinkPIT system~\cite{tachibana2020towards}. Evidently, the entropy in our method is lower, especially in the last rows, i.e., it shows much less confusion compared to the baseline method. \noindent{\bf Ablation study\quad} We ran an ablation analysis in order to understand the contribution of each component of our method. The results are summarized in Table \ref{tab:ablation}, where -Convs means without the 1D convolutions and -Hungarian means using the PIT loss. As can be seen, adding the dilated convolutions to the architecture improves the performance. Moreover, without the Hungarian algorithm, it is practically impossible to train on $15$ or more speakers. For $10$ speakers, using PIT drastically diminishes performance due to the longer training time. On the other hand, for $5$ speakers, PIT and the Hungarian loss have similar performance, since both find the optimal permutation in a similar runtime. \begin{table}[] \caption{SDR improvement performance of various models versus number of speakers and datasets. 'X' indicates simulations that failed to complete a single epoch, due to the high complexity of the PIT loss.
\textit{x}Mix is LibriMix with \textit{x} speakers.} \label{tab:results} \begin{tabular}{lcccc|c} \toprule Model & \textbf{5Mix} & \textbf{10Mix} & \textbf{15Mix} & \textbf{20Mix} & \textbf{WSJ-5} \\ \midrule ConvTasNet & - & - & - & - & 6.8 \\ DPRNN \cite{luo2019dual} & - & - & - & - & 8.6 \\ MulCat \cite{nachmani2020voice} & 10.83 & 4.74 & X & X & 10.6 \\ TasTas \cite{shi2020toward} & - & - & - & - & 11.14 \\ SinkPIT \cite{tachibana2020towards} & 9.39 & 6.45 & - & - & - \\ \textbf{Ours} & \textbf{12.72} & \textbf{7.78} & \textbf{5.66} & \textbf{4.26} & \textbf{13.22} \\ \bottomrule \end{tabular} \end{table} \begin{table}[] \caption{Training run times of PIT and the Hungarian algorithm, in minutes per epoch. 'X' indicates simulations that failed to complete a single epoch, due to the long run times of the PIT loss.} \label{tab:timing} \begin{tabular}{lcccc} \toprule \textbf{Dataset} & \textbf{\#Spkrs} & \textbf{\#Perms} & \textbf{PIT} & \textbf{Hungarian} \\ \midrule WSJ-5mix & 5 & 120 & 65 & 62 \\ Libri-5Mix & 5 & 120 & 139 & 140 \\ Libri-10Mix & 10 & $\approx3.6\times10^{6}$ & 462 & 52 \\ Libri-15Mix & 15 & $\approx1.3\times10^{12}$ & X & 36 \\ Libri-20Mix & 20 & $\approx2.4\times10^{18}$ & X & 29 \\ \bottomrule \vspace{-.41cm} \end{tabular} \end{table} \begin{table}[] \caption{Ablation analysis: SDR performance for the LibriMix and WSJ-mix datasets. \textit{x}Mix is LibriMix with \textit{x} speakers. 'X' indicates simulations that failed to complete a single epoch, due to the long run times of the PIT loss.} \label{tab:ablation} \begin{tabular}{lcccc|c} \toprule \textbf{Model} & \textbf{5Mix} & \textbf{10Mix} & \textbf{15Mix} & \textbf{20Mix} & \textbf{WSJ-5} \\ \midrule Ours-Convs-Hungarian & 11.10 & 4.47 & X & X & 12.18 \\ Ours-Hungarian & 12.53 & 4.84 & X & X & 13.07 \\ Ours-Convs & 11.19 & 5.89 & 5.14 & 4.10 & 12.10 \\ \textbf{Ours} & \textbf{12.72} & \textbf{7.78} & \textbf{5.66} & \textbf{4.26} & \textbf{13.22} \\ \bottomrule \end{tabular} \end{table} \section{Conclusions} In this work, we provide a method for single channel sound separation with a large number of sources. Ours is the first work to show that one can separate a mixture of $20$ speakers from a single channel recording. Our solution is based on the Hungarian algorithm, which efficiently finds the optimal permutation among the $C!$ possible permutations, and on a new network architecture that adds stacked 1-D convolutions and additional capacity to the state of the art architecture. \subsection*{Acknowledgement} We thank Hideyuki Tachibana for the helpful discussion. The contribution of Eliya Nachmani is part of a Ph.D. thesis research conducted at Tel-Aviv University. \bibliographystyle{IEEEtran}
\section{Introduction} Several models of the nucleon-nucleon interaction have been presented and applied to nuclear physics problems. The G-matrix interaction plays a central role in calculations of nuclear structure from the nucleon-nucleon ($NN$) interaction, as it is the basic two-body interaction in many-body systems. It is well known that the nucleon-nucleon interaction is described by the quark cluster model (QCM). This model gives a good description of nucleon-nucleon scattering~\cite{QCM1,QCM2}. The main feature of the interaction is nonlocality at short distances. This short-range potential has a nonlocal Gaussian soft core of the form, \[ V({\bf r'},{\bf r})\propto\exp [-(\frac{{\bf r}-{\bf r}'}{b})^2]. \] This type of nonlocality is expected from the quark antisymmetrization between the baryons. The Gaussian form reflects the quark wave function inside the baryon, and the nonlocality range parameter $b$ is determined by the size of the baryon. The nonlocal exchange force can reproduce the short-range repulsion of the nuclear force at low energy. The repulsive ``core'' in QCM is a soft core with energy dependence in its equivalent local form. It allows the $NN$ relative wave function to penetrate the core region and thus gives a milder form factor of the deuteron (or nuclei) at large momenta~\cite{QCM3}. Because the quark exchange nonlocality arises only at short distances, the central part of the potential is most affected. In fact, the $NN$ repulsion is roughly spin-isospin independent. In this paper, we would like to study the significance of the nonlocality of quark-model origin in nuclear structure calculations. Since we cannot see signals of such nonlocality in on-shell properties, it would be interesting if we could see them in nuclear phenomena. To do so, we calculate G-matrix elements from nonlocal potentials. In conventional nuclear physics, the G-matrix interaction is used as the basic two-body interaction for the nuclear many-body problem. Hence, if there were signals of quark-originated nonlocality in nuclear phenomena, they should appear in the G-matrix interaction as well. Previously, effects of symmetry restoration of QCD on G-matrix elements were examined~\cite{HoTo}. Thus in the present study we investigate another possible QCD-oriented effect in nuclear physics. This paper is organized as follows. In section 2, a method to calculate G-matrix elements for finite nuclei is given briefly. In section 3, we introduce the Tamagaki G3RS (Gaussian soft core potential with three ranges) potential~\cite{TAMA} as a local potential. Then we construct a nonlocal Gaussian soft-core potential, which has the same intermediate- and long-range parts as the original local potential. The short-range part is replaced by a nonlocal potential with free parameters, which are determined so as to reproduce the same scattering phase shifts as the local potential. In section 4, numerical results for the G-matrix elements are given both for the local and nonlocal potentials. A summary is given in section 5. \section{Nuclear G-matrix for Finite Nuclei} The G-matrix is the most fundamental two-body interaction which incorporates the simplest many-body effect, i.e., the Pauli blocking. It is obtained by solving the Bethe-Goldstone equation~\cite{BETHE}: \begin{equation} G(E)=v+v\frac{Q}{E-H_0}G(E). \label{eq:BG} \end{equation} Here $v$ is a nucleon-nucleon interaction in free space, $Q$ the Pauli-blocking operator, and $H_0$ a single-particle hamiltonian.
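Before describing the actual procedure, it may help to note that, once a truncated basis of unperturbed two-body states is chosen and $Q/(E-H_0)$ is represented as a (here diagonal) matrix, eq.~(\ref{eq:BG}) becomes a finite linear system that can be solved directly. The sketch below is only meant to show this structure, with made-up inputs; it is not the Eden--Emery and Barret--Hewitt--MacCarthy procedure employed in the calculation described next.
\begin{verbatim}
import numpy as np

def g_matrix(v, e2, q, E):
    """Schematic solution of G = v + v Q/(E-H0) G in a truncated basis.

    v  : (N, N) potential matrix in the chosen two-body basis
    e2 : (N,)   unperturbed two-body energies (eigenvalues of H0)
    q  : (N,)   Pauli-blocking factors (1 = allowed, 0 = blocked)
    E  : starting energy (assumed not to coincide with any allowed e2)
    """
    # diagonal representation of Q / (E - H0); blocked states contribute zero
    propagator = np.diag(np.where(q > 0, q / (E - e2), 0.0))
    n = v.shape[0]
    # (I - v Q/(E-H0)) G = v
    return np.linalg.solve(np.eye(n) - v @ propagator, v)
\end{verbatim}
In the realistic calculation the intermediate states are two-nucleon states and the starting energy is averaged over the Fermi surface, as explained below.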
We compute G-matrix elements for finite nuclei, specifically for $^{16}{\rm O}$, following the method developed in previous works~\cite{BBH,ATB,HKT}. $H_0$ is chosen to be the harmonic oscillator hamiltonian. The two-body problem is then reduced to a one-body problem in the relative coordinate by employing Eden and Emery's approximation~\cite{EE} for the $Q$ operator. Explicitly, after reducing the Pauli-blocking operator to a function of the relative coordinate, it is assumed to be \begin{equation} \label{Q16} Q(n^\prime) = \left\{ \begin{array}{c c} 1 & {\rm for} \; n^\prime > n+2 \\ 0 & {\rm otherwise} \end{array} \right. \end{equation} where $n$ is the node quantum number for the relative motion of the initial two nucleons and $n^\prime$ that for intermediate states. The Barret-Hewitt-MacCarthy method~\cite{BHM} is then used for the matrix inversion. An additional parameter is introduced for the gap energy, which is chosen to be 40 MeV to achieve a good agreement between the calculated G-matrix elements and empirical ones~\cite{Abrown}. Finally, the starting energy is averaged over the Fermi surface by assuming that the relevant two nucleons are there. G-matrix elements are obtained first in partial wave channels denoted by $^1S_0$, $^1P_1$, etc., which are then transformed into those in interaction channels denoted by SE (Singlet Even), TE (Triplet Even), etc. Matrix elements in the SE and SO (Singlet Odd) channels are just those in the $^1S_0$ and $^1P_1$ channels, respectively. The TE and TNE (Tensor Even) components are obtained from the coupled $^3S_1-^3D_1$ channels: \begin{eqnarray} G({\rm TE}) &=& \langle S \mid G(^3S_1-^3D_1)\mid S \rangle, \\ G({\rm TNE}) &=& \langle S \mid G(^3S_1-^3D_1)\mid D \rangle. \end{eqnarray} The TO, TNO, LSO (LS Odd) and LSE (LS Even) components are obtained from the following relations~\cite{BBH}: \begin{eqnarray} G({\rm TO}) &=& G(^3P_0)+2G(LSO)+4G(TNO),\\ G({\rm TNO}) &=& -\frac{5}{72}[ 2G(^3P_0)-3G(^3P_1)+G(^3P_2) ], \\ G({\rm LSO}) &=& -\frac{1}{12}[ 2G(^3P_0)+3G(^3P_1)-5G(^3P_2) ], \\ G({\rm LSE}) &=& -\frac{1}{60}[ 9G(^3D_1)+5G(^3D_2)-14G(^3D_3) ]. \end{eqnarray} In previous studies~\cite{BBH,ATB,HKT}, the Reid soft-core potential, the Paris potential, and other phenomenological nuclear forces have been applied. We here choose the Tamagaki G3RS potential~\cite{TAMA}, which is a three-range Gaussian with a short-range soft core. We replace the shortest-range part of the original potential by a nonlocal one and apply it to the G-matrix calculation. The resulting G-matrix elements are compared with each other. \section{Gaussian Nonlocal Potential} We use the following Tamagaki potential as the standard local potential, \begin{equation} V_L({\bf r})=V_C(r)+S_{12}V_T(r)+({\bf L}\cdot{\bf S})V_{LS}(r)+W_{12}V_W(r)+{\bf L}^2V_{LL}(r), \label{eq:tam} \end{equation} with \begin{eqnarray*} S_{12}&=&3(\sigma_1\cdot {\bf r})(\sigma_2\cdot {\bf r})/r^2-\sigma_1\cdot\sigma_2 \\ W_{12} &=& 1/2\{ ({\bf\sigma}_1\cdot {\bf L})({\bf\sigma}_2\cdot {\bf L}) +({\bf\sigma}_2\cdot {\bf L})({\bf\sigma}_1\cdot {\bf L}) \} -({\bf\sigma}_1\cdot{\bf\sigma}_2){\bf L}^2/3 \\ &=& {\bf (L\cdot S)}^2 -\{ \delta_{LJ}+({\bf\sigma}_1\cdot{\bf\sigma}_2)/3 \} {\bf L}^2 \end{eqnarray*} where $L$ and $J$ stand for the orbital and total angular momenta, respectively.
The radial functions $V_n(r)$ ($n=C$, $LS$, \ldots) are given by \begin{equation} V_n(r)=\sum_{i=1}^3 V_{ni}\exp[-(r/\eta_{ni})^2], \label{eq:tamrad} \end{equation} where the suffices $i=1,2,3$ refer to the short-range, intermediate-range and long-range parts, respectively. The potential parameters, $V_{ni}$ and $\eta_{ni}$, are determined so as to reproduce the scattering phase shifts and are given in Table \ref{tab:g3rs}~\cite{TAMA}. The nonlocal potential is constructed by replacing the short-range part ($i=1$) of Eq.~(\ref{eq:tamrad}). We consider nonlocality only for the central part, of the form: \begin{equation} V({\bf r'},{\bf r})= V_{NL}({\pi}^{1/2}b)^{-3} \exp [-(\frac{{\bf r}-{\bf r}'}{b})^2- (\frac{{\bf r}+{\bf r'}}{2\eta_{NL}})^2] \label{eq:nlpot} \end{equation} Similar nonlocal terms might appear in the noncentral forces, but for the reason explained in the introduction, we consider the nonlocality only in the central potential. In Eq.~(\ref{eq:nlpot}), the factor $({\pi}^{1/2}b)^{-3} \exp [- (\frac{{\bf r}-{\bf r'}}{b})^2]$ represents the nonlocality of the nuclear force at short distances, with a new parameter $b$ as the range of the nonlocality. When $b\rightarrow 0$, this factor reduces to the delta function, $\delta({\bf r}-{\bf r'})$, and the nonlocal potential becomes a local one. In the quark cluster model, $b$ is proportional to the size of the quark wave function in the nucleon. A typical value of $b$ would be 0.5 fm, which we use in the following calculations. In order to study the effects of nonlocality exclusively, we require the nonlocal potential to reproduce the same scattering phase shifts as the corresponding local potential does. We call such a nonlocal potential a local-equivalent (LE) nonlocal potential. For this, we adjust the potential parameters $V_{NL}$ and $\eta_{NL}$, which we allow to depend on the angular momentum $L$. Phase shifts in various partial waves at scattering energies from $E_{cm} = 0$ to $300$ MeV are fitted. As shown in Figs. 1 - 2, both the local and the LE nonlocal (LENL) potentials reproduce the empirical phase shifts well up to a scattering energy of $\sim$ 300 MeV. In particular, the difference between the local and LENL results is negligibly small. In Figs.~1 and 2, we also present the phase shifts calculated from the nonlocal potentials (NL) with the same strength, $V_{NL}=V_{C1}$, and the same range, $\eta_{NL}=\eta_{C1}$, as the original Tamagaki potential. They show that the nonlocality yields a softer repulsion at large energy. Although we have shown here the phase shifts only for two channels, we find qualitatively the same feature for the other $S$ and $P$ channels. Summarizing, our LE nonlocal potential reads \begin{eqnarray} V_{NL}({\bf r'},{\bf r})&=&V_{NL}({\pi}^{1/2}b)^{-3} \exp [-(\frac{{\bf r}-{\bf r}'}{b})^2- (\frac{{\bf r}+{\bf r'}}{2\eta_{NL}})^2]\nonumber \\ &&+\sum_{i=2}^3 V_{Ci} \exp[-(r/\eta_{Ci})^2]\, \delta({\bf r}-{\bf r}') \label{eq:mnlp} \end{eqnarray} with the parameters given in Table \ref{tab:g3rs} for the local part and in Table \ref{tab:le0.5} for the nonlocal part ($b=0.5$ fm).
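To make the parametrization concrete, the short sketch below evaluates the local three-range central potential of eq.~(\ref{eq:tamrad}) and the short-range nonlocal kernel of eq.~(\ref{eq:mnlp}) for the SE ($l=0$) channel, using the entries of Tables \ref{tab:g3rs} and \ref{tab:le0.5}; the kernel is evaluated along collinear ${\bf r}$ and ${\bf r}'$ for illustration only, and the units (MeV for strengths, fm for ranges) are our assumption.
\begin{verbatim}
import numpy as np

# SE-channel central parameters (Table 1) and nonlocal l=0 parameters (Table 2)
V_C, eta_C = [2000.0, -270.0, -5.0], [0.447, 0.942, 2.5]
b, V_NL, eta_NL = 0.5, 1.45 * 2000.0, 0.894 * 0.447

def v_central(r):
    # local three-range Gaussian central potential, eq. (tamrad)
    return sum(V * np.exp(-(r / eta) ** 2) for V, eta in zip(V_C, eta_C))

def v_nonlocal(r1, r2):
    # short-range nonlocal kernel of eq. (mnlp), collinear r and r'
    norm = (np.sqrt(np.pi) * b) ** (-3)
    return V_NL * norm * np.exp(-((r1 - r2) / b) ** 2
                                - ((r1 + r2) / (2.0 * eta_NL)) ** 2)

r = np.linspace(0.0, 2.0, 9)
print(v_central(r))        # repulsive core at short distance, attraction beyond it
print(v_nonlocal(0.3, 0.3), v_nonlocal(0.3, 0.8))
\end{verbatim}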
\begin{table} \doublerulesep=0pt \begin{center} \begin{tabular}{@{\extracolsep{\fill}}|c|ccc|ccc|cc|c|c|}\hline &$\eta_{C_1}$&$\eta_{C_2}$&$\eta_{C_3}$&$\eta_{T_1}$& $\eta_{T_2}$&$\eta_{T_3}$&$ \eta_{LS_2}$&$\eta_{LS_3}$&$\eta_{W_2}$&$\eta_{LL_2}$ \\ &$V_{C_1}$&$V_{C_2}$&$V_{C_3}$&$V_{T_1}$&$V_{T_2}$&$V_{T_3}$& $V_{LS_2}$&$V_{LS_3}$& $V_{W_2}$&$V_{LL_2}$ \\ \hline $SE$ &0.447&0.942&2.5& & & & & & &0.942 \\ &2000&-270&-5& & & & & & &15 \\ \hline $TO$ &0.447&0.942&2.5&0.447&1.2&2.5&0.447&0.6&0.942&0.942 \\ &2500&-70&1.67&-20&20&2.5&600&-1050&0&0 \\ \hline $SO$ &0.447&0.942&2.5& & & & & & &0.942 \\ &2000&50&10&&&&&&&0 \\ \hline $TE$ &0.447&0.942&2.5&0.447&1.2&2.5&0.447&0.6&0.942&0.942 \\ &2000&-230&-5&67.5&-67.5&-7.5&0&0&-30&30 \\ \hline \end{tabular} \end{center} \caption{Parameters of the Tamagaki G3RS potential} \label{tab:g3rs} \end{table} \begin{table} \begin{center} \begin{tabular}{|r|c|c|}\hline &$\eta_{NL}^{b=0.5}$ &$V_{NL}^{b=0.5}$ \\ \hline $l=0$ &$0.894\times \eta_{C_1}$ &$1.45\times V_{C_1}$ \\ $1$ &$0.894\times \eta_{C_1}$ &$2.9\times V_{C_1}$ \\ $2$ &$0.894\times \eta_{C_1}$ &$6.0\times V_{C_1}$ \\ $3$ &$0.894\times \eta_{C_1}$ &$12.5\times V_{C_1}$ \\ $4$ &$0.894\times \eta_{C_1}$ &$27\times V_{C_1}$ \\ \hline \end{tabular} \end{center} \caption{Parameters of the nonlocal potential with $b=0.5$ fm} \label{tab:le0.5} \end{table} \section{Numerical Results} The G-matrix elements in the harmonic-oscillator basis are presented in Tables \ref{tab:gl}--\ref{tab:gle} for the local (Tamagaki) and LE nonlocal potentials with $b$ = 0.5 fm as functions of node quantum numbers $n$ and $n'$. These are calculated using an oscillator parameter $\hbar\omega=14$ MeV for $^{16}{\rm O}$ and with the gap energy of 40 MeV, which are the same as those used in Refs.~\cite{BBH,ATB,HKT}. 
\begin{table} \doublerulesep=0pt \begin{center} \begin{tabular*}{14cm}{@{\extracolsep{\fill}}cr|cccc} \hline\hline &s s &n=0 &n=1 &n=2 &n=3 \\ \hline singlet &n'=0 &-6.605 &-5.182 &-3.551 &-2.064 \\ even &1 & &-4.564 &-3.251 &-1.831 \\ &2 & & &-2.352 &-1.233 \\ &3 & & & &-0.428 \\ \hline\hline &s s &n=0 &n=1 &n=2 &n=3 \\ \hline triplet &n'=0 &-10.495&-8.725 &-6.488 &-4.401 \\ even &1 & &-7.982 &-6.209 &-4.274 \\ &2 & & &-5.028 &-3.525 \\ &3 & & & &-2.468 \\ \hline\hline &p p &n=0 &n=1 &n=2 &n=3 \\ \hline singlet &n'=0 &2.365 &2.140 &1.768 &1.495 \\ odd &1 & &2.614 &2.575 &2.374 \\ &2 & & &2.905 &2.935 \\ &3 & & & &3.202 \\ \hline\hline &p p &n=0 &n=1 &n=2 &n=3 \\ \hline triplet &n'=0 &0.200 &0.057 &-0.052 &-0.098 \\ odd &1 & &0.039 &-0.005 &-0.031 \\ &2 & & &0.030 &0.059 \\ &3 & & & &0.144 \\ \hline\hline &s d &n=0 &n=1 &n=2 &n=3 \\ \hline tensor &n'=0 &-5.391 &-7.403 &-8.463 &-9.011 \\ even &1 &-2.429 &-4.688 &-6.463 &-7.691 \\ &2 &-0.967 &-2.449 &-4.067 &-5.502 \\ &3 &-0.338 &-1.095 &-2.208 &-3.454 \\ \hline\hline &p p &n=0 &n=1 &n=2 &n=3 \\ \hline tensor &n'=0 &0.772 &0.745 &0.639 &0.535 \\ odd &1 & &0.896 &0.872 &0.782 \\ &2 & & &0.940 &0.908 \\ &3 & & & &0.939 \\ \hline\hline &d d &n=0 &n=1 &n=2 &n=3 \\ \hline LS &n'=0 &-0.034 &-0.048 &-0.057 &-0.062 \\ even &1 & &-0.075 &-0.091 &-0.102 \\ &2 & & &-0.117 &-0.133 \\ &3 & & & &-0.156 \\ \hline\hline &p p &n=0 &n=1 &n=2 &n=3 \\ \hline LS &n'=0 &-0.416 &-0.666 &-0.850 &-0.983 \\ odd &1 & &-0.998 &-1.247 &-1.432 \\ &2 & & &-1.536 &-1.753 \\ &3 & & & &-1.990 \\ \hline\hline \end{tabular*} \end{center} \caption{G-matrix elements from the local potential for $^{16}{\rm O}$} \label{tab:gl} \end{table} \begin{table} \doublerulesep=0pt \begin{center} \begin{tabular*}{14cm}{@{\extracolsep{\fill}}cr|cccc} \hline\hline &s s &n=0 &n=1 &n=2 &n=3 \\ \hline singlet &n'=0 &-6.542 &-5.098 &-3.444 &-1.931 \\ even &1 & &-4.460 &-3.126 &-1.683 \\ &2 & & &-2.213 &-1.076 \\ &3 & & & &-0.260 \\ \hline\hline &s s &n=0 &n=1 &n=2 &n=3 \\ \hline triplet &n'=0 &-10.499&-8.714 &-6.457 &-4.343 \\ even &1 & &-7.962 &-6.171 &-4.212 \\ &2 & & &-4.978 &-3.458 \\ &3 & & & &-2.392 \\ \hline\hline &p p &n=0 &n=1 &n=2 &n=3 \\ \hline singlet &n'=0 &2.372 &2.150 &1.780 &1.507 \\ odd &1 & &2.627 &2.591 &2.392 \\ &2 & & &2.924 &2.955 \\ &3 & & & &3.224 \\ \hline\hline &p p &n=0 &n=1 &n=2 &n=3 \\ \hline triplet &n'=0 &0.206 &0.064 &-0.046 &-0.093 \\ odd &1 & &0.046 &0.000 &-0.028 \\ &2 & & &0.032 &0.056 \\ &3 & & & &0.134 \\ \hline\hline &s d &n=0 &n=1 &n=2 &n=3 \\ \hline tensor &n'=0 &-5.400 &-7.421 &-8.491 &-9.048 \\ even &1 &-2.439 &-4.707 &-6.493 &-7.732 \\ &2 &-0.978 &-2.469 &-4.097 &-5.543 \\ &3 &-0.348 &-1.114 &-2.236 &-3.493 \\ \hline\hline &p p &n=0 &n=1 &n=2 &n=3 \\ \hline tensor &n'=0 &0.772 &0.745 &0.639 &0.537 \\ odd &1 & &0.897 &0.873 &0.784 \\ &2 & & &0.942 &0.912 \\ &3 & & & &0.944 \\ \hline\hline &d d &n=0 &n=1 &n=2 &n=3 \\ \hline LS &n'=0 &-0.034 &-0.048 &-0.057 &-0.062 \\ even &1 & &-0.075 &-0.091 &-0.102 \\ &2 & & &-0.117 &-0.134 \\ &3 & & & &-0.157 \\ \hline\hline &p p &n=0 &n=1 &n=2 &n=3 \\ \hline LS &n'=0 &-0.414 &-0.663 &-0.849 &-0.983 \\ odd &1 & &-0.997 &-1.248 &-1.436 \\ &2 & & &-1.541 &-1.762 \\ &3 & & & &-2.006 \\ \hline\hline \end{tabular*} \end{center} \caption{ G-matrix elememnts from LE nonlocal potential (b=0.5fm) for $^{16}{\rm O}$} \label{tab:gle} \end{table} First we compare G-matrix elements calculated from various nuclear forces. 
For this purpose, we show in Table \ref{tab:gcom} the G-matrix elements derived from the Paris potential together with those of the present calculation for the SE channel. It turns out that the two sets of matrix elements are very similar. The difference is significant only for the less important off-diagonal matrix elements, and even there it is typically at the 20 \% level. This is interesting because the Tamagaki potential has a Gaussian tail and its functional form differs from the others. This fact suggests that the G-matrix elements are strongly constrained by on-shell properties (phase shifts). We have confirmed this by calculating G-matrix elements using $NN$ potentials which do not necessarily reproduce the empirical phase shifts. \begin{table} \begin{center} \begin{tabular}{c c| c c c c} \hline & $n^\prime$ & $n = 0$ & $n = 1$ & $n = 2$ & $n = 3$ \\ \hline & 0 & -6.605 & -5.182 & -3.551 & -2.064 \\ Tamagaki & 1 & & -4.564 & -3.251 & -1.831 \\ & 2 & & & -2.352 & -1.233 \\ & 3 & & & & -0.428 \\ \hline & 0 & -6.580 & -5.292 & -3.743 & -2.502 \\ Paris & 1 & & -4.644 & -3.322 & -1.947 \\ & 2 & & & -2.386 & -1.070 \\ & 3 & & & & -0.318 \\ \hline \end{tabular} \end{center} \caption{Comparison of G-matrix elements in the SE channel derived from the Tamagaki and Paris potentials.} \label{tab:gcom} \end{table} Now we turn to the effects of nonlocality on the G-matrix elements. From Tables \ref{tab:gl}-\ref{tab:gle}, one sees that for the non-central tensor and LS channels, the results of the local and LE nonlocal potentials are almost identical. This seems natural because in the present calculation the nonlocality is introduced only in the central channel. The effects of nonlocality on the G-matrix elements are best seen in the S-wave channels SE and TE. They are repulsive, though the effects are typically as small as a few \% or even less. Therefore, one may conclude that the present nonlocality from the quark exchange effects is negligibly small compared with other many-body effects in nuclear physics, which are typically about 10 \% or even more~\cite{Abrown}. The reason that the present nonlocality acts repulsively is understood in the following way. As we have discussed in section 3, when nonlocality is introduced for a repulsive component, it effectively reduces the repulsion in the $NN$ potential. To obtain the local-equivalent phase shifts, one has to modify the potential parameters by reducing the range and increasing the strength of the repulsion, as summarized in Table \ref{tab:le0.5}. In the calculation of the G-matrix elements, the Pauli-blocking operator $Q$ forbids transitions to lower levels which are already occupied. This effectively reduces the chance for the two nucleons to feel the attraction of the nuclear force, which comes mainly from transitions to the lower states. Hence, in the G-matrix elements, the repulsive components are enhanced. The nonlocal effect would become more important for heavier nuclei, where more levels are Pauli blocked. We have therefore performed a calculation of G-matrix elements in the zirconium region. The calculation was done in the same way as for $^{16}{\rm O}$, but with $\hbar \omega$ = 8.8 MeV and with the following Pauli-blocking operator: \begin{equation} Q(n^\prime) = \left\{ \begin{array}{c c} 1 & {\rm for} \; n^\prime > n+4 \, ,\\ 0 & {\rm otherwise} \, . \end{array} \right. \end{equation} Here, as the mass number is increased, more states are Pauli-blocked, which is implemented in the inequality $n^\prime > n+4$ in place of $n^\prime > n+2$ for $^{16}{\rm O}$ in (\ref{Q16}).
Results for the SE channel are summarized in Table \ref{tab:gzr}, where slightly larger nonlocal effects are seen. The difference is, however, once again very small. \begin{table} \begin{center} \begin{tabular}{c c| c c c c} \hline & $n^\prime$ & $n = 0$ & $n = 1$ & $n = 2$ & $n = 3$ \\ \hline & 0 & -3.922 & -3.551 & -2.926 & -2.291 \\ local & 1 & & -3.449 & -2.961 & -2.360 \\ & 2 & & & -2.612 & -2.114 \\ & 3 & & & & -1.718 \\ \hline & 0 & -3.812 & -3.422 & -2.787 & -2.145 \\ LE nonlocal & 1 & & -3.299 & -2.800 & -2.191 \\ & 2 & & & -2.440 & -1.934 \\ & 3 & & & & -1.533 \\ \hline \end{tabular} \end{center} \caption{G-matrix elements in the SE channel for the zirconium region} \label{tab:gzr} \end{table} \section{Summary} The G-matrix elements for $^{16}{\rm O}$ have been calculated in the harmonic oscillator basis from the G3RS Tamagaki potential (a local potential) and from a nonlocal potential which has a simple Gaussian nonlocality at short range but produces the same scattering phase shifts as the local potential. The nonlocal Gaussian soft core is typical of the quark cluster model (QCM), which gives a good description of nucleon-nucleon scattering. The G-matrix elements derived from the nonlocal potential show an effective repulsion in comparison with those derived from the local potential. The difference appears mostly in the SE and TE channels. The effects are, however, generally very small for all nuclei. It is therefore unlikely that quark-originated nonlocal effects can be detected in conventional nuclear phenomena.
\section{Introduction} \label{Ssection:1} For a given smooth curve $\gamma\subset\mathbb{R}^2$, Bernoulli--Euler's elastic energy, also known as the bending energy, is defined by \[\mathcal{W}(\gamma)=\int_{\gamma}\kappa^2\,ds,\] where $\kappa$ and $s$ denote the curvature and the arclength parameter of $\gamma$, respectively. Variational problems for $\mathcal{W}$ have been studied both as a model for elastic rods and because of their geometric interest. In this paper we consider curves given as graphs with fixed ends. For a curve written as the graph of a function $u:[0,1]\to\mathbb{R}$, the elastic energy of the curve $(x, u(x))$ is given by $$ \mathcal{W}(u) := \int_0^1 \kappa_u(x)^2 \sqrt{1+u'(x)^2} \,dx = \int_0^1 \left(\frac{u''(x)}{\left(1+u'(x)^2\right)^{\frac{3}{2}}}\right)^2 \sqrt{1+u'(x)^2} \,dx, $$ where $\kappa_u$ denotes the curvature of the curve $(x,u(x))$. Recently, obstacle problems for $\mathcal{W}$ have been studied in \cite{DD, miura_16, Muller01}. This paper is concerned with the minimization problem for $\mathcal{W}$ with the unilateral constraint that the curve lies above a given function $\psi:[0,1]\to\mathbb{R}$. That is, we consider \begin{align}\label{Seq:M}\tag{M} \min_{v\in M_{\rm sym}} \mathcal{W}(v), \end{align} where $M_{\rm sym}$ is the convex subset of $H(0,1):=H^2(0,1)\cap H^1_0(0,1)$ defined by \[ M_{\rm sym}:=\Set{v\in H(0,1) | v\geq \psi \ \ \text{in} \ \ [0,1], \quad v(x)=v(1-x) \ \ \text{for} \ \ 0\leq x \leq \frac{1}{2}}. \] In this paper we say that $u$ is a solution of \eqref{Seq:M} if $u\in M_{\rm sym}$ attains $\inf_{v\in M_{\rm sym}}\mathcal{W}(v)$. We shall assume the following: \begin{assumption}\label{Sassumption:1.1} We say that $\psi:[0,1]\to\mathbb{R}$ satisfies the \textit{symmetric cone condition} if the following hold: \begin{itemize} \item[(i)] $\psi(x)=\psi(1-x)$ \quad for \quad $x\in[0,1]$; \item[(ii)] $\psi(0)<0$, \ $\psi(\tfrac{1}{2})>0$ \ \ and \end{itemize} \begin{align} \label{S0726-1} \psi(x)=(1-2x) \psi(0) +2x\,\psi\Big(\frac{1}{2}\Big) \quad \text{for}\quad 0\leq x\leq \frac{1}{2}. \end{align} Let $\mathsf{SC}$ denote the class of functions satisfying the symmetric cone condition. \end{assumption} Our concern in this paper is to study the following open problems on \eqref{Seq:M}: \textit{solvability in the case of $\psi(\frac{1}{2})=c_*$, uniqueness, regularity}, under the assumption $\psi\in\mathsf{SC}$. Here \begin{align}\label{Seq:c_*} c_* :=\frac{2}{c_0} = 0.8346262684\ldots \end{align} and $c_0$ is the constant given by \[c_0:=\int_{\mathbb{R}}\frac{1}{(1+t^2)^{\frac{5}{4}}}\,dt =\mathcal{B}\Big(\frac{1}{2}, \frac{3}{4}\Big)=\sqrt{\pi}\frac{\Gamma(3/4)}{\Gamma(5/4)}= 2.396280469\ldots . \] With the argument in \cite{DD} we see that \eqref{Seq:M} has a solution if $\psi\in\mathsf{SC}$ satisfies $\psi(\frac{1}{2})<c_*$. On the other hand, according to \cite{Muller01}, \eqref{Seq:M} has no solution if $\psi\in\mathsf{SC}$ satisfies $\psi(\frac{1}{2})>c_*$. Moreover, due to the lack of convexity of $\mathcal{W}$, the uniqueness of solutions to \eqref{Seq:M} is an outstanding problem.
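The numerical values of $c_0$ and $c_*$ above are easy to reproduce. The following short Python sketch (purely illustrative and independent of the analysis in this paper) evaluates $c_0$ both by direct quadrature and via the closed Beta-function expression, and then the threshold $c_*=2/c_0$.
\begin{verbatim}
# Illustrative check of c_0 = B(1/2, 3/4) = sqrt(pi) Gamma(3/4) / Gamma(5/4)
# and of the threshold c_* = 2 / c_0 quoted in the introduction.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

c0_quad, _ = quad(lambda t: (1.0 + t * t) ** (-1.25), -np.inf, np.inf)
c0_beta = beta(0.5, 0.75)
print(c0_quad, c0_beta)   # both approximately 2.396280469
print(2.0 / c0_beta)      # approximately 0.8346262684
\end{verbatim}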
In this paper, we are interested in the following: \begin{enumerate} \item[(i)] \textit{Is problem \eqref{Seq:M} solvable under the assumption $\psi(\frac{1}{2})=c_*$?} \item[(ii)] \textit{Does the uniqueness of solutions to problem \eqref{Seq:M} hold?} \end{enumerate} \noindent If $u$ is a minimizer of $\mathcal{W}$ without the obstacle constraint, then $u$ satisfies on $(0,1)$ the equation \begin{align}\label{Seq:1.1} \frac{1}{\sqrt{1+(u'(x))^2}} \frac{d}{dx}\left( \frac{\kappa_{u}'(x)}{\sqrt{1+(u'(x))^2}} \right) +\frac{1}{2}\kappa_{u}(x)^3 =0 \end{align} and the regularity of solutions of \eqref{Seq:1.1} is expected to improve up to $C^{\infty}(0,1)$. However, the obstacle prevents us from applying the arguments for \eqref{Seq:1.1} directly to problem~\eqref{Seq:M}. We are also interested in the question \begin{enumerate} \item[(iii)]\textit{whether the regularity can be improved up to $C^{\infty}(0,1)$ or not.} \end{enumerate} Although Dall'Acqua and Deckelnick have shown in \cite{DD} that the third (weak) derivative of solutions to \eqref{Seq:M} is of bounded variation in $(0,1)$, it has not been clear whether solutions to \eqref{Seq:M} fail to belong to $C^\infty(0,1)$ in general. We are now ready to state the main result of this paper: \begin{theorem} \label{Sthm:1.1} Let $\psi\in\mathsf{SC}$ satisfy \begin{align}\label{Stop} \psi\Big(\frac{1}{2}\Big)<c_* . \end{align} Then problem \eqref{Seq:M} has a unique solution $u$. In addition, $u\in C^2([0,1])$ holds and the third derivative of $u$ belongs to $BV(0,1)$, while \begin{align}\label{Seq:1.2} u\notin C^3([0,1]). \end{align} On the other hand, if $\psi\in\mathsf{SC}$ satisfies \begin{align} \label{Seq:1.7} \psi\Big(\frac{1}{2}\Big)\geq c_*, \end{align} then \eqref{Seq:M} has no solution. \end{theorem} Recently, Miura \cite{Miura2021} obtained the same uniqueness result in a different way; he focuses on the curvature and his proof is more geometric. In Theorem~\ref{Sthm:1.1}, we restrict the class of obstacles to $\mathsf{SC}$ for simplicity. For a more general assumption on $\psi$, see Remark~\ref{Srem:0726}. The reason why we employ Assumption~\ref{Sassumption:1.1} is that it allows us to reduce problem \eqref{Seq:M} to the boundary value problem \begin{align}\tag{BVP}\label{SBVP} \begin{cases} \frac{1}{\sqrt{1+(u'(x))^2}} \frac{d}{dx}\left( \frac{\kappa_{u}'(x)}{\sqrt{1+(u'(x))^2}} \right) +\frac{1}{2}\kappa_{u}(x)^3 =0 \quad \text{in}\quad 0<x<\frac{1}{2}, \\ u(0)=0,\quad u''(0)=0, \\ u(\frac{1}{2})=\psi(\tfrac{1}{2}), \quad u'(\tfrac{1}{2})=0, \end{cases} \end{align} for the precise reduction, see Section~\ref{Ssection:3}. Hence we can obtain detailed properties of solutions to \eqref{Seq:M} via the shooting method, a standard tool for studying solutions of boundary value problems (see e.g.\ \cite{GG_06,NT_04,Pan_09,Schaaf}). The previous studies of problem \eqref{Seq:M} in \cite{DD, Muller01} are based on variational approaches. One of the novelties of this paper is to give another strategy, which makes use of the shooting method. Furthermore, the shooting method enables us not only to give a complete classification of existence and non-existence of solutions to \eqref{Seq:M}, but also to draw the graph of the solution of \eqref{Seq:M} via MAPLE, since we can regard the problem as a Cauchy problem (see Figure~\ref{fig:1}). \begin{figure}[htbp] \centering \includegraphics[width=4cm]{Fig_1.eps} \hspace{2cm} \includegraphics[width=4cm]{Fig_2.eps} \caption{The solution of \eqref{Seq:M} for the obstacle $\psi(\frac{1}{2})\approx0.16$ (left).
The top line is the solution $u$ of \eqref{Seq:M} with $u'(0)=0.5$ and the other is the symmetric solution $v$ of \eqref{Seq:1.1} with $v(0)=0$ and $v'(0)=0.5$ (right).} \label{fig:1} \end{figure} The shooting method is a useful tool for analyzing second order differential equations (see e.g.\ \cite{Azz, Bren, Brub, Pel}). On the other hand, it is not standard to apply the shooting method to fourth order problems. Indeed, equation \eqref{Seq:1.1} is a quasilinear fourth order equation. However, using a geometric structure of \eqref{Seq:1.1}, we can reduce \eqref{Seq:1.1} to a second order semilinear equation. Then the standard shooting argument works well for our problem. The reduction strategy is expected to be applicable to other fourth order geometric equations. In this paper we are also interested in the ``variational inequality''. For a solution $u$ of \eqref{Seq:M}, the function $u+\varepsilon(v-u)$ also belongs to $M_{\rm sym}$ for every $v\in M_{\rm sym}$ and $\varepsilon\in [0,1]$ by the convexity of $M_{\rm sym}$. Then, using the minimality of $u$, we have \begin{align*} \frac{d}{d\varepsilon} \mathcal{W}(u+\varepsilon(v-u))\Big|_{\varepsilon=0}\geq 0, \end{align*} which leads to the inequality \begin{align}\label{Seq:1.4} \mathcal{W}'(u)(v-u) \geq 0 \quad \text{for}\quad v\in M_{\rm sym}, \end{align} where $\mathcal{W}'(u)(\varphi)$ denotes the first variation of $\mathcal{W}$ at $u$ in the direction $\varphi$, given by \begin{align}\label{Seq:1.5} \mathcal{W}'(u)(\varphi) =\int_0^1\bigg[ 2\frac{u''\varphi''}{(1+(u')^2)^{\frac{5}{2}}} -5\frac{(u'')^2u'\varphi'}{(1+(u')^2)^{\frac{7}{2}}} \bigg]\,dx. \end{align} Hence we see that a solution $u$ of \eqref{Seq:M} also solves the following problem: \begin{align}\label{Seq:P}\tag{P} \text{find}\quad u\in M_{\rm sym}\quad \text{such that}\quad \mathcal{W}'(u)(v-u) \geq 0 \quad \text{for all}\quad v\in M_{\rm sym}. \end{align} We obtain for \eqref{Seq:P} the same results as in Theorem~\ref{Sthm:1.1}: \begin{theorem} \label{Sthm:1.2} Assume that $\psi\in\mathsf{SC}$. Then each of the following holds$:$ \begin{itemize} \item[(i)] If \eqref{Stop} holds, then \eqref{Seq:P} has a unique solution, which is the solution obtained in Theorem~\ref{Sthm:1.1}$;$ \item[(ii)] If \eqref{Seq:1.7} holds, then \eqref{Seq:P} has no solution. \end{itemize} \end{theorem} Since every solution of \eqref{Seq:M} also solves \eqref{Seq:P}, we shall prove Theorem~\ref{Sthm:1.2}; Theorem~\ref{Sthm:1.1} is then deduced as a corollary. The uniqueness of solutions of \eqref{Seq:P} is important because it allows the minimizer of $\mathcal{W}$ in $M_{\rm sym}$ to be characterized as the equilibrium of the corresponding parabolic problem (see Section~\ref{Ssection:5}). Very recently M\"uller \cite{Mar20X} also considered \eqref{Seq:P} and the corresponding parabolic problem under a slightly different assumption on $\psi$. In \cite{Mar20X} he approached problem \eqref{Seq:P} in another way, based on Talenti's symmetrization. This paper is organized as follows: In Section~\ref{Ssection:2}, we collect notation and known results which are used in this paper. In Section~\ref{Ssection:3}, we identify the coincidence sets of solutions to \eqref{Seq:P} and prove the concavity of solutions. In Section~\ref{Ssection:4}, we prove the uniqueness and regularity of solutions to \eqref{Seq:P}, using the shooting method.
Finally, we apply these results to a parabolic problem in Section~\ref{Ssection:5}: we show that the solution of the corresponding dynamical problem converges to the solution of \eqref{Seq:M}. \subsection*{Acknowledgments} The author would like to thank Professor Shinya Okabe for fruitful discussions. The author would also like to thank the referees for their careful reading and useful comments. The author was supported by JSPS KAKENHI Grant Number 19J20749. \section{Preliminaries} \label{Ssection:2} In this section, we collect notation and some known properties of \eqref{Seq:M} and \eqref{Seq:P}. First, we recall the relationship between \eqref{Seq:1.1} and the first variation of $\mathcal{W}$. For $\varphi\in C^{\infty}_{\rm c}(0,1)$ and a sufficiently smooth $u$, it follows from integration by parts that \begin{align}\label{Seq:2.02} \begin{split} \mathcal{W}'(u)(\varphi) &=\int_0^1\bigg( 2\frac{u''}{(1+(u')^2)^{\frac{5}{2}}}\varphi'' -5\frac{(u'')^2u'}{(1+(u')^2)^{\frac{7}{2}}}\varphi' \bigg)\,dx \\ &=\int_0^1\bigg( -2\frac{u'''}{(1+(u')^2)^{\frac{5}{2}}} +5\frac{(u'')^2u'}{(1+(u')^2)^{\frac{7}{2}}} \bigg)\varphi'\,dx \\ &= \int_0^1 2\left(\frac{1}{\sqrt{1+(u')^2}} \frac{d}{dx}\bigg( \frac{\kappa_{u}'}{\sqrt{1+(u')^2}} \bigg) +\frac{1}{2}\kappa_{u}^3 \right)\varphi \,dx. \end{split} \end{align} \begin{lemma} $u$ is a solution of \eqref{Seq:P} if and only if $u$ solves \begin{align}\label{Seq:P'} \text{find}\quad u\in M_{\rm sym}\quad \text{such that}\quad \mathcal{W}'(u)(v-u) \geq 0 \quad \text{for all}\quad v\in M, \end{align} where $M$ is the convex subset of $H(0,1)$ defined by \[ M:=\Set{v\in H(0,1) | v\geq \psi \ \ \text{in} \ \ [0,1] }. \] \end{lemma} \begin{proof} Since the sufficiency of \eqref{Seq:P'} is clear, we show only its necessity. Let $u$ be a solution of \eqref{Seq:P} and fix $v\in M$ arbitrarily. Set \[ v_1(x):= \frac{1}{2}\Big( v(x)+v(1-x)\Big), \quad v_2(x):=v(x)-v_1(x). \] Then we find $v_1\in M_{\rm sym}$. Taking $v_1$ as the test function in \eqref{Seq:1.4}, we have \[ 0\leq \mathcal{W}'(u)(v_1-u)=\mathcal{W}'(u)(v-u)-\mathcal{W}'(u)(v_2)\] and hence it suffices to show $\mathcal{W}'(u)(v_2)=0$. Since $u\in M_{\rm sym}$ and $v_2$ satisfies $v_2(1-x)=-v_2(x)$, in view of \eqref{Seq:1.5} we see that \begin{align*} \int_{\frac{1}{2}}^1\bigg( 2\frac{u''v_2''}{(1+(u')^2)^{\frac{5}{2}}} &-5\frac{(u'')^2u'v_2'}{(1+(u')^2)^{\frac{7}{2}}} \bigg)\,dx\\ &=-\int_0^{\frac{1}{2}}\bigg( 2\frac{u''v_2''}{(1+(u')^2)^{\frac{5}{2}}} -5\frac{(u'')^2u'v_2'}{(1+(u')^2)^{\frac{7}{2}}} \bigg)\,dx, \end{align*} which clearly yields $\mathcal{W}'(u)(v_2)=0$. Therefore we obtain \begin{align}\label{Seq:2.2} \mathcal{W}'(u)(v-u) \geq 0 \quad \text{for}\quad v\in M. \end{align} The proof is complete. \end{proof} Next we define a useful function $G$ introduced in \cite{DG_07}. Let $G:\mathbb{R}\to(-\tfrac{c_0}{2}, \tfrac{c_0}{2})$ be \begin{align}\label{Sdef-G} G(x):=\int_0^x \frac{1}{(1+t^2)^{\frac{5}{4}}} \,dt, \end{align} where $c_0:=\int_{\mathbb{R}}(1+t^2)^{-\frac{5}{4}}\,dt$. The function $G$ is bijective and strictly increasing since $G'(x)>0$. Therefore $G^{-1}$ exists, is smooth, and satisfies \begin{align}\label{Seq:1.05} \dfrac{d}{dx}G^{-1}(x) = \Big( 1+G^{-1}(x)^2 \Big)^{\frac{5}{4}}. \end{align} \begin{proposition}[Known results on \eqref{Seq:M}, \cite{DD}] \label{Sprop:2.1} Assume that $\psi\in\mathsf{SC}$ and let $c_*$ be the constant given by \eqref{Seq:c_*}.
Then the following hold$:$ \begin{itemize} \item[(i)] If $\psi$ satisfies $\psi(\frac{1}{2})<c_*$, then \eqref{Seq:M} has a solution. \item[(ii)] Solutions of \eqref{Seq:M} are concave. \end{itemize} \end{proposition} These are shown in \cite[Section 4.1]{DD}. \begin{proposition}[Known results on \eqref{Seq:P}, \cite{DD}] \label{Sprop:3.1} Assume that $\psi$ satisfies \begin{align}\label{Seq:psi} \psi\in C([0,1]), \quad \psi(0)<0, \ \ \psi(1)<0 \quad \text{and} \quad \max_{0\leq x \leq 1}\psi(x)>0 \end{align} and let $u$ be a solution of \eqref{Seq:P}. Then the following hold$:$ \smallskip {\rm (i)} Suppose that $u(x)>\psi(x)$ for all $x\in E:=(x_1,x_2)\subset(0,1)$. \begin{itemize} \item[(a)] $u\in C^{\infty}(\bar{E})$ and $u$ satisfies \eqref{Seq:1.1} on $E$. \item[(b)] $v(x):=\kappa_{u}(x)(1+u'(x)^2)^{\frac{1}{4}}=\frac{u''(x)}{(1+(u'(x))^2)^{\frac{5}{4}}}$ satisfies on $E$ \begin{align}\label{Seq:3.04} -\frac{d}{dx}\left( \frac{v'(x)}{(1+(u'(x))^2)^{\frac{3}{4}}} \right) +\frac{\kappa_{u}(x)u'(x)}{(1+(u'(x))^2)^{\frac{1}{4}}}v'(x) =0. \end{align} \end{itemize} \smallskip {\rm (ii)} $\kappa_{u}(0)=\kappa_{u}(1)=0$, i.e., $u''(0)=u''(1)=0$. \smallskip {\rm (iii)} Every solution $u$ of \eqref{Seq:P} satisfies \begin{align}\label{SBV} u\in C^2([0,1]) \quad \text{and} \quad u'''\in BV(0,1). \end{align} \end{proposition} The proofs of (i), (ii) and (iii) are given in \cite[Proposition 3.2]{DD}, \cite[Corollary 3.3]{DD} and \cite[Theorem 5.1]{DD}, respectively. If $\psi$ belongs to $\mathsf{SC}$, then $\psi$ clearly satisfies \eqref{Seq:psi}. The representation of \eqref{Seq:3.04} is so important that the following comparison principle holds. \begin{proposition} \label{Sprop:2.3} If $u$ satisfies \eqref{Seq:1.1} on $(x_1, x_2)$, then $v(x)=\frac{u''(x)}{(1+(u'(x))^2)^{\frac{5}{4}}}$ satisfies \begin{align} \label{Scom} \min\{ v(x_1), v(x_2) \} \leq v(x) \leq \max\{v(x_1), v(x_2)\} \quad \text{for} \quad x_1<x< x_2. \end{align} \end{proposition} For the proof we refer the reader to \cite{DG_07}. \section{Concavity and coincidence set} \label{Ssection:3} According to \cite[Proposition~3.2]{Muller01}, solutions of \eqref{Seq:M} touch $\psi$ only at $x=1/2$ under the assumption $\psi\in\mathsf{SC}$. In this section we deduce that solutions of \eqref{Seq:P} also touch $\psi$ only at $x=1/2$. From this fact, we also show that solutions of \eqref{Seq:P} are concave. We remark here that the results in Section~\ref{Ssection:3} do not depend on the height of $\psi$. Thanks to \eqref{SBV}, every solution $u$ of \eqref{Seq:P} satisfies $u'''\in BV(0,1)$. For $\varphi\in C^{\infty}_{\rm c}(0,\frac{1}{2})$ with $\varphi\geq0$, by using $v=u+\varphi$ as a test function in \eqref{Seq:2.2}, we find that $u$ satisfies \begin{align}\label{Seq:3.01} \int_0^{\frac{1}{2}} \bigg(-2\frac{u'''(x)}{(1+u'(x)^2)^{\frac{5}{2}}}+5 \frac{u''(x)^2u'(x)}{(1+u'(x)^2)^{\frac{7}{2}}}\bigg) \varphi'(x)\,dx \geq 0 \end{align} for any $0\leq \varphi\in C^{\infty}_{\rm c}(0,\frac{1}{2})$. \begin{lemma} \label{Sprop:3.2} Let $\psi\in \mathsf{SC}$ and $u$ be a solution of \eqref{Seq:P}. Then $u$ satisfies $u(\tfrac{1}{2})=\psi(\tfrac{1}{2})$. Moreover, $u(x)\ne\psi(x)$ for $x\ne\tfrac{1}{2}$. \end{lemma} \begin{proof} Fix $u$ as a solution of \eqref{Seq:P}. We divide the proof into three steps. \smallskip \noindent \textbf{Step 1}. {\sl $u$ touches $\psi$ at $x=\tfrac{1}{2}$.\ } Assume that $u(\tfrac{1}{2})>\psi(\tfrac{1}{2})$. 
First let us suppose that $$I:=\Set{x\in(0,\tfrac{1}{2}) | u(x)=\psi(x)} \ne \emptyset.$$ Then we can define $x_{1}:=\sup I \in(0,\tfrac{1}{2})$ such that $u(x_1)=\psi(x_1)$ and $u'(x_1)=\psi'(x_1)$. By symmetry we obtain \[u(x)>\psi(x) \quad \text{in} \quad x\in(x_{1},1-x_{1}). \] Therefore by Propositions~\ref{Sprop:3.1} and \ref{Sprop:2.3}, $v(x)=u''(x)(1+u'(x)^2)^{-\frac{5}{4}}$ satisfies \[ \min\Set{v(x_1), v(1-x_1)} \leq v(x) \leq \max\Set{v(x_1), v(1-x_1)} \quad \text{for}\quad x\in(x_1, 1-x_1), \] which implies that $v\equiv$ const.\ in $[x_1, 1-x_1]$ since $v(x_1)= v(1-x_1)$ holds. Then by the same argument as in \cite[Lemma 4]{DG_07}, there exists $c\in (-\tfrac{c_0}{2}, \tfrac{c_0}{2})$ such that $u$ satisfies \begin{align}\label{Seq:3.2} u'(x)=G^{-1}\Big(\frac{c}{2}- cx\Big) \quad \text{in} \quad[x_1, 1-x_1]. \end{align} Here we note that the curve $u$ given by \eqref{Seq:3.2} is concave for all $c$. However, taking account of the shape of $\psi\in\mathsf{SC}$, $u'(x_1)=\psi'(x_1)$, and the concavity of \eqref{Seq:3.2}, we find the contradiction to $u>\psi$ in $(x_{1},1-x_{1})$ (see e.g.\ Figure~\ref{Sfig:12}). Next, if $I=\emptyset$, Proposition~\ref{Sprop:3.1}(i) implies that $u$ satisfies $u\in C^{\infty}([0,1])$ and that $u$ satisfies \eqref{Seq:1.1} on $(0,1)$. Moreover, it follows from Proposition~\ref{Sprop:3.1}(ii) that $u$ also satisfies $u''(0)=0$. However, according to \cite[Theorem 1]{DG_07}, such $u$ is limited to $u\equiv0$, which contradicts to $u\geq\psi$. Hence no matter whether $I$ is empty or not, $u(\tfrac{1}{2})=\psi(\tfrac{1}{2})$ holds. \begin{figure}[htbp] \begin{center} \includegraphics[width=5.5cm]{cone_Step1-1.eps} \quad \includegraphics[width=5.5cm]{cone_Step1-2.eps} \caption{When $I$ is non-empty, $u|_{[x_1,1-x_1]}$ must be the dotted curve (left), while $u$ must be trivial when $I$ is empty (right). \label{Sfig:12}} \end{center} \end{figure} \noindent \textbf{Step 2}. {\sl The coincidence set has zero Lebesgue measure.\ } Let $$N:=\{x\in(0,\tfrac{1}{2})\, |\, u(x)>\psi(x)\}.$$ If there exist $x_2, x_3 \in (0,\tfrac{1}{2})$ such that $(x_2, x_3)$ is a connected component of $N$, then it follows from Proposition~\ref{Sprop:3.1}(i) that $u$ satisfies \eqref{Seq:1.1} on $(x_2, x_3)$. By Proposition~\ref{Sprop:2.3}, $v=u''(1+(u')^2)^{-\frac{5}{4}}$ satisfies \begin{align}\label{Seq:3.05} v(x)\geq \min\{ v(x_2), v(x_3)\} \quad \text{for}\quad x\in (x_2, x_3). \end{align} Moreover, since $u-\psi$ attains minimum at $x=x_2, x_3$, it holds that $u''(x_2)$, $u''(x_3)\geq 0$, which in combination with \eqref{Seq:3.05} gives $u''\geq0$ in $[x_2, x_3]$. However, $u''\geq0$ in $[x_2, x_3]$ and $u'(x_2)=u'(x_3)$ imply that $u\equiv\psi$ in $[x_2, x_3]$, which contradicts the fact that $(x_2, x_3)$ is a connected component of $N$. Next let us suppose that there are $0<x_4 < x_5 <{1}/{2}$ such that $(0,x_4)$ and $(x_5,\tfrac{1}{2})$ are connected components of $N$, respectively. The previous argument implies that $u\equiv\psi$ in $[x_4, x_5]$ and hence that $u''=0$ in $[x_4, x_5]$ since $\psi''\equiv0$. Moreover, recalling that $v=u''(1+(u')^2)^{-\frac{5}{4}}$ satisfies the comparison principle \eqref{Scom} on $(x_5, \tfrac{1}{2})$, we find that one of the following holds: \begin{align}\label{Seq:3.4} \begin{cases} \text{(i)}\quad &u''(\tfrac{1}{2})\geq 0 \quad \text{and} \quad u''(x)\geq 0 \quad\text{in}\quad [x_5, \tfrac{1}{2}];\\ \text{(ii)} &u''(\tfrac{1}{2})< 0 \quad \text{and} \quad u''(x)\leq 0 \quad\text{in}\quad [x_5, \tfrac{1}{2}]. 
\end{cases} \end{align} In the case of (i), it holds that $u''\geq 0$ in $[x_5, \frac{1}{2}]$ and $u'(x_5)>0$, which contradicts $u'(\frac{1}{2})=0$. Supposing (ii), we infer from $u'(x_5)=\psi'(x_5)$ that $u(\frac{1}{2})<\psi(\frac{1}{2})$. Therefore both (i) and (ii) do not occur and we conclude that either $ N=(0,\frac{1}{2}) $ or \begin{align}\label{Seq:3.5} N=(0,a)\cup\Big(a,\frac{1}{2}\Big) \quad \text{for some} \quad 0<a<\frac{1}{2} \end{align} holds. \noindent \textbf{Step 3}. {\sl We show that $u(x)>\psi(x)$ if $x\ne\tfrac{1}{2}$.\ } It is sufficient to show that \eqref{Seq:3.5} does not occur. Suppose, to the contrary, that there exists $a\in (0,\tfrac{1}{2})$ satisfying $u(a)=\psi(a)$. Then $u''(a)\geq 0$ since $u-\psi$ attains the minimum at $x=a$. Moreover, we have \begin{align}\label{Seq:3.06} u''(a)> 0. \end{align} In fact, if $u''(a)=0$, then we obtain the same contradiction as in \eqref{Seq:3.4}. Let us recall that \begin{align*} u(0)=0, \quad u'(\tfrac{1}{2})=0 \quad \text{and}\quad u''(0)=0. \end{align*} Set $A_1:=(0,a)$ and $A_2:=(a,\tfrac{1}{2})$. Then $u>\psi$ in $A_i$, $i=1,2$. Therefore Proposition~\ref{Sprop:3.1}(a) implies that $u\in C^{\infty}(\bar{A_i})$ and we have \begin{align}\label{Seq:3.08} \bigg(-2\frac{u'''(x)}{(1+u'(x)^2)^{\frac{5}{2}}}+5 \frac{u''(x)^2u'(x)}{(1+u'(x)^2)^{\frac{7}{2}}}\bigg)'= 0 \quad \text{in}\quad A_{i} \end{align} for $i=1,2$, respectively. First we focus on $A_1$. Since $u''(0)=0$ and $u''(a)>0$ hold, combining these with \eqref{Scom} we have $u''(x)\geq 0$ for $x\in A_1$. Hence $u'''(0)\geq0$. If $u'''(0)=0$, then $u$ satisfies \eqref{Seq:1.1} with $u(0)=u''(0)=u'''(0)=0$ and such $u$ is limited to a line segment. This contradicts $u'(\tfrac{1}{2})=0$. Therefore $u'''(0)>0$, which in combination with \eqref{Seq:3.08} and $u''(0)=0$ gives \begin{align}\label{Seq:3.09} 2\frac{u'''(x)}{(1+u'(x)^2)^{\frac{5}{2}}}-5 \frac{u''(x)^2u'(x)}{(1+u'(x)^2)^{\frac{7}{2}}} =2\frac{u'''(0)}{(1+u'(0)^2)^{\frac{5}{2}}}=:\eta_1>0 \end{align} for all $x\in (0,a)=A_1$. Next we focus on $A_2$. If $u''(\tfrac{1}{2})> 0$, then \eqref{Scom} and \eqref{Seq:3.06} imply that $u''(x)\geq 0$ for $x\in\bar{A_2}$. However this leads to a contradiction by the same method as in \eqref{Seq:3.4} and hence we have $u''(\frac{1}{2})\leq 0. $ Combining the comparison principle for $v(x)=u''(x)(1+u'(x)^2)^{-\frac{5}{4}}$ with $v(a)>0$ and $v(\tfrac{1}{2})\leq0$, we find that $v$ attains the minimum at $x=\tfrac{1}{2}$ in $a\leq x \leq \tfrac{1}{2}$. Then it holds that \[ v'(x)\Big|_{x=\frac{1}{2}} = \frac{u'''(x)}{(1+u'(x)^2)^{\frac{5}{4}}}-\frac{5}{2} \frac{u''(x)^2u'(x)}{(1+u'(x)^2)^{\frac{9}{4}}}\Bigg|_{x=\frac{1}{2}} \leq0. \] This together with $u'(\frac{1}{2})=0$ implies that $u'''(\tfrac{1}{2})\leq0$. Following the same way as in \eqref{Seq:3.09}, we infer from \eqref{Seq:3.08} that \begin{align}\label{Seq:3.10} 2\frac{u'''(x)}{(1+u'(x)^2)^{\frac{5}{2}}}-5 \frac{u''(x)^2u'(x)}{(1+u'(x)^2)^{\frac{7}{2}}} =2\frac{u'''(\tfrac{1}{2})}{(1+u'(\tfrac{1}{2})^2)^{\frac{5}{2}}}=:\eta_2 \leq 0 \end{align} for all $x\in (a,\frac{1}{2})=A_2$. Finally, fix $0\leq \varphi\in C^{\infty}_{\rm c}(0,\frac{1}{2})$ such that $\varphi(a)>0$. 
Then the left-hand side of \eqref{Seq:3.01} is reduced into \begin{align*} \int_0^{\frac{1}{2}} \bigg(-2\frac{u'''(x)}{(1+u'(x)^2)^{\frac{5}{2}}}&+5 \frac{u''(x)^2u'(x)}{(1+u'(x)^2)^{\frac{7}{2}}}\bigg) \varphi'(x)\,dx \\ &=\int_0^a -\eta_1\varphi'(x) \,dx + \int_a^{\frac{1}{2}}-\eta_2\varphi'(x) \,dx \\ &=(-\eta_1+\eta_2)\varphi(a) <0, \end{align*} where we used \eqref{Seq:3.09} and \eqref{Seq:3.10}. However, this clearly contradicts \eqref{Seq:3.01}. The proof is complete. \end{proof} By Proposition~\ref{Sprop:3.1}, Lemma~\ref{Sprop:3.2}, and the symmetry property of $M_{\rm sym}$, solutions of \eqref{Seq:P} satisfy \eqref{SBVP}. From this fact, we deduce the concavity of solutions of \eqref{Seq:P}. \begin{proposition} \label{Sprop:3.3} Let $\psi\in \mathsf{SC}$. Then every solution $u$ of \eqref{Seq:P} is concave if it exists. Moreover, it holds that $$u''\Big(\frac{1}{2}\Big)<0.$$ \end{proposition} \begin{proof} Recall that all solutions of \eqref{Seq:P} satisfy \eqref{SBVP}. Let $u$ be a solution of \eqref{SBVP}. It suffices to show that $u''(\tfrac{1}{2})<0$ since we infer from Proposition~\ref{Sprop:2.3}(i) that if $u''(\tfrac{1}{2})<0$, then \begin{align*} \frac{u''(x)}{(1+(u'(x))^2)^{\frac{5}{4}}} =v(x) \leq \max\Set{v(0), v(\tfrac{1}{2})}=0 \quad \text{for}\quad x\in\Big(0,\frac{1}{2}\Big). \end{align*} This clearly implies that $u''(x)\leq0$ in $[0,\tfrac{1}{2}]$, and $u''(x)\leq0$ in $[\tfrac{1}{2},1]$ also holds by symmetry. Suppose, to the contrary, that $u''(\tfrac{1}{2})\geq0$. Then comparison principle \eqref{Scom} implies that \begin{align*} \frac{u''(x)}{(1+(u'(x))^2)^{\frac{5}{4}}} =v(x) \geq \min\Set{v(0), v(\tfrac{1}{2})}=0 \quad \text{for} \quad x\in\Big(0,\frac{1}{2}\Big). \end{align*} Hence $u''(x)\geq0$ in $[0,\tfrac{1}{2}]$. However, such $u$ does not satisfy $u'(\tfrac{1}{2})=0$ since $u(\frac{1}{2})=\psi(\frac{1}{2})$. This contradicts our assumption and hence we obtain $u''(\frac{1}{2})<0$. The proof is complete. \end{proof} \begin{remark} \label{Srem:0726} Thanks to Proposition~\ref{Sprop:2.1}(ii), solutions of \eqref{Seq:M} are concave and hence \eqref{Seq:M} can be reduced to \eqref{SBVP} under the assumption that \begin{align}\label{S0726-2} \psi\in C^2([0,\tfrac{1}{2}]) \quad \text{satisfies} \quad \psi'(0)\geq0 \quad \text{and} \quad \psi''\geq0 \ \ \text{in}\ \ [0,\tfrac{1}{2}], \end{align} which includes \eqref{S0726-1}. Then, from the argument in Section~\ref{Ssection:4}, we can prove Theorem~\ref{Sthm:1.1} under assumption \eqref{S0726-2} instead of \eqref{S0726-1}. On the other hand, in general, problem \eqref{Seq:P} cannot be reduced to \eqref{SBVP} under assumption \eqref{S0726-2}. Indeed, it is not so clear that Lemma~\ref{Sprop:3.2} holds under assumption \eqref{S0726-2}. \end{remark} \section{Shooting method}\label{Ssection:4} Due to the arguments in Section~\ref{Ssection:3}, solutions of \eqref{Seq:P} satisfy \eqref{SBVP}. In Subsection~\ref{Ssubsec:4.1} we show some properties of solutions of \eqref{SBVP}. Applying them, we show the uniqueness and the regularity of the solution of \eqref{Seq:P} in Subsection~\ref{Ssubsec:4.2}. 
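Before going into the details, we record a numerical illustration. The shooting map $\alpha\mapsto u(\frac{1}{2};\alpha)$ constructed in Subsection~\ref{Ssubsec:4.1} admits the integral representation $u(\frac{1}{2};\alpha)=J(\alpha)/(2I(\alpha))$ of Lemma~\ref{Slem:4.2} below. The following Python sketch (purely illustrative; it is not used anywhere in the proofs) evaluates this representation for a few values of $\alpha$ and exhibits the monotone approach to $c_*\approx 0.8346$ established in Propositions~\ref{Sprop:4.3} and~\ref{Sprop:4.4}.
\begin{verbatim}
# Illustrative evaluation of u(1/2; alpha) = J(alpha) / (2 I(alpha)); the
# (alpha - s)^{-1/2} endpoint singularity is handled by quad's 'alg' weight.
import numpy as np
from scipy.integrate import quad

def I(alpha):
    val, _ = quad(lambda s: np.sqrt(alpha) / (1 + s * s) ** 1.25,
                  0.0, alpha, weight='alg', wvar=(0.0, -0.5))
    return val

def J(alpha):
    val, _ = quad(lambda s: np.sqrt(alpha) * s / (1 + s * s) ** 1.25,
                  0.0, alpha, weight='alg', wvar=(0.0, -0.5))
    return val

for alpha in (0.5, 1.0, 2.0, 5.0, 20.0, 100.0):
    print(alpha, J(alpha) / (2.0 * I(alpha)))  # increases toward c_* = 0.8346...
\end{verbatim}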
\subsection{Two-point boundary value problem}\label{Ssubsec:4.1} In this subsection, we consider the multiplicity of solutions to \eqref{SBVP}, that is, \begin{align*} \frac{1}{\sqrt{1+(u'(x))^2}} \frac{d}{dx}\left( \frac{\kappa_{u}'(x)}{\sqrt{1+(u'(x))^2}} \right) +\frac{1}{2}\kappa_{u}(x)^3 =0, \quad 0< x <\frac{1}{2} \end{align*} with the boundary conditions \begin{align}\label{Seq:BC} u(0)=0, \quad u''(0)=0, \quad u\Big(\frac{1}{2}\Big)=\psi\Big(\frac{1}{2}\Big), \quad u'\Big(\frac{1}{2}\Big)=0. \end{align} We shall show that \eqref{SBVP} has at most one solution, using the shooting method. In the following we consider the initial condition \begin{align}\label{Seq:IC} u(0)=0, \quad u'(0)=\alpha, \quad u''(0)=0, \quad u'''(0)=\beta \end{align} instead of the boundary condition \eqref{Seq:BC} and look for the conditions on $(\alpha,\beta)$ under which $u(\tfrac{1}{2})=\psi(\tfrac{1}{2})$ and $u'(\tfrac{1}{2})=0$ hold. Since it follows from Proposition~\ref{Sprop:3.3} that solutions of \eqref{Seq:P} are concave, we only focus on $u'(0)=\alpha>0$. To begin with, as in \eqref{Seq:2.02}, we can reduce \eqref{Seq:1.1} to \begin{align}\label{Seq:4.3} \left(2\frac{u'''(x)}{(1+u'(x)^{2})^{\frac{5}{2}}}-5\frac{u''(x)^2u'(x)}{(1+u'(x)^{2})^{\frac{7}{2}}} \right)'=0. \end{align} Let us set \[ u'(x)=:w(x). \] Then by \eqref{Seq:4.3} $w$ satisfies \begin{align}\label{Seq:4.04} \begin{split} \frac{w''(x)}{(1+w(x)^2)^{\frac{5}{2}}}-\frac{5}{2}\frac{w'(x)^2w(x)}{(1+w(x)^{2})^{\frac{7}{2}}} &=\frac{w''(0)}{(1+w(0)^2)^{\frac{5}{2}}}-\frac{5}{2}\frac{w'(0)^2w(0)}{(1+w(0)^{2})^{\frac{7}{2}}}\\ &= \dfrac{\beta}{(1+\alpha^2)^{\frac{5}{2}}}, \end{split} \end{align} where we used the initial data \eqref{Seq:IC}. Using $G$, which is defined in \eqref{Sdef-G}, we set \begin{align}\label{Seq:set-y} y(x): =G(w(x)) = G(u'(x)). \end{align} Then combining \eqref{Seq:4.04} and \eqref{Seq:set-y} with \begin{align*} y'(x) = \frac{u''(x)}{(1+u'(x)^2)^{\frac{5}{4}}}, \quad y''(x)=\frac{u'''(x)}{(1+u'(x)^2)^{\frac{5}{4}}}-\frac{5}{2}\frac{u''(x)^2u'(x)}{(1+u'(x)^2)^{\frac{9}{4}}}, \end{align*} we obtain \begin{align}\label{Seq:4.05} \dfrac{y''(x)}{\big(1+G^{-1}(y(x))^2\big)^{\frac{5}{4}}} = \dfrac{\beta}{(1+\alpha^2)^{\frac{5}{2}}}. \end{align} By $w(0)=\alpha$ and $w'(0)=0$, we consider \begin{align}\label{Seq:4.06} y(0)=G(\alpha)>0 \quad\text{and}\quad y'(0) =\dfrac{w'(0)}{(1+w(0)^2)^{\frac{5}{4}}}=0 \end{align} as the initial data for \eqref{Seq:4.05}. If $u$ satisfies $u'(\tfrac{1}{2})=0$, then $y$ must vanish at $x=\tfrac{1}{2}$. Therefore we first seek the condition on $(\alpha,\beta)$ under which $y(\tfrac{1}{2})=0$. By the representation \eqref{Seq:4.05} and the initial condition \eqref{Seq:4.06}, we notice that \begin{align*} \beta<0 \end{align*} if and only if the solution $y$ has a zero. Furthermore, if $\beta<0$, then the zero of $y$ is unique since $\beta<0$ implies that $y''(x)<0$. The point where $y$ achieves zero is given by the following \textit{time map formula}: \begin{lemma} \label{Sprop:4.1} Let $(\alpha,\beta)\in\mathbb{R}_{>0}\times\mathbb{R}_{<0}$ be arbitrary and $y$ be the solution of \eqref{Seq:4.05} with \eqref{Seq:4.06}. The point where $y$ achieves zero is given by \begin{align}\label{Stime-map} Z_{\alpha,\beta} =\dfrac{(1+\alpha^2)^{\frac{5}{4}}}{\sqrt{2}\sqrt{|\beta|}}\int_0^{\alpha} \dfrac{1}{\sqrt{\alpha-t}} \dfrac{\,dt}{(1+t^2)^{\frac{5}{4}}}.
\end{align} \end{lemma} \begin{proof} Fix $(\alpha,\beta)\in\mathbb{R}_{>0}\times\mathbb{R}_{<0}$ arbitrarily and let $y$ be the solution of \eqref{Seq:4.05} with \eqref{Seq:4.06}. Let us define \[ f(t):= (1+G^{-1}(t)^2)^{\frac{5}{4}}, \quad F(X):=2\int_0^X f(t)\,dt. \] Set $Z\in(0,\infty)$ as the point where $y$ achieves zero. Then we deduce from \eqref{Seq:4.05} that \begin{align*} \dfrac{d}{dx} \bigg( y'(x)^2 - \dfrac{\beta}{(1+\alpha^2)^{\frac{5}{2}}}F(y(x)) \bigg) &= 2y' \bigg( y'' -\dfrac{\beta}{(1+\alpha^2)^{\frac{5}{2}}}\big(1+G^{-1}(y)^2\big)^{\frac{5}{4}} \bigg) =0, \end{align*} which in combination with \eqref{Seq:4.06} gives \begin{align}\label{Seq:4.09} y'(x)^2 - \dfrac{\beta}{(1+\alpha^2)^{\frac{5}{2}}}F(y(x)) = -\dfrac{\beta}{(1+\alpha^2)^{\frac{5}{2}}}F(y(0)) \quad \text{for}\quad x\in (0,Z). \end{align} Moreover, since $y'(0)=0$ and $y''<0$ in $(0,Z)$, we find $y'(x)<0$ in $(0,Z)$. Combining this with \eqref{Seq:4.09}, we obtain \begin{align}\label{Seq:1103-3} \begin{split} y'(x) &=- \dfrac{\sqrt{|\beta|}}{(1+\alpha^2)^{\frac{5}{4}}}\sqrt{F(G(\alpha))-F(y(x))}\\ &=- \dfrac{\sqrt{|\beta|}}{(1+\alpha^2)^{\frac{5}{4}}}\sqrt{2\alpha-2G^{-1}(y(x))}. \end{split} \end{align} Here we used \begin{align*} F(s) = 2\int_0^s \big(1+G^{-1}(t)^2\big)^{\frac{5}{4}} \,dt = 2\int_0^s \big( G^{-1}(t)\big)' \,dt = 2G^{-1}(s), \end{align*} which follows from \eqref{Seq:1.05} and $G^{-1}(0)=0$. Integrating \eqref{Seq:1103-3} on $(0,Z)$, we obtain \begin{align*} \dfrac{\sqrt{|\beta|}}{(1+\alpha^2)^{\frac{5}{4}}} Z &=\int_0^{Z} -\dfrac{y'(t)}{\sqrt{2\alpha-2G^{-1}(y(t))}}\,dt =\int_0^{G(\alpha)} \dfrac{\,ds}{\sqrt{2\alpha-2G^{-1}(s)}}, \end{align*} where we used the change of the variables $s=y(t)$ in the last equality. Therefore we have \begin{align*} Z =\dfrac{(1+\alpha^2)^{\frac{5}{4}}}{\sqrt{|\beta|}}\int_0^{G(\alpha)} \dfrac{\,ds}{\sqrt{2\alpha-2G^{-1}(s)}}. \end{align*} By the change of variables $t=G^{-1}(s)$, we obtain \eqref{Stime-map}. \end{proof} By Lemma~\ref{Sprop:4.1}, for each $\alpha>0$ the map on $\mathbb{R}_{<0}$ \[ \beta \mapsto Z_{\alpha,\beta}\] is strictly increasing and satisfies \[ \lim_{\beta\uparrow0}Z_{\alpha,\beta}=\infty \quad \text{and}\quad \lim_{\beta\to-\infty}Z_{\alpha,\beta}=0.\] Therefore \eqref{Stime-map} implies that for each $\alpha>0$ there exists a unique $\beta<0$ such that $Z_{\alpha,\beta}=1/2$. Replacing $Z_{\alpha,\beta}$ with $1/2$ in \eqref{Stime-map}, we obtain the following. \begin{proposition} \label{Sprop:4.2} For each $\alpha>0$, $Z_{\alpha,\beta}=1/2$ holds if and only if \begin{align}\notag \beta=\beta_*(\alpha):= -2(1+\alpha^2)^{\frac{5}{2}}\left(\int_0^{\alpha}\dfrac{1}{\sqrt{\alpha-t}}\dfrac{\,dt}{(1+t^2)^{\frac{5}{4}}} \right)^2. \end{align} \end{proposition} Thus for $\alpha=u'(0)$, \[ u'''(0)=\beta_{*}(\alpha)\] is needed in \eqref{Seq:IC} so that the solution $u$ of \eqref{Seq:1.1} with \eqref{Seq:IC} satisfies $u'(\tfrac{1}{2})=0$. Hence we should consider \begin{align}\label{Su_*} u(0)=0, \quad u'(0)=\alpha, \quad u''(0)=0, \quad u'''(0)=\beta_*(\alpha). \end{align} Next we investigate the relationship between $u'(0)=\alpha$ and $u(\tfrac{1}{2})=\psi(\tfrac{1}{2})$. To this end, hereafter we consider only the case $\beta=\beta_{*}(\alpha)$ in \eqref{Seq:IC}. Let $u(x;\alpha)$ denote the solution of \eqref{Seq:1.1} with \eqref{Su_*}. 
Then $y(x;\alpha):=G(u'(x;\alpha))$ is the solution of \begin{align}\label{Sy_a} \dfrac{y''(x)}{\big(1+G^{-1}(y(x))^2\big)^{\frac{5}{4}}} = \dfrac{\beta_*(\alpha)}{(1+\alpha^2)^{\frac{5}{2}}} \end{align} with the initial data \eqref{Seq:4.06}. For brevity we denote the right-hand side of \eqref{Sy_a} by \begin{align}\notag \gamma(\alpha) := \dfrac{\beta_*(\alpha)}{(1+\alpha^2)^{\frac{5}{2}}} = -2\left(\int_0^{\alpha}\dfrac{1}{\sqrt{\alpha-t}}\dfrac{\,dt}{(1+t^2)^{\frac{5}{4}}} \right)^2. \end{align} \begin{lemma} \label{Slem:4.2} Let $u(x;\alpha)$ be the solution of \eqref{Seq:1.1} with \eqref{Su_*} for $\alpha>0$. Then \begin{align}\label{Seq:4.31} u \Big(\frac{1}{2};\alpha\Big) =\frac{\displaystyle \int_0^{\alpha}\frac{s}{\sqrt{\alpha-s}}\frac{\,ds}{(1+s^2)^{\frac{5}{4}}}}{\displaystyle 2\int_0^{\alpha}\frac{1}{\sqrt{\alpha-s}}\frac{\,ds}{(1+s^2)^{\frac{5}{4}}}}. \end{align} \end{lemma} \begin{proof} Let $y(x;\alpha)$ be the function given by $y(x;\alpha)=G(u'(x;\alpha))$. Then $y(\cdot;\alpha)$ satisfies \eqref{Sy_a} and $y(\cdot;\alpha)$ is strictly decreasing in $(0,1/2)$ by \eqref{Seq:1103-3}. Using this $y$, we have \begin{align*} u\Big(\frac{1}{2};\alpha\Big)=\int_0^{\frac{1}{2}}u'(x;\alpha)\,dx=\int_0^{\frac{1}{2}} G^{-1}(y(x;\alpha))\,dx, \end{align*} where we used $u(0;\alpha)=0$. We infer from the change of variables $y(x;\alpha)=s$ and \eqref{Seq:1103-3} that \begin{align*} u\Big(\frac{1}{2};\alpha\Big) &=\int_{G(\alpha)}^0 G^{-1}(s)\frac{1}{y'(x;\alpha)}\,ds \\ &=\int_{G(\alpha)}^0 G^{-1}(s) \left(-\frac{(1+\alpha^2)^{\frac{5}{4}}}{|\beta_*(\alpha)|^{\frac{1}{2}}}\frac{1}{\sqrt{2\alpha-2G^{-1}(s)}}\right) \,ds. \end{align*} By the change of variables $G^{-1}(s)=x$, we have \begin{align*} u\Big(\frac{1}{2};\alpha\Big)&=\frac{(1+\alpha^2)^{\frac{5}{4}}}{|\beta_*(\alpha)|^{\frac{1}{2}}} \int_0^{\alpha}\frac{x}{\sqrt{2\alpha-2x}}\frac{1}{(1+x^2)^{\frac{5}{4}}} \,dx =\frac{\displaystyle \int_0^{\alpha}\frac{x}{\sqrt{\alpha-x}}\frac{\,dx}{(1+x^2)^{\frac{5}{4}}}}{\displaystyle 2\int_0^{\alpha}\frac{1}{\sqrt{\alpha-s}}\frac{\,ds}{(1+s^2)^{\frac{5}{4}}}}, \end{align*} which is the desired formula. Here we used \begin{align}\label{S0726-4} \frac{|\beta_*(\alpha)|^{\frac{1}{2}}}{(1+\alpha^2)^{\frac{5}{4}}} = \sqrt{2}\int_0^{\alpha}\frac{1}{\sqrt{\alpha-s}}\frac{\,ds}{(1+s^2)^{\frac{5}{4}}}, \end{align} which follows from Proposition~\ref{Sprop:4.2}. We complete the proof. \end{proof} Let us set \begin{align}\label{Seq:1104-3} I(\alpha):=\int_0^{\alpha}\frac{\sqrt{\alpha}}{\sqrt{\alpha-s}}\frac{\,ds}{(1+s^2)^{\frac{5}{4}}}, \quad J(\alpha):=\int_0^{\alpha}\frac{\sqrt{\alpha}}{\sqrt{\alpha-x}}\frac{x}{(1+x^2)^{\frac{5}{4}}}\,dx. \end{align} Then, thanks to Lemma~\ref{Slem:4.2}, we notice that \[ u\Big(\frac{1}{2};\alpha\Big)=\frac{J(\alpha)}{2I(\alpha)}. \] \begin{lemma} \label{Slem:4.3} For $J(\alpha)$ given by \eqref{Seq:1104-3}, it holds that \[ J'(\alpha) > 0 \quad \text{for} \quad \alpha>0. \] \end{lemma} \begin{proof} By a change of variables we rewrite $J(\alpha)$ as \begin{align}\label{Seq:1208-1} J(\alpha)= \int_0^1 \frac{t}{\sqrt{1-t}}\frac{\alpha^2}{(1+\alpha^2t^2)^{\frac{5}{4}}}\,dt. \end{align} Let ${}_2F_1$ denote the Gaussian hypergeometric function (cf.\ Definition~\ref{Sdef:hypergeom}).
Note that for each $\alpha>0$ \begin{align} \int_0^1 \frac{t}{\sqrt{1-t}}\frac{\alpha^2}{(1+\alpha^2t^2)^{\frac{5}{4}}}\,dt &= \frac{2}{3}\frac{\alpha^2}{1+\alpha^2} {}_2F_1\big[1, \tfrac{3}{2}; \tfrac{7}{4}; \tfrac{\alpha^2}{1+\alpha^2} \big] \label{Seq:2205-1} \\ &= \frac{2}{3}\frac{\alpha^2}{1+\alpha^2} \sum_{n=0}^{\infty} \frac{(1)_n (\tfrac{3}{2})_n}{(\frac{7}{4})_n \, n!} \Big(\frac{\alpha^2}{1+\alpha^2} \Big)^n \notag \end{align} (see Proposition~\ref{Sprop:2205-1} for a rigorous derivation). Here $(x)_n$ is the Pochhammer symbol, and $(x)_n >0$ holds for any $n\in\mathbb{N}\cup\{0\}$ and $x>0$. Therefore, setting \[ j(\alpha):= \frac{\alpha^2}{1+\alpha^2} \quad \text{for} \quad \alpha>0, \] we obtain \begin{align} \begin{split} J'(\alpha)&= \frac{2}{3}j'(\alpha) \sum_{n=0}^{\infty} \frac{(1)_n (\tfrac{3}{2})_n}{(\frac{7}{4})_n \, n!} j(\alpha)^n \\ &\qquad + \frac{2}{3}j(\alpha) j'(\alpha)\sum_{n=1}^{\infty} \frac{(1)_n (\tfrac{3}{2})_n}{(\frac{7}{4})_n \, (n-1)!} j(\alpha)^{n-1} >0 \end{split} \end{align} with the help of the fact that $0<j(\alpha)<1$. The proof is complete. \if0 By the change of variable we reduce $J(\alpha)$ into \begin{align}\label{Seq:1208-1} J(\alpha)= \int_0^1 \frac{t}{\sqrt{1-t}}\frac{\alpha^2}{(1+\alpha^2t^2)^{\frac{5}{4}}}\,dt, \end{align} and then by differentiating \eqref{Seq:1208-1} we have \begin{align}\label{Seq:1208-2} J'(\alpha) &=\frac{\alpha}{2}\int_0^1\frac{t}{\sqrt{1-t}}\frac{4-\alpha^2t^2}{(1+\alpha^2t^2)^{\frac{9}{4}}}\,dt =\frac{1}{2\sqrt{\alpha}}\int_0^{\alpha}\frac{t}{\sqrt{\alpha-t}}\frac{4-t^2}{(1+t^2)^{\frac{9}{4}}}\,dt. \end{align} By \eqref{Seq:1208-2} we find that $J'(\alpha)>0$ holds for $\alpha\leq2$. Hence hereafter we assume that $\alpha>2$. Then we obtain \begin{align}\label{Seq:1208-3} \begin{split} 2\sqrt{\alpha} \,J'(\alpha) &=\int_0^{2}\frac{t}{\sqrt{\alpha-t}}\frac{4-t^2}{(1+t^2)^{\frac{9}{4}}}\,dt -\int_2^{\alpha}\frac{t}{\sqrt{\alpha-t}}\frac{t^2-4}{(1+t^2)^{\frac{9}{4}}}\,dt\\ &\geq \int_0^{2}\frac{t}{\sqrt{\alpha-t}}\frac{4-t^2}{(1+t^2)^{\frac{9}{4}}}\,dt -\frac{2}{3}\int_2^{\alpha}\frac{1}{(\alpha-t)^{\frac{3}{4}}}\,dt-\frac{1}{3}\int_2^{\alpha}\frac{t^3(t^2-4)^3}{(1+t^2)^{\frac{27}{4}}}\,dt\\ &=:j_1(\alpha)-\frac{2}{3}\,j_2(\alpha)-\,\frac{1}{3}j_3(\alpha). \end{split} \end{align} It follows immediately that $j'_1(\alpha)<0$, $j'_2(\alpha)>0$ and $j'_3(\alpha)>0$. Therefore \[ j(\alpha):=\frac{1}{2\sqrt{\alpha}}\bigg( j_1(\alpha)-\frac{2}{3}j_2(\alpha)-\frac{1}{3}j_3(\alpha) \bigg) \] is strictly decreasing. Moreover, since it holds that \[ j_1(\alpha) = O(\alpha^{-\frac{1}{2}}), \quad j_2(\alpha) = O(\alpha^{\frac{1}{4}}), \quad j_3(\alpha) = O(1) \quad (\alpha\to\infty), \] we observe that \[j(\alpha) \to0 \quad \text{as}\quad \alpha\to\infty.\] This together with \eqref{Seq:1208-3} implies that \[J'(\alpha)\geq j(\alpha) > 0 \quad \text{for}\quad \alpha>2.\] We complete the proof. \fi \end{proof} Next, we show that $\alpha\mapsto u(\frac{1}{2};\alpha)$ is strictly increasing. To this end, let us set \[ \xi(x,\alpha):=\frac{\partial y}{\partial\alpha}(x;\alpha), \] where we regarded $y(x;\alpha)$ as a function on $[0,\frac{1}{2}]\times\mathbb{R}_{>0}$. Then it follows from \eqref{Sy_a} that \begin{align}\label{Seq:1103-1} \xi''-\gamma(\alpha)f'(y)\xi-\gamma'(\alpha)f(y)=0, \end{align} where $f(t)=(1+G^{-1}(t)^2)^{\frac{5}{4}}$ and $'=\frac{\partial}{\partial x}$. By the definition $f'(t)\geq0$ holds for $t\geq0$. 
Moreover, $y(0;\alpha)=G(\alpha)$, $y'(0;\alpha)=0$, and $y(\frac{1}{2};\alpha)=0$ imply that \begin{align*} \xi(0,\alpha)=G'(\alpha), \quad \xi'(0,\alpha)=0, \quad \text{and}\quad \xi\Big(\frac{1}{2},\alpha\Big)=0. \end{align*} \begin{proposition} \label{Sprop:4.3} Let $u(x;\alpha)$ be the solution of \eqref{Seq:1.1} with \eqref{Su_*} for $\alpha>0$. Then \begin{align}\label{Seq:4.34} \frac{d }{d \alpha} \left[u\Big(\frac{1}{2};\alpha\Big)\right] > 0 \quad\text{for} \quad \alpha>0. \end{align} \end{proposition} \begin{proof} To begin with, recall that $y(\cdot;\alpha)$ is decreasing in $[0,\frac{1}{2}]$ for each $\alpha>0$, in partucular, \begin{align}\label{Seq:4.24} 0<y(x;\alpha)<y(0;\alpha)=G(\alpha) \quad \text{and} \quad y'(x;\alpha)<0 \quad\text{for}\quad 0<x <\frac{1}{2}. \end{align} We infer from \eqref{Seq:1103-3} that \begin{align}\label{Seq:1103-5} y'(x;\alpha)^2 &=- \gamma(\alpha)\big(2\alpha-2G^{-1}(y(x;\alpha))\big). \end{align} Differentiating \eqref{Seq:1103-5} with respect to $\alpha$, we have \begin{align}\label{Seq:1103-6} 2y'\cdot\xi' =-2\gamma'(\alpha)\bigg[ \alpha-G^{-1}(y) \bigg] -2\gamma(\alpha)\bigg[ 1-\Big( 1+G^{-1}(y)^2 \Big)^{\frac{5}{4}} \xi\bigg]. \end{align} Set $\Gamma_{+}:=\Set{\alpha\in(0,\infty) | \gamma'(\alpha)>0}$ and $\Gamma_{-}:=\Set{\alpha\in(0,\infty) | \gamma'(\alpha)\leq0}$. We divide the proof into two cases. \smallskip \textbf{Case I}. {\sl We show \eqref{Seq:4.34} for $\alpha\in\Gamma_{-}$.\ } Fix $\alpha\in\Gamma_{-}$ arbitrarily. Combining \eqref{Seq:1103-6} with \eqref{Seq:4.24}, $\gamma(\alpha)<0$, and $\gamma'(\alpha)\leq0$, we deduce that \begin{align*} \text{ if $c\in(0,\tfrac{1}{2})$ satisfies $\xi(c,\alpha)\leq0$, then $c$ also satisfies $\xi'(c,\alpha)<0$. } \end{align*} Therefore if there is a point $c\in(0,\frac{1}{2})$ such that $\xi(c,\alpha)=0$, then $\xi(x,\alpha)<0$ holds for $x\in(c,\frac{1}{2}]$, which contradicts $\xi(\frac{1}{2},\alpha)=0$. Therefore we may assume that $\xi(x,\alpha)\ne0$ for $x\in[0,\frac{1}{2})$. This together with $\xi(0,\alpha)=G'(\alpha)=(1+\alpha^2)^{\frac{5}{4}}>0$ implies that \begin{align}\label{Seq:1103-7} \xi(x,\alpha)\geq 0 \quad \text{for}\quad 0\leq x \leq \frac{1}{2}. \end{align} On the other hand, it follows from $u(0;\alpha)=0$ that \begin{align*} u\Big(\frac{1}{2};\alpha\Big) = \int_0^{\frac{1}{2}}u'(x;\alpha) \,dx = \int_0^{\frac{1}{2}}G^{-1}(y(x;\alpha)) \,dx, \end{align*} which in combination with \eqref{Seq:1103-7} gives \begin{align}\label{Seq:4.35} \frac{d }{d \alpha} \left[u\Big(\frac{1}{2};\alpha\Big)\right] =\int_0^{\frac{1}{2}}\big(1+G^{-1}(y)^2\big)^{\frac{5}{4}}\xi(x,\alpha) \,dx \geq0. \end{align} The last equality does not hold due to $\xi(0,\alpha)>0$. Therefore we obtain \eqref{Seq:4.34} for $\alpha\in\Gamma_{-}$. \smallskip \textbf{Case II}. {\sl Show \eqref{Seq:4.34} for $\alpha\in\Gamma_{+}$.\ } Since $y(\frac{1}{2};\alpha)=0$ and $\xi(\frac{1}{2},\alpha)=0$ hold for all $\alpha>0$, substituting $x={1}/{2}$ into \eqref{Seq:1103-6}, we obtain \begin{align}\label{Seq:1103-8} 2y'\Big(\frac{1}{2};\alpha\Big)\cdot\xi'\Big(\frac{1}{2},\alpha\Big) =-2\Big( \alpha\gamma'(\alpha)+\gamma(\alpha) \Big). \end{align} To continue, we distinguish two subcases: \smallskip (i) {\sl We consider $\alpha\in\Gamma_{+}$ satisfying $\alpha\gamma'(\alpha)+\gamma(\alpha) \leq0$. } Then such $\alpha$ satisfies \begin{align*} \xi'\Big(\frac{1}{2},\alpha\Big)\leq 0, \end{align*} where we used \eqref{Seq:1103-8} and $y'(\frac{1}{2};\alpha)<0$. 
Here, combining straightforward calculations with \eqref{Sy_a} and \eqref{Seq:1103-1}, we obtain \begin{align}\label{Seq:1104-1} \Big(\xi'y'-\xi\gamma(\alpha)f(y) \Big)' =\gamma'(\alpha)f(y)y' \quad \text{in}\quad (0,\tfrac{1}{2}) \end{align} for each $\alpha>0$. Assume that \[\xi (x_0,\bar{\alpha}) <0 \quad \text{for some} \quad 0< x_0 <\frac{1}{2} \] holds for some $\bar{\alpha}\in\Gamma_{+}$ with $\bar{\alpha}\gamma'(\bar{\alpha})+\gamma(\bar{\alpha}) \leq0$. This together with $\xi(\frac{1}{2},\bar{\alpha})=0$ implies that there exists $x_1\in(0,\frac{1}{2})$ satisfying $\xi(x_1,\bar{\alpha})<0$ and $\xi'(x_1,\bar{\alpha})=0$. Then, integrating \eqref{Seq:1104-1} on $(x_1,\frac{1}{2})$, we have \[ \xi'\Big(\frac{1}{2},\bar{\alpha}\Big) y'\Big(\frac{1}{2};\bar{\alpha}\Big) +\xi(x_1,\bar{\alpha})\gamma(\bar{\alpha})f(y(x_1;\bar{\alpha})) =\gamma'(\bar{\alpha})\int_{x_1}^{\frac{1}{2}}f(y)y'\,dx. \] However, the left-hand side takes a non-negative value while the right-hand side is negative, which is impossible. Hence it holds that \[ \xi(x,\alpha)\geq0\quad\text{for}\quad x\in\Big(0,\frac{1}{2}\Big)\] and for $\alpha\in\Gamma_{+}$ with $\alpha\gamma'(\alpha)+\gamma(\alpha)\leq0$. Similarly to \eqref{Seq:4.35}, we obtain \eqref{Seq:4.34}. \smallskip (ii) {\sl We assume that $\alpha\in\Gamma_{+}$ satisfies $\alpha\gamma'(\alpha)+\gamma(\alpha) >0$. } The assumption implies that $(\alpha\gamma(\alpha))'>0$. Let $I(\alpha)$ and $J(\alpha)$ be given by \eqref{Seq:1104-3}. Then, by the identity \[ \alpha\gamma(\alpha)= -2\left(\sqrt{\alpha}\int_0^{\alpha}\frac{1}{\sqrt{\alpha-t}}\frac{\!\,dt}{(1+t^2)^{\frac{5}{4}}}\right)^2 = -2I(\alpha)^2, \] the inequality $\alpha\gamma'(\alpha)+\gamma(\alpha) >0$ gives $I'(\alpha)<0$. Since $u(\frac{1}{2};\alpha)=J(\alpha)/(2I(\alpha))$, we have \begin{align*} \frac{d }{d \alpha} \left[u\Big(\frac{1}{2};\alpha\Big)\right] = \frac{J'(\alpha)I(\alpha)-J(\alpha)I'(\alpha)}{2I(\alpha)^2} >\frac{J'(\alpha)I(\alpha)}{2I(\alpha)^2}, \end{align*} where in the last inequality we used $I'(\alpha)<0$ and $J(\alpha)>0$. Combining this with Lemma~\ref{Slem:4.3}, we obtain \eqref{Seq:4.34}. We complete the proof. \end{proof} \begin{proposition} \label{Sprop:4.4} Let $\alpha>0$ and $u(x;\alpha)$ be the solution of \eqref{Seq:1.1} with \eqref{Su_*}. Then \begin{align}\label{Seq:4.26} u\Big(\frac{1}{2};\alpha\Big) \to c_* \quad \text{as}\quad \alpha\to\infty, \end{align} where $c_*$ is the constant given by \eqref{Seq:c_*}. \end{proposition} \begin{proof} Let $\alpha\gg1$ and $I(\alpha)$ be given by \eqref{Seq:1104-3}. Set \begin{align*} I(\alpha)&= \int_0^{\alpha^{{3}/{4}}}\frac{1}{\sqrt{1-s/\alpha}}\frac{1}{(1+s^2)^{\frac{5}{4}}}\,ds + \int_{\alpha^{{3}/{4}}}^{\alpha} \frac{1}{\sqrt{1-s/\alpha}}\frac{1}{(1+s^2)^{\frac{5}{4}}}\,ds \\ & =:I_1(\alpha) + I_2(\alpha). \end{align*} Since $(1+s^2)^{-\frac{5}{4}}$ is integrable on $(0,\infty)$, we obtain \[ I_1(\alpha) =\int_0^{\infty}\chi_{(0,\alpha^{{3}/{4}})} \frac{1}{\sqrt{1-s/\alpha}}\frac{1}{(1+s^2)^{\frac{5}{4}}} \,ds\to \int_0^{\infty}\frac{1}{(1+s^2)^{\frac{5}{4}}}\,ds \quad\text{as}\quad \alpha\to\infty. \] On the other hand, we observe that \begin{align*} |I_2(\alpha)|&\leq \frac{1}{(1+\alpha^{\frac{3}{2}})^{\frac{5}{4}}}\int_{\alpha^{{3}/{4}}}^{\alpha} \frac{1}{\sqrt{1-s/\alpha}}\,ds =\frac{1}{(1+\alpha^{\frac{3}{2}})^{\frac{5}{4}}}\cdot2\alpha\sqrt{1-\alpha^{-\frac{1}{4}}} \to 0 \end{align*} as $\alpha\to\infty$.
Therefore we obtain \begin{align} \label{Seq:4.30} I(\alpha) \to \int_0^{\infty}\frac{1}{(1+s^2)^{\frac{5}{4}}}\,ds=\frac{c_0}{2} \quad \text{as}\quad \alpha\to\infty. \end{align} By the same argument we have \begin{align} \label{Seq:4.33} J(\alpha) \to \int_0^{\infty}\frac{x}{(1+x^2)^{\frac{5}{4}}}\,dx =2 \quad\text{as}\quad\alpha\to\infty. \end{align} Combining \eqref{Seq:4.31} with \eqref{Seq:4.30} and \eqref{Seq:4.33}, we obtain \eqref{Seq:4.26}. \end{proof} In fact, we can obtain another characterization of the limit of $u(x;\alpha)$ (see Appendix~\ref{Ssubsec:4.3}). \begin{remark} Even if $u(x;\alpha)$ exists in $x\geq1$, $u$ never satisfies $u(1;\alpha)=0$ for any $\alpha>0$ since the solution of \eqref{Seq:1.1} with $u(0)=u''(0)=u(1)=u'(\frac{1}{2})=0$ is limited to $u\equiv0$. Therefore $u(x;\alpha)$ is far different from the function obtained in \cite{DG_07}. For the comparison, see Figure~\ref{Sfig:3}: one is $u(x;\frac{1}{2})$; the other is the solution of \eqref{Seq:1.1} with $u(0)=u(1)=0$ and $u'(0)=\frac{1}{2}$, and both of them satisfy $u'(0)=\frac{1}{2}$. \end{remark} \begin{figure}[h] \centering \includegraphics[width=4.2cm]{Fig_3} \caption{$u(x;0.5)$ has a singularity.\label{Sfig:3}} \end{figure} \subsection{Proof of Theorem~\ref{Sthm:1.2}}\label{Ssubsec:4.2} In this subsection, we turn to problem \eqref{Seq:P}. The results in Subsection~\ref{Ssubsec:4.2} also hold for \eqref{Seq:M} since the same argument is applicable. \begin{proof}[Proof of Theorem~\ref{Sthm:1.2}] \textsl{We first show uniqueness and existence.} Assume that $\psi(\frac{1}{2}) < c_*$. Then it follows from Proposition~\ref{Sprop:2.1} that \eqref{Seq:M} has a solution. As mentioned earlier, solutions of \eqref{Seq:M} also solve \eqref{Seq:P}. Namely, a minimizer of $\mathcal{W}$ in $M_{\rm sym}$ exists and it is a solution of \eqref{Seq:P}. We show the uniqueness of solutions of \eqref{Seq:P}. By the argument in Section~\ref{Ssection:3}, every solution $u$ of \eqref{Seq:P} satisfies \eqref{SBVP} on $[0,\frac{1}{2}]$. For $H:=\psi(\frac{1}{2})\in(0,c_*)$, there exists $\tilde{\alpha}=\tilde{\alpha}(H)>0$ such that \[ u\Big(\frac{1}{2};\tilde{\alpha}\Big)=H. \] We infer from Proposition~\ref{Sprop:4.3} that such $\tilde{\alpha}$ is unique, so we obtain the conclusion. \smallskip \textsl{Next we show non-existence if $\psi(\tfrac{1}{2})\geq c_*$.} Suppose that \eqref{Seq:P} has a solution $u$ under the assumption $\psi(\frac{1}{2}) \geq c_*$. Then by the argument in Section~\ref{Ssection:3}, $u$ satisfies \eqref{SBVP} on $[0,\frac{1}{2}]$. However, Proposition~\ref{Sprop:4.4} implies that $u(\frac{1}{2};\alpha)$ cannot reach $c_*$ for any $\alpha:=u'(0)$, which contradicts our assumption. Therefore if $\psi(\frac{1}{2})\geq c_*$, \eqref{Seq:P} does not have a solution. \smallskip \textsl{We discuss the regularity.} We assume that $\psi(\frac{1}{2}) < c_*$ and then \eqref{Seq:P} has a unique solution $u$. Set $u'(0)=:\alpha>0$. Then $u$ satisfies $u|_{[0,1/2]}=u(\cdot;\alpha)|_{[0,1/2]}$, where $u(\cdot;\alpha)$ is the solution of \eqref{Seq:1.1} with \eqref{Su_*}. We infer from \eqref{Seq:4.04} that \[ \lim_{x \nearrow \frac{1}{2} } u'''(x;\alpha) = \frac{\beta_*(\alpha)}{(1+\alpha^2)^{\frac{5}{4}}}<0 . \] However, if $u\in C^3(0,1)$, by symmetry $u'''(\tfrac{1}{2})=0$ must hold. This contradicts our assumption. \end{proof} \begin{remark} We refer to \cite{CF_79} as an example of the loss of regularity induced by obstacle. 
They considered a linear fourth order obstacle problem: \begin{align}\label{SYeq:1027-2} \text{find}\quad u\in \mathcal{A} \quad \text{such that}\quad \int_{\Omega}\Delta u\Delta(v-u)\,dx\geq0 \quad \text{for} \quad v\in \mathcal{A}, \end{align} where $\Omega\subset\mathbb{R}^N$ is a bounded domain and \[ \mathcal{A} :=\Set{v\in H^2_0(\Omega) | v\geq\psi \ \ \text{a.e.\ in}\ \ \Omega }.\] It is shown that there cannot exist an a priori $W^{3,p}(K)$ estimate on the solution of \eqref{SYeq:1027-2} for $p>N$ and for any compact subdomain $K\subset\subset\Omega$ (see \cite[Section 7]{CF_79}). \end{remark} \section{Application to the parabolic problem}\label{Ssection:5} Dynamical approaches are also useful for studying variational problems. In this section we show that the solution of \eqref{Seq:M} obtained in Theorem~\ref{Sthm:1.1} can be characterized as the equilibrium of the corresponding parabolic problem: \begin{align} \label{Seq:GF} \tag{GF} \begin{cases} \partial_{t}u + \nabla\mathcal{W}(u) \geq 0 & \quad \text{in} \quad (0,1)\times(0,T),\\ \partial_{t}u + \nabla\mathcal{W}(u) = 0 & \quad \text{in} \quad \{ (x,t)\in(0,1)\times(0,T)\ |\ u>\psi\},\\ u \geq \psi & \quad \text{in} \quad (0,1)\times(0,T),\\ u = u '' = 0 & \quad \text{on} \quad \{0,1\}\times(0,T),\\ u(\cdot,0)=u_{0}(\cdot) & \quad \text{in} \quad (0,1), \end{cases} \end{align} where $\nabla\mathcal{W}(u)$ is the Euler--Lagrange operator of $\mathcal{W}(u)$, i.e., \begin{align*} \int_0^1\nabla\mathcal{W}(u)\cdot\varphi\,dx&:=\frac{d}{d\varepsilon} \mathcal{W}(u+\varepsilon\varphi)\big|_{\varepsilon=0}\\ &\ =\int_0^1 \bigg[\Big(2\frac{u''}{(1+|u'|^2)^{\frac{5}{2}}}\Big)''+5\Big(\frac{|u''|^2u'}{(1+|u'|^2)^{\frac{7}{2}}}\Big)'\bigg]\varphi\,dx \end{align*} for $\varphi\in C^{\infty}_{\rm c}(0,1)$. If there is no obstacle, it is a standard matter to obtain global solvability and asymptotic stability for the $L^2$-gradient flow with initial data sufficiently close to a stable equilibrium (see e.g.\ \cite{DG_09, NO_2017}). However, since the solution of \eqref{Seq:M} and the solution of \eqref{Seq:GF} are not in general regular, the standard argument does not work. In particular, \eqref{Seq:1.2} implies that \textit{the solution of \eqref{Seq:GF} never converges to the solution of \eqref{Seq:M} in the $C^{\infty}$-topology}. A new ingredient of the following results is that \textit{we obtain global existence and asymptotic behavior in the full limit sense even though the solution of \eqref{Seq:M} does not belong to $C^3(0,1)$}. Let $u_0$ satisfy \begin{align}\label{Seq:5.1} u_0\in H(0,1), \quad u_0(x)\geq \psi(x) \quad \text{for}\quad x\in [0,1]. \end{align} For $T>0$, we define the convex set $\mathcal{K}_{T}$ by \begin{align*} \mathcal{K}_T := \Set{ v \in L^{\infty}(0,T;H(0,1))\cap H^1(0,T;L^2(0,1)) | \begin{array}{l} v \geq \psi \,\,\, \text{in} \,\,\, (0,1) \times [0,T], \\ v|_{t=0} = u_{0} \,\,\, \text{in}\,\,(0,1) \end{array} }, \end{align*} where $H(0,1)$ is the Hilbert space $H(0,1)=H^2(0,1)\cap H^1_0(0,1)$ equipped with the scalar product $$ (u,v)_{H(0,1)} := \int_0^1 u''v'' \, dx \quad \text{for}\quad u, v \in H(0,1). $$ In this paper we employ the norm $\|\cdot\|_{H(0,1)}$ on $H(0,1)$ given by $$ \|u\|_{H(0,1)} := (u,u)_{H(0,1)}^{1/2} \quad\text{for}\quad u \in H(0,1), $$ which is equivalent to $\|\cdot\|_{H^2(0,1)}$. In fact, there exist $c_H,C_H>0$ such that \begin{align*} c_H\|u\|_{H(0,1)}\leq \|u\|_{H^2(0,1)} \leq C_H\|u\|_{H(0,1)} \end{align*} (see e.g.\ \cite[Theorem 2.31]{GGS_2010}).
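For readers who wish to experiment numerically with \eqref{Seq:GF}, the objects entering the definition of $\mathcal{K}_T$ are straightforward to discretize. The following Python sketch (purely illustrative, with an arbitrarily chosen sample function and obstacle; it plays no role in the analysis) evaluates the bending energy $\mathcal{W}(v)$ of a sampled graph by finite differences and tests the constraint $v\geq\psi$.
\begin{verbatim}
# Illustrative sketch: finite-difference evaluation of
# W(v) = \int_0^1 v''^2 / (1 + v'^2)^{5/2} dx and the admissibility test v >= psi.
import numpy as np

def bending_energy(v, h):
    vp = np.gradient(v, h)                  # approximate v'
    vpp = np.gradient(vp, h)                # approximate v''
    return float(np.sum(vpp**2 / (1.0 + vp**2) ** 2.5) * h)  # simple Riemann sum

def admissible(v, psi):
    return bool(np.all(v >= psi))

x = np.linspace(0.0, 1.0, 401)
h = x[1] - x[0]
v = 0.5 * np.sin(np.pi * x)                 # sample symmetric graph with v(0) = v(1) = 0
psi = np.where(x <= 0.5, -0.1 + 0.5 * x, -0.1 + 0.5 * (1.0 - x))  # a cone-type obstacle
print(bending_energy(v, h), admissible(v, psi))
\end{verbatim}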
We formulate the definition of solutions to \eqref{Seq:GF} as follows. \begin{definition} We say that $u$ is a weak solution to \eqref{Seq:GF} in $(0,1) \times [0,T]$ if the following hold$\colon$ \begin{enumerate} \item[{\rm (i)}] $u \in \mathcal{K}_T;$ \item[{\rm (ii)}] For any $v\in\mathcal{K}_T$ it holds that \begin{align} \label{Seq:1204-1} \int_0^T\!\!\int_0^1\bigg[ \partial_{t} u (v-u) &+2\frac{u''(v-u)''}{(1+(u')^2)^{\frac{5}{2}}} -5 \frac{|u''|^2 u'(v-u)'} {(1+(u')^2)^{\frac{7}{2}}} \bigg]\,dx\!\,dt \geq 0. \end{align} \end{enumerate} \end{definition} We are now ready to state a local-in-time existence and uniqueness result proved in \cite{OY_2019}. \begin{proposition}[Local-in-time existence and uniqueness, \cite{OY_2019}] \label{Sthm:5.1} Let $\psi$ satisfy \begin{align}\label{Seq:1216-1} \psi\in C([0,1]), \quad \psi(0)<0, \quad \psi(1)<0 \end{align} and $u_0$ satisfy \eqref{Seq:5.1}. Then there exists a constant $T=T(u_0)>0$ such that \eqref{Seq:GF} has a unique weak solution in $(0,1)\times[0,T]$. Moreover, $u(\cdot,t) \in H^2(0,1)$ for all $0\leq t \leq T$ and it holds that \begin{align}\label{Seq:5.02} \mathcal{W}(u(t_2))-\mathcal{W}(u(t_1)) \leq -\frac{1}{2}\int_{t_1}^{t_2}\!\!\int_0^1 |\partial_t u|^2\,dx\!\,dt \quad \text{for all} \quad 0\leq t_1 \leq t_2 \leq T. \end{align} \end{proposition} \begin{remark}\label{Srem:5.1} Let $u$ be the weak solution $u$ to \eqref{Seq:GF} in $(0,1)\times[0,T]$. If there exists $M>0$ such that \[ \sup_{0\leq t \leq T}\|u'(\cdot, t)\|_{L^{\infty}(0,1)} \leq M, \] then $u$ can be uniquely extended to the solution in $(0,1)\times[0,T+L(u_0)]$, where \begin{align*} 0< L(u_0):= c_H\left(1+M^2\right)^{\frac{5}{2}} \mathcal{W}(u_0)^{\frac{1}{2}}. \end{align*} This extension is justified by solving \eqref{Seq:GF} with the initial datum $u(T)$ (more precisely, see \cite[proof of Theorem 1.1]{OY_2019}). Namely, it is important to deduce a uniform estimate for $u$ on $\dot{W}^{1,\infty}(0,1)$ to extend the time of existence of the solution. \end{remark} In order to discuss the asymptotic behavior of the solution $u$ of \eqref{Seq:GF}, we prepare two lemmas. \begin{lemma}[Preservation of symmetry] \label{Slem:5.1} Let $\psi$ be a symmetric function satisfying \eqref{Seq:1216-1} and $u$ be the solution of \eqref{Seq:GF} in $(0,1)\times[0,T]$ with the initial datum $ u_0 \in M_{\rm sym}. $ Then $u(\cdot, t)\in M_{\rm sym}$ for $0\leq t \leq T$. \end{lemma} \begin{proof} Let $u$ be a weak solution of \eqref{Seq:GF} in $(0,1)\times[0,T]$ and $v\in \mathcal{K}_T$ be arbitrary. Setting $\tilde{u}(x,t):=u(1-x,t)$ and $\tilde{v}(x,t):=v(1-x,t)$, we find that $\tilde{u}$, $\tilde{v}\in\mathcal{K}_T$ by symmetry of $\psi$ and $u_0$. Then \begin{align*} \int_0^T\!\!\int_0^1\bigg[ \partial_{t} \tilde{u}& (v-\tilde{u}) +2\frac{\tilde{u}''(v-\tilde{u})''}{(1+(\tilde{u}')^2)^{\frac{5}{2}}} -5 \frac{|\tilde{u}''|^2 \tilde{u}'(v-\tilde{u})'} {(1+(\tilde{u}')^2)^{\frac{7}{2}}} \bigg]\,dx\!\,dt \\ &= \int_0^T\!\!\int_1^0-\bigg[ \partial_{t} u (\tilde{v}-u) +2\frac{u''(\tilde{v}-u)''}{(1+(u')^2)^{\frac{5}{2}}} -5 \frac{|u''|^2 u'(\tilde{v}-u)'} {(1+(u')^2)^{\frac{7}{2}}} \bigg]\,dx\!\,dt\geq0, \end{align*} where in the last inequality we used \eqref{Seq:1204-1} taking $\tilde{v}$ as the test function. It follows from the uniqueness of solutions of \eqref{Seq:GF} that $\tilde{u}=u$ and this implies $u(\cdot,t)\inM_{\rm sym}$ for any $0\leq t \leq T$. 
\end{proof} \begin{lemma} \label{Slem:5.2} Let $\psi$ satisfy \eqref{Seq:1216-1} and $u$ be the solution of \eqref{Seq:GF} in $(0,1)\times[0,T]$ with the initial datum $u_0 \in H(0,1)$ satisfying $\mathcal{W}(u_0)<c_0^2$. Then \begin{align*} \| u'(\cdot,t) \|_{L^{\infty}(0,1)}\leq G^{-1}\left( \frac{\sqrt{\mathcal{W}(u_0)}}{2} \right)=:M^*(u_0) \quad \text{for all} \quad 0\leq t\leq T. \end{align*} \end{lemma} \begin{proof} By Lemma~\ref{Slem:5.1}, $u(\cdot,t)\in M_{\rm sym}$ holds for $0\leq t \leq T$. Fix $x\in[0,\frac{1}{2})$ and $t\in[0,T]$. Then it follows from the definition of $G$ that \begin{align} \label{Seq:2.03} \mathcal{W}(u(t)) \geq \int_x^{1-x} u''(y,t)^2 G'(u'(y,t))^2\,dy. \end{align} By the Cauchy--Schwarz inequality we have \begin{align*} \begin{split} \left| G(u'(x,t))-G(u'(1-x,t)) \right| &=\left| \int_x^{1-x} G'(u'(y,t)) u''(y,t) \,dy \right| \\ &\leq \sqrt{1-2x}\left( \int_x^{1-x} G'(u'(y,t))^2 u''(y,t)^2 \,dy \right)^{\frac{1}{2}} \\ &\leq \sqrt{1-2x}\ \mathcal{W}(u(t))^{\frac{1}{2}}, \end{split} \end{align*} where the last inequality holds by \eqref{Seq:2.03}. By symmetry we have $u'(x,t)=-u'(1-x,t)$. Therefore, since $G$ is an odd function, we obtain $G(u'(1-x,t))=-G(u'(x,t))$, and hence \begin{align*} G(|u'(x,t)|)=|G(u'(x,t))|\leq \frac{1}{2}\mathcal{W}(u(t))^{\frac{1}{2}}. \end{align*} This together with \eqref{Seq:5.02} implies that \begin{align*} \|u'(\cdot,t)\|_{L^{\infty}(0,1)} \leq G^{-1}\left(\frac{\sqrt{\mathcal{W}(u(t))}}{2}\right) \leq G^{-1}\left(\frac{\sqrt{\mathcal{W}(u_0)}}{2}\right), \end{align*} where we used the monotonicity of $G^{-1}$ and $\mathcal{W}(u_0)<c_0^2$. \end{proof} We close this paper with the result on the relationship between \eqref{Seq:M} and \eqref{Seq:GF}: \begin{theorem}[Asymptotic behavior] \label{Sthm:5.2} Let $\psi\in\mathsf{SC}$ and $\psi(\frac{1}{2})< c_*$, where $c_*$ is given by \eqref{Seq:c_*}. Let $u_0\in M_{\rm sym}$ satisfy \eqref{Seq:5.1} and $\mathcal{W}(u_0)<c_0^2$. Then \eqref{Seq:GF} has a unique solution $u$ in $(0,1)\times[0,\infty)$. Moreover, $u$ satisfies \begin{align}\label{Seq:5.7} u(\cdot,t) \to U \quad \text{in}\quad H^2(0,1) \quad \text{as} \quad t\to\infty, \end{align} where $U$ is the solution obtained by Theorem~\ref{Sthm:1.1}. \end{theorem} \begin{remark}\label{Srem:5.2} As used in Lemma~\ref{Slem:5.2}, the assumption $\mathcal{W}(u_0)<c_0^2$ plays an important role in the uniform bound of $\|u'(\cdot,t)\|_{L^{\infty}(0,1)}$ (see \cite[Lemma 2.4]{DD} and \cite[Corollary 5.16]{Muller20}). Therefore, regardless of whether $\psi(\frac{1}{2})<c_*$ holds, we obtain the global-in-time existence result under the assumption that $\mathcal{W}(u_0)<c_0^2$. \end{remark} \begin{proof}[Proof of Theorem~\ref{Sthm:5.2}] \textsl{We first show the global-in-time existence.} Let $u$ be the solution in $(0,1)\times[0,T]$. Set \[ L^* := c_H\left(1+M^*(u_0)^2\right)^{\frac{5}{2}} \mathcal{W}(u_0)^{\frac{1}{2}}. \] Since Lemma~\ref{Slem:5.2} gives the uniform estimate of $\|u'(\cdot,t)\|_{L^{\infty}(0,1)}$, we can extend the solution $u$ to $(0,1)\times[0,T+L^*]$ by considering \eqref{Seq:GF} with the initial datum $u(T)$, as mentioned in Remark~\ref{Srem:5.1}. Then by Lemma~\ref{Slem:5.2} it holds that \begin{align*} \| u'(\cdot,t)\|_{L^{\infty}(0,1)} \leq G^{-1}\left(\frac{\sqrt{\mathcal{W}(u(T))}}{2} \right) \quad \text{for all}\quad T\leq t\leq T+L^*.
\end{align*} This together with \eqref{Seq:5.02} gives \begin{align*} \| u'(\cdot,t)\|_{L^{\infty}(0,1)} \leq G^{-1}\left( \frac{\sqrt{\mathcal{W}(u_0)}}{2} \right)=M^*(u_0) \quad \text{for all}\quad 0\leq t\leq T+L^*. \end{align*} By solving \eqref{Seq:GF} with the initial datum $u(T+L^*)$, and by following the same argument as above, we can extend the solution $u(x,t)$ to $t=T+2L^*$ and it follows from \eqref{Seq:5.02} that \[ \mathcal{W}(u(T+2L^*)) \leq \mathcal{W}(u(T+L^*)) \leq \mathcal{W}(u_0), \] which yields \begin{align*} \| u'(\cdot,t)\|_{L^{\infty}(0,1)} \leq M^*(u_0) \quad \text{for all}\quad T+L^*\leq t\leq T+2L^*. \end{align*} Repeating this argument, we can extend the solution to an arbitrary time and find that $u$ satisfies \begin{align}\label{Seq:5.8} \| u'(\cdot,t)\|_{L^{\infty}(0,1)} \leq G^{-1}\left( \frac{\sqrt{\mathcal{W}(u_0)}}{2} \right) \quad \text{for} \quad 0\leq t<\infty. \end{align} Thus the solution $u$ of \eqref{Seq:GF} with the initial datum $u_0$ can be extended to $(0,1)\times[0,\infty)$ and satisfies \begin{align} \label{Seq:5.06} \int_0^{\infty}\!\!\int_0^1\Bigl[ \partial_{t} u (v-u) &+2\frac{u''(v-u)''}{(1+(u')^2)^{\frac{5}{2}}} -5 \frac{|u''|^2 u'(v-u)'} {(1+(u')^2)^{\frac{7}{2}}} \Bigr]\,dx\!\,dt \ge 0 \end{align} for $v\in \mathcal{K}_{\infty}$. Moreover, since the energy monotonicity \eqref{Seq:5.02} implies that \begin{align*} \int_0^1 \frac{|u''(x,t)|^2}{(1+M^*(u_0)^2)^{\frac{5}{2}}}\,dx \leq \mathcal{W}(u(t)) \leq \mathcal{W}(u_0) \quad \text{for} \quad 0\leq t<\infty, \end{align*} we obtain the $H^{2}$-uniform boundedness of $u(\cdot,t)$, that is, \begin{align}\label{Seq:5.9} \|u''(\cdot,t)\|^2_{L^2(0,1)} \leq (1+M^*(u_0)^2)^{\frac{5}{2}}\mathcal{W}(u_0) \quad \text{for} \quad 0\leq t<\infty. \end{align} Furthermore, \eqref{Seq:5.8} and \eqref{Seq:5.9} clearly imply that $\|u(\cdot,t)\|_{H^2(0,1)}$ is uniformly bounded. \smallskip \textsl{Next we prepare the $\omega$-limit set.} We will prove this with the help of the argument used in \cite[Section 3.1]{KSW}. Let $u$ be the solution in $(0,1)\times[0,\infty)$ with the initial datum $u_0$. To begin with, we define the $\omega$-limit set by \[ \omega(u_0):=\Set{w \in C^1([0,1]) | u(t_k) \to w \text{ in } C^1([0,1]) \text{ for some sequence } t_k \to \infty } \] and we shall show that \begin{align}\label{Seq:5.08} \omega(u_0)=\{ U\}, \end{align} where $U$ is the unique solution of \eqref{Seq:P}. Since every $w\in \omega(u_0)$ satisfies \[ w\geq\psi \quad \text{in}\quad [0,1], \] in order to obtain \eqref{Seq:5.08} it is sufficient to show that $w$ satisfies \eqref{Seq:1.4}. Fix $w\in\omega(u_0)$ arbitrarily. Then there exists a sequence $\{t_k\}$ such that $u(\cdot,t_k)\to w$ in $L^2(0,1)$ with $1<t_k \to \infty$. Define a sequence of functions $\{u_k\}$ by \[ u_k(x,\tau):= u(x, t_k+\tau), \quad (x,\tau) \in (0,1)\times (-1,1). \] Since \eqref{Seq:5.02} and the first step imply that $\partial_t u \in L^2((0,1)\times(0,\infty))$, it holds that \begin{align}\label{Seq:5.07} \int_{-1}^1 \int_0^1 |\partial_t u_k|^2 \,dx\!\,d\tau &= \int_{t_k-1}^{t_k+1}\!\int_0^1 |\partial_t u|^2 \,dx\!\,dt \to 0 \quad \text{as}\quad k\to\infty.
\end{align} Therefore, by \eqref{Seq:5.07} and the Cauchy--Schwarz inequality we have \begin{align*} \int_{-1}^1\int_0^1 |u(x,t_k+\tau)&-u(x,t_k)|^2\,dx\!\,d\tau \\ &=\int_0^1\int_{-1}^1 \left|\int_{t_k}^{t_k+\tau}\partial_t u(x,s)\,ds\right|^2\,d\tau\!\,dx \\ &\leq \int_0^1\int_{t_k-1}^{\infty} \left|\partial_t u(x,s) \right|^2\,ds\!\,dx \ \to0 \quad \text{as}\quad k\to\infty, \end{align*} which, together with $u(\cdot,t_k)\to w$ in $L^2(0,1)$, yields \begin{align}\label{Seq:5.10} u_k \to w \quad \text{in}\quad L^2(-1,1;L^2(0,1)). \end{align} Moreover, by the same argument as in \cite[Lemma 4.10]{OY_2019}, $u_k(\cdot,s)\in H^3(0,1)$ for a.e. $s\in(-1,1)$ and \begin{align}\label{Seq:5.100} \int_{-1}^1\|u'''_k(\cdot,s)\|_{L^2(0,1)}^2\,ds \leq C \left( \int_{-1}^1 \|\partial_{t}u_k(\cdot,s)\|^2_{L^2(0,1)}\,ds +1\right) \end{align} holds and hence $\{u_k\}$ is uniformly bounded in $L^2(-1,1;H^3(0,1))$. By the Aubin--Lions--Simon compactness theorem, we find that $w \in H^2(0,1)$ and there exists a subsequence, which we still denote by $\{u_k\}$, such that \begin{align}\label{Seq:5.11} u_k \to w \quad \text{in}\quad L^2(-1,1;H^2(0,1)) \quad\text{as}\quad k\to\infty. \end{align} Next, we show that $w$ satisfies \eqref{Seq:1.4}. Fix $V\in M$ arbitrarily and take a function $\zeta\in C^{\infty}_{\rm c}(-1,1)$ satisfying $0\leq\zeta\leq 1$ and $\zeta\not\equiv0$. Since $u(x,t)+\zeta(t-t_k)\big( V(x)-u(x,t) \big) $ belongs to $\mathcal{K}_{\infty}$, using this as a test function in \eqref{Seq:5.06} we have \begin{align*} \int_{-1}^1\int_0^1 \bigg[ \partial_t u_k (V-u_k) +2\frac{u_k''(V-u_k)''}{(1+(u_k')^2)^{\frac{5}{2}}} -5 \frac{|u_k''|^2 u_k'(V-u_k)'} {(1+(u_k')^2)^{\frac{7}{2}}} \bigg]\zeta \,dx\!\,d\tau\geq 0, \end{align*} where we used the change of variables $\tau = t-t_k$. Combining this with \eqref{Seq:5.07} and \eqref{Seq:5.11} and letting $k\to\infty$, we obtain \begin{align*} \int_{-1}^1 \zeta \,d\tau\left(\int_0^1 \bigg[2\frac{w''(V-w)''}{(1+(w')^2)^{\frac{5}{2}}} -5 \frac{|w''|^2 w'(V-w)'} {(1+(w')^2)^{\frac{7}{2}}} \bigg] \,dx\right) \geq 0. \end{align*} Hence, dividing the above by $\int_{-1}^1 \zeta\,d\tau$, we find that $w$ satisfies \eqref{Seq:1.4}, that is, $w$ is a solution of \eqref{Seq:P}. We have already shown in Theorem~\ref{Sthm:1.2} that a solution of \eqref{Seq:P} is unique if $\psi\in\mathsf{SC}$ satisfies $\psi(\frac{1}{2})< c_*$. Therefore we obtain \eqref{Seq:5.08}. \smallskip \textsl{We now show the full-limit convergence.} First we show that \begin{align}\label{Seq:5.12} u(\cdot,t) \to U \quad \text{in} \quad C^1([0,1]) \end{align} as $t\to\infty$. Suppose that \eqref{Seq:5.12} does not hold. Then there exist $\epsilon>0$ and a sequence $t_j\to\infty$ such that \begin{align}\label{Seq:5.14} \| u(\cdot,t_j) -U\|_{C^1([0,1])} \geq \epsilon. \end{align} However, \eqref{Seq:5.9} implies that $\{u(\cdot,t_j)\}$ is bounded in $H^2(0,1)$ and hence by the Rellich--Kondrachov compactness theorem there exist $\{t_{j_k}\} \subset \{t_{j}\}$ and $\bar{u}\in H^2(0,1)$ such that \[ u(\cdot,t_{j_k}) \to \bar{u} \quad \text{in}\quad C^1([0,1]) \] as $k\to\infty$. Since \eqref{Seq:5.08} yields $\bar{u}=U$, this contradicts \eqref{Seq:5.14}. Thus we obtain \eqref{Seq:5.12}. Next we prove that \begin{align}\label{Seq:5.16} \frac{u''(\cdot,t)}{(1+u'(\cdot,t)^2)^{\frac{5}{4}}} \rightharpoonup \frac{U''}{(1+(U')^2)^{\frac{5}{4}}} \quad \text{weakly in} \quad L^2(0,1) \end{align} as $t\to\infty$.
Using $G$, given by \eqref{Sdef-G}, we see that \[G\big(u'(x,t)\big)'=\frac{u''(x,t)}{(1+u'(x,t)^2)^{\frac{5}{4}}}, \quad G\big(U'\big)'=\frac{U''}{(1+(U')^2)^{\frac{5}{4}}}. \] For any $\phi\in C^1_{\rm c}(0,1)$, it holds that \begin{align*} \int_0^1\bigg[\frac{u''(x,t)}{(1+u'(x,t)^2)^{\frac{5}{4}}} - \frac{U''(x)}{(1+U'(x)^2)^{\frac{5}{4}}} \bigg]\phi\,dx &=-\int_0^1\bigg[G\big(u'(\cdot,t)\big) - G\big(U'\big) \bigg]\phi'\,dx\\ &\to 0 \quad \text{as}\quad t\to\infty, \end{align*} where we used the continuity of $G$ and \eqref{Seq:5.12}. Therefore we obtain \eqref{Seq:5.16}. Finally, we show that \begin{align}\label{Seq:5.17} \int_0^1\bigg(\frac{u''(\cdot,t)}{(1+u'(\cdot,t)^2)^{\frac{5}{4}}}\bigg) ^2\,dx \to \int_0^1\bigg( \frac{U''}{(1+(U')^2)^{\frac{5}{4}}} \bigg) ^2\,dx \end{align} as $t\to\infty$. We can regard the left-hand side of \eqref{Seq:5.17} as $\mathcal{W}(u(t))$. Since \eqref{Seq:5.02} implies that $\mathcal{W}(u(t))$ is non-increasing with respect to $t$, there exists $A\in\mathbb{R}$ such that \begin{align} \label{Seq:5.18} A=\inf_{t>0}\mathcal{W}(u(t))=\lim_{t\to\infty}\mathcal{W}(u(t)). \end{align} Suppose that $\mathcal{W}(U)<A$. Arguing as in \cite[proof of Lemma 6.3]{OY_2019}, we can construct $\{t_j\}_{j\in\mathbb{N}}$ satisfying \[ u(\cdot,t_j) \to U \quad \text{in}\quad H^2(0,1). \] This together with \eqref{Seq:5.18} implies that \[ A=\lim_{t\to\infty}\mathcal{W}(u(t)) =\lim_{j\to\infty}\mathcal{W}(u(t_j)) =\mathcal{W}(U), \] which contradicts $\mathcal{W}(U)<A$ and we obtain $\mathcal{W}(U)\geq A$. Since $U$ is the unique minimizer of $\mathcal{W}$ in $M_{\rm sym}$ and $u(\cdot,t)\in M_{\rm sym}$ for every $t$ by Lemma~\ref{Slem:5.1}, we have $\mathcal{W}(U)\leq\mathcal{W}(u(t))$ for all $t$, so that $\mathcal{W}(U)> A$ does not occur. Therefore $\mathcal{W}(U)=A$, which in combination with \eqref{Seq:5.18} gives \[ \lim_{t\to\infty}\mathcal{W}(u(t)) =\mathcal{W}(U). \] This proves \eqref{Seq:5.17}. It follows from \eqref{Seq:5.16} and \eqref{Seq:5.17} that \[ \frac{u''(\cdot,t)}{(1+u'(\cdot,t)^2)^{\frac{5}{4}}} \to \frac{U''}{(1+(U')^2)^{\frac{5}{4}}} \quad \text{in} \quad L^2(0,1) \] as $t\to\infty$. This together with \eqref{Seq:5.12} implies that \eqref{Seq:5.7} holds. \end{proof}
\section{Introduction} The existence of Goldstone boson(s) is a manifestation of the spontaneous breakdown of some exact or nearly-exact global continuous symmetry in Nature \cite{goldstone}. Such Goldstone or pseudo-Goldstone bosons would be exactly or nearly massless. The well known example in the standard model (SM) is the pion, which in the modern view can be interpreted as the Goldstone boson of the spontaneous breakdown of the chiral $SU(2) \times SU(2)$ symmetry. Another logical possibility is the presence of a global hidden symmetry that the usual SM particles do not experience. The simplest choice is a global hidden $U(1)$ symmetry associated with a new quantum number $W$, under which all the hidden particles carry non-vanishing $W$ charges while all SM particles are neutral. Weinberg \cite{weinberg} showed that such a simple extension to the SM could bring the Goldstone boson into weak interactions with the SM particles via a Higgs portal, $g (S^\dagger S)(\Phi^\dagger \Phi)$, where $S(x)$ is a complex singlet scalar field neutral under the SM symmetries with a nonzero $W$ quantum number, and $\Phi$ is the SM Higgs doublet with $W=0$. Thus, the Goldstone bosons could remain in thermal equilibrium in the early Universe until they went out of equilibrium at a temperature above but not much above the muon mass. In this way, the Goldstone boson could contribute a fraction of $0.39$ to the effective number $N_{eff}$ of neutrino species present in the era before recombination \cite{weinberg}. The requirement for this to happen is that the interactions of the Goldstone boson with the SM particles should be strong enough to bring it into thermal equilibrium and also weak enough that it decouples close to the neutrino-decoupling temperature. The derivative couplings of Goldstone bosons can easily satisfy this requirement. There are a number of constraints on the model, namely on the Goldstone boson and the massive scalar $\sigma$ field associated with the Goldstone boson. Since they are weakly coupled to the Higgs boson, they would contribute to the invisible decay width of the Higgs boson \cite{weinberg,Huang:2013oua}. There are a number of other constraints in existing data \cite{Huang:2013oua}, e.g., searches for invisible particles in hadron decays, quarkonium decays, etc. In particular, here we point out that the invisible Higgs search at LEP-II will give the most stringent constraint on the mixing angle. The details will be given in the next section. In this work, besides working out the constraints on the model, we point out that it may be possible to detect the $\sigma$ field and the Goldstone boson of the model at the LHC, via the {\it visible} decay mode of the $\sigma$ field, namely $\sigma \to \pi \pi$, especially when the modulus of the field $S$ takes on a large vacuum expectation value (VEV). This is the main result of this work. We also estimate the event rates at the LHC-8 and LHC-14. Studies of Goldstone bosons at the LHC in other contexts can be found in Ref.~\cite{dedes}. The related dark matter phenomenology has also been studied in Ref.~\cite{luis}. \section{The Model} The model \cite{weinberg} is based on adding to the SM a complex singlet field $S$, which interacts with the SM particles through the Higgs doublet.
The renormalizable Lagrangian density is given by \footnote{ We have normalized the kinetic energy term of a complex scalar field in the canonical form with the coefficient equal to 1.} \begin{equation} \label{1} {\cal L}= (\partial_\mu S^\dagger ) (\partial^\mu S ) + \mu^2 S^\dagger S - \lambda (S^\dagger S)^2 - g (S^\dagger S) (\Phi^\dagger \Phi) + {\cal L}_{\rm sm} \end{equation} where the Higgs sector in the ${\cal L}_{\rm sm}$ is \begin{equation} {\cal L}_{\rm sm} \supset (D_\mu \Phi)^\dagger ( D^\mu \Phi) + \mu^2_{\rm sm} \Phi^\dagger \Phi - \lambda_{\rm sm} (\Phi^\dagger \Phi )^2 \; . \end{equation} To respect the low energy theorem, we follow Ref.~\cite{weinberg} to write $S$ in terms of a radial field $r(x)$ and a Goldstone field $\alpha(x)$ as \begin{equation} S(x) = \frac{1}{\sqrt{2}} \left( \langle r \rangle + r(x) \right) e^{i 2 \alpha(x)} \end{equation} in which the radial field develops a VEV $\langle r \rangle$, around which the field $S$ is expanded. Note that one can always set $\langle \alpha(x) \rangle =0$ by field redefinition. The SM Higgs doublet field $\Phi$ is expanded about the VEV as \begin{equation} \Phi (x) = \frac{1}{\sqrt{2}}\, \left( \begin{array}{c} 0 \\ \langle \phi \rangle + \phi (x) \end{array} \right ) \end{equation} in the unitary gauge, and $\langle \phi \rangle \approx 246$ GeV. Expanding the Lagrangian in Eq.~(\ref{1}) around the VEVs and replacing $\alpha(x) \to \alpha(x)/(2\langle r \rangle )$ in order to achieve a canonical kinetic term of the $\alpha(x)$ field describing a scalar of dimension 1, we obtain \begin{eqnarray} \label{L_W} {\cal L} & \supset & \frac{1}{2} (\partial_\mu r )(\partial^\mu r) + \frac{1}{2} \frac{ (\langle r \rangle + r)^2}{\langle r \rangle^2} (\partial_\mu \alpha )(\partial^\mu \alpha) + \frac{\mu^2}{2} ( \langle r \rangle + r )^2 - \frac{\lambda}{4} ( \langle r \rangle + r )^4 \nonumber \\ &&+ \frac{1}{2} (\partial_\mu \phi )(\partial^\mu \phi) + \frac{\mu_{\rm sm}^2}{2} ( \langle \phi \rangle + \phi )^2 - \frac{\lambda_{\rm sm}}{4} ( \langle \phi \rangle + \phi )^4 \nonumber \\ && - \frac{g}{4} ( \langle r \rangle + r )^2 ( \langle \phi \rangle + \phi )^2 \; . \end{eqnarray} Two tadpole conditions can be written down using $\partial V / \partial r=0$ and $\partial V / \partial \phi=0$ where $V$ is the scalar potential part of Eq.(\ref{L_W}): \begin{eqnarray} \langle \phi \rangle^2 &=& \frac{4 \lambda\mu_{\rm sm}^2 - 2 g \mu^2 } { 4 \lambda \lambda_{\rm sm} - g^2 } \;, \\ \langle r \rangle^2 &=& \frac{4 \lambda_{\rm sm}\mu^2 -2 g \mu_{\rm sm}^2 } { 4 \lambda \lambda_{\rm sm} - g^2 } \; . \end{eqnarray} Taking the decoupling limit $g\to 0$ from the above equations, we recover the SM condition of $\langle \phi \rangle^2 = \mu_{\rm sm}^2/\lambda_{\rm sm}$ as well as $\langle r \rangle^2 = \mu^2/\lambda$. The fields $r(x)$ and $\phi(x)$ are not mass eigenstates because of the mixing term proportional to $g$. The mass term is \begin{equation} {\cal L}_{\rm m} = - \frac{1}{2} \left( \phi(x) \;\;\; r(x) \right)\; \left ( \begin{array}{cc} 2 \lambda_{\rm sm} \langle \phi \rangle^2 & g \langle r \rangle \langle \phi \rangle\\ g \langle r \rangle \langle \phi \rangle & 2 \lambda \langle r \rangle^2 \end{array} \right )\; \left( \begin{array}{c} \phi(x) \\ r(x) \end{array} \right ) \;.
\end{equation} We rotate $(\phi(x)\;\; r(x) )^T$ by an angle $\theta$ into physical fields: \begin{equation} \left( \begin{array}{c} H(x) \\ \sigma(x) \end{array} \right ) = \left( \begin{array}{cc} \cos\theta & \sin\theta \\ - \sin\theta & \cos\theta \end{array} \right )\; \left( \begin{array}{c} \phi(x) \\ r(x) \end{array} \right ) \;. \end{equation} The physical masses of the $H(x)$ and $\sigma(x)$, and the mixing angle are given by \begin{eqnarray} m_H^2 &=& 2 \lambda_{\rm sm} \langle \phi \rangle^2 \cos^2\theta + 2 \lambda \langle r \rangle^2 \sin^2\theta + g \langle r \rangle \langle \phi \rangle \sin 2 \theta \; , \nonumber \\ m_\sigma^2 &=& 2 \lambda \langle r \rangle^2 \cos^2\theta + 2 \lambda_{\rm sm} \langle \phi \rangle^2 \sin^2\theta - g \langle r \rangle \langle \phi \rangle \sin 2 \theta \; , \\ \tan 2 \theta &=& \frac{ g \langle r \rangle \langle \phi \rangle} {\lambda_{\rm sm} \langle \phi \rangle^2 - \lambda \langle r \rangle^2} \; . \nonumber \end{eqnarray} In the small $\theta$ limit ($\theta \alt 0.01$ as will be shown later), \begin{eqnarray} \label{smalltheta} m_H^2 &\approx & 2 \lambda_{\rm sm} \langle \phi \rangle^2 \; , \nonumber \\ m_\sigma^2 &\approx& 2 \lambda \langle r \rangle^2 \; , \\ \theta &\approx & \frac{ g \langle r \rangle \langle \phi \rangle} { m_H^2 - m_\sigma^2 } \; . \nonumber \end{eqnarray} We can now write down the interaction terms in the limits of $\theta \ll 1$ and $m_\sigma \ll m_H$: \footnote{ In the coupling of $H\sigma\sigma$, the next-to-leading term in $\theta$ is $g \theta \langle r \rangle$, which is suppressed by a factor of $(\langle r \rangle / \langle \phi \rangle) \theta $ relative to the leading term. However, when the ratio $\langle r \rangle/ \langle \phi \rangle $ is large, this next-to-leading term could be a sizable correction to the leading term. } \begin{eqnarray} {\cal L}_{H \alpha\alpha} &=& \frac{\theta}{\langle r \rangle } \, H\, (\partial_\mu \alpha) (\partial^\mu \alpha) \; , \nonumber \\ {\cal L}_{\sigma \alpha\alpha} &=& \frac{1}{\langle r \rangle } \, \sigma \, (\partial_\mu \alpha) (\partial^\mu \alpha) \; , \\ {\cal L}_{H\sigma\sigma} &=& - \frac{g}{2} \langle \phi \rangle \, H \,\sigma^2 \; . \nonumber \end{eqnarray} \section{Current Constraints on the $\sigma$ field and Goldstone Boson} {\it Constraints from LEP searches for an invisibly decaying Higgs boson.} The $\sigma$ field has a suggested mass of about 500 MeV and it can be produced via the mixing with the Higgs field \cite{weinberg}. Such a light scalar boson can be readily produced in hadron decays, quarkonium decays, in $Z$ boson decays, and in $e^+ e^-$ collisions. The OPAL collaboration \cite{opal} searched for an invisibly decaying Higgs boson over the whole mass range from 1 GeV to 108 GeV at LEP-II, where a limit on the following ratio \[ \frac{ \sigma(Z h) B(h \to \chi^0 \chi^0)}{ \sigma (Z H_{\rm sm})} \] was obtained. Extrapolating the mass of the $h$ boson to below 1 GeV, the excluded ratio goes down to almost $10^{-4}$ (see Fig.~5 of \cite{opal}). In the present context, the production cross section of $Z \sigma$ is \[ \sigma (Z \sigma ) = \theta^2 \times \sigma (Z H_{\rm sm} ) \] Assuming the $\sigma$ field decays entirely into Goldstone bosons and disappears, we can constrain the mixing angle $\theta$ as \begin{equation} \theta \alt 10^{-2} \;.
\end{equation} A less stringent constraint $\theta<0.27$ has also been obtained recently in \cite{Huang:2013oua} from $B(\Upsilon \to\gamma E\!\!\!/\,) < 2\times 10^{-6}$ \cite{delAmoSanchez:2010ac} via the Wilczek mechanism \cite{Wilczek:1977zn} with the one-loop QCD correction \cite{Nason:1986tr}. {\it Invisible width of the Higgs boson}. The invisible decay of the Higgs boson goes through two processes: \[ H \to \alpha \alpha,\qquad H \to \sigma \sigma \to 4 \alpha \] The decay partial widths are given by \begin{eqnarray} \Gamma(H \to \alpha\alpha) &=& \frac{1}{32\pi} \frac{m_H^3}{ \langle \phi \rangle^2 } \, \frac{ \langle \phi \rangle^2}{\langle r \rangle^2 } \, \theta^2 \; , \\ \Gamma(H \to \sigma\sigma) & \approx & \frac{1}{32\pi} \frac{m_H^3}{ \langle \phi \rangle^2 } \, \frac{ \langle \phi \rangle^2}{\langle r \rangle^2 } \, \theta^2 \; , \end{eqnarray} in which we have assumed $m_\sigma \ll m_H$. Since the $\sigma$ field decays mostly into the Goldstone bosons (see the next section), we add both channels to obtain the invisible width of the Higgs boson \cite{wimpmode} \begin{equation} \label{Hinv} \Gamma_{\rm inv} (H) = \frac{2}{32\pi} \frac{m_H^3}{ \langle \phi \rangle^2 } \, \frac{ \langle \phi \rangle^2}{\langle r \rangle^2 } \, \theta^2 \; . \end{equation} One of the global fits to all the SM Higgs boson signal strengths has constrained the non-standard decay width of the Higgs boson to be less than $1.2$ MeV (branching ratio about 22\%) at 95\% CL \cite{higgcision}. The constraint on $\langle \phi\rangle / \langle r \rangle$ from the invisible width is similar to the constraint on $g$ obtained in Ref.~\cite{weinberg}. Note that we also account for the decay of $H\to \sigma\sigma$ as a part of the invisible width of the Higgs boson. Numerically, from Eq.(\ref{Hinv}), we have \[ \theta \; \frac{\langle \phi \rangle}{\langle r \rangle } \le 0.043 \;. \] We use this constraint to rule out the parameter space in the plane of $(\theta, \langle \phi \rangle /\langle r \rangle)$, shown by the shaded region in Fig.~\ref{limit}. \begin{figure}[th!] \centering \includegraphics[width=5.5in]{vphi-vr-limit-new.pdf} \caption{\small \label{limit} Parameter space of $(\theta, \langle \phi \rangle /\langle r \rangle)$. The shaded region is ruled out by the invisible width of the Higgs boson to be less than 1.2 MeV. The condition for muon decoupling and the ratio $f$ are given in Eqs.~(\ref{muondecoupling}) and (\ref{f}), respectively, and are shown here for $m_\sigma = 500$ MeV. The upper limit of $\theta$ is taken to be $0.01$, as constrained by the search for an invisibly decaying Higgs boson at LEP-II. } \end{figure} {\it Muon decoupling.} In the early Universe, the Goldstone bosons are in thermal equilibrium with the SM particles. As the Universe cooled down from its Hubble expansion, the Goldstone bosons would go out of equilibrium since their weak interactions with the SM particles could no longer keep up with the Hubble expansion. It was argued in \cite{weinberg} that the best scenario for the Goldstone bosons to go out of equilibrium is at a temperature still above the muon and electron masses but below all other masses of the SM. After decoupling, the Goldstone bosons were free and their temperature $T$ would then just fall off like the inverse of the Friedmann--Robertson--Walker scale factor $a$.
Since the total cosmic entropy is conserved during the adiabatic expansion, after the muon annihilation, the constancy of $Ta$ for the Goldstone bosons implies that they behave like neutrino impostors contributing to the measured $\Delta N_{eff} = (4/7)(43/57)^{4/3}=0.39$ \cite{weinberg}, which is consistent with the recent Planck result \cite{planck}. For this scenario to work, the annihilation rate of $\alpha \alpha \longleftrightarrow \mu^+ \mu^-$ must be of the same order as the Hubble expansion rate at the temperature $k_B T \approx m_{\mu}$, {\it i.e.} \cite{weinberg} \begin{equation} \label{muondecoupling} \frac{g^2 m_{\mu}^7 m_{\rm PL}}{m_{\sigma }^4 m_{H}^4} \approx 1 \; , \end{equation} where $m_{\rm PL}$ is the Planck mass. From Eq.(\ref{smalltheta}), one can express $g^2$ in terms of $\theta$, $\langle \phi \rangle / \langle r \rangle$ and $m_\sigma$. However, its dependence on $m_\sigma$ is rather weak for $m_H \gg m_\sigma$. This muon decoupling condition of Eq.(\ref{muondecoupling}) is shown in Fig.~\ref{limit} for $m_\sigma = 500$ MeV. Note that this muon decoupling is not a constraint, but rather an interesting condition in the early Universe for the Goldstone boson to explain $\Delta N_{eff}$. \section{Decay of the $\sigma$ field} Because of the constraint from the Higgs invisible width and the condition for muon decoupling in Eq.~(\ref{muondecoupling}), the mass range of $\sigma$ cannot be much larger than $O(1)$ GeV \cite{weinberg}. We therefore show the mass range from 1 MeV to 1000 MeV for $\sigma$ from now on, and use 500 MeV when we need a typical value. The decay modes of such a light $\sigma$ field are very similar to those of a very light Higgs boson ($\alt 1$ GeV) \cite{pipi}. The $\sigma$ can decay into pairs of electrons, muons, photons, pions and Goldstone bosons. The formulas for the decays into $e^+ e^-$, $\mu^+ \mu^-$ and $\gamma\gamma$ are the same as those of the Higgs boson, up to the mixing angle. Thus, for the $f \bar f$ final state, we have \begin{equation} \Gamma(\sigma \to f\bar f) = \theta^2 \; \frac{m_f^2 m_\sigma}{8 \pi \langle \phi \rangle^2 }\, \left [ 1- \frac{4 m_f^2}{m_\sigma^2} \right ]^{3/2} \;, \end{equation} in the small $\theta$ limit. For $m_\sigma < 1$ GeV the only possibilities for fermionic decays are $f = e,\mu$. The decay width for $\sigma \to \gamma\gamma$ is the same as the one for the SM Higgs boson, up to $\theta^2$. We do not repeat the formula here except to note that the loop formulas for the light quarks are not trustworthy due to non-perturbative effects. However, the $\sigma \to \gamma\gamma$ mode is not important. Since the $\sigma$ field is very light and close to the hadronic scale $\Lambda_{\rm QCD}$, its decay into two gluons is not meaningful because of non-perturbative hadronic effects, in contrast to the SM Higgs boson. The only hadrons that the $\sigma$ field can decay into are pion pairs, $\pi^+ \pi^-$ and $\pi^0 \pi^0$. The decay width of $\sigma \to \pi \pi$ summed over the isospin channels is given by \cite{pipi} \begin{equation} \Gamma ( \sigma \to \pi \pi) = \theta^2\; \frac{1}{216 \pi}\, \frac{m_\sigma^3}{\langle \phi \rangle^2}\, \left( 1- \frac{4 m_\pi^2}{m_\sigma^2} \right )^{1/2} \, \left( 1 + \frac{11 m_\pi^2}{2 m_\sigma^2} \right )^{2} \;. \end{equation} The major difference is the decay mode $\sigma \to \alpha \alpha$ into Goldstone bosons. The partial width is \begin{equation} \Gamma (\sigma \to \alpha \alpha ) = \frac{ m_\sigma^3}{ 32 \pi \langle r \rangle^2 } \;.
\end{equation} It is easy to see that the visible partial widths are all proportional to $\theta^2$ because the decay into visible particles is only possible via the mixing with the SM Higgs boson. On the other hand, the decay into a pair of Goldstone bosons is not suppressed by the mixing angle, but inversely proportional to the square of $\langle r \rangle$. We show the branching ratios of the $\sigma$ field for $\langle r \rangle$ = 1 and 7 TeV in Fig.~\ref{decay}. \begin{figure}[th!] \centering \includegraphics[width=3.2in]{br-1tev.pdf} \includegraphics[width=3.2in]{br-7tev.pdf} \caption{\small \label{decay} Decay branching ratios for the $\sigma$ field for (a) $\langle r \rangle = 1$ TeV and (b) $\langle r \rangle = 7$ TeV. The mode $\pi \pi$ includes $\pi^+\pi^-$ and $\pi^0\pi^0$. The mixing parameter $\theta$ is set at 0.01. } \end{figure} Mostly, the $\sigma$ field decays invisibly into the Goldstone bosons. It essentially adds to the invisible width of the SM Higgs boson, as the Higgs boson can decay via $H \to \sigma \sigma \to 4 \alpha$ and $H \to \alpha \alpha$. However, when $\langle r \rangle$ goes to a very large value, say 7 TeV, as remarked already in \cite{weinberg}, the branching ratio of $\sigma \to \pi \pi$ can be as large as 2\%. We show the ratio \begin{equation} \label{f} f \equiv \frac{\Gamma(\sigma \to \pi \pi)}{\Gamma(\sigma \to \alpha\alpha)} = \theta^2 \frac{4}{27} \frac{\langle r \rangle^2}{\langle \phi \rangle^2} \, \left( 1- \frac{4 m_\pi^2}{m_\sigma^2} \right )^{1/2} \, \left( 1 + \frac{11 m_\pi^2}{2 m_\sigma^2} \right )^{2} \end{equation} in Fig.~\ref{limit} for $m_\sigma =$ 500 MeV. In most of the allowed region, the ratio $f$ is well below $10^{-4}$, thus the $\sigma$ field decays mostly into Goldstone bosons. Nevertheless, if one goes to the corner where $\frac{\langle \phi \rangle}{\langle r \rangle}$ is very small, we can achieve $f \approx 10^{-2}$. Such a value of $f$ would imply very interesting signatures for the $\sigma$ field and the Goldstone boson. \section{Collider Signatures} When the branching ratio $B(\sigma \to \pi \pi) \approx 2\%$, the collider signature would be very interesting. The dominant production of the $\sigma$ field is via the decay of the Higgs boson, followed by the decays of the two $\sigma$ fields. We can look for one $\sigma$ decaying invisibly into a pair of Goldstone bosons while the other one decays visibly into a pair of pions. Therefore, we expect \begin{equation} gg \to H \to \sigma \sigma \to (\pi \pi) (\alpha \alpha) \;, \end{equation} where the invariant mass of the pion pair is located right at $m_\sigma$. The signature would be a distinctive pion pair with $m_{\pi\pi} \approx m_\sigma$ plus a large missing energy carried away by the Goldstone bosons $\alpha$. We perform a rough estimate of the event rate here. The production cross section of the SM Higgs boson at the LHC-8 is about 19 pb \cite{wikicern}, and the non-standard decay branching ratio of the Higgs boson is limited to be less than about 20\% \cite{higgcision}. Therefore, using the analysis above we choose a currently allowed branching ratio of the Higgs boson: \begin{equation} B(H \to \sigma\sigma) \alt 10\% \;. \end{equation} The cross section at the LHC-8 with $\langle r \rangle = 7$ TeV would be \begin{eqnarray} && \sigma(gg\to H)\times B(H \to \sigma\sigma) \times B(\sigma \to \pi\pi) \times B(\sigma \to \alpha\alpha) \times 2 \nonumber \\ & & \approx 19\;{\rm pb} \times 0.1 \times 0.02 \times 0.97 \times 2 \approx 73 \; {\rm fb}\;.
\end{eqnarray} For the LHC-14, one should multiply the above number by a factor of $2.8$. Since the intermediate $\sigma$ boson is only $O(1)$ GeV, its decay products would be very collimated. The two $\alpha$'s become missing energy, while the two pions are very collimated and appear as a ``microjet'', which experimentally looks like a $\tau$ jet. The final state then consists of a microjet, which is made up of two pions, and a large missing energy. We first discuss the case when the two pions are charged pions. Ideally, we would like to separate the two charged pions with an angular separation between them of order $\sim 2 m_\sigma / p_{T_\sigma} = 1\, {\rm GeV} / 60\,{\rm GeV} \approx 0.015$, which is rather small. Only the pixel detector inside the LHC experiments has some chance of separating them. The pixel tracker of the CMS detector~\cite{CMS} consists of three barrel layers with radii 4.4, 7.3 and 10.2 cm, and two endcap disks on each side of the barrel section. The spatial resolution ranges from 20 $\mu$m to about 100 $\mu$m, depending on the direction. Taking conservatively 100 $\mu$m as the spatial resolution and dividing it by the average radius of the pixel detector, say 5 cm, we obtain an angular resolution of $2\times 10^{-3}$ \footnote { With both outer trackers and pixel detectors, the resolution could be $2-10$ times better than $2\times 10^{-3}$ for pions with $p_T > 10$ GeV. }. This is smaller than the average angular separation between the two charged pions estimated above by almost an order of magnitude. Thus it seems quite plausible to separate the two collimated charged pions. However, there is no guarantee that the pattern recognition algorithms would be able to reconstruct two distinct tracks, especially in the presence of a large number of pile-up events. In the next phase of the CMS, another layer will be added to the pixel detectors at a radius of 16 cm~\cite{CMS-2}. The angular resolution will be further improved and the likelihood of separating the tracks of the two charged pions will be increased. If the experiment cannot resolve the two charged pions, then the final state will look like a single jet consisting of some hadrons, plus missing energy. It is similar to the signatures of many new models beyond the SM. In this case, one can make use of the associated production of the Higgs boson with a $W$ (or $Z$) boson, followed by the leptonic decay of the $W$ and the same decay mode of the Higgs boson: \begin{equation} pp \to W H \to (\ell \nu) (\sigma \sigma) \longrightarrow (\ell \nu) (\pi \pi + \alpha \alpha) \;. \end{equation} The final state then consists of a charged lepton, a single jet of two unresolved charged pions, plus missing energy. The charged lepton is an efficient trigger for the events. The major SM background is the production of $W+1$ jet, which could be orders of magnitude larger \cite{w1j}. It presents an extreme challenge for experimentalists, although we may make use of the missing energy spectrum, because the signal also receives missing energy from the $\sigma \to \alpha \alpha$ decay in addition to the neutrino from the $W$ decay. One may also use the feature of a microjet (similar to a $\tau$ jet) that is somewhat ``thin'' compared to the usual hadronic jet to separate the signal from backgrounds. In the case that one of the $\sigma$'s decays into two neutral pions, the process can give rise to 4 photons collimated as one ``fat'' photon.
The final state would be a charged lepton, a ``fat'' photon, plus missing energy, which is challenged by the major SM background of $W \gamma$ production with a much larger cross section \cite{wg}. However, one can make use of the fact that the photon in the signal is ``fat'' to distinguish it from the background one. Therefore, by using both the gluon fusion and the associated production with a $W$, we have provided more options to explore this model. However, all the situations studied above present great challenges to the experimentalists. Detailed detector simulations are needed in order to settle whether the proposed search is feasible or not. In the following, we content ourselves with rough estimates of the signal cross sections at the LHC-8 and LHC-14, so that experimentalists can have some idea of how large a signal cross section one can obtain. At $\langle r \rangle =7$~TeV, the branching ratio of $\sigma$ into $\pi\pi$ is as large as 2\%. For $\langle r \rangle=3 - 7$ TeV, the branching ratio into $\pi\pi$ ranges from about 0.4\% to 2\%, for which we may have large enough cross sections for detection. We perform parton-level Monte Carlo simulations to estimate the event cross sections at the LHC-8 and LHC-14 for $\langle r \rangle = 3 - 7$ TeV. We normalize the uncut gluon fusion cross sections and the associated production cross sections to those given in the LHC Physics Web site \cite{wikicern}. For both the pions and the charged lepton, we impose the same $p_T$ and rapidity cuts, $p_T > 30 \;{\rm GeV}$ and $|\eta| < 2.5$, respectively. We show the cross sections after the cuts in Table~\ref{x-sec}. We have multiplied the cross sections by the branching ratios $B(H \to \sigma \sigma) \times B(\sigma \to \pi \pi) \times B(\sigma \to \alpha \alpha) \times 2$ for the Higgs boson decay, and by $B(W\to \ell \nu) = 2/9$ for the $W$ boson decay. At the LHC-8 with about 20 fb$^{-1}$, the gluon fusion can produce a handful of events against the background if the two pions can be resolved. Nevertheless, if the pions cannot be resolved, the associated production only has a cross section of order $O(0.05)$ fb, which may not be enough for detection. At the LHC-14 with a projected luminosity of $O(100)$ fb$^{-1}$, both the gluon fusion and associated production give sizable event rates whether or not the two pions can be resolved. Here, as mentioned above, the most important experimental issue is resolving the two pions. Although our rough estimate of the angular separation by the pixel and tracking detectors indicates that one may be able to resolve the pions, difficulties coming from pile-up, pattern recognition, and track reconstruction pose real challenges for experimentalists. A proper detector simulation is called for before any realistic conclusion can be drawn. \begin{table}[tbh!] \caption{\small \label{x-sec} Cross sections in fb for the gluon fusion process $pp \to H \to \sigma\sigma \to (\pi\pi) (\alpha\alpha)$ and the associated process $pp \to WH \to (\ell \nu) (\sigma\sigma) \to (\ell \nu) (\pi\pi + \alpha\alpha)$ at the LHC-8 and LHC-14 with the selection cuts described in the text. We choose $m_\sigma = 500$ MeV.
} \centering \begin{ruledtabular} \begin{tabular}{cccccc} $\langle r \rangle$ & $B(\sigma \to \pi \pi)$ & \multicolumn{2}{c} {Cross Section (fb) LHC-8} & \multicolumn{2}{c} {Cross Section (fb) LHC-14} \\ (TeV) & & gluon fusion & $WH$ & gluon fusion & $WH$ \\ \hline 3 & $3.72\times 10^{-3}$ & 0.16 & 0.013 & 0.39 & 0.024 \\ 4 & $6.58\times 10^{-3}$ & 0.27 & 0.022 & 0.68 & 0.043 \\ 5 & $1.02\times 10^{-2}$ & 0.42 & 0.034 & 1.05 & 0.067 \\ 6 & $1.46\times 10^{-2}$ & 0.60 & 0.049 & 1.50 & 0.095 \\ 7 & $1.97\times 10^{-2}$ & 0.80 & 0.065 & 2.00 & 0.13 \end{tabular} \end{ruledtabular} \end{table} To summarize, the logical possibility of the existence of a hidden sector of Goldstone bosons masquerading as fractional cosmic neutrinos and communicating with our visible world through the Higgs portal, as suggested recently by Weinberg \cite{weinberg}, is explored further phenomenologically here. We have studied the constraints from the invisible Higgs search at LEP-II, the invisible Higgs width derived from global fits using all the LHC signal strength data, and the condition of muon decoupling from the evolution of our Universe. We also studied the Higgs decay into a pair of $\sigma$ fields and the various decay modes of the $\sigma$. This interesting idea of Goldstone bosons as cosmic neutrino impostors can be tested by searching for the process of $gg \to H \to \sigma \sigma \to (\pi \pi) (\alpha \alpha)$ and the associated production $WH \to (\ell \nu) (\sigma \sigma) \to (\ell\nu) (\pi \pi + \alpha \alpha)$ at the LHC-8 and LHC-14. \newpage \section*{Acknowledgments} We thank Kevin Burkett, Kai-Feng Chen, Shih-Chieh Hsu, and Shin-Shan Yu for useful discussions regarding experimental issues, and also thank Chih-Ting Lu for pointing out the correct formula for the $\sigma$ decay into a pion pair. This work was supported in part by the National Science Council of Taiwan under Grant Nos. 99-2112-M-007-005-MY3, 102-2112-M-007-015-MY3, and 101-2112-M-001-005-MY3, in part by U.S. Department of Energy under DE-FG02-12ER41811, as well as the WCU program through the KOSEF funded by the MEST (R31-2008-000-10057-0). WYK would like to thank the hospitality of the Institute of Physics, Academia Sinica, and the Physics Division of NCTS.
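As a quick numerical cross-check of Eq.~(\ref{f}), Table~\ref{x-sec} and the $\approx 73$~fb estimate quoted above, the following short sketch (ours, not part of the original analysis; it keeps only the $\alpha\alpha$, $\pi\pi$ and $\mu^+\mu^-$ channels, takes $m_\sigma=500$~MeV, $\theta=0.01$ and $\langle\phi\rangle=246$~GeV, and neglects the tiny $e^+e^-$ and $\gamma\gamma$ widths) reproduces the branching ratios $B(\sigma\to\pi\pi)$ in Table~\ref{x-sec} to within about one percent, as well as the LHC-8 gluon-fusion rate of roughly $73$~fb:
\begin{verbatim}
import math

# Rough cross-check (not from the original analysis) of Eq. (f) and Table 1.
# Masses in GeV; theta = 0.01, m_sigma = 500 MeV, <phi> = 246 GeV assumed.
m_pi, m_mu, m_sigma, vphi, theta = 0.1396, 0.10566, 0.5, 246.0, 0.01

def f_ratio(vr):
    """f = Gamma(sigma -> pi pi) / Gamma(sigma -> alpha alpha), Eq. (f)."""
    kin = math.sqrt(1.0 - 4.0 * m_pi**2 / m_sigma**2) \
          * (1.0 + 11.0 * m_pi**2 / (2.0 * m_sigma**2))**2
    return theta**2 * (4.0 / 27.0) * (vr / vphi)**2 * kin

def branching(vr):
    """(B(sigma->pi pi), B(sigma->alpha alpha)), keeping also the mu+ mu- channel."""
    g_aa   = m_sigma**3 / (32.0 * math.pi * vr**2)
    g_pipi = f_ratio(vr) * g_aa
    g_mumu = theta**2 * m_mu**2 * m_sigma / (8.0 * math.pi * vphi**2) \
             * (1.0 - 4.0 * m_mu**2 / m_sigma**2)**1.5
    tot = g_aa + g_pipi + g_mumu
    return g_pipi / tot, g_aa / tot

for vr in (3e3, 4e3, 5e3, 6e3, 7e3):
    print(vr / 1e3, "TeV :  B(sigma -> pi pi) =", round(branching(vr)[0], 5))

# LHC-8: sigma(gg->H) x B(H->sigma sigma) x B(sigma->pi pi) x B(sigma->alpha alpha) x 2
b_pipi, b_aa = branching(7e3)
print("LHC-8 gluon-fusion rate ~", round(19e3 * 0.1 * b_pipi * b_aa * 2.0, 1), "fb")
\end{verbatim}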
\section{Introduction} It is well known that deriving maximum principles, namely, necessary conditions for optimality, is an important approach in solving optimal control problems (see \cite{YongZhou} and the references therein). Boltyanski-Gamkrelidze-Pontryagin \cite{B-G-P56} announced Pontryagin's maximum principle for the first time for deterministic control systems in 1956. They introduced the spike variation and studied the first-order term in a sort of Taylor's expansion with respect to this perturbation. But for stochastic control systems, if the diffusion terms depend on the controls, then one cannot directly follow this idea from the deterministic case. The reason is that the It\^{o} integral ${\displaystyle\int\nolimits_{t}^{t+\varepsilon}} \sigma(s)dB(s)$ is only of order $\sqrt{\varepsilon}$, which causes the first-order expansion method to fail. To overcome this difficulty, Peng \cite{Peng90} first introduced the second-order term in the Taylor expansion of the variation and obtained the global maximum principle for the classical stochastic optimal control problem. Since then, many researchers have investigated this kind of optimal control problem for various stochastic systems (see \cite{YingHu006,YingHu001,Tang003,Tang004,Zhou002}). Peng \cite{Peng93} generalized the classical stochastic optimal control problem to one where the cost functional is defined by $Y(0)$. Here $(Y(\cdot),Z(\cdot))$ is the solution of the following backward stochastic differential equation (BSDE) (\ref{intro--bsde0}) \begin{equation} \left\{ \begin{array} [c]{rl} -dY(t)= & f(t,X(t),Y(t),Z(t),u(t))dt-Z(t)dB(t),\\ Y(T)= & \phi(X(T)). \end{array} \right. \label{intro--bsde0} \end{equation} Since El Karoui et al. \cite{ElKaetal97} defined a more general class of stochastic recursive utilities in economic theory by solutions of BSDEs, this new kind of stochastic optimal control problem is called the stochastic recursive optimal control problem. When the control domain is convex, one can avoid the spike variation method and deduce a so-called local stochastic maximum principle. Peng \cite{Peng93} first established a local stochastic maximum principle for the classical stochastic recursive optimal control problem. The local stochastic maximum principles for various other problems were studied in Dokuchaev and Zhou \cite{Dokuchaev-Zhou}, Ji and Zhou \cite{Ji-Zhou}, Peng \cite{Peng93}, Shi and Wu \cite{Shi-Wu}, Xu \cite{Xu95}, and Zhou \cite{Zhou003} (see also the references therein). But when the control domain is nonconvex, one encounters an essential difficulty when trying to derive the first-order and second-order expansions for the BSDE (\ref{intro--bsde0}), and this was proposed as an open problem in Peng \cite{Peng99}. Recently, Hu \cite{Hu17} studied this open problem and obtained a completely novel global maximum principle. In \cite{Hu17}, Hu found that there are close relations among the terms of the first-order Taylor's expansions, i.e. \begin{equation} \begin{array} [c]{l} Y_{1}\left( t\right) =p\left( t\right) X_{1}\left( t\right) ,\\ Z_{1}(t)=p(t)\delta\sigma(t)I_{E_{\epsilon}}(t)+[\sigma_{x}(t)p(t)+q(t)]X_{1}\left( t\right) , \end{array} \label{intro-relation-hu} \end{equation} where $(p\left( \cdot\right) ,q(\cdot))$ is the solution of the adjoint equation. Moreover, the BSDE satisfied by $(p\left( \cdot\right) ,q(\cdot))$ possesses a linear generator. Notice that the variation of $Z(t)$ includes the term $\langle p(t),\delta\sigma(t)\rangle I_{E_{\varepsilon}}(t)$.
Hu \cite{Hu17} proposed to do Taylor's expansions at $\bar{Z}(t)+p(t)\delta \sigma(t)I_{E_{\epsilon}}(t)$ and deduced the maximum principle. Motivated by leader-follower stochastic differential games and other problems in mathematical finance, Yong \cite{Yong10} studied a fully coupled controlled FBSDE with mixed initial-terminal conditions. In \cite{Yong10}, Yong regarded $Z(\cdot)$ as a control process and then applied the Ekeland variational principle to obtain an optimality variational principle which contains unknown parameters. Note that, using a similar approach, Wu \cite{Wu13} studied a stochastic recursive optimal control problem. In this paper, we study the following stochastic optimal control problem: minimize the cost functional \[ J(u(\cdot))=Y(0) \] subject to the following fully coupled forward-backward stochastic differential equation (FBSDE) (see \cite{YingHu002, Ma-WZZ, Ma-Yong-FBSDE, Ma-ZZ, Zhang17} and the references therein): \begin{equation} \left\{ \begin{array} [c]{rl} dX(t)= & b(t,X(t),Y(t),Z(t),u(t))dt+\sigma(t,X(t),Y(t),Z(t),u(t))dB(t),\\ dY(t)= & -g(t,X(t),Y(t),Z(t),u(t))dt+Z(t)dB(t),\\ X(0)= & x_{0},\ Y(T)=\phi(X(T)), \end{array} \right. \label{intro--fbsde} \end{equation} where the control variable $u(\cdot)$ takes values in a nonempty subset of $\mathbb{R}^{k}$. In fact, our model is a special case of the one in Yong \cite{Yong10}. But our aim is to get rid of the unknown parameters in the optimality variational principle in \cite{Yong10,Wu13} and obtain a global stochastic maximum principle for the above fully coupled control system. In order to do this, we need to study the variational equations of the BSDE in (\ref{intro--fbsde}). But as pointed out in \cite{Yong10}, the regularity/integrability of the process $Z(\cdot)$ seems not to be enough when a second-order expansion is necessary. Fortunately, inspired by Hu \cite{Hu17}, we overcome this difficulty based on the following two findings. The first one is that although the first-order and second-order variational equations are fully coupled linear FBSDEs, we can decouple them by establishing the relations among the first-order Taylor's expansions, i.e., \begin{equation} \begin{array} [c]{l} Y_{1}\left( t\right) =p\left( t\right) X_{1}\left( t\right) ,\\ Z_{1}(t)=\Delta(t)I_{E_{\epsilon}}(t)+K_{1}(t)X_{1}\left( t\right) , \end{array} \label{intro--relation} \end{equation} where $\Delta(t)$ satisfies the following algebraic equation \begin{equation} \Delta(t)=p(t)(\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t)+\Delta (t),u(t))-\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t))) \label{intro--algebra} \end{equation} and $(p\left( \cdot\right) ,q(\cdot))$ is the adjoint process which satisfies a quadratic BSDE. By the results of Lepeltier and San Martin \cite{LS02}, we obtain the existence of a solution to this nonlinear adjoint equation. Utilizing the uniqueness result of the linear fully coupled FBSDE in the appendix, we also prove the uniqueness of the solution to this adjoint equation. The second finding is that the first-order variation $Z_{1}(t)$ has a unique decomposition by the relations (\ref{intro--relation}). This suggests that we should do Taylor's expansions at $\bar{Z}(t)+\Delta (t)I_{E_{\epsilon}}(t)$. The advantage of this approach is that the remainder term of the Taylor expansion, $K_{1}(t)X_{1}\left( t\right)$, has a good estimate, which avoids the difficulty of establishing estimates such as $E[{\displaystyle\int\nolimits_{0}^{T}} | Z(t)|^{2+\varepsilon}dt]<\infty$ for some $\varepsilon>0$.
For this reason, the obtained maximum principle will include a new term $\Delta(t)$ which is determined uniquely by $u(t)$, $\bar{u}(t)$, and the optimal state $(\bar{X}(t)$, $\bar{Y}(t)$, $\bar{Z}(t))$. The readers may refer to subsection \ref{heuristic} for a heuristic derivation. By assuming $q(\cdot)$ is a bounded process, we derive the first-order and second-order variational equations and deduce a global maximum principle which includes the new term $\Delta(t)$. Furthermore, we study the case in which $q(\cdot)$ may be unbounded. But for this case, we only obtain the maximum principle when $\sigma(t,x,y,z,u)$ is linear in $z$, i.e., \[ \sigma(t,x,y,z,u)=A(t)z+\sigma_{1}(t,x,y,u). \] Finally, applications to stochastic linear quadratic control problems are investigated. The rest of the paper is organized as follows. In section 2, we give the preliminaries and formulation of our problem. A global stochastic maximum principle is obtained by the spike variation method in section 3. In particular, to illustrate our main approach, we give a heuristic derivation in subsection \ref{heuristic} before we prove the maximum principle rigorously. In section 4, a linear quadratic control problem is investigated based on the estimates obtained in section 3. In the appendix, we give some results that will be used in our proofs. \section{ Preliminaries and problem formulation} Let $(\Omega,\mathcal{F},P)$ be a complete probability space on which a standard $d$-dimensional Brownian motion $B=(B_{1}(t),B_{2}(t),\ldots,B_{d}(t))_{0\leq t\leq T}^{\intercal}$ is defined. Assume that $\mathbb{F}=\{\mathcal{F}_{t},0\leq t\leq T\}$ is the $P$-augmentation of the natural filtration of $B$, where $\mathcal{F}_{0}$ contains all $P$-null sets of $\mathcal{F}$. Denote by $\mathbb{R}^{n}$ the $n$-dimensional real Euclidean space and $\mathbb{R}^{k\times n}$ the set of $k\times n$ real matrices. Let $\langle\cdot,\cdot\rangle$ (resp. $\left\vert \cdot\right\vert $) denote the usual scalar product (resp. usual norm) of $\mathbb{R}^{n}$ and $\mathbb{R}^{k\times n}$. The scalar product (resp. norm) of $M=(m_{ij})$, $N=(n_{ij})\in\mathbb{R}^{k\times n}$ is denoted by $\langle M,N\rangle =tr\{MN^{\intercal}\}$ (resp. $\Vert M\Vert=\sqrt{tr\{MM^{\intercal}\}}$), where the superscript $^{\intercal}$ denotes the transpose of vectors or matrices. We introduce the following spaces.
$L_{\mathcal{F}_{T}}^{p}(\Omega;\mathbb{R}^{n})$: the space of $\mathcal{F}_{T}$-measurable $\mathbb{R}^{n}$-valued random variables $\eta$ such that \[ ||\eta||_{p}:=(\mathbb{E}[|\eta|^{p}])^{\frac{1}{p}}<\infty, \] $L_{\mathcal{F}_{T}}^{\infty}(\Omega;\mathbb{R}^{n})$: the space of $\mathcal{F}_{T}$-measurable $\mathbb{R}^{n}$-valued random variables $\eta$ such that $||\eta||_{\infty}:=\underset{\omega\in\Omega}{\mathrm{ess~sup}}\left\vert \eta\right\vert <\infty$, $L_{\mathcal{F}}^{p}([0,T];\mathbb{R}^{n})$: the space of $\mathbb{F}$-adapted and $p$-th power integrable stochastic processes on $[0,T]$ such that \[ \mathbb{E}\left[ \int_{0}^{T}\left\vert f(t)\right\vert ^{p}dt\right] <\infty, \] $L_{\mathcal{F}}^{\infty}(0,T;\mathbb{R}^{n})$: the space of $\mathbb{F}$-adapted and uniformly bounded stochastic processes on $[0,T]$ such that \[ ||f(\cdot)||_{\infty}=\underset{(t,\omega)\in\lbrack0,T]\times\Omega }{\mathrm{ess~sup}}|f(t)|<\infty, \] $L_{\mathcal{F}}^{p,q}([0,T];\mathbb{R}^{n})$: the space of $\mathbb{F}$-adapted stochastic processes on $[0,T]$ such that \[ ||f(\cdot)||_{p,q}=\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|f(t)|^{p}dt\right) ^{\frac{q}{p}}\right] \right\} ^{\frac{1}{q}}<\infty, \] $L_{\mathcal{F}}^{p}(\Omega;C([0,T],\mathbb{R}^{n}))$: the space of $\mathbb{F}$-adapted continuous stochastic processes on $[0,T]$ such that \[ \mathbb{E}\left[ \sup\limits_{0\leq t\leq T}\left\vert f(t)\right\vert ^{p}\right] <\infty. \] \subsection{$L^{p}$ estimate for fully coupled FBSDEs} We first give an $L^{p}$-estimate for the following fully coupled forward-backward stochastic differential equation: \begin{equation} \left\{ \begin{array} [c]{rl} dX(t)= & b(t,X(t),Y(t),Z(t))dt+\sigma(t,X(t),Y(t),Z(t))dB(t),\\ dY(t)= & -g(t,X(t),Y(t),Z(t))dt+Z(t)dB(t),\\ X(0)= & x_{0},\ Y(T)=\phi(X(T)), \end{array} \right. \label{fbsde} \end{equation} where \[ b:\Omega\times\lbrack0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{m\times d}\rightarrow\mathbb{R}^{n}, \] \[ \sigma:\Omega\times\lbrack0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{m\times d}\rightarrow\mathbb{R}^{n\times d}, \] \[ g:\Omega\times\lbrack0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{m\times d}\rightarrow\mathbb{R}^{m}, \] \[ \phi:\Omega\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}. \] A solution to (\ref{fbsde}) is a triplet of $\mathbb{F}$-adapted processes $\Theta(\cdot):=(X(\cdot),Y(\cdot),Z(\cdot))$. We impose the following assumption. \begin{assumption} \label{assum-1}(i) $\psi=b,\sigma,g,\phi$ are uniformly Lipschitz continuous with respect to $x,y,z$, that is, there exist constants $L_{i}>0$, $i=1,2,3$ such that \[ \begin{array} [c]{rl} |b(t,x_{1},y_{1},z_{1})-b(t,x_{2},y_{2},z_{2})| & \leq L_{1}|x_{1}-x_{2}|+L_{2}(|y_{1}-y_{2}|+|z_{1}-z_{2}|),\\ |\sigma(t,x_{1},y_{1},z_{1})-\sigma(t,x_{2},y_{2},z_{2})| & \leq L_{1}|x_{1}-x_{2}|+L_{2}|y_{1}-y_{2}|+L_{3}|z_{1}-z_{2}|,\\ |g(t,x_{1},y_{1},z_{1})-g(t,x_{2},y_{2},z_{2})| & \leq L_{1}(|x_{1}-x_{2}|+|y_{1}-y_{2}|+|z_{1}-z_{2}|),\\ |\phi(x_{1})-\phi(x_{2})| & \leq L_{1}|x_{1}-x_{2}|, \end{array} \] for all $t,\omega,x_{i},y_{i},z_{i}$, $i=1,2$. \newline(ii) For a given $p>1$, $\phi(0)\in L_{\mathcal{F}_{T}}^{p}(\Omega;\mathbb{R}^{m})$, $b(\cdot ,0,0,0)\in L_{\mathcal{F}}^{1,p}([0,T];\mathbb{R}^{n})$, $g(\cdot,0,0,0)\in L_{\mathcal{F}}^{1,p}([0,T];\mathbb{R}^{m})$, $\sigma(\cdot,0,0,0)\in L_{\mathcal{F}}^{2,p}([0,T];\mathbb{R}^{n\times d})$.
\end{assumption} For $p>1$, set \begin{equation} \Lambda_{p}:=C_{p}2^{p+1}(1+T^{p})c_{1}^{p}, \label{def-Lambda} \end{equation} where $c_{1}=\max\{L_{2},L_{3}\}$ and $C_{p}$ is defined in Lemma \ref{sde-bsde} in the appendix. \begin{theorem} Suppose Assumption \ref{assum-1} holds and $\Lambda_{p}<1$ for some $p>1$. Then \eqref{fbsde} admits a unique solution $(X\left( \cdot\right) ,Y\left( \cdot\right) ,Z\left( \cdot\right) )\in L_{\mathcal{F}}^{p}(\Omega;C([0,T],\mathbb{R}^{n}))\times L_{\mathcal{F}}^{p}(\Omega ;C([0,T],\mathbb{R}^{m}))\times L_{\mathcal{F}}^{2,p}([0,T];\mathbb{R}^{m\times d})$ and \[ \begin{array} [c]{l} ||(X,Y,Z)||_{p}^{p}=\mathbb{E}\left\{ \sup\limits_{t\in\lbrack0,T]}\left[ |X(t)|^{p}+|Y(t)|^{p}\right] +\left( \int_{0}^{T}|Z(t)|^{2}dt\right) ^{\frac{p}{2}}\right\} \\ \ \leq C\mathbb{E}\left\{ \left( \int_{0}^{T}[|b|+|g|](t,0,0,0)dt\right) ^{p}+\left( \int_{0}^{T}|\sigma(t,0,0,0)|^{2}dt\right) ^{\frac{p}{2}}+|\phi(0)|^{p}+|x_{0}|^{p}\right\} , \end{array} \] where $C$ depends on $T$, $p$, $L_{1}$, $c_{1}$. \label{est-fbsde-lp} \end{theorem} \begin{proof} Without loss of generality, we only prove the case $n=m=d=1$. Let $\mathcal{L}$ denote the space of all $\mathbb{F}$-adapted processes $(Y(\cdot),Z(\cdot))$ such that \[ \mathbb{E}\left[ \sup\limits_{0\leq t\leq T}|Y(t)|^{p}+\left( \int_{0}^{T}|Z(t)|^{2}dt\right) ^{\frac{p}{2}}\right] <\infty. \] For each given $(y,z)\in\mathcal{L}$, consider the following FBSDE: \begin{equation} \left\{ \begin{array} [c]{rl} dX(t)= & b(t,X(t),y(t),z(t))dt+\sigma(t,X(t),y(t),z(t))dB(t),\\ dY(t)= & -g(t,X(t),Y(t),Z(t))dt+Z(t)dB(t),\\ X(0)= & x_{0},\ Y(T)=\phi(X(T)). \end{array} \right. \label{fbsde-y0} \end{equation} Under Assumption \ref{assum-1}, it is easy to deduce that the solution $(Y(\cdot),Z(\cdot))$ of \eqref{fbsde-y0} belongs to $\mathcal{L}$. Denote the operator $(y(\cdot),z(\cdot))\rightarrow(Y(\cdot),Z(\cdot))$ by $\Gamma$. For two elements $(y^{i},z^{i})\in\mathcal{L}$, $i=1,2$, let $(X^{i}(\cdot ),Y^{i}(\cdot),Z^{i}(\cdot))$ be the corresponding solutions to \eqref{fbsde-y0}. Set \[ \Delta y=y^{1}-y^{2},\text{ }\Delta z=z^{1}-z^{2},\text{ }\Delta X=X^{1}-X^{2},\text{ }\Delta Y=Y^{1}-Y^{2},\text{ }\Delta Z=Z^{1}-Z^{2}. \] Then \begin{equation} \left\{ \begin{array} [c]{rl} d\Delta X(t)= & \left[ \alpha_{1}(t)\Delta X(t)+\beta_{1}(t)\Delta y(t)+\gamma_{1}(t)\Delta z(t)\right] dt+\left[ \alpha_{2}(t)\Delta X(t)+\beta_{2}(t)\Delta y(t)+\gamma_{2}(t)\Delta z(t)\right] dB(t),\\ d\Delta Y(t)= & -\left[ \alpha_{3}(t)\Delta X(t)+\beta_{3}(t)\Delta Y(t)+\gamma_{3}(t)\Delta Z(t)\right] dt+\Delta Z(t)dB(t),\\ \Delta X(0)= & 0,\ \Delta Y(T)=\lambda(T)\Delta X(T), \end{array} \right. \end{equation} where \[ \alpha_{1}(t)=\left\{ \begin{array} [c]{ll} \frac{b(t,X^{1}(t),y^{1}(t),z^{1}(t))-b(t,X^{2}(t),y^{1}(t),z^{1}(t))}{\Delta X(t)},\ & \text{if}\ \Delta X(t)\neq0,\\ 0, & \text{if}\ \Delta X(t)=0, \end{array} \right. \] and $\alpha_{i}(t)$, $\beta_{i}(t)$, $\gamma_{i}(t)$, $\lambda(T)$ are defined similarly. Furthermore, $\alpha_{i}(t)$, $\beta_{i}(t)$, $\gamma_{i}(t)$, $\lambda(T)$ are bounded by the Lipschitz constants of the corresponding coefficients. In particular, $|\beta_{1}(t)|,|\gamma_{1}(t)|,|\beta _{2}(t)|,|\gamma_{2}(t)|\leq c_{1}$.
Due to Lemma \ref{sde-bsde}, we obtain \begin{equation \begin{array} [c]{l \mathbb{E}\left[ \sup\limits_{0\leq t\leq T}\left( |\Delta X(t)|^{p}+|\Delta Y(t)|^{p}\right) +\left( \int_{0}^{T}|\Delta Z(t)|^{2}dt\right) ^{\frac {p}{2}}\right] \\ \leq C_{p}\mathbb{E}\left\{ \left[ \int_{0}^{T}(|\beta_{1}(t)||\Delta y(t)|+|\gamma_{1}(t)||\Delta z(t)|)dt\right] ^{p}+\left[ \int_{0}^{T}\left( |\beta_{2}(t)|^{2}|\Delta y(t)|^{2}+|\gamma_{2}(t)|^{2}|\Delta z(t)|^{2 \right) dt\right] ^{\frac{p}{2}}\right\} \\ \leq C_{p}2^{p+1}\left( 1+T^{p}\right) c_{1}^{p}\mathbb{E}\left[ \sup\limits_{0\leq t\leq T}|\Delta y(t)|^{p}+\left( \int_{0}^{T}|\Delta z(t)|^{2}dt\right) ^{\frac{p}{2}}\right] \\ =\Lambda_{p}\mathbb{E}\left[ \sup\limits_{0\leq t\leq T}|\Delta y(t)|^{p}+\left( \int_{0}^{T}|\Delta z(t)|^{2}dt\right) ^{\frac{p}{2 }\right] . \end{array} \label{fbsde-delty \end{equation} Since $\Lambda_{p}<1$, the operator $\Gamma$ is a contraction mapping and has a unique fixed point $(Y(\cdot),Z(\cdot))$.\ Let $X(\cdot)$ be the solution of \eqref{fbsde} with respect to the fixed point $(Y(\cdot),Z(\cdot))$. Thus, $(X(\cdot),Y(\cdot),Z(\cdot))$ is the unique solution to \eqref{fbsde}. Let $\Theta^{0}:=(X^{0}(\cdot),Y^{0}(\cdot),Z^{0}(\cdot))$ be the solution to \eqref{fbsde-y0} with $y=0$, $z=0$. From \eqref{fbsde-delty}, \[ ||(Y-Y^{0},Z-Z^{0})||\leq\Lambda_{p}^{\frac{1}{p}}||(Y-0,Z-0)||=\Lambda _{p}^{\frac{1}{p}}||(Y,Z)||. \] By triangle inequality, \ \begin{array} [c]{rl ||(Y,Z)||\leq & ||(Y-Y^{0},Z-Z^{0})||+||(Y^{0},Z^{0})||\\ \leq & \Lambda_{p}^{\frac{1}{p}}||(Y,Z)||+||(Y^{0},Z^{0})||, \end{array} \] which leads to \[ ||(Y,Z)||\leq\left( 1-\Lambda_{p}^{\frac{1}{p}}\right) ^{-1}||(Y^{0 ,Z^{0})||. \] By Lemma \ref{sde-bsde} in appendix, we obtain \ \begin{array} [c]{rl ||(Y^{0},Z^{0})||^{p}\leq & C_{p}\mathbb{E}\left[ |\phi(0)|^{p}+|x_{0 |^{p}+\left( \int_{0}^{T}[|b|+|g|](t,0,0,0)dt\right) ^{p}+\left( \in _{0}^{T}|\sigma(t,0,0,0)|^{2}dt\right) ^{\frac{p}{2}}\right] , \end{array} \] where $C_{p}\ $depends on $T$, $p$, $L_{1}$. Thus we hav \[ ||(Y,Z)||_{p}^{p}\leq C^{\prime}\mathbb{E}\left[ |\phi(0)|^{p}+|x_{0 |^{p}+\left( \int_{0}^{T}[|b|+|g|](t,0,0,0)dt\right) ^{p}+\left( \in _{0}^{T}|\sigma(t,0,0,0)|^{2}dt\right) ^{\frac{p}{2}}\right] , \] where$\ C^{\prime}=C_{p}(1-\Lambda_{p}^{\frac{1}{p}})^{-p}$. By Lemma \ref{sde-bsde}, we can obtain the desired result. \end{proof} \begin{remark} In the case $p=2$, Pardoux and Tang obtained the $L^{2}$-estimate in \cite{Pardoux-Tang} (see also \cite{Cvi-Zhang}). Instead of assuming that $L_{2}$ and $L_{3}$ are small enough as in \cite{Cvi-Zhang}, we assume $\Lambda_{p}<1$ in this paper. There are other conditions in \cite{Cvi-Zhang} which can guarantee the existence and uniqueness of \eqref{fbsde}. The readers may apply the method introduced in the above theorem to obtain the $L^{p $-estimate of \eqref{fbsde} for these conditions similarly. \end{remark} \subsection{Problem formulation} Consider the following fully coupled stochastic control system: \begin{equation} \left\{ \begin{array} [c]{rl dX(t)= & b(t,X(t),Y(t),Z(t),u(t))dt+\sigma(t,X(t),Y(t),Z(t),u(t))dB(t),\\ dY(t)= & -g(t,X(t),Y(t),Z(t),u(t))dt+Z(t)dB(t),\\ X(0)= & x_{0},\ Y(T)=\phi(X(T)), \end{array} \right. 
\label{state-eq}
\end{equation}
where
\[
b:[0,T]\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{d}\times U\rightarrow\mathbb{R},
\]
\[
\sigma:[0,T]\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{d}\times U\rightarrow\mathbb{R}^{d},
\]
\[
g:[0,T]\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{d}\times U\rightarrow\mathbb{R},
\]
\[
\phi:\mathbb{R}\rightarrow\mathbb{R}.
\]
An admissible control $u(\cdot)$ is an $\mathbb{F}$-adapted process with values in $U$ such that
\[
\sup\limits_{0\leq t\leq T}\mathbb{E}[|u(t)|^{8}]<\infty,
\]
where the control domain $U$ is a nonempty subset of $\mathbb{R}^{k}$. Denote the admissible control set by $\mathcal{U}[0,T]$. Our optimal control problem is to minimize the cost functional
\[
J(u(\cdot))=Y(0)
\]
over $\mathcal{U}[0,T]$:
\begin{equation}
\underset{u(\cdot)\in\mathcal{U}[0,T]}{\inf}J(u(\cdot)). \label{obje-eq}
\end{equation}

\section{Stochastic maximum principle}

In this section we derive a maximum principle (a necessary condition for optimality) for the optimization problem (\ref{obje-eq}). For simplicity of presentation, we only study the case $d=1$ and then present the results for the general case in subsection \ref{sec-general}. Throughout this section, the constant $C$ may change from line to line in the proofs.

We impose the following assumptions on the coefficients of \eqref{state-eq}.

\begin{assumption}
For $\psi=b$, $\sigma$, $g$ and $\phi$, we suppose:

(i) $\psi$, $\psi_{x}$, $\psi_{y}$, $\psi_{z}$ are continuous in $(x,y,z,u)$; $\psi_{x}$, $\psi_{y}$, $\psi_{z}$ are bounded; and there exists a constant $L>0$ such that
\[
\begin{array}
[c]{rl}
|\psi(t,x,y,z,u)| & \leq L\left( 1+|x|+|y|+|z|+|u|\right) ,\\
|\sigma(t,0,0,z,u)-\sigma(t,0,0,z,u^{\prime})| & \leq L(1+|u|+|u^{\prime}|).
\end{array}
\]

(ii) For any $2\leq\beta\leq8$, $\Lambda_{\beta}:=C_{\beta}2^{\beta+1}(1+T^{\beta})c_{1}^{\beta}<1$, where $c_{1}=\max\{L_{2},L_{3}\}$, $L_{2}=\max\{||b_{y}||_{\infty},||b_{z}||_{\infty},||\sigma_{y}||_{\infty}\}$, $L_{3}=||\sigma_{z}||_{\infty}$, and $C_{\beta}$ is defined in Lemma \ref{sde-bsde} in the appendix for $L_{1}=\max\{||b_{x}||_{\infty},||\sigma_{x}||_{\infty},||g_{x}||_{\infty},||g_{y}||_{\infty},||g_{z}||_{\infty},||\phi_{x}||_{\infty}\}$.

(iii) $\psi_{xx}$, $\psi_{xy}$, $\psi_{yy}$, $\psi_{xz}$, $\psi_{yz}$, $\psi_{zz}$ are continuous in $(x,y,z,u)$ and bounded.
\label{assum-2}
\end{assumption}

Under Assumption \ref{assum-2}(i)-(ii), for any $u(\cdot)\in\mathcal{U}[0,T]$, the state equation \eqref{state-eq} has a unique solution by Theorem \ref{est-fbsde-lp}.

Let $\bar{u}(\cdot)$ be an optimal control and $(\bar{X}(\cdot),\bar{Y}(\cdot),\bar{Z}(\cdot))$ be the corresponding state processes of (\ref{state-eq}). Since the control domain is not necessarily convex, we resort to the spike variation method. For any $u(\cdot)\in\mathcal{U}[0,T]$ and $0<\epsilon<T$, define
\[
u^{\epsilon}(t)=\left\{
\begin{array}
[c]{ll}
\bar{u}(t), & \ t\in\lbrack0,T]\backslash E_{\epsilon},\\
u(t), & \ t\in E_{\epsilon},
\end{array}
\right.
\]
where $E_{\epsilon}\subset\lbrack0,T]$ is a measurable set with $|E_{\epsilon}|=\epsilon$. Let $(X^{\epsilon}(\cdot),Y^{\epsilon}(\cdot),Z^{\epsilon}(\cdot))$ be the state processes of (\ref{state-eq}) associated with $u^{\epsilon}(\cdot)$.
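A minimal illustrative example of such a spike perturbation (this particular choice of $E_{\epsilon}$ is not needed in the sequel) is obtained by perturbing $\bar{u}(\cdot)$ on a single small interval: for a fixed $\tau\in\lbrack0,T)$ and $0<\epsilon\leq T-\tau$, take $E_{\epsilon}=[\tau,\tau+\epsilon]$, so that
\[
u^{\epsilon}(t)=\bar{u}(t)+\left( u(t)-\bar{u}(t)\right) I_{[\tau,\tau+\epsilon]}(t),\qquad t\in\lbrack0,T].
\]
Since $u(\cdot)$ and $\bar{u}(\cdot)$ are $U$-valued, $\mathbb{F}$-adapted and satisfy the above moment condition, so does $u^{\epsilon}(\cdot)$, i.e., $u^{\epsilon}(\cdot)\in\mathcal{U}[0,T]$.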
For simplicity, for $\psi=b$, $\sigma$, $g$, $\phi$ and $\kappa=x$, $y$, $z$, denote
\[
\begin{array}
[c]{rl}
\psi(t)= & \psi(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t)),\\
\psi_{\kappa}(t)= & \psi_{\kappa}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t)),\\
\delta\psi(t)= & \psi(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),u(t))-\psi(t),\\
\delta\psi_{\kappa}(t)= & \psi_{\kappa}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),u(t))-\psi_{\kappa}(t),\\
\delta\psi(t,\Delta)= & \psi(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t)+\Delta(t),u(t))-\psi(t),\\
\delta\psi_{\kappa}(t,\Delta)= & \psi_{\kappa}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t)+\Delta(t),u(t))-\psi_{\kappa}(t),
\end{array}
\]
where $\Delta(\cdot)$ is an $\mathbb{F}$-adapted process. Moreover, let $D\psi$ denote the gradient of $\psi$ with respect to $(x,y,z)$ and $D^{2}\psi$ the Hessian matrix of $\psi$ with respect to $(x,y,z)$, and set
\begin{align*}
D\psi(t)  & =D\psi(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t)),\\
D^{2}\psi(t)  & =D^{2}\psi(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t)).
\end{align*}

\begin{lemma}
\label{est-epsilon-bar}Suppose Assumption \ref{assum-2}(i)-(ii) hold. Then for any $2\leq\beta\leq8$ we have
\begin{equation}
\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |X^{\epsilon}(t)-\bar{X}(t)|^{\beta}+|Y^{\epsilon}(t)-\bar{Y}(t)|^{\beta}\right) \right] +\mathbb{E}\left[ \left( \int_{0}^{T}|Z^{\epsilon}(t)-\bar{Z}(t)|^{2}dt\right) ^{\frac{\beta}{2}}\right] =O\left( \epsilon^{\frac{\beta}{2}}\right) .
\end{equation}
\end{lemma}

\begin{proof}
Let
\[
\begin{array}
[c]{rl}
\xi^{1,\epsilon}(t) & :=X^{\epsilon}(t)-\bar{X}(t);\\
\eta^{1,\epsilon}(t) & :=Y^{\epsilon}(t)-\bar{Y}(t);\\
\zeta^{1,\epsilon}(t) & :=Z^{\epsilon}(t)-\bar{Z}(t);\\
\Theta(t) & :=(\bar{X}(t),\bar{Y}(t),\bar{Z}(t));\\
\Theta^{\epsilon}(t) & :=(X^{\epsilon}(t),Y^{\epsilon}(t),Z^{\epsilon}(t)).
\end{array}
\]
We have
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\xi^{1,\epsilon}(t)= & \left[ \tilde{b}_{x}^{\epsilon}(t)\xi^{1,\epsilon}(t)+\tilde{b}_{y}^{\epsilon}(t)\eta^{1,\epsilon}(t)+\tilde{b}_{z}^{\epsilon}(t)\zeta^{1,\epsilon}(t)+\delta b(t)I_{E_{\epsilon}}(t)\right] dt\\
& +\left[ \tilde{\sigma}_{x}^{\epsilon}(t)\xi^{1,\epsilon}(t)+\tilde{\sigma}_{y}^{\epsilon}(t)\eta^{1,\epsilon}(t)+\tilde{\sigma}_{z}^{\epsilon}(t)\zeta^{1,\epsilon}(t)+\delta\sigma(t)I_{E_{\epsilon}}(t)\right] dB(t),\\
\xi^{1,\epsilon}(0)= & 0,
\end{array}
\right. \label{ep-bar-x}
\end{equation}
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\eta^{1,\epsilon}(t)= & -\left[ \tilde{g}_{x}^{\epsilon}(t)\xi^{1,\epsilon}(t)+\tilde{g}_{y}^{\epsilon}(t)\eta^{1,\epsilon}(t)+\tilde{g}_{z}^{\epsilon}(t)\zeta^{1,\epsilon}(t)+\delta g(t)I_{E_{\epsilon}}(t)\right] dt+\zeta^{1,\epsilon}(t)dB(t),\\
\eta^{1,\epsilon}(T)= & \tilde{\phi}_{x}^{\epsilon}(T)\xi^{1,\epsilon}(T),
\end{array}
\right. \label{ep-bar-y}
\end{equation}
where
\[
\tilde{b}_{x}^{\epsilon}(t)=\int_{0}^{1}b_{x}(t,\Theta(t)+\theta(\Theta^{\epsilon}(t)-\Theta(t)),u^{\epsilon}(t))d\theta
\]
and $\tilde{b}_{y}^{\epsilon}(t)$, $\tilde{b}_{z}^{\epsilon}(t)$, $\tilde{\sigma}_{x}^{\epsilon}(t)$, $\tilde{\sigma}_{y}^{\epsilon}(t)$, $\tilde{\sigma}_{z}^{\epsilon}(t)$, $\tilde{g}_{x}^{\epsilon}(t)$, $\tilde{g}_{y}^{\epsilon}(t)$, $\tilde{g}_{z}^{\epsilon}(t)$ and $\tilde{\phi}_{x}^{\epsilon}(T)$ are defined similarly.
Noting that $\left( \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t),\zeta ^{1,\epsilon}(t)\right) $ is the solution to \eqref{ep-bar-x} and \eqref{ep-bar-y}, and \[ \mathbb{E}\left[ \left( \int_{E_{\epsilon}}|u(t)|dt\right) ^{\beta}\right] \leq\epsilon^{\beta-1}\mathbb{E}\left[ \int_{E_{\epsilon}}|u(t)|^{\beta }dt\right] , \] then, by Theorem \ref{est-fbsde-lp}, we get \ \begin{array} [c]{ll} & \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |\xi^{1,\epsilon }(t)|^{\beta}+|\eta^{1,\epsilon}(t)|^{\beta}\right) +\left( \int_{0 ^{T}|\zeta^{1,\epsilon}(t)|^{2}dt\right) ^{\frac{\beta}{2}}\right] \\ & \ \ \leq C\mathbb{E}\left[ \left( \int_{0}^{T}\left( |\delta b(t)|I_{E_{\epsilon}}(t)+|\delta g(t)|I_{E_{\epsilon}}(t)\right) dt\right) ^{\beta}+\left( \int_{0}^{T}|\delta\sigma(t)|^{2}I_{E_{\epsilon }(t)dt\right) ^{\frac{\beta}{2}}\right] \\ & \ \ \leq C\mathbb{E}\left[ \left( \int_{E_{\epsilon}}(1+|\bar{X (t)|+|\bar{Y}(t)|+|\bar{Z}(t)|+|u(t)|+|\bar{u}(t)|)dt\right) ^{\beta}\right. \\ & \text{ \ \ \ \ \ \ \ }\left. +\left( \int_{E_{\epsilon}}(1+|\bar {X}(t)|^{2}+|\bar{Y}(t)|^{2}+|u(t)|^{2}+|\bar{u}(t)|^{2})dt\right) ^{\frac{\beta}{2}}\right] \\ & \ \ \leq C\left( \epsilon^{\beta}+\epsilon^{\frac{\beta}{2}}\right) \left( 1+\sup\limits_{t\in\lbrack0,T]}\mathbb{E}\left[ |\bar{X}(t)|^{\beta }+|\bar{Y}(t)|^{\beta}+|u(t)|^{\beta}+|\bar{u}(t)|^{\beta}\right] \right) +C\epsilon^{\frac{\beta}{2}}\mathbb{E}\left[ \left( \int_{0}^{T}|\bar {Z}(t)|^{2}dt\right) ^{\frac{\beta}{2}}\right] \\ & \ \ \leq C\epsilon^{\frac{\beta}{2}}. \end{array} \] \end{proof} \subsection{A heuristic derivation\label{heuristic}} Before giving the strict proof of the stochastic maximum principle, we illustrate how to obtain our results formally in this subsection. By Lemma \ref{est-epsilon-bar}, we have $X^{\epsilon}(t)-\bar{X}(t)\sim O(\sqrt{\epsilon})$, $Y^{\epsilon}(t)-\bar{Y}(t)\sim O(\sqrt{\epsilon})$ and $Z^{\epsilon}(t)-\bar{Z}(t)\sim O(\sqrt{\epsilon})$. Suppose that \begin{equation \begin{array} [c]{lll X^{\epsilon}(t)-\bar{X}(t) & = & X_{1}(t)+X_{2}(t)+o(\epsilon),\\ Y^{\epsilon}(t)-\bar{Y}(t) & = & Y_{1}(t)+Y_{2}(t)+o(\epsilon),\\ Z^{\epsilon}(t)-\bar{Z}(t) & = & Z_{1}(t)+Z_{2}(t)+o(\epsilon), \end{array} \label{heur-1 \end{equation} where $X_{1}(t)\sim O(\sqrt{\epsilon})$, $X_{2}(t)\sim O(\epsilon)$, $Y_{1}(t)\sim O(\sqrt{\epsilon})$, $Y_{2}(t)\sim O(\epsilon)$, $Z_{1}(t)\sim O(\sqrt{\epsilon})$ and $Z_{2}(t)\sim O(\epsilon)$. It is well-known that the solution $Z$ of the FBSDE (\ref{state-eq}) is closely related to the diffusion term $\sigma$ of the forward SDE of (\ref{state-eq}). When we adopt the spike variation method and calculate the variational equation of $X$, the diffusion term of the variational equation should include the term $\delta\sigma(t)I_{E_{\epsilon}}(t)$. So we guess that $Z_{1}(t)$ has the following for \begin{equation} Z_{1}(t)=\Delta(t)I_{E_{\epsilon}}(t)+Z_{1}^{\prime}(t). \label{heur-2 \end{equation} where $\Delta(t)$ is an $\mathbb{F}$--adapted process and $Z_{1}^{\prime}(t)$ has good estimates similarly as $X_{1}(t)$. But this form of $Z_{1}(t)$ leads to great difficulties when we do Taylor's expansion of the coefficients $b,$ $\sigma$ and $g$ with respect to $Z$. Fortunately, we find that $\Delta(t)$ can be determined uniquely by $u(t)$, $\bar{u}(t)$, and the optimal state $(\bar{X}(t)$, $\bar{Y}(t)$, $\bar{Z}(t))$. 
Note that in Hu \cite{Hu17},
\begin{equation}
\Delta(t)=p(t)\left( \sigma(t,\bar{X}(t),u(t))-\sigma(t,\bar{X}(t),\bar{u}(t))\right) , \label{heur-hu}
\end{equation}
where $p(t)$ is the adjoint process. Although $\Delta(t)$ appears in the expansion of $Z^{\epsilon}(t)-\bar{Z}(t)$, it is clear from (\ref{heur-hu}) that $\Delta(t)$ encodes the spike variation of the control variable. In our context, we will see later that $\Delta(t)$ is determined by the algebraic equation (\ref{heur-7}). Thus, when we derive the variational equations, we should keep the $\Delta(t)I_{E_{\epsilon}}(t)$ term unchanged and perform the Taylor expansions at $\bar{Z}(t)+\Delta(t)I_{E_{\epsilon}}(t)$. This idea was first applied to a partially coupled FBSDE control system by Hu \cite{Hu17}. Following this idea, we can derive the first-order and second-order variational equations for our control system (\ref{state-eq}). The expansions for $b$ and $\sigma$ are given as follows:
\[
\begin{array}
[c]{l}
b(t,X^{\epsilon}(t),Y^{\epsilon}(t),Z^{\epsilon}(t),u^{\epsilon}(t))-b(t)\\
=b(t,\bar{X}(t)+X_{1}(t)+X_{2}(t),\bar{Y}(t)+Y_{1}(t)+Y_{2}(t),\bar{Z}(t)+\Delta(t)I_{E_{\epsilon}}(t)+Z_{1}^{\prime}(t)+Z_{2}(t),u^{\epsilon}(t))-b(t)+o(\epsilon)\\
=b_{x}(t)(X_{1}(t)+X_{2}(t))+b_{y}(t)(Y_{1}(t)+Y_{2}(t))+b_{z}(t)(Z_{1}^{\prime}(t)+Z_{2}(t))\\
\text{ }+\frac{1}{2}[X_{1}(t),Y_{1}(t),Z_{1}^{\prime}(t)]D^{2}b(t)[X_{1}(t),Y_{1}(t),Z_{1}^{\prime}(t)]^{\intercal}+\delta b(t,\Delta)I_{E_{\epsilon}}(t)+o(\epsilon),
\end{array}
\]
\[
\begin{array}
[c]{l}
\sigma(t,X^{\epsilon}(t),Y^{\epsilon}(t),Z^{\epsilon}(t),u^{\epsilon}(t))-\sigma(t)\\
=\sigma(t,\bar{X}(t)+X_{1}(t)+X_{2}(t),\bar{Y}(t)+Y_{1}(t)+Y_{2}(t),\bar{Z}(t)+\Delta(t)I_{E_{\epsilon}}(t)+Z_{1}^{\prime}(t)+Z_{2}(t),u^{\epsilon}(t))-\sigma(t)+o(\epsilon)\\
=\sigma_{x}(t)(X_{1}(t)+X_{2}(t))+\sigma_{y}(t)(Y_{1}(t)+Y_{2}(t))+\sigma_{z}(t)(Z_{1}^{\prime}(t)+Z_{2}(t))\\
\text{ }+\delta\sigma_{x}(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)+\delta\sigma_{y}(t,\Delta)Y_{1}(t)I_{E_{\epsilon}}(t)+\delta\sigma_{z}(t,\Delta)Z_{1}^{\prime}(t)I_{E_{\epsilon}}(t)\\
\text{ }+\frac{1}{2}[X_{1}(t),Y_{1}(t),Z_{1}^{\prime}(t)]D^{2}\sigma(t)[X_{1}(t),Y_{1}(t),Z_{1}^{\prime}(t)]^{\intercal}+\delta\sigma(t,\Delta)I_{E_{\epsilon}}(t)+o(\epsilon).
\end{array}
\]
Note that
\[
\int_{0}^{T}\delta b_{x}(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)dt\sim o(\epsilon)\text{ and }\int_{0}^{T}\delta\sigma_{x}(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)dB(t)\sim O(\epsilon).
\]
So we omit $\delta b_{x}(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)$ in the expansion of $b$ and keep $\delta\sigma_{x}(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)$ in the expansion of $\sigma$. The expansions for $g$ and $\phi$ are similar to the expansions for $b$.
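The orders claimed above for the omitted and retained terms can be checked heuristically as follows (a rough sketch only, not used in the rigorous proofs below): $\delta b_{x}(t,\Delta)$ and $\delta\sigma_{x}(t,\Delta)$ are bounded by Assumption \ref{assum-2}(i), $|E_{\epsilon}|=\epsilon$, and $X_{1}(t)\sim O(\sqrt{\epsilon})$ (made rigorous in Lemma \ref{est-one-order} below), so that
\[
\mathbb{E}\left[ \left\vert \int_{0}^{T}\delta b_{x}(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)dt\right\vert \right] \leq C\epsilon\,\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}|X_{1}(t)|\right] \leq C\epsilon^{\frac{3}{2}}=o(\epsilon),
\]
while, by the It\^{o} isometry,
\[
\mathbb{E}\left[ \left\vert \int_{0}^{T}\delta\sigma_{x}(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)dB(t)\right\vert ^{2}\right] =\mathbb{E}\left[ \int_{E_{\epsilon}}|\delta\sigma_{x}(t,\Delta)X_{1}(t)|^{2}dt\right] \leq C\epsilon\,\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}|X_{1}(t)|^{2}\right] \leq C\epsilon^{2},
\]
so the stochastic integral is of order $O(\epsilon)$ in $L^{2}$, whereas the Lebesgue integral is of order $o(\epsilon)$ in $L^{1}$.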
Then, we obtain the following variational equations \begin{equation} \left\{ \begin{array} [c]{rl d(X_{1}(t)+X_{2}(t))= & \{b_{x}(t)(X_{1}(t)+X_{2}(t))+b_{y}(t)(Y_{1 (t)+Y_{2}(t))+b_{z}(t)(Z_{1}^{\prime}(t)+Z_{2}(t))\\ & +\frac{1}{2}[X_{1}(t),Y_{1}(t),Z_{1}^{\prime}(t)]D^{2}b(t)[X_{1 (t),Y_{1}(t),Z_{1}^{\prime}(t)]^{\intercal}+\delta b(t,\Delta)I_{E_{\epsilon }(t)\}dt\\ & +\{\sigma_{x}(t)(X_{1}(t)+X_{2}(t))+\sigma_{y}(t)(Y_{1}(t)+Y_{2 (t))+\sigma_{z}(t)(Z_{1}^{\prime}(t)+Z_{2}(t))\\ & +\delta\sigma_{x}(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)+\delta\sigma _{y}(t,\Delta)Y_{1}(t)I_{E_{\epsilon}}(t)+\delta\sigma_{z}(t,\Delta )Z_{1}^{\prime}(t)I_{E_{\epsilon}}(t)\\ & +\frac{1}{2}[X_{1}(t),Y_{1}(t),Z_{1}^{\prime}(t)]D^{2}\sigma(t)[X_{1 (t),Y_{1}(t),Z_{1}^{\prime}(t)]^{\intercal}+\delta\sigma(t,\Delta )I_{E_{\epsilon}}(t)\}dB(t),\\ X_{1}(0)+X_{2}(0)= & 0, \end{array} \right. \label{heur-4 \end{equation \begin{equation} \left\{ \begin{array} [c]{rl d(Y_{1}(t)+Y_{2}(t))= & -\{g_{x}(t)(X_{1}(t)+X_{2}(t))+g_{y}(t)(Y_{1 (t)+Y_{2}(t))+g_{z}(t)(Z_{1}^{\prime}(t)+Z_{2}(t))\\ & +\frac{1}{2}[X_{1}(t),Y_{1}(t),Z_{1}^{\prime}(t)]D^{2}g(t)[X_{1 (t),Y_{1}(t),Z_{1}^{\prime}(t)]^{\intercal}+\delta g(t,\Delta)I_{E_{\epsilon }(t)\}dt\\ & +(Z_{1}(t)+Z_{2}(t))dB(t),\\ Y_{1}(T)+Y_{2}(T)= & \phi_{x}(\bar{X}(T))(X_{1}(T)+X_{2}(T))+\frac{1}{2 \phi_{xx}(\bar{X}(T))X_{1}^{2}(T). \end{array} \right. \label{heur-4' \end{equation} Now, we need to derive the first-and second-order variational equations from (\ref{heur-4}) and (\ref{heur-4'}). Firstly, it is easy to establish the first-order variational equation for $X_{1}(t)$ \begin{equation \begin{array} [c]{rl dX_{1}(t)= & \left[ b_{x}(t)X_{1}(t)+b_{y}(t)Y_{1}(t)+b_{z}(t)(Z_{1 (t)-\Delta(t)I_{E_{\epsilon}}(t))\right] dt\\ & +\left[ \sigma_{x}(t)X_{1}(t)+\sigma_{y}(t)Y_{1}(t)+\sigma_{z (t)(Z_{1}(t)-\Delta(t)I_{E_{\epsilon}}(t))+\delta\sigma(t,\Delta )I_{E_{\epsilon}}(t)\right] dB(t),\\ X_{1}(0)= & 0. \end{array} \label{heur-5' \end{equation} Notice that $Y_{1}(T)=\phi_{x}(\bar{X}(T))X_{1}(T)$. So we guess that $Y_{1}\left( t\right) =p\left( t\right) X_{1}\left( t\right) $ where $p\left( t\right) $ is the solution of the following adjoint equation \[ \left\{ \begin{array} [c]{rl dp(t)= & -\Upsilon(t)dt+q(t)dB(t),\\ p(T)= & \phi_{x}(\bar{X}(T)), \end{array} \right. \] where $\Upsilon(t)$ is some adapted process which will be determined later. It is clear that $Y_{1}\left( t\right) =p\left( t\right) X_{1}\left( t\right) $ should include all $O(\sqrt{\epsilon})$-terms of the drift term of (\ref{heur-4'}). Applying It\^{o}'s formula to $p\left( t\right) X_{1}\left( t\right) $, we can determine tha \begin{equation} \left\{ \begin{array} [c]{rl dY_{1}(t)= & -\left[ g_{x}(t)X_{1}(t)+g_{y}(t)Y_{1}(t)+g_{z}(t)(Z_{1 (t)-\Delta(t)I_{E_{\epsilon}}(t))-q(t)\delta\sigma(t,\Delta)I_{E_{\epsilon }(t)\right] dt+Z_{1}(t)dB(t),\\ Y_{1}(T)= & \phi_{x}(\bar{X}(T))X_{1}(T), \end{array} \right. \label{heur-5 \end{equation} and $(p(\cdot),q(\cdot))$ satisfies the following equation \begin{equation} \left\{ \begin{array} [c]{rl dp(t)= & -\left\{ g_{x}(t)+g_{y}(t)p(t)+g_{z}(t)K_{1}(t)+b_{x}(t)p(t)+b_{y (t)p^{2}(t)\right. \\ & \left. +b_{z}(t)K_{1}(t)p(t)+\sigma_{x}(t)q(t)+\sigma_{y}(t)p(t)q(t)+\sigma _{z}(t)K_{1}(t)q(t)\right\} dt+q(t)dB(t),\\ p(T)= & \phi_{x}(\bar{X}(T)), \end{array} \right. \label{eq-p \end{equation} wher \begin{equation} K_{1}(t)=(1-p(t)\sigma_{z}(t))^{-1}\left[ \sigma_{x}(t)p(t)+\sigma _{y}(t)p^{2}(t)+q(t)\right] . 
\label{def-k1 \end{equation} Thus, we obtain the relationshi \begin{equation \begin{array} [c]{l Y_{1}\left( t\right) =p\left( t\right) X_{1}\left( t\right) ,\\ Z_{1}(t)=(1-p(t)\sigma_{z}(t))^{-1}p(t)(\delta\sigma(t,\Delta)-\sigma _{z}(t)\Delta(t))I_{E_{\epsilon}}(t)+K_{1}(t)X_{1}\left( t\right) . \end{array} \label{heur-6 \end{equation} Combining (\ref{heur-2}) and (\ref{heur-6}), we obtain \begin{align*} \Delta(t) & =(1-p(t)\sigma_{z}(t))^{-1}p(t)(\delta\sigma(t,\Delta )-\sigma_{z}(t)\Delta(t)),\\ Z_{1}^{\prime}(t) & =K_{1}(t)X_{1}\left( t\right) , \end{align*} which implies the following algebra equation \begin{equation} \Delta(t)=p(t)\delta\sigma(t,\Delta). \label{heur-7 \end{equation} From (\ref{heur-4}), (\ref{heur-4'}), (\ref{heur-5'}) and (\ref{heur-5}), it is easy to deduce that $(X_{2}(\cdot),Y_{2}(\cdot))$ satisfies the following equation: \begin{equation} \left\{ \begin{array} [c]{rl dX_{2}(t)= & \{b_{x}(t)X_{2}(t)+b_{y}(t)Y_{2}(t)+b_{z}(t)Z_{2}(t)+\delta b(t,\Delta)I_{E_{\epsilon}}(t)\\ & +\frac{1}{2}\left[ X_{1}(t),Y_{1}(t),Z_{1}(t)-\Delta(t)I_{E_{\epsilon }(t)\right] D^{2}b(t)\left[ X_{1}(t),Y_{1}(t),Z_{1}(t)-\Delta (t)I_{E_{\epsilon}}(t)\right] ^{\intercal}\}dt\\ & +\left\{ \sigma_{x}(t)X_{2}(t)+\sigma_{y}(t)Y_{2}(t)+\sigma_{z (t)Z_{2}(t)+\delta\sigma_{x}(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)+\delta \sigma_{y}(t,\Delta)Y_{1}(t)I_{E_{\epsilon}}(t)\right. \\ & +\delta\sigma_{z}(t,\Delta)\left( Z_{1}(t)-\Delta(t)I_{E_{\epsilon }(t)\right) \\ & \left. +\frac{1}{2}\left[ X_{1}(t),Y_{1}(t),Z_{1}(t)-\Delta (t)I_{E_{\epsilon}}(t)\right] D^{2}\sigma(t)\left[ X_{1}(t),Y_{1 (t),Z_{1}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right] ^{\intercal}\right\} dB(t),\\ dY_{2}(t)= & -\left\{ g_{x}(t)X_{2}(t)+g_{y}(t)Y_{2}(t)+g_{z}(t)Z_{2 (t)+\left[ q(t)\delta\sigma(t,\Delta)+\delta g(t,\Delta)\right] I_{E_{\epsilon}}(t)\right. \\ & \left. +\frac{1}{2}\left[ X_{1}(t),Y_{1}(t),Z_{1}(t)-\Delta (t)I_{E_{\epsilon}}(t)\right] D^{2}g(t)\left[ X_{1}(t),Y_{1}(t),Z_{1 (t)-\Delta(t)I_{E_{\epsilon}}(t)\right] ^{\intercal}\right\} dt+Z_{2 (t)dB(t),\\ X_{2}(0)= & 0,\text{ }Y_{2}(T)=\phi_{x}(\bar{X}(T))X_{2}(T)+\frac{1}{2 \phi_{xx}(\bar{X}(T))X_{1}^{2}(T). \end{array} \right. \label{heur-8 \end{equation} In the following two subsections, we give the rigorous proofs for the above heuristic derivations. \subsection{First-order expansion} From the heuristic derivation, in order to obtain the first-order variational equation of \eqref{state-eq}, we need to introduce the first-order adjoint equation (\ref{eq-p}). Since the generator of \eqref{eq-p} does not satisfy Lipschitz condition, we firstly explore the solvability of \eqref{eq-p}. For $\beta_{0}>0$ and $y\in\mathbb{R}$, set \[ G(y)=L_{1}+\left( L_{2}+L_{1}+\beta_{0}^{-1}L_{1}L_{2}\right) |y|+\left[ L_{2}+\beta_{0}^{-1}(L_{1}L_{2}+L_{2}^{2})\right] y^{2}+\beta_{0}^{-1 L_{2}^{2}|y|^{3},\ y\in\mathbb{R}\text{. \] Let $s(\cdot)$ be the maximal solution to the following equation \begin{equation} s(t)=L_{1}+\int_{t}^{T}G(s(r))dr,\;t\in\lbrack0,T]; \label{u-ode \end{equation} and $l(\cdot)$ be the minimal solution to the following equation: \begin{equation} l(t)=-L_{1}-\int_{t}^{T}G(l(r))dr,\;t\in\lbrack0,T]. \label{l-ode \end{equation} Moreover, set \begin{equation} t_{1}=T-\int_{-\infty}^{-L_{1}}\frac{1}{G(y)}dy,\ \ t_{2}=T-\int_{L_{1 }^{\infty}\frac{1}{G(y)}dy,\ \ t^{\ast}=t_{1}\vee t_{2}. \label{def-t \end{equation} \begin{lemma} \label{t-star} For given $\beta_{0}>0$, then there exists a $\delta>0$ such that when $L_{2}<\delta$, we have $t^{\ast}<0$. 
\end{lemma}

\begin{proof}
We only prove that there exists a $\delta>0$ such that when $L_{2}<\delta$, we have $t_{2}<0$. Note that $G(y)$ is nondecreasing in $L_{2}$. As $L_{2}\rightarrow0$,
\[
\frac{1}{G(y)}\uparrow\frac{1}{L_{1}(1+|y|)}.
\]
Applying the monotone convergence theorem, we obtain
\[
\int_{L_{1}}^{\infty}\frac{1}{G(y)}dy\uparrow\int_{L_{1}}^{\infty}\frac{1}{L_{1}(1+|y|)}dy=\infty.
\]
The result then follows from the definition of $t_{2}$.
\end{proof}

\begin{assumption}
\label{assum-3} There exists a constant $\beta_{0}\in(0,1)$ such that
\[
t^{\ast}<0,
\]
and
\begin{equation}
\lbrack s(0)\vee(-l(0))]L_{3}\leq1-\beta_{0}. \label{assum-cl}
\end{equation}
\end{assumption}

\begin{remark}
Note that $G(\cdot)$ is independent of $L_{3}$. Therefore, by Lemma \ref{t-star}, Assumption \ref{assum-3} holds when $L_{2}$ and $L_{3}$ are small enough.
\end{remark}

\begin{theorem}
\label{exist-unique-BSDE}Suppose Assumptions \ref{assum-2}(i)-(ii) and \ref{assum-3} hold. Then \eqref{eq-p} has a bounded solution such that
\[
|p(t)|\leq\lbrack s(0)\vee(-l(0))],
\]
and $q(\cdot)\in L_{\mathcal{F}}^{2,\beta}([0,T];\mathbb{R})$ for any $\beta\geq1$.
\end{theorem}

\begin{proof}
The existence is a direct consequence of Theorem 4 in \cite{LS02}. Furthermore, similarly to Corollary 4 in \cite{HuBSDEquad}, we can obtain $q(\cdot)\in L_{\mathcal{F}}^{2,\beta}\left( [0,T];\mathbb{R}\right) $ for any $\beta\geq1$.
\end{proof}

\begin{remark}
The proof of the uniqueness of $(p(\cdot),q(\cdot))$ can be found in Theorem \ref{unique-pq} in the appendix.
\end{remark}

In order to introduce the first-order variational equation, we study the following algebraic equation:
\begin{equation}
\Delta(t)=p(t)(\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t)+\Delta(t),u(t))-\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t))),\;t\in\lbrack0,T], \label{def-delt}
\end{equation}
where $u\left( \cdot\right) $ is a given admissible control.

\begin{remark}
It should be noted that $\Delta(t)$ depends on the optimal control $\bar{u}(\cdot)$, the adjoint process $p(\cdot)$, and the control $u(\cdot)$.
\end{remark}

\begin{lemma}
\label{delta-exist}Under the same assumptions as in Theorem \ref{exist-unique-BSDE}, the equation (\ref{def-delt}) has a unique adapted solution $\Delta(\cdot)$. Moreover,
\begin{equation}
\begin{array}
[c]{c}
|\Delta(t)|\leq C(1+|\bar{X}(t)|+|\bar{Y}(t)|+|u(t)|+|\bar{u}(t)|),\text{ }t\in\lbrack0,T],\\
\sup\limits_{0\leq t\leq T}\mathbb{E}[|\Delta(t)|^{8}]<\infty,
\end{array}
\label{del-new-3}
\end{equation}
where $C$ is a constant depending on $\beta_{0}$, $L$, $L_{1}$, $L_{2}$, $L_{3}$, $T$.
\end{lemma}

\begin{proof}
We first prove uniqueness. Let $\Delta(\cdot)$ and $\Delta^{\prime}(\cdot)$ be two adapted solutions to (\ref{def-delt}). Then
\begin{equation}
\begin{array}
[c]{l}
\left\vert \Delta(t)-\Delta^{\prime}(t)\right\vert \\
=|p(t)||\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t)+\Delta(t),u(t))-\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t)+\Delta^{\prime}(t),u(t))|\\
\leq\lbrack s(0)\vee(-l(0))]L_{3}\left\vert \Delta(t)-\Delta^{\prime}(t)\right\vert \\
\leq(1-\beta_{0})\left\vert \Delta(t)-\Delta^{\prime}(t)\right\vert ,
\end{array}
\label{del-new-1}
\end{equation}
which implies $\Delta(t)=\Delta^{\prime}(t)$ for $t\in\lbrack0,T]$. Now we construct a contraction mapping on $L_{\mathcal{F}}^{2}([0,T];\mathbb{R})$ to prove the existence.
For each given $\tilde{\Delta}(\cdot)\in L_{\mathcal{F }^{2}([0,T];\mathbb{R})$, define the operator $\tilde{\Delta}(\cdot )\rightarrow\Delta(\cdot)$ by $\Gamma$, wher \[ \Delta(t)=p(t)(\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t)+\tilde{\Delta }(t),u(t))-\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t))),\;t\in \lbrack0,T]. \] Following the same steps as (\ref{del-new-1}), we can ge \begin{equation} |\Delta(t)|\leq(1-\beta_{0})\left\vert \tilde{\Delta}(t)\right\vert +p(t)\delta\sigma(t),\;t\in\lbrack0,T], \label{del-new-2 \end{equation} which implies $\Delta(\cdot)\in L_{\mathcal{F}}^{2}([0,T];\mathbb{R})$. For each $\tilde{\Delta}_{i}(\cdot)\in L_{\mathcal{F}}^{2}([0,T];\mathbb{R})$, denote $\Delta_{i}(\cdot)=\Gamma\left( \tilde{\Delta}_{i}(\cdot)\right) $, $i=1$, $2$. Similar to (\ref{del-new-1}), we hav \[ \mathbb{E}\left[ \int_{0}^{T}\left\vert \Delta_{1}(t)-\Delta_{2 (t)\right\vert ^{2}dt\right] \leq(1-\beta_{0})^{2}\mathbb{E}\left[ \in _{0}^{T}\left\vert \tilde{\Delta}_{1}(t)-\tilde{\Delta}_{2}(t)\right\vert ^{2}dt\right] . \] Thus, by the contraction mapping theorem, (\ref{def-delt}) has an adapted solution $\Delta(\cdot)\in L_{\mathcal{F}}^{2}([0,T];\mathbb{R})$. Moreover, for any adapted solution $\Delta(\cdot)$ to (\ref{def-delt}), it follows from (\ref{del-new-2}) with $\tilde{\Delta}(\cdot)=\Delta(\cdot)$ tha \[ |\Delta(t)|\leq\beta_{0}^{-1}p(t)\delta\sigma(t)\leq C(1+|\bar{X}(t)|+|\bar {Y}(t)|+|u(t)|+|\bar{u}(t)|),\text{ }t\in\lbrack0,T], \] which implies (\ref{del-new-3}). \end{proof} \begin{remark} If the diffusion term is independent of $z$, that is $\sigma_{z}(\cdot)=0$, then we obtain $\Delta(t)=p(t)\delta\sigma(t)$. If the diffusion term contains $z$ in a linear form, for example $\sigma(t,x,y,z,u)=A(t)z+\sigma _{1}(t,x,y,u)$, then we obtain \[ \Delta(t)=\left( 1-p(t)A(t)\right) ^{-1}p(t)\left( \sigma_{1}(t,\bar {X}(t),\bar{Y}(t),u(t))-\sigma_{1}(t,\bar{X}(t),\bar{Y}(t),\bar{u}(t))\right) . \] \end{remark} Now we introduce the first-order variational equation: \begin{equation} \left\{ \begin{array} [c]{rl dX_{1}(t)= & \left[ b_{x}(t)X_{1}(t)+b_{y}(t)Y_{1}(t)+b_{z}(t)(Z_{1 (t)-\Delta(t)I_{E_{\epsilon}}(t))\right] dt\\ & +\left[ \sigma_{x}(t)X_{1}(t)+\sigma_{y}(t)Y_{1}(t)+\sigma_{z (t)(Z_{1}(t)-\Delta(t)I_{E_{\epsilon}}(t))+\delta\sigma(t,\Delta )I_{E_{\epsilon}}(t)\right] dB(t),\\ X_{1}(0)= & 0, \end{array} \right. \label{new-form-x1 \end{equation} and \begin{equation} \left\{ \begin{array} [c]{lll dY_{1}(t) & = & -\left[ g_{x}(t)X_{1}(t)+g_{y}(t)Y_{1}(t)+g_{z (t)(Z_{1}(t)-\Delta(t)I_{E_{\epsilon}}(t))-q(t)\delta\sigma(t,\Delta )I_{E_{\epsilon}}(t)\right] dt+Z_{1}(t)dB(t),\\ Y_{1}(T) & = & \phi_{x}(\bar{X}(T))X_{1}(T). \end{array} \right. \label{new-form-y1 \end{equation} \begin{remark} The existence and uniqueness of $(X_{1}(\cdot),Y_{1}(\cdot),Z_{1}(\cdot))$ in \eqref{new-form-x1} and \eqref{new-form-y1} is guaranteed by Theorem \ref{est-fbsde-lp}. \end{remark} \begin{assumption} The solution $q(\cdot)$ of (\ref{eq-p}) is a bounded process. \label{assm-q-bound} \end{assumption} The relationship between $(Y_{1}(t),Z_{1}(t))$ and $X_{1}(t)$ as pointed out in our heuristic derivation is obtained in the following lemma. \begin{lemma} \label{lemma-y1}Suppose Assumptions \ref{assum-2}(i)-(ii), \ref{assum-3} and \ref{assm-q-bound} hold. Then we have \begin{align*} Y_{1}(t) & =p(t)X_{1}(t),\\ Z_{1}(t) & =K_{1}(t)X_{1}(t)+\Delta(t)I_{E_{\epsilon}}(t), \end{align*} where $p(\cdot)$ is the solution of \eqref{eq-p} and $K_{1}\left( \cdot\right) $ is given in (\ref{def-k1}). 
\end{lemma} \begin{proof} Consider the following stochastic differential equation: \begin{equation} \left\{ \begin{array} [c]{rl d\tilde{X}_{1}(t)= & \left\{ \left[ b_{x}(t)+b_{y}(t)p(t)+b_{z (t)K_{1}(t)\right] \tilde{X}_{1}(t)\right\} dt\\ & +\left\{ \left[ \sigma_{x}(t)+p(t)\sigma_{y}(t)+\sigma_{z}(t)K_{1 (t)\right] \tilde{X}_{1}(t)+\delta\sigma(t,\Delta)I_{E_{\epsilon }(t)\right\} dB(t),\\ \tilde{X}_{1}(0)= & 0. \end{array} \right. \label{eq-x1 \end{equation} It is easy to check that there exists a unique solution $\tilde{X}_{1}(\cdot)$ of \eqref{eq-x1}. Se \begin{align} \tilde{Y}_{1}(t) & =p(t)\tilde{X}_{1}(t),\label{first order-relation}\\ \tilde{Z}_{1}(t) & =K_{1}(t)\tilde{X}_{1}(t)+\Delta(t)I_{E_{\epsilon }(t).\nonumber \end{align} Applying It\^{o}'s lemma to $p(t)\tilde{X}_{1}(t)$, \ \begin{array} [c]{lll d\tilde{Y}_{1}(t) & = & -\left[ g_{x}(t)\tilde{X}_{1}(t)+g_{y}(t)\tilde {Y}_{1}(t)+g_{z}(t)\tilde{Z}_{1}(t)-g_{z}(t)\Delta(t)I_{E_{\epsilon }(t)-q(t)\delta\sigma(t,\Delta)I_{E_{\epsilon}}(t)\right] dt+\tilde{Z _{1}(t)dB(t). \end{array} \] Thus $(\tilde{X}_{1}(\cdot),\tilde{Y}_{1}(\cdot),\tilde{Z}_{1}(\cdot))$ solves (\ref{new-form-x1}) and (\ref{new-form-y1}), by Theorem \ref{est-fbsde-lp}, \ $(\tilde{X}_{1}(\cdot),\tilde{Y}_{1}(\cdot),\tilde{Z}_{1}(\cdot ))=(X_{1}(\cdot),Y_{1}(\cdot),Z_{1}(\cdot))$. This completes the proof. \end{proof} Then we have the following estimates. \begin{lemma} \label{est-one-order}Suppose Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-q-bound} hold. Then for any $2\leq\beta\leq8$, we have the following estimates \begin{equation} \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |X_{1}(t)|^{\beta }+|Y_{1}(t)|^{\beta}\right) \right] +\mathbb{E}\left[ \left( \int_{0 ^{T}|Z_{1}(t)|^{2}dt\right) ^{\beta/2}\right] =O(\epsilon^{\beta/2}), \label{est-x1-y1 \end{equation \[ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |X^{\epsilon (t)-\bar{X}(t)-X_{1}(t)|^{2}+|Y^{\epsilon}(t)-\bar{Y}(t)-Y_{1}(t)|^{2}\right) \right] +\mathbb{E}\left[ \int_{0}^{T}|Z^{\epsilon}(t)-\bar{Z (t)-Z_{1}(t)|^{2}dt\right] =O(\epsilon^{2}), \ \[ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}(|X^{\epsilon}(t)-\bar {X}(t)-X_{1}(t)|^{4}+|Y^{\epsilon}(t)-\bar{Y}(t)-Y_{1}(t)|^{4})\right] +\mathbb{E}\left[ \left( \int_{0}^{T}|Z^{\epsilon}(t)-\bar{Z}(t)-Z_{1 (t)|^{2}dt\right) ^{2}\right] =o(\epsilon^{2}). \] \end{lemma} \begin{proof} By Theorem \ref{est-fbsde-lp}, we have \begin{equation \begin{array} [c]{l \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |X_{1}(t)|^{\beta }+|Y_{1}(t)|^{\beta}\right) +\left( \int_{0}^{T}|Z_{1}(t)|^{2}dt\right) ^{\beta/2}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{0}^{T}\left[ \left( |b_{z (t)|+|g_{z}(t)|\right) |\Delta(t)|+|q(t)\delta\sigma(t,\Delta)|\right] I_{E_{\epsilon}}(t)dt\right) ^{\beta}\right] \\ \text{ \ }+C\mathbb{E}\left[ \left( \int_{0}^{T}\left[ |\sigma_{z (t)\Delta(t)|^{2}+|\delta\sigma(t,\Delta)|^{2}\right] I_{E_{\epsilon }(t)dt\right) ^{\beta/2}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{E_{\epsilon}}\left( 1+|\bar {X}(t)|+\left\vert \bar{Y}(t)\right\vert +\left\vert \bar{u (t)|+|u(t)\right\vert \right) dt\right) ^{\beta}\right] \\ \text{ \ }+C\mathbb{E}\left[ \left( \int_{E_{\epsilon}}\left( 1+|\bar {X}(t)|^{2}+\left\vert \bar{Y}(t)\right\vert ^{2}+\left\vert \bar{u (t)|^{2}+|u(t)\right\vert ^{2}\right) dt\right) ^{\beta/2}\right] \\ \leq C\epsilon^{^{\beta/2}}. 
\end{array} \label{est-1-order \end{equation} We use the notations $\xi^{1,\epsilon}(t)$, $\eta^{1,\epsilon}(t)$ and $\zeta^{1,\epsilon}(t)$ in the proof of Lemma \ref{est-epsilon-bar} and let \ \begin{array} [c]{rl \xi^{2,\epsilon}(t) & :=X^{\epsilon}(t)-\bar{X}(t)-X_{1}(t);\\ \eta^{2,\epsilon}(t) & :=Y^{\epsilon}(t)-\bar{Y}(t)-Y_{1}(t);\\ \zeta^{2,\epsilon}(t) & :=Z^{\epsilon}(t)-\bar{Z}(t)-Z_{1}(t);\\ \Theta(t) & :=(\bar{X}(t),\bar{Y}(t),\bar{Z}(t));\\ \Theta(t,\Delta I_{E_{\epsilon}}) & :=(\bar{X}(t),\bar{Y}(t),\bar{Z (t)+\Delta(t)I_{E_{\epsilon}}(t));\\ \Theta^{\epsilon}(t) & :=(X^{\epsilon}(t),Y^{\epsilon}(t),Z^{\epsilon}(t)). \end{array} \] Note that \ \begin{array} [c]{l \delta\sigma(t,\Delta)I_{E_{\epsilon}}(t)=\sigma(t,\bar{X}(t),\bar{Y (t),\bar{Z}(t)+\Delta(t)I_{E_{\epsilon}}(t),u^{\epsilon}(t))-\sigma (t)=\sigma(t,\Theta(t,\Delta I_{E_{\epsilon}}(t)),u^{\epsilon}(t))-\sigma(t). \end{array} \] We hav \ \begin{array} [c]{l \sigma(t,\Theta^{\epsilon}(t),u^{\epsilon}(t))-\sigma(t)-\delta\sigma (t,\Delta)I_{E_{\epsilon}}(t)\\ =\sigma(t,\Theta^{\epsilon}(t),u^{\epsilon}(t))-\sigma(t,\Theta(t,\Delta I_{E_{\epsilon}}(t)),u^{\epsilon}(t))\\ =\tilde{\sigma}_{x}^{\epsilon}(t)(X^{\epsilon}(t)-\bar{X}(t))+\tilde{\sigma }_{y}^{\epsilon}(t)(Y^{\epsilon}(t)-\bar{Y}(t))+\tilde{\sigma}_{z}^{\epsilon }(t)(Z^{\epsilon}(t)-\bar{Z}(t)-\Delta(t)I_{E_{\epsilon}}(t)), \end{array} \] where \[ \tilde{\sigma}_{x}^{\epsilon}(t)=\int_{0}^{1}\sigma_{x}(t,\Theta(t,\Delta I_{E_{\epsilon}}(t))+\theta(\Theta^{\epsilon}(t)-\Theta(t,\Delta I_{E_{\epsilon}}(t))),u^{\epsilon}(t))d\theta, \] and $\tilde{\sigma}_{y}^{\epsilon}(t)$, $\tilde{\sigma}_{z}^{\epsilon (t)$\ are defined similarly. \ Recall that $\tilde{b}_{x}^{\epsilon}(t)$, $\tilde{b}_{y}^{\epsilon}(t)$, $\tilde{b}_{z}^{\epsilon}(t)$, $\tilde{g}_{x}^{\epsilon}(t)$, $\tilde{g _{y}^{\epsilon}(t)$, $\tilde{g}_{z}^{\epsilon}(t)$ and $\tilde{\phi _{x}^{\epsilon}(T)$ are defined in Lemma \ref{est-epsilon-bar}. Then \begin{equation} \left\{ \begin{array} [c]{ll d\xi^{2,\epsilon}(t)= & \left[ \tilde{b}_{x}^{\epsilon}(t)\xi^{2,\epsilon }(t)+\tilde{b}_{y}^{\epsilon}(t)\eta^{2,\epsilon}(t)+\tilde{b}_{z}^{\epsilon }(t)\zeta^{2,\epsilon}(t)+A_{1}^{\epsilon}(t)\right] dt\\ & +\left[ \tilde{\sigma}_{x}^{\epsilon}(t)\xi^{2,\epsilon}(t)+\tilde{\sigma }_{y}^{\epsilon}(t)\eta^{2,\epsilon}(t)+\tilde{\sigma}_{z}^{\epsilon (t)\zeta^{2,\epsilon}(t))+B_{1}^{\epsilon}(t)\right] dB(t),\\ \xi^{2,\epsilon}(0)= & 0, \end{array} \right. \label{deri-x-x1 \end{equation \[ \left\{ \begin{array} [c]{rl d\eta^{2,\epsilon}(t)= & -\left[ \tilde{g}_{x}^{\epsilon}(t)\xi^{2,\epsilon }(t)+\tilde{g}_{y}^{\epsilon}(t)\eta^{2,\epsilon}(t)+\tilde{g}_{z}^{\epsilon }(t)\zeta^{2,\epsilon}(t)+C_{1}^{\epsilon}(t)\right] dt+\zeta^{2,\epsilon }(t)dB(t),\\ \eta^{2,\epsilon}(T)= & \tilde{\phi}_{x}^{\epsilon}(T)\xi^{2,\epsilon }(T)+D_{1}^{\epsilon}(T), \end{array} \right. 
\] wher \ \begin{array} [c]{rl A_{1}^{\epsilon}(t)= & (\tilde{b}_{x}^{\epsilon}(t)-b_{x}(t))X_{1 (t)+(\tilde{b}_{y}^{\epsilon}(t)-b_{y}(t))Y_{1}(t)+(\tilde{b}_{z}^{\epsilon }(t)-b_{z}(t))Z_{1}(t)+b_{z}(t)\Delta(t)I_{E_{\epsilon}}(t)+\delta b(t)I_{E_{\epsilon}}(t),\\ B_{1}^{\epsilon}(t)= & (\tilde{\sigma}_{x}^{\epsilon}(t)-\sigma_{x (t))X_{1}(t)+(\tilde{\sigma}_{y}^{\epsilon}(t)-\sigma_{y}(t))Y_{1 (t)+(\tilde{\sigma}_{z}^{\epsilon}(t)-\sigma_{z}(t))K_{1}(t)X_{1}(t),\\ C_{1}^{\epsilon}(t)= & (\tilde{g}_{x}^{\epsilon}(t)-g_{x}(t))X_{1 (t)+(\tilde{g}_{y}^{\epsilon}(t)-g_{y}(t))Y_{1}(t)+(\tilde{g}_{z}^{\epsilon }(t)-g_{z}(t))Z_{1}(t)+\delta g(t)I_{E_{\epsilon}}(t)\\ & +g_{z}(t)\Delta(t)I_{E_{\epsilon}}(t)+q(t)\delta\sigma(t,\Delta )I_{E_{\epsilon}}(t),\\ D_{1}^{\epsilon}(T)= & (\tilde{\phi}_{x}^{\epsilon}(T)-\phi_{x}(\bar {X}(T)))X_{1}(T). \end{array} \] By Theorem \ref{est-fbsde-lp}, we obtain \ \begin{array} [c]{l \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |\xi^{2,\epsilon }(t)|^{2}+|\eta^{2,\epsilon}(t)|^{2}\right) +\int_{0}^{T}|\zeta^{2,\epsilon }(t)|^{2}dt\right] \\ \leq C\mathbb{E}\left[ \left( \int_{0}^{T}\left( |A_{1}^{\epsilon }(t)|+|C_{1}^{\epsilon}(t)|\right) dt\right) ^{2}+\int_{0}^{T |B_{1}^{\epsilon}(t)|^{2}dt+|D_{1}^{\epsilon}(T)|^{2}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{0}^{T}|A_{1}^{\epsilon}(t)|dt\right) ^{2}+\left( \int_{0}^{T}|C_{1}^{\epsilon}(t)|dt\right) ^{2}+\int_{0 ^{T}|B_{1}^{\epsilon}(t)|^{2}dt+|D_{1}^{\epsilon}(T)|^{2}\right] . \end{array} \] Now we estimate term by term as follows. (1) Since \begin{equation \begin{array} [c]{l \mathbb{E}\left[ \left( \int_{0}^{T}|\tilde{b}_{z}^{\epsilon}(t)-b_{z (t)||Z_{1}(t)|dt\right) ^{2}\right] \\ \leq C\mathbb{E}\left[ \int_{0}^{T}|\tilde{b}_{z}^{\epsilon}(t)-b_{z (t)|^{2}dt\int_{0}^{T}|Z_{1}(t)|^{2}dt\right] \\ \leq C\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|\tilde{b}_{z}^{\epsilon }(t)-b_{z}(t)|^{2}dt\right) ^{2}\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|Z_{1}(t)|^{2}dt\right) ^{2}\right] \right\} ^{\frac{1}{2}}\\ \leq C\left\{ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |\xi^{1,\epsilon}(t)|^{4}+|\eta^{1,\epsilon}(t)|^{4}\right) +\left( \in _{0}^{T}\left( |\zeta^{1,\epsilon}(t)|^{2}+|\delta b_{z}(t)|^{2 I_{E_{\epsilon}}(t)\right) dt\right) ^{2}\right] \right\} ^{\frac{1}{2 }\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|Z_{1}(t)|^{2}dt\right) ^{2}\right] \right\} ^{\frac{1}{2}}\\ \leq C\epsilon^{2}, \end{array} \label{est-b_x-z1 \end{equation} the estimate of $\mathbb{E}\left[ \left( \int_{0}^{T}\left( \tilde{b _{x}^{\epsilon}(t)-b_{x}(t)\right) X_{1}(t)dt\right) ^{2}\right] $ and $\mathbb{E}\left\{ \left[ \int_{0}^{T}\left( \tilde{b}_{y}^{\epsilon }(t)-b_{y}(t)\right) Y_{1}(t)dt\right] ^{2}\right\} $ is the same as \eqref{est-b_x-z1}, \begin{equation} \mathbb{E}\left[ \left( \int_{0}^{T}|b_{z}(t)\Delta(t)I_{E_{\epsilon }(t)|dt\right) ^{2}\right] \leq C\epsilon\int_{E_{\epsilon}}\mathbb{E [|\Delta(t)|^{2}]dt\leq C\epsilon^{2}, \label{est-delta \end{equation \ \begin{array} [c]{ll \mathbb{E}\left[ (\int_{0}^{T}|\delta b(t)I_{E_{\epsilon}}(t)|dt)^{2}\right] & \leq\mathbb{E}\left[ \left( \int_{E_{\epsilon}}\left( 1+|\bar {X}(t)|+\left\vert \bar{Y}(t)\right\vert +\left\vert \bar{Z}(t)\right\vert +\left\vert u(t)\right\vert +\left\vert \bar{u}(t)\right\vert \right) dt\right) ^{2}\right] \\ & \leq\epsilon\mathbb{E}\left[ \int_{E_{\epsilon}}\left( 1+|\bar{X (t)|^{2}+\left\vert \bar{Y}(t)\right\vert ^{2}+\left\vert \bar{Z (t)\right\vert ^{2}+\left\vert u(t)\right\vert 
^{2}+\left\vert \bar {u}(t)\right\vert ^{2}\right) dt\right] \\ & \leq C\epsilon^{2}, \end{array} \] then, \[ \mathbb{E}\left[ (\int_{0}^{T}|A_{1}^{\epsilon}(t)|dt)^{2}\right] \leq C\epsilon^{2}. \] (2) \begin{equation \begin{array} [c]{l \mathbb{E}\left[ \int_{0}^{T}|\tilde{\sigma}_{z}^{\epsilon}(t)-\sigma _{z}(t)|^{2}|K_{1}(t)X_{1}(t)|^{2}dt\right] \\ \leq C\mathbb{E}\left[ \sup\limits_{0\leq t\leq T}|X_{1}(t)|^{2}\int_{0 ^{T}|\tilde{\sigma}_{z}^{\epsilon}(t)-\sigma_{z}(t)|^{2}dt\right] \\ \leq C\left\{ \mathbb{E}\left[ \sup\limits_{0\leq t\leq T}|X_{1 (t)|^{4}\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |\xi^{1,\epsilon}(t)|^{4}+|\eta ^{1,\epsilon}(t)|^{4}\right) \right. \right. \\ \ \ \left. \left. +\left( \int_{0}^{T}\left( |\zeta^{1,\epsilon (t)-\Delta(t)I_{E_{\epsilon}}(t)|^{2}+|\delta\sigma_{z}(t,\Delta )|^{2}I_{E_{\epsilon}}(t)\right) dt\right) ^{2}\right] \right\} ^{\frac {1}{2}}\\ \leq C\epsilon\left\{ \epsilon^{2}+\epsilon\int_{E_{\epsilon}}\mathbb{E [|\Delta(t)|^{4}]dt\right\} ^{\frac{1}{2}}\\ \leq C\epsilon^{2}, \end{array} \label{est-sigmazx1 \end{equation} and the estimate of $\mathbb{E}\left[ \int_{0}^{T}|\tilde{\sigma _{x}^{\epsilon}(t)-\sigma_{x}(t)|^{2}|X_{1}(t)|^{2}dt\right] $ and $\mathbb{E}\left[ \int_{0}^{T}|\tilde{\sigma}_{y}^{\epsilon}(t)-\sigma _{y}(t)|^{2}|Y_{1}(t)|^{2}dt\right] $ is the same as \eqref{est-sigmazx1}. Thus, \begin{equation} \mathbb{E}\left[ \int_{0}^{T}|B_{1}^{\epsilon}(t)|^{2}dt\right] \leq C\epsilon^{2}. \end{equation} (3 \begin{equation} \mathbb{E}\left[ |D_{1}^{\epsilon}(T)|^{2}\right] \leq C\left\{ \mathbb{E}\left[ \sup\limits_{0\leq t\leq T}|X_{1}(t)|^{4}\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ \sup\limits_{0\leq t\leq T |\xi^{1,\epsilon}(t)|^{4}\right] \right\} ^{\frac{1}{2}}\leq C\epsilon^{2}. \end{equation} (4) The estimate of $\mathbb{E}\left[ (\int_{0}^{T}|C_{1}^{\epsilon }(t)|dt)^{2}\right] $ is the same as $\mathbb{E}\left[ (\int_{0}^{T |A_{1}^{\epsilon}(t)|dt)^{2}\right] $. Similarly, we obtain \ \begin{array} [c]{l \mathbb{E}\left\{ \sup\limits_{t\in\lbrack0,T]}\left[ |\xi^{2,\epsilon }(t)|^{4}+|\eta^{2,\epsilon}(t)|^{4}\right] +\left( \int_{0}^{T |\zeta^{2,\epsilon}(t)|^{2}dt\right) ^{2}\right\} \\ \leq C\mathbb{E}\left\{ \left( \int_{0}^{T}\left( |A_{1}^{\epsilon }(t)|+|C_{1}^{\epsilon}(t)|\right) dt\right) ^{4}+\left( \int_{0}^{T |B_{1}^{\epsilon}(t)|^{2}dt\right) ^{2}+\left\vert D_{1}^{\epsilon }(T)\right\vert ^{4}\right\} \\ =o(\epsilon^{2}). \end{array} \] This completes the proof. \end{proof} \subsection{Second-order expansion} Noting that $Z_{1}(t)=K_{1}(t)X_{1}(t)+\Delta(t)I_{E_{\epsilon}}(t)$\ in Lemma \ref{lemma-y1}, then we introduce the second-order variational equation as follows: \begin{equation} \left\{ \begin{array} [c]{rl dX_{2}(t)= & \left\{ b_{x}(t)X_{2}(t)+b_{y}(t)Y_{2}(t)+b_{z}(t)Z_{2 (t)+\delta b(t,\Delta)I_{E_{\epsilon}}(t)\right. \\ & \left. +\frac{1}{2}\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] D^{2}b(t)\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] ^{\intercal }\right\} dt\\ & +\left\{ \sigma_{x}(t)X_{2}(t)+\sigma_{y}(t)Y_{2}(t)+\frac{1}{2}\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] D^{2}\sigma(t)\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] ^{\intercal}\right. \\ & \left. +\sigma_{z}(t)Z_{2}(t)+\left[ \delta\sigma_{x}(t,\Delta )X_{1}(t)+\delta\sigma_{y}(t,\Delta)Y_{1}(t)\right] I_{E_{\epsilon }(t)+\delta\sigma_{z}(t,\Delta)K_{1}(t)X_{1}(t)I_{E_{\epsilon}}(t)\right\} dB(t),\\ X_{2}(0)= & 0, \end{array} \right. 
\label{new-form-x2 \end{equation} and \begin{equation} \left\{ \begin{array} [c]{ll dY_{2}(t)= & -\left\{ g_{x}(t)X_{2}(t)+g_{y}(t)Y_{2}(t)+g_{z}(t)Z_{2 (t)+\left[ q(t)\delta\sigma(t,\Delta)+\delta g(t,\Delta)\right] I_{E_{\epsilon}}(t)\right. \\ & \left. +\frac{1}{2}\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] D^{2}g(t)\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] ^{\intercal }\right\} dt+Z_{2}(t)dB(t),\\ Y_{2}(T)= & \phi_{x}(\bar{X}(T))X_{2}(T)+\frac{1}{2}\phi_{xx}(\bar{X (T))X_{1}^{2}(T). \end{array} \right. \label{new-form-y2 \end{equation} In the following lemma, we estimate the orders of $X_{2}(\cdot)$, $Y_{2 (\cdot)$, $Z_{2}(\cdot)$, and $Y^{\epsilon}(0)-\bar{Y}(0)-Y_{1}(0)-Y_{2}(0)$. \begin{lemma} \label{est-second-order} Suppose Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-q-bound} hold. Then for any $2\leq\beta\leq4$ we hav \ \begin{array} [c]{rl \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}(|X_{2}(t)|^{2}+|Y_{2 (t)|^{2})\right] +\mathbb{E}\left[ \int_{0}^{T}|Z_{2}(t)|^{2}dt\right] & =O(\epsilon^{2}),\\ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}(|X_{2}(t)|^{\beta +|Y_{2}(t)|^{\beta})\right] +\mathbb{E}\left[ \left( \int_{0}^{T |Z_{2}(t)|^{2}dt\right) ^{\frac{\beta}{2}}\right] & =o(\epsilon^{\frac {\beta}{2}}),\\ Y^{\epsilon}(0)-\bar{Y}(0)-Y_{1}(0)-Y_{2}(0) & =o(\epsilon). \end{array} \] \end{lemma} \begin{proof} By Theorem \ref{est-fbsde-lp}, we hav \ \begin{array} [c]{l \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}(|X_{2}(t)|^{2}+|Y_{2 (t)|^{2})+\int_{0}^{T}|Z_{2}(t)|^{2}dt\right] \\ \leq C\mathbb{E}\left[ \left( \int_{0}^{T}[(|\delta b(t,\Delta )|+|\delta\sigma(t,\Delta)|+|\delta g(t,\Delta)|)I_{E_{\epsilon} (t)+|X_{1}(t)|^{2}+|Y_{1}(t)|^{2}]dt\right) ^{2}\right] \\ \text{ \ }+C\mathbb{E}\left[ \int_{0}^{T}\left[ |X_{1}(t)|^{4 +|Y_{1}(t)|^{4}+(|X_{1}(t)|^{2}+|Y_{1}(t)|^{2})I_{E_{\epsilon}}(t)\right] dt\right] \\ \leq C\epsilon\mathbb{E}\left[ \int_{E_{\epsilon}}(1+|\bar{X}(t)|^{2 +|\bar{Y}(t)|^{2}+|\bar{Z}(t)|^{2}+|u(t)|^{2}+|\bar{u}(t)|^{2})dt\right] \\ \text{ \ }+C\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |X_{1}(t)|^{4}+|Y_{1}(t)|^{4}\right) \right] +C\epsilon\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}(|X_{1}(t)|^{2}+|Y_{1}(t)|^{2})\right] \\ \leq C\epsilon^{2}, \end{array} \ \begin{equation \begin{array} [c]{l \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}(|X_{2}(t)|^{\beta +|Y_{2}(t)|^{\beta})+\left( \int_{0}^{T}|Z_{2}(t)|^{2}dt\right) ^{\frac{\beta}{2}}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{0}^{T}[(|\delta b(t,\Delta )|+|\delta\sigma(t,\Delta)|+|\delta g(t,\Delta)|)I_{E_{\epsilon} (t)+|X_{1}(t)|^{2}+|Y_{1}(t)|^{2}]dt\right) ^{\beta}\right] \\ \text{ \ }+C\mathbb{E}\left[ \left( \int_{0}^{T}\left[ |X_{1 (t)|^{4}+|Y_{1}(t)|^{4}+(|X_{1}(t)|^{2}+|Y_{1}(t)|^{2})I_{E_{\epsilon }(t)\right] dt\right) ^{\frac{\beta}{2}}\right] \\ \leq C\epsilon^{\frac{\beta}{2}}\mathbb{E}\left[ \left( \int_{E_{\epsilon }(1+|\bar{X}(t)|^{2}+|\bar{Y}(t)|^{2}+|\bar{Z}(t)|^{2}+|u(t)|^{2}+|\bar {u}(t)|^{2})dt\right) ^{\frac{\beta}{2}}\right] \\ \text{ \ }+C\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |X_{1}(t)|^{2\beta}+|Y_{1}(t)|^{2\beta}\right) \right] +C\epsilon ^{\frac{\beta}{2}}\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T] (|X_{1}(t)|^{\beta}+|Y_{1}(t)|^{\beta})\right] \\ =o(\epsilon^{\frac{\beta}{2}}). \end{array} \end{equation} Now, we focus on the last estimate. 
We use the same notations $\xi ^{1,\epsilon}(t)$, $\eta^{1,\epsilon}(t)$, $\zeta^{1,\epsilon}(t)$, $\xi^{2,\epsilon}(t)$, $\eta^{2,\epsilon}(t)$ and $\zeta^{2,\epsilon}(t)$ in the proof of Lemma \ref{est-epsilon-bar} and Lemma \ref{est-one-order}. Let \ \begin{array} [c]{rl \xi^{3,\epsilon}(t) & :=X^{\epsilon}(t)-\bar{X}(t)-X_{1}(t)-X_{2}(t);\\ \eta^{3,\epsilon}(t) & :=Y^{\epsilon}(t)-\bar{Y}(t)-Y_{1}(t)-Y_{2}(t);\\ \zeta^{3,\epsilon}(t) & :=Z^{\epsilon}(t)-\bar{Z}(t)-Z_{1}(t)-Z_{2}(t);\\ \Theta(t) & :=(\bar{X}(t),\bar{Y}(t),\bar{Z}(t));\\ \Theta(t,\Delta I_{E_{\epsilon}}) & :=(\bar{X}(t),\bar{Y}(t),\bar{Z (t)+\Delta(t)I_{E_{\epsilon}}(t));\\ \Theta^{\epsilon}(t) & :=(X^{\epsilon}(t),Y^{\epsilon}(t),Z^{\epsilon}(t)). \end{array} \] Define $\widetilde{D^{2}b^{\epsilon}}(t) \[ \widetilde{D^{2}b^{\epsilon}}(t)=2\int_{0}^{1}\int_{0}^{1}\theta D^{2}b(t,\Theta(t,\Delta I_{E_{\epsilon}})+\lambda\theta(\Theta^{\epsilon }(t)-\Theta(t,\Delta I_{E_{\epsilon}})),u^{\epsilon}(t))d\theta d\lambda, \] $\widetilde{D^{2}\sigma^{\epsilon}}(t)$, $\widetilde{D^{2}g^{\epsilon}}(t)$ and $\tilde{\phi}_{xx}^{\epsilon}(T)$ are defined similarly. Then, we have \begin{equation} \left\{ \begin{array} [c]{ll d\xi^{3,\epsilon}(t)= & \left\{ b_{x}(t)\xi^{3,\epsilon}(t)+b_{y (t)\eta^{3,\epsilon}(t)+b_{z}(t)\zeta^{3,\epsilon}(t)+A_{2}^{\epsilon }(t)\right\} dt\\ & +\left\{ \sigma_{x}(t)\xi^{3,\epsilon}(t)+\sigma_{y}(t)\eta^{3,\epsilon }(t)+\sigma_{z}(t)\zeta^{3,\epsilon}(t)+B_{2}^{\epsilon}(t)\right\} dB(t),\\ \xi^{3,\epsilon}(0)= & 0, \end{array} \right. \label{x-x1-x2 \end{equation} an \begin{equation} \left\{ \begin{array} [c]{lll d\eta^{3,\epsilon}(t) & = & -\{g_{x}(t)\xi^{3,\epsilon}(t)+g_{y (t)\eta^{3,\epsilon}(t)+g_{z}(t)\zeta^{3,\epsilon}(t)+C_{2}^{\epsilon }(t)\}dt-\zeta^{3,\epsilon}(t)dB(t),\\ \eta^{3,\epsilon}(T) & = & \phi_{x}(\bar{X}(T))\xi^{3,\epsilon}(T)+D_{2 ^{\epsilon}(T), \end{array} \right. 
\label{y-y1-y2 \end{equation} wher \ \begin{array} [c]{ll A_{2}^{\epsilon}(t)= & \left[ \delta b_{x}(t,\Delta)\xi^{1,\epsilon }(t)+\delta b_{y}(t,\Delta)\eta^{1,\epsilon}(t)+\delta b_{z}(t,\Delta)\left( \zeta^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right) \right] I_{E_{\epsilon}}(t)\\ & +\frac{1}{2}\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t),\zeta ^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right] \widetilde{D^{2 b^{\epsilon}}(t)\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t),\zeta ^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right] ^{\intercal}\\ & -\frac{1}{2}\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] D^{2}b(t)\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] ^{\intercal}, \end{array} \ \ \begin{array} [c]{ll B_{2}^{\epsilon}(t)= & \left[ \delta\sigma_{x}(t,\Delta)\xi^{2,\epsilon }(t)+\delta\sigma_{y}(t,\Delta)\eta^{2,\epsilon}(t)+\delta\sigma_{z (t,\Delta)\zeta^{2,\epsilon}(t)\right] I_{E_{\epsilon}}(t)\\ & +\frac{1}{2}\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t),\zeta ^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right] \widetilde{D^{2 \sigma^{\epsilon}}(t)\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon (t),\zeta^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right] ^{\intercal}\\ & -\frac{1}{2}\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] D^{2 \sigma(t)\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] ^{\intercal}, \end{array} \ \ \begin{array} [c]{ll C_{2}^{\epsilon}(t)= & \left[ \delta g_{x}(t,\Delta)\xi^{1,\epsilon }(t)+\delta g_{y}(t,\Delta)\eta^{1,\epsilon}(t)+\delta g_{z}(t,\Delta)\left( \zeta^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right) \right] I_{E_{\epsilon}}(t)\\ & +\frac{1}{2}\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t),\zeta ^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right] \widetilde{D^{2 g^{\epsilon}}(t)\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t),\zeta ^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right] ^{\intercal}\\ & -\frac{1}{2}\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] D^{2}g(t)\left[ X_{1}(t),Y_{1}(t),K_{1}(t)X_{1}(t)\right] ^{\intercal},\\ D_{2}^{\epsilon}(T)= & \frac{1}{2}\tilde{\phi}_{xx}^{\epsilon}(T)\xi ^{1,\epsilon}(T)^{2}-\frac{1}{2}\phi_{xx}(\bar{X}(T))X_{1}^{2}(T). \end{array} \] We introduce the following fully coupled FBSDE: \begin{equation} \left\{ \begin{array} [c]{rl dh(t)= & \left[ g_{y}(t)h(t)+b_{y}(t)m(t)+\sigma_{y}(t)n(t)\right] dt+\left[ g_{z}(t)h(t)+b_{z}(t)m(t)+\sigma_{z}(t)n(t)\right] dB(t),\\ h(0)= & 1,\\ dm(t)= & -\left[ g_{x}(t)h(t)+b_{x}(t)m(t)+\sigma_{x}(t)n(t)\right] dt+n(t)dB(t),\\ m(T)= & \phi_{x}(\bar{X}(T))h(T). \end{array} \right. \end{equation} It has a unique solution due to Theorem \ref{est-fbsde-lp}. Applying It\^{o}'s formula to \[ m(t)\xi^{3,\epsilon}(t)-h(t)\eta^{3,\epsilon}(t), \] we have \begin{equation \begin{array} [c]{ll |\eta^{3,\epsilon}(0)| & =\left\vert \mathbb{E}\left[ h(T)D_{2}^{\epsilon }(T)+\int_{0}^{T}\left( m(t)A_{2}^{\epsilon}(t)+n(t)B_{2}^{\epsilon }(t)+h(t)C_{2}^{\epsilon}(t)\right) dt\right] \right\vert \\ & \leq\mathbb{E}\left[ \left\vert h(T)D_{2}^{\epsilon}(T)\right\vert +\int_{0}^{T}\left( \left\vert m(t)A_{2}^{\epsilon}(t)\right\vert +\left\vert n(t)B_{2}^{\epsilon}(t)\right\vert +\left\vert h(t)C_{2}^{\epsilon }(t)\right\vert \right) dt\right] . \end{array} \label{second order-estimate \end{equation} We estimate each term as follows. 
(1) { \ \begin{array} [c]{ll \mathbb{E}\left[ |h(T)D_{2}^{\epsilon}(T)|\right] & \leq\left\{ \mathbb{E}\left[ |h(T)|^{2}\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ |D_{2}^{\epsilon}(T)|^{2}\right] \right\} ^{\frac{1}{2}}\\ & \leq C\left\{ \mathbb{E}\left[ |\tilde{\phi}_{xx}^{\epsilon}(T)-\phi _{xx}(\bar{X}(T))|^{2}|\xi^{1,\epsilon}(T)|^{4}+|\xi^{2,\epsilon}(T)|^{2 |\xi^{1,\epsilon}(T)+X_{1}(T)|^{2}\right] \right\} ^{\frac{1}{2}}\\ & =o(\epsilon). \end{array} \] } (2) {Since \[ \mathbb{E}\left[ \int_{0}^{T}|m(t)A_{2}^{\epsilon}(t)|dt\right] \leq\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}|m(t)|\int_{0}^{T |A_{2}^{\epsilon}(t)|dt\right] \leq\left\{ \mathbb{E}\left[ \sup \limits_{t\in\lbrack0,T]}|m(t)|^{2}\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|A_{2}^{\epsilon}(t)|dt\right) ^{2}\right] \right\} ^{\frac{1}{2}}, \] { then we only need to check \begin{equation} \mathbb{E}\left[ \left( \int_{0}^{T}|A_{2}^{\epsilon}(t)|dt\right) ^{2}\right] =o(\epsilon^{2}). \label{second order-part estimate \end{equation} Indeed, (\ref{second order-part estimate}) is due to the following estimates: \ \begin{array} [c]{l \mathbb{E}\left[ \left( \int_{0}^{T}|\delta b_{z}(t,\Delta)(\zeta ^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t))|I_{E_{\epsilon}}(t)dt\right) ^{2}\right] \\ \leq\mathbb{E}\left[ \left( \int_{E_{\epsilon}}|\delta b_{z}(t,\Delta )|\left( |\zeta^{2,\epsilon}(t)|+|K_{1}(t)X_{1}(t)|\right) dt\right) ^{2}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{E_{\epsilon}}|\zeta^{2,\epsilon }(t)|dt\right) ^{2}\right] +C\mathbb{E}\left[ \sup\limits_{t\in\lbrack 0,T]}|X_{1}(t)|^{2}\left( \int_{E_{\epsilon}}|\delta b_{z}(t,\Delta )|dt\right) ^{2}\right] \\ \leq C\epsilon\mathbb{E}\left[ \int_{0}^{T}|\zeta^{2,\epsilon}(t)|^{2 dt\right] +C\epsilon^{2}\mathbb{E}[\sup\limits_{t\in\lbrack0,T] |X_{1}(t)|^{2}]\\ =o(\epsilon^{2}), \end{array} \ \begin{equation \begin{array} [c]{l \mathbb{E}\left[ \left( \int_{0}^{T}\left\vert \widetilde{b}_{zz}^{\epsilon }(t)\left( \zeta^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right) ^{2}-b_{zz}(t)K_{1}(t)^{2}X_{1}(t)^{2}\right\vert dt\right) ^{2}\right] \\ \leq\mathbb{E}\left[ \left( \int_{0}^{T}\left\vert \widetilde{b _{zz}^{\epsilon}(t)\zeta^{2,\epsilon}(t)\left( \zeta^{1,\epsilon (t)-\Delta(t)I_{E_{\epsilon}}(t)+K_{1}(t)X_{1}(t)\right) \right\vert dt\right) ^{2}\right] \\ \text{ \ }+\mathbb{E}\left[ \left( \int_{0}^{T}\left\vert \left( \widetilde{b}_{zz}^{\epsilon}(t)-b_{zz}(t)\right) K_{1}(t)^{2}X_{1 (t)^{2}\right\vert dt\right) ^{2}\right] \\ \leq C\mathbb{E}\left[ \int_{0}^{T}\left\vert \zeta^{2,\epsilon }(t)\right\vert ^{2}dt\int_{0}^{T}\left\vert \zeta^{1,\epsilon}(t)-\Delta (t)I_{E_{\epsilon}}(t)+K_{1}(t)X_{1}(t)\right\vert ^{2}dt\right] \\ \text{ \ }+C\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}|X_{1 (t)|^{4}\left( \int_{0}^{T}\left\vert \left( \widetilde{b}_{zz}^{\epsilon }(t)-b_{zz}(t)\right) \right\vert dt\right) ^{2}\right] \\ =o(\epsilon^{2}), \end{array} \label{est-d2b \end{equation} the other terms are similar.} (3){ }The estimate of $\mathbb{E}\left[ \int_{0}^{T}|n(t)B_{2}^{\epsilon }(t)|dt\right] $: \ \begin{array} [c]{ll \mathbb{E}\left[ \int_{0}^{T}\left\vert n(t)\delta\sigma_{z}(t,\Delta )\zeta^{2,\epsilon}(t)I_{E_{\epsilon}}(t)\right\vert dt\right] & \leq C\mathbb{E}\left[ \int_{E_{\epsilon}}|n(t)\zeta^{2,\epsilon}(t)|dt\right] \\ & \leq C\left\{ \mathbb{E}\left[ \int_{0}^{T}|\zeta^{2,\epsilon (t)|^{2}dt\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ \int_{E_{\epsilon}}|n(t)|^{2}dt\right] \right\} 
^{\frac{1}{2}}\\ & =o(\epsilon), \end{array} \ \ \begin{array} [c]{l \mathbb{E}\left[ \int_{0}^{T}\left\vert n(t)\right\vert \left\vert \tilde{\sigma}_{zz}^{\epsilon}(t)\left( \zeta^{1,\epsilon}(t)-\Delta (t)I_{E_{\epsilon}}(t)\right) ^{2}-\sigma_{zz}(t)K_{1}(t)^{2}X_{1 (t)^{2}\right\vert dt\right] \\ \leq\mathbb{E}\left[ \int_{0}^{T}\left\vert n(t)\right\vert \left\vert \tilde{\sigma}_{zz}^{\epsilon}(t)\left( \zeta^{1,\epsilon}(t)-\Delta (t)I_{E_{\epsilon}}(t)+K_{1}(t)X_{1}(t)\right) \zeta^{2,\epsilon }(t)\right\vert dt\right] \\ \text{ \ }+\mathbb{E}\left[ \int_{0}^{T}\left\vert n(t)\right\vert \left\vert \tilde{\sigma}_{zz}^{\epsilon}(t)-\sigma_{zz}(t)\right\vert K_{1}(t)^{2 X_{1}(t)^{2}dt\right] \\ \leq\mathbb{E}\left[ \int_{0}^{T}\left\vert n(t)\right\vert \left\vert \tilde{\sigma}_{zz}^{\epsilon}(t)\left( \zeta^{1,\epsilon}(t)-\Delta (t)I_{E_{\epsilon}}(t)\right) \zeta^{2,\epsilon}(t)\right\vert dt\right] +\mathbb{E}\left[ \int_{0}^{T}\left\vert n(t)\right\vert \left\vert \tilde{\sigma}_{zz}^{\epsilon}(t)K_{1}(t)X_{1}(t)\zeta^{2,\epsilon }(t)\right\vert dt\right] +o(\epsilon)\\ =\mathbb{E}\left[ \int_{0}^{T}\left\vert n(t)\right\vert \left\vert 2\in _{0}^{1}\theta\left[ \sigma_{z}(t,\Theta(t,\Delta I_{E_{\epsilon} )+\theta(\Theta^{\epsilon}(t)-\Theta(t,\Delta I_{E_{\epsilon}})),u^{\epsilon }(t))-\sigma_{z}(t,\Theta(t,\Delta I_{E_{\epsilon}}),u^{\epsilon}(t))\right] d\theta\right\vert \left\vert \zeta^{2,\epsilon}(t)\right\vert dt\right] \\ \text{ \ }+\mathbb{E}\left[ \int_{0}^{T}\left\vert n(t)\right\vert \left\vert \tilde{\sigma}_{zx}^{\epsilon}\left( t\right) \xi^{1,\epsilon}\left( t\right) +\tilde{\sigma}_{zy}^{\epsilon}\left( t\right) \eta^{1,\epsilon }\left( t\right) \right\vert \left\vert \zeta^{2,\epsilon}(t)\right\vert dt\right] +C\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left\vert X_{1}(t)\right\vert \int_{0}^{T}\left\vert n(t)K_{1}(t)\right\vert \left\vert \zeta^{2,\epsilon}(t)\right\vert dt\right] +o(\epsilon)\\ =o(\epsilon), \end{array} \] {the other terms are similar.} (4) {The estimate of $\mathbb{E}\left[ \int_{0}^{T}|h(t)C_{2}^{\epsilon }(t)|dt\right] $ is the same as $\mathbb{E}\left[ \int_{0}^{T |m(t)A_{2}^{\epsilon}(t)|dt\right] $. } All the terms in (\ref{second order-estimate}) have been derived. Finally, we obtain \[ Y^{\epsilon}(0)-\bar{Y}(0)-Y_{1}(0)-Y_{2}(0)=o(\epsilon). \] The proof is complete. \end{proof} In the above lemma, we only prove $Y^{\epsilon}(0)-\bar{Y}(0)-Y_{1 (0)-Y_{2}(0)=o(\epsilon)$ and have not deduced \[ \mathbb{E}[\sup\limits_{t\in\lbrack0,T]}|Y^{\epsilon}(t)-\bar{Y (t)-Y_{1}(t)-Y_{2}(t)|^{2}]=o(\epsilon^{2}). \] The reason is \[ \mathbb{E}\left[ \int_{0}^{T}\left\vert \tilde{\sigma}_{zz}^{\epsilon }(t)\left( \zeta^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t)\right) \right\vert ^{2}\left\vert \zeta^{2,\epsilon}(t)\right\vert ^{2}dt\right] =o(\epsilon^{2}) \] may be not hold. But if \begin{equation} \sigma(t,x,y,z,u)=A(t)z+\sigma_{1}(t,x,y,u) \label{xigma-zz \end{equation} where $A(t)$ is a bounded adapted process, then $\sigma_{zz}\equiv0.$ In this case, we can prove the following estimate. \begin{lemma} \label{lemma-est-sup}Suppose Assumptions \ref{assum-2}, \ref{assum-3}, \ref{assm-q-bound} and $\sigma(t,x,y,z,u)=A(t)z+$ $\sigma_{1}(t,x,y,u)$ where $A(t)$ is a bounded adapted process. 
Then
\[
\begin{array}
[c]{rl}
\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}|X^{\epsilon}(t)-\bar{X}(t)-X_{1}(t)-X_{2}(t)|^{2}\right]  & =o(\epsilon^{2}),\\
\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}|Y^{\epsilon}(t)-\bar{Y}(t)-Y_{1}(t)-Y_{2}(t)|^{2}+\int_{0}^{T}|Z^{\epsilon}(t)-\bar{Z}(t)-Z_{1}(t)-Z_{2}(t)|^{2}dt\right]  & =o(\epsilon^{2}).
\end{array}
\]
\end{lemma}

\begin{proof}
We use all the notations in Lemma \ref{est-second-order}. By Theorem \ref{est-fbsde-lp}, we have
\[
\begin{array}
[c]{l}
\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}(|\xi^{3,\epsilon}(t)|^{2}+|\eta^{3,\epsilon}(t)|^{2})+\int_{0}^{T}|\zeta^{3,\epsilon}(t)|^{2}dt\right] \\
\leq C\mathbb{E}\left[ \left( \int_{0}^{T}|A_{2}^{\epsilon}(t)|dt\right) ^{2}+\left( \int_{0}^{T}|C_{2}^{\epsilon}(t)|dt\right) ^{2}+\int_{0}^{T}|B_{2}^{\epsilon}(t)|^{2}dt+|D_{2}^{\epsilon}(T)|^{2}\right] ,
\end{array}
\]
where $A_{2}^{\epsilon}(\cdot)$, $C_{2}^{\epsilon}(\cdot)$, $D_{2}^{\epsilon}(T)$ are the same as in Lemma \ref{est-second-order}, and
\[
\begin{array}
[c]{ll}
B_{2}^{\epsilon}(t)= & \left[ \delta\sigma_{x}(t)\xi^{2,\epsilon}(t)+\delta\sigma_{y}(t)\eta^{2,\epsilon}(t)\right] I_{E_{\epsilon}}(t)+\frac{1}{2}\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t)\right] \widetilde{D^{2}\sigma^{\epsilon}}(t)\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t)\right] ^{\intercal}\\
& -\frac{1}{2}\left[ X_{1}(t),Y_{1}(t)\right] D^{2}\sigma(t)\left[ X_{1}(t),Y_{1}(t)\right] ^{\intercal}.
\end{array}
\]
In Lemma \ref{est-second-order}, we have proved
\[
\mathbb{E}\left[ \left( \int_{0}^{T}|A_{2}^{\epsilon}(t)|dt\right) ^{2}+\left( \int_{0}^{T}|C_{2}^{\epsilon}(t)|dt\right) ^{2}+|D_{2}^{\epsilon}(T)|^{2}\right] =o(\epsilon^{2}).
\]
Now we just need to check that $\mathbb{E}\left[ \int_{0}^{T}|B_{2}^{\epsilon}(t)|^{2}dt\right] =o\left( \epsilon^{2}\right) $, as follows:
\begin{equation}
\mathbb{E}\left[ \int_{0}^{T}\left\vert \delta\sigma_{x}(t)\xi^{2,\epsilon}(t)\right\vert ^{2}I_{E_{\epsilon}}(t)dt\right] \leq\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left\vert \xi^{2,\epsilon}(t)\right\vert ^{2}\int_{E_{\epsilon}}\left\vert \delta\sigma_{x}(t)\right\vert ^{2}dt\right] =o(\epsilon^{2}).
\label{est-sigmax-x-x1}
\end{equation}
The estimate of $\mathbb{E}\left[ \int_{0}^{T}\left\vert \delta\sigma_{y}(t)\eta^{2,\epsilon}(t)\right\vert ^{2}dt\right] $ is the same as \eqref{est-sigmax-x-x1}, and
\[
\begin{array}
[c]{l}
\mathbb{E}\left[ \int_{0}^{T}\left\vert \tilde{\sigma}_{yy}^{\epsilon}(t)\eta^{1,\epsilon}(t)^{2}-\sigma_{yy}(t)Y_{1}(t)^{2}\right\vert ^{2}dt\right] \\
\leq\mathbb{E}\left[ \int_{0}^{T}\left\vert \tilde{\sigma}_{yy}^{\epsilon}(t)\eta^{2,\epsilon}(t)(\eta^{1,\epsilon}(t)+Y_{1}(t))\right\vert ^{2}dt\right] +\mathbb{E}\left[ \int_{0}^{T}\left\vert \tilde{\sigma}_{yy}^{\epsilon}(t)-\sigma_{yy}(t)\right\vert ^{2}Y_{1}(t)^{4}dt\right] \\
\leq C\mathbb{E}\left[ \int_{0}^{T}\left\vert \eta^{2,\epsilon}(t)\right\vert ^{2}\left\vert \eta^{1,\epsilon}(t)+Y_{1}(t)\right\vert ^{2}dt\right] +\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left\vert Y_{1}(t)\right\vert ^{4}\int_{0}^{T}\left\vert \tilde{\sigma}_{yy}^{\epsilon}(t)-\sigma_{yy}(t)\right\vert ^{2}dt\right] \\
=o(\epsilon^{2}).
\end{array}
\]
The other terms are similar.
\end{proof}

\subsection{Maximum principle}

\label{section-mp}Note that $Y_{1}(0)=0$. By Lemma \ref{est-second-order}, we have
\[
J(u^{\epsilon}(\cdot))-J(\bar{u}(\cdot))=Y^{\epsilon}(0)-\bar{Y}(0)=Y_{2}(0)+o(\epsilon).
\]
In order to obtain $Y_{2}(0)$, we introduce the following second-order adjoint equation:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
-dP(t)= & \left\{ P(t)\left[ (D\sigma(t)^{\intercal}[1,p(t),K_{1}(t)]^{\intercal})^{2}+2Db(t)^{\intercal}[1,p(t),K_{1}(t)]^{\intercal}+H_{y}(t)\right] \right. \\
& +2Q(t)D\sigma(t)^{\intercal}[1,p(t),K_{1}(t)]^{\intercal}+\left[ 1,p(t),K_{1}(t)\right] D^{2}H(t)\left[ 1,p(t),K_{1}(t)\right] ^{\intercal}\left. +H_{z}(t)K_{2}(t)\right\} dt\\
& -Q(t)dB(t),\\
P(T)= & \phi_{xx}(\bar{X}(T)),
\end{array}
\right. \label{eq-P}
\end{equation}
where
\[
H(t,x,y,z,u,p,q)=g(t,x,y,z,u)+pb(t,x,y,z,u)+q\sigma(t,x,y,z,u),
\]
\[
\begin{array}
[c]{ll}
K_{2}(t)= & (1-p(t)\sigma_{z}(t))^{-1}\left\{ p(t)\sigma_{y}(t)+2\left[ \sigma_{x}(t)+\sigma_{y}(t)p(t)+\sigma_{z}(t)K_{1}(t)\right] \right\} P(t)\\
& +(1-p(t)\sigma_{z}(t))^{-1}\left\{ Q(t)+p(t)[1,p(t),K_{1}(t)]D^{2}\sigma(t)[1,p(t),K_{1}(t)]^{\intercal}\right\} ,
\end{array}
\]
and $DH(t)$, $D^{2}H(t)$ are defined similarly to $D\psi$ and $D^{2}\psi$. Note that (\ref{eq-P}) is a linear BSDE with uniformly Lipschitz continuous coefficients, so it has a unique solution. Before we deduce the relationship between $X_{2}(\cdot)$ and $(Y_{2}(\cdot),Z_{2}(\cdot))$, we introduce the following equation:
\begin{equation}
\begin{array}
[c]{l}
\hat{Y}(t)=\int_{t}^{T}\left\{ (H_{y}(s)+\sigma_{y}(s)g_{z}(s)p(s)(1-p(s)\sigma_{z}(s))^{-1})\hat{Y}(s)+\left( H_{z}(s)+\sigma_{z}(s)g_{z}(s)p(s)(1-p(s)\sigma_{z}(s))^{-1}\right) \hat{Z}(s)\right. \\
\ \ \ \ \ \ \ \ \ \ \left. +\left[ \delta H(s,\Delta)+\frac{1}{2}P(s)\delta\sigma(s,\Delta)^{2}\right] I_{E_{\epsilon}}(s)\right\} ds-\int_{t}^{T}\hat{Z}(s)dB(s),
\end{array}
\label{eq-y-hat}
\end{equation}
where $\delta H(s,\Delta):=p(s)\delta b(s,\Delta)+q(s)\delta\sigma(s,\Delta)+\delta g(s,\Delta)$. It is also a linear BSDE and has a unique solution.

\begin{lemma}
\label{relation-y2} Suppose Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-q-bound} hold. Then we have
\[
\begin{array}
[c]{rl}
Y_{2}(t) & =p(t)X_{2}(t)+\frac{1}{2}P(t)X_{1}(t)^{2}+\hat{Y}(t),\\
Z_{2}(t) & =\mathbf{I(t)}+\hat{Z}(t),
\end{array}
\]
where $(\hat{Y}(\cdot),\hat{Z}(\cdot))$ is the solution to \eqref{eq-y-hat} and
\begin{align*}
\mathbf{I(t)}  & =K_{1}(t)X_{2}(t)+\frac{1}{2}K_{2}(t)X_{1}^{2}(t)+(1-p(t)\sigma_{z}(t))^{-1}p(t)(\sigma_{y}(t)\hat{Y}(t)+\sigma_{z}(t)\hat{Z}(t))+P(t)\delta\sigma(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)\\
& \;+(1-p(t)\sigma_{z}(t))^{-1}p(t)\left[ \delta\sigma_{x}(t,\Delta)X_{1}(t)+\delta\sigma_{y}(t,\Delta)p(t)X_{1}(t)+\delta\sigma_{z}(t,\Delta)K_{1}(t)X_{1}(t)\right] I_{E_{\epsilon}}(t).
\end{align*}
\end{lemma}

\begin{proof}
Using the same method as in Lemma \ref{lemma-y1}, we can deduce the above relationship.
\end{proof}

Consider the following equation:
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\gamma(t)= & \gamma(t)\left[ H_{y}(t)+p(t)g_{z}(t)(1-p(t)\sigma_{z}(t))^{-1}\sigma_{y}(t)\right] dt\\
& +\gamma(t)\left[ H_{z}(t)+p(t)(1-p(t)\sigma_{z}(t))^{-1}\sigma_{z}(t)g_{z}(t)\right] dB(t),\\
\gamma(0)= & 1.
\end{array}
\right. \label{eq-gamma}
\end{equation}
Applying It\^{o}'s formula to $\gamma(t)\hat{Y}(t)$, we obtain
\[
\begin{array}
[c]{rl}
\hat{Y}(0)= & \mathbb{E}\left\{ \int_{0}^{T}\gamma(t)\left[ \delta H(t,\Delta)+\frac{1}{2}P(t)\delta\sigma(t,\Delta)^{2}\right] I_{E_{\epsilon}}(t)dt\right\} .
\end{array}
\]
Define
\begin{equation}
\begin{array}
[c]{ll}
\mathcal{H}(t,x,y,z,u,p,q,P)= & pb(t,x,y,z+\Delta(t),u)+q\sigma(t,x,y,z+\Delta(t),u)\\
& +\frac{1}{2}P(\sigma(t,x,y,z+\Delta(t),u)-\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t)))^{2}+g(t,x,y,z+\Delta(t),u),
\end{array}
\label{def-H}
\end{equation}
where $\Delta(t)$ is defined in (\ref{def-delt}) corresponding to $u(t)=u$. It is easy to check that
\begin{align*}
& \delta H(t,\Delta)+\frac{1}{2}P(t)\delta\sigma(t,\Delta)^{2}\\
& =\mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),u(t),p(t),q(t),P(t))-\mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t),p(t),q(t),P(t)).
\end{align*}
Noting that $\gamma(t)>0$ for $t\in\lbrack0,T]$, we obtain the following maximum principle.

\begin{theorem}
\label{Th-MP}Suppose Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-q-bound} hold. Let $\bar{u}(\cdot)\in\mathcal{U}[0,T]$ be optimal and $(\bar{X}(\cdot),\bar{Y}(\cdot),\bar{Z}(\cdot))$ be the corresponding state processes of (\ref{state-eq}). Then the following stochastic maximum principle holds:
\begin{equation}
\mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),u,p(t),q(t),P(t))\geq\mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t),p(t),q(t),P(t)),\ \ \ \forall u\in U,\ a.e.,\ a.s., \label{mp-1}
\end{equation}
where $(p\left( \cdot\right) ,q\left( \cdot\right) )$, $\left( P\left( \cdot\right) ,Q\left( \cdot\right) \right) $ satisfy (\ref{eq-p}), (\ref{eq-P}) respectively, and $\Delta(\cdot)$ satisfies (\ref{def-delt}).
\end{theorem}

\begin{remark}
If $b$ and $\sigma$ are independent of $y$ and $z$, then Theorem \ref{Th-MP} reduces to the maximum principle obtained in \cite{Hu17}.
\end{remark}

\begin{corollary}
\label{cor-mp-convex}Under the same assumptions as in Theorem \ref{Th-MP}, suppose moreover that $b$, $\sigma$, $g$ are continuously differentiable with respect to $u$ and that $U$ is a convex set. Then
\begin{equation}
\Delta_{u}(t)|_{u=\bar{u}(t)}=\frac{p(t)\sigma_{u}(t)}{1-p(t)\sigma_{z}(t)} \label{delt-convex}
\end{equation}
and the maximum principle is
\begin{equation}
\mathcal{H}_{u}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t),p(t),q(t))\cdot(u-\bar{u}(t))\geq0\ \ \ \forall u\in U,\ a.e.,\ a.s. \label{hamil-convex}
\end{equation}
with
\[
\begin{array}
[c]{l}
\mathcal{H}_{u}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t),p(t),q(t))\\
=p(t)b_{u}(t)+q(t)\sigma_{u}(t)+g_{u}(t)+(p(t)b_{z}(t)+q(t)\sigma_{z}(t)+g_{z}(t))\frac{p(t)\sigma_{u}(t)}{1-p(t)\sigma_{z}(t)}.
\end{array}
\]
\end{corollary}

\begin{proof}
By the implicit function theorem applied to (\ref{def-delt}), differentiating with respect to $u$ at $u=\bar{u}(t)$ yields $\Delta_{u}(t)=p(t)\left( \sigma_{z}(t)\Delta_{u}(t)+\sigma_{u}(t)\right) $, which gives (\ref{delt-convex}). For each $u\in U$, taking $u_{\rho}(t)=\bar{u}(t)+\rho(u-\bar{u}(t))$, we can get (\ref{hamil-convex}) by (\ref{mp-1}).
\end{proof}

\subsection{The case without Assumption \ref{assm-q-bound}}

The relations $Y_{1}(t)=p(t)X_{1}(t)$ and $Z_{1}(t)=K_{1}(t)X_{1}(t)+\Delta(t)I_{E_{\epsilon}}(t)$ in Lemma \ref{lemma-y1} are the key point to derive the maximum principle (\ref{mp-1}). Note that to prove Lemma \ref{lemma-y1}, we need Assumption \ref{assm-q-bound}, which implies
\begin{equation}
\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}|\tilde{X}_{1}(t)|^{2}\right] <\infty. \label{eq-new211}
\end{equation}
However, under the following assumption, combining Theorems \ref{appen-th-linear-fbsde} and \ref{unique-pq} in the appendix, we can obtain the relations $Y_{1}(t)=p(t)X_{1}(t)$ and $Z_{1}(t)=K_{1}(t)X_{1}(t)+\Delta(t)I_{E_{\epsilon}}(t)$ without Assumption \ref{assm-q-bound}.
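Before stating the precise assumption, let us illustrate, as a purely heuristic sketch, why a structure of the type (\ref{xigma-zz}) makes the algebraic equation (\ref{def-delt}) for $\Delta(t)$ explicitly solvable. If $\sigma(t,x,y,z,u)=A(t)z+\sigma_{1}(t,x,y,u)$, then (\ref{def-delt}) reads
\[
\Delta(t)=p(t)\left( A(t)\Delta(t)+\sigma_{1}(t,\bar{X}(t),\bar{Y}(t),u(t))-\sigma_{1}(t,\bar{X}(t),\bar{Y}(t),\bar{u}(t))\right) ,
\]
so that, whenever $1-p(t)A(t)\neq0$,
\[
\Delta(t)=\left( 1-p(t)A(t)\right) ^{-1}p(t)\left( \sigma_{1}(t,\bar{X}(t),\bar{Y}(t),u(t))-\sigma_{1}(t,\bar{X}(t),\bar{Y}(t),\bar{u}(t))\right) ,
\]
which is exactly the expression for $\Delta(t)$ used below.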
\begin{assumption} \label{assm-sig-small} $\sigma(t,x,y,z,u)=A(t)z+\sigma_{1}(t,x,y,u)$ and $\left\Vert A(\cdot)\right\Vert _{\infty}$ is small enough. \end{assumption} In this case, the first-order adjoint equation becomes \begin{equation} \left\{ \begin{array} [c]{rl dp(t)= & -\left\{ g_{x}(t)+g_{y}(t)p(t)+g_{z}(t)K_{1}(t)+b_{x}(t)p(t)+b_{y (t)p^{2}(t)+b_{z}(t)K_{1}(t)p(t)\right. \\ & \left. +\sigma_{x}(t)q(t)+\sigma_{y}(t)p(t)q(t)+A(t)K_{1}(t)q(t)\right\} dt+q(t)dB(t),\\ p(T)= & \phi_{x}(\bar{X}(T)), \end{array} \right. \label{eq-p-q-unbound \end{equation} where \[ K_{1}(t)=(1-p(t)A(t))^{-1}\left[ \sigma_{x}(t)p(t)+\sigma_{y}(t)p^{2 (t)+q(t)\right] . \] The first-order variational equation becomes \[ \left\{ \begin{array} [c]{rl dX_{1}(t)= & \left[ b_{x}(t)X_{1}(t)+b_{y}(t)Y_{1}(t)+b_{z}(t)(Z_{1 (t)-\Delta(t)I_{E_{\epsilon}}(t))\right] dt\\ & +\left[ \sigma_{x}(t)X_{1}(t)+\sigma_{y}(t)Y_{1}(t)+A(t)(Z_{1 (t)-\Delta(t)I_{E_{\epsilon}}(t))+\delta\sigma(t,\Delta)I_{E_{\epsilon }(t)\right] dB(t),\\ X_{1}(0)= & 0, \end{array} \right. \] and \[ \left\{ \begin{array} [c]{lll dY_{1}(t) & = & -\left[ g_{x}(t)X_{1}(t)+g_{y}(t)Y_{1}(t)+g_{z (t)(Z_{1}(t)-\Delta(t)I_{E_{\epsilon}}(t))-q(t)\delta\sigma(t,\Delta )I_{E_{\epsilon}}(t)\right] dt+Z_{1}(t)dB(t),\\ Y_{1}(T) & = & \phi_{x}(\bar{X}(T))X_{1}(T), \end{array} \right. \] where \[ \Delta(t)=\left( 1-p(t)A(t)\right) ^{-1}p(t)\left( \sigma_{1}(t,\bar {X}(t),\bar{Y}(t),u(t))-\sigma_{1}(t,\bar{X}(t),\bar{Y}(t),\bar{u}(t))\right) . \] By Theorems \ref{appen-th-linear-fbsde} and \ref{unique-pq} we have the following relationship: \begin{lemma} Suppose Assumptions \ref{assum-2}(i)-(ii), \ref{assum-3} and \ref{assm-sig-small} hold. Then we have \begin{align*} Y_{1}(t) & =p(t)X_{1}(t),\\ Z_{1}(t) & =K_{1}(t)X_{1}(t)+\Delta(t)I_{E_{\epsilon}}(t), \end{align*} where $p(\cdot)$ is the solution of \eqref{eq-p-q-unbound}. \end{lemma} \begin{lemma} \label{est-one-order-q-unbound}Suppose Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-sig-small} hold. Then for any $2\leq\beta<8$, we have the following estimates \begin{equation} \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |X_{1}(t)|^{\beta }+|Y_{1}(t)|^{\beta}\right) \right] +\mathbb{E}\left[ \left( \int_{0 ^{T}|Z_{1}(t)|^{2}dt\right) ^{\beta/2}\right] =O(\epsilon^{\beta/2}), \label{est-x1-y1-q-unbound \end{equation \[ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |X^{\epsilon (t)-\bar{X}(t)-X_{1}(t)|^{4}+|Y^{\epsilon}(t)-\bar{Y}(t)-Y_{1}(t)|^{4}\right) \right] +\mathbb{E}\left[ \left( \int_{0}^{T}|Z^{\epsilon}(t)-\bar {Z}(t)-Z_{1}(t)|^{2}dt\right) ^{2}\right] =o(\epsilon^{2}). 
\] \end{lemma} \begin{proof} The estimate of (\ref{est-x1-y1-q-unbound}) is the same as (\ref{est-1-order}) in Lemma \ref{est-one-order}, except the following term, \ \begin{array} [c]{l \mathbb{E}\left[ \left( \int_{0}^{T}|q(t)\delta\sigma(t,\Delta )|I_{E_{\epsilon}}(t)dt\right) ^{\beta}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{E_{\epsilon}}|q(t)|\left( 1+|\bar {X}(t)|+\left\vert \bar{Y}(t)\right\vert +\left\vert \bar{u (t)|+|u(t)\right\vert \right) dt\right) ^{\beta}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{E_{\epsilon}}|q(t)|^{2}dt\right) ^{\frac{\beta}{2}}\left( \int_{E_{\epsilon}}\left( 1+|\bar{X}(t)|^{2 +\left\vert \bar{Y}(t)\right\vert ^{2}+\left\vert \bar{u}(t)|^{2 +|u(t)\right\vert ^{2}\right) dt\right) ^{\frac{\beta}{2}}\right] \\ \leq C\left\{ \mathbb{E}\left[ \left( \int_{E_{\epsilon}}\left( 1+|\bar {X}(t)|^{2}+\left\vert \bar{Y}(t)\right\vert ^{2}+\left\vert \bar{u (t)|^{2}+|u(t)\right\vert ^{2}\right) dt\right) ^{4}\right] \right\} ^{\frac{\beta}{8}}\left\{ \mathbb{E}\left[ \left( \int_{E_{\epsilon }|q(t)|^{2}dt\right) ^{\frac{4\beta}{8-\beta}}\right] \right\} ^{\frac{8-\beta}{8}}\\ \leq C\left\{ \epsilon^{3}\mathbb{E}\left[ \int_{E_{\epsilon}}\left( 1+|\bar{X}(t)|^{8}+\left\vert \bar{Y}(t)\right\vert ^{8}+\left\vert \bar {u}(t)|^{8}+|u(t)\right\vert ^{8}\right) dt\right] \right\} ^{\frac{\beta }{8}}\\ \leq C\epsilon^{\frac{\beta}{2}}. \end{array} \] In this case, $A_{1}^{\epsilon}(\cdot)$, $C_{1}^{\epsilon}(\cdot)$, $D_{1}^{\epsilon}(T)$ is the same as Lemma \ref{est-one-order}, and \[ B_{1}^{\epsilon}(t)=(\tilde{\sigma}_{x}^{\epsilon}(t)-\sigma_{x (t))X_{1}(t)+(\tilde{\sigma}_{y}^{\epsilon}(t)-\sigma_{y}(t))Y_{1}(t). \] By Theorem \ref{est-fbsde-lp}, we obtain \ \begin{array} [c]{l \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |\xi^{2,\epsilon }(t)|^{4}+|\eta^{2,\epsilon}(t)|^{4}\right) +\left( \int_{0}^{T |\zeta^{2,\epsilon}(t)|^{2}dt\right) ^{2}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{0}^{T}\left( |A_{1}^{\epsilon }(t)|+|C_{1}^{\epsilon}(t)|\right) dt\right) ^{4}+\left( \int_{0}^{T |B_{1}^{\epsilon}(t)|^{2}dt\right) ^{2}+|D_{1}^{\epsilon}(T)|^{4}\right] \\ \leq C\mathbb{E}\left[ |D_{1}^{\epsilon}(T)|^{4}+\left( \int_{0}^{T |C_{1}^{\epsilon}(t)|dt\right) ^{4}+\left( \int_{0}^{T}|B_{1}^{\epsilon }(t)|^{2}dt\right) ^{2}+\left( \int_{0}^{T}|A_{1}^{\epsilon}(t)|dt\right) ^{4}\right] . \end{array} \] We estimate term by term as follows. (1) \[ \mathbb{E}\left[ |D_{1}^{\epsilon}(T)|^{4}\right] \leq C\left\{ \mathbb{E}\left[ \sup\limits_{0\leq t\leq T}|X_{1}(t)|^{6}\right] \right\} ^{\frac{2}{3}}\left\{ \mathbb{E}\left[ \left\vert \tilde{\phi}_{x ^{\epsilon}(T)-\phi_{x}\left( \bar{X}\left( T\right) \right) \right\vert ^{12}\right] \right\} ^{\frac{1}{3}}=o\left( \epsilon^{2}\right) . 
\] (2) \begin{equation \begin{array} [c]{l \mathbb{E}\left[ \left( \int_{0}^{T}|\tilde{g}_{z}^{\epsilon}(t)-g_{z (t)||Z_{1}(t)|dt\right) ^{4}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{0}^{T}|\tilde{g}_{z}^{\epsilon (t)-g_{z}(t)|^{2}dt\right) ^{2}\left( \int_{0}^{T}|Z_{1}(t)|^{2}dt\right) ^{2}\right] \\ \leq C\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|\tilde{g}_{z}^{\epsilon }(t)-g_{z}(t)|^{2}dt\right) ^{6}\right] \right\} ^{\frac{1}{3}}\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|Z_{1}(t)|^{2}dt\right) ^{3}\right] \right\} ^{\frac{2}{3}}\\ =o\left( \epsilon^{2}\right) , \end{array} \label{est-g-z-q-unbound \end{equation} the estimates of $\mathbb{E}\left[ \left( \int_{0}^{T}\left\vert \left( \tilde{g}_{x}^{\epsilon}(t)-g_{x}(t)\right) X_{1}(t)\right\vert dt\right) ^{4}\right] $ and $\mathbb{E}\left[ \left( \int_{0}^{T}\left\vert \left( \tilde{g}_{y}^{\epsilon}(t)-g_{y}(t)\right) Y_{1}(t)\right\vert dt\right) ^{4}\right] $ are the same as \eqref{est-g-z-q-unbound}, \begin{equation} \mathbb{E}\left[ \left( \int_{0}^{T}|g_{z}(t)\Delta(t)I_{E_{\epsilon }(t)|dt\right) ^{4}\right] \leq C\epsilon^{3}\int_{E_{\epsilon} \mathbb{E}[|\Delta(t)|^{4}]dt=o\left( \epsilon^{2}\right) , \label{est-g-del-qunbound \end{equation} the estimate of $\mathbb{E}\left[ \left( \int_{0}^{T}\left\vert \delta g\left( t\right) I_{E_{\epsilon}}(t)\right\vert dt\right) ^{4}\right] $ is the same as \eqref{est-g-del-qunbound} \ \begin{array} [c]{l \mathbb{E}\left[ (\int_{0}^{T}|q(t)\delta\sigma(t,\Delta)I_{E_{\epsilon }(t)|dt)^{4}\right] \\ \leq C\left\{ \mathbb{E}\left[ \left( \int_{E_{\epsilon}}|q(t)|^{2 dt\right) ^{4}\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ \left( \int_{E_{\epsilon}}|\delta\sigma(t,\Delta)|^{2}dt\right) ^{4}\right] \right\} ^{\frac{1}{2}}\\ \leq C\left\{ \mathbb{E}\left[ \left( \int_{E_{\epsilon}}|q(t)|^{2 dt\right) ^{4}\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ \left( \int_{E_{\epsilon}}\left( 1+|\bar{X}(t)|^{2}+\left\vert \bar {Y}(t)\right\vert ^{2}+\left\vert \bar{u}(t)|^{2}+|u(t)\right\vert ^{2}\right) dt\right) ^{4}\right] \right\} ^{\frac{1}{2}}\\ \leq C\epsilon^{2}\left\{ \mathbb{E}\left[ \left( \int_{E_{\epsilon }|q(t)|^{2}dt\right) ^{4}\right] \right\} ^{\frac{1}{2}}\\ =o\left( \epsilon^{2}\right) . \end{array} \] Then, \[ \mathbb{E}\left[ \left( \int_{0}^{T}|C_{1}^{\epsilon}(t)|dt\right) ^{4}\right] =o\left( \epsilon^{2}\right) . \] (3) \ \begin{array} [c]{l \mathbb{E}\left[ \left( \int_{0}^{T}|\tilde{\sigma}_{y}^{\epsilon (t)-\sigma_{y}(t)|^{2}|Y_{1}(t)|^{2}dt\right) ^{2}\right] \\ \leq C\mathbb{E}\left[ \sup\limits_{0\leq t\leq T}|Y_{1}(t)|^{4}\left( \int_{0}^{T}|\tilde{\sigma}_{z}^{\epsilon}(t)-\sigma_{z}(t)|^{2}dt\right) ^{2}\right] \\ \leq C\left\{ \mathbb{E}\left[ \sup\limits_{0\leq t\leq T}|Y_{1 (t)|^{6}\right] \right\} ^{\frac{2}{3}}\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|\tilde{\sigma}_{z}^{\epsilon}(t)-\sigma_{z}(t)|^{2}dt\right) ^{6}\right] \right\} ^{\frac{1}{3}}\\ =o\left( \epsilon^{2}\right) , \end{array} \] the estimate of $\mathbb{E}\left[ \left( \int_{0}^{T}|\tilde{\sigma _{x}^{\epsilon}(t)-\sigma_{x}(t)|^{2}|X_{1}(t)|^{2}dt\right) ^{2}\right] $ is similar. Thus, \[ \mathbb{E}\left[ \left( \int_{0}^{T}|B_{1}^{\epsilon}(t)|^{2}dt\right) ^{2}\right] =o\left( \epsilon^{2}\right) . \] (4) The estimate of $\mathbb{E}\left[ \left( \int_{0}^{T}|A_{1}^{\epsilon }(t)|dt\right) ^{4}\right] $ is the same as $\mathbb{E}\left[ \left( \int_{0}^{T}|C_{1}^{\epsilon}(t)|dt\right) ^{4}\right] $. 
\end{proof} The second-order variational equation becomes \begin{equation} \left\{ \begin{array} [c]{rl dX_{2}(t)= & \left\{ b_{x}(t)X_{2}(t)+b_{y}(t)Y_{2}(t)+b_{z}(t)Z_{2 (t)+\delta b(t,\Delta)I_{E_{\epsilon}}(t)\right. \\ & \left. +\frac{1}{2}\left[ 1,p(t),K_{1}(t)\right] D^{2}b(t)\left[ 1,p(t),K_{1}(t)\right] ^{\intercal}X_{1}(t)^{2}\right\} dt\\ & +\left\{ \sigma_{x}(t)X_{2}(t)+\sigma_{y}(t)Y_{2}(t)+A(t)Z_{2}(t)+\left[ \delta\sigma_{x}(t)X_{1}(t)+\delta\sigma_{y}(t)Y_{1}(t)\right] I_{E_{\epsilon }}(t)\right. \\ & \left. +\frac{1}{2}\left[ X_{1}(t),Y_{1}(t)\right] D^{2}\sigma _{1}(t)\left[ X_{1}(t),Y_{1}(t)\right] ^{\intercal}\right\} dB(t),\\ X_{2}(0)= & 0, \end{array} \right. \label{new-form-x2-q-unbound \end{equation \begin{equation} \left\{ \begin{array} [c]{ll dY_{2}(t)= & -\left\{ g_{x}(t)X_{2}(t)+g_{y}(t)Y_{2}(t)+g_{z}(t)Z_{2 (t)+\frac{1}{2}\left[ 1,p(t),K_{1}(t)\right] D^{2}g(t)\left[ 1,p(t),K_{1 (t)\right] ^{\intercal}X_{1}^{2}(t)\right. \\ & \left. +q(t)\delta\sigma(t,\Delta)I_{E_{\epsilon}}(t)+\delta g(t,\Delta )I_{E_{\epsilon}}(t)\right\} dt+Z_{2}(t)dB(t),\\ Y_{2}(T)= & \phi_{x}(\bar{X}(T))X_{2}(T)+\frac{1}{2}\phi_{xx}(\bar{X (T))X_{1}^{2}(T). \end{array} \right. \label{new-form-y2-q-unbound \end{equation} The following second-order estimates hold. \begin{lemma} \label{est-second-order-q-unbound} Suppose Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-sig-small} hold. Then we have the following estimate \ \begin{array} [c]{rl \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}|X^{\epsilon}(t)-\bar {X}(t)-X_{1}(t)-X_{2}(t)|^{2}\right] & =o(\epsilon^{2}),\\ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}|Y^{\epsilon}(t)-\bar {Y}(t)-Y_{1}(t)-Y_{2}(t)|^{2}\right] +\mathbb{E}\left[ \int_{0 ^{T}|Z^{\epsilon}(t)-\bar{Z}(t)-Z_{1}(t)-Z_{2}(t)|^{2}dt\right] & =o(\epsilon^{2}). \end{array} \] \end{lemma} \begin{proof} We use the same notations $A_{2}^{\epsilon}(t)$ $C_{2}^{\epsilon}(t)$ and $D_{2}^{\epsilon}(T)$ as in\ Lemma \ref{est-second-order}. The only different term is \ \begin{array} [c]{ll B_{2}^{\epsilon}(t)= & \delta\sigma_{x}(t)\xi^{2,\epsilon}(t)I_{E_{\epsilon }(t)+\delta\sigma_{y}(t)\eta^{2,\epsilon}(t)I_{E_{\epsilon}}(t)+\frac{1 {2}\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t)\right] \widetilde{D^{2 \sigma^{\epsilon}}(t)\left[ \xi^{1,\epsilon}(t),\eta^{1,\epsilon}(t)\right] ^{\intercal}\\ & -\frac{1}{2}\left[ X_{1}(t),Y_{1}(t)\right] D^{2}\sigma(t)\left[ X_{1}(t),Y_{1}(t)\right] ^{\intercal}. \end{array} \] Then, we have that \begin{equation} \left\{ \begin{array} [c]{l d\xi^{3,\epsilon}(t)=\left[ b_{x}(t)\xi^{3,\epsilon}(t)+b_{y}(t)\eta ^{3,\epsilon}(t)+b_{z}(t)\zeta^{3,\epsilon}(t)+A_{2}^{\epsilon}(t)\right] dt\ \\ \text{ \ \ \ \ \ \ }+\left[ \sigma_{x}(t)\xi^{3,\epsilon}(t)+\sigma _{y}(t)\eta^{3,\epsilon}(t)+A(t)\zeta^{3,\epsilon}(t)+B_{2}^{\epsilon }(t)\right] dB(t),\\ \xi^{3,\epsilon}(0)=0, \end{array} \right. \label{x-x1-x2-bz-0 \end{equation} an \begin{equation} \left\{ \begin{array} [c]{ll d\eta^{3,\epsilon}(t)= & -\left[ g_{x}(t)\xi^{3,\epsilon}(t)+g_{y (t)\eta^{3,\epsilon}(t)+g_{z}(t)\zeta^{3,\epsilon}(t)+C_{2}^{\epsilon }(t)\right] dt-\zeta^{3,\epsilon}(t)dB(t),\\ \eta^{3,\epsilon}(T)= & \phi_{x}(\bar{X}(T))\xi^{3,\epsilon}(T)+D_{2 ^{\epsilon}(T). \end{array} \right. 
\label{y-y1-y2-bz-0 \end{equation} By Theorem \ref{est-fbsde-lp} \ \begin{array} [c]{l \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left( |\xi^{3,\epsilon }(t)|^{2}+|\eta^{3,\epsilon}(t)|^{2}\right) +\int_{0}^{T}|\zeta^{3,\epsilon }(t)|^{2}dt\right] \\ \leq\mathbb{E}\left[ \left( \int_{0}^{T}|A_{2}^{\epsilon}(t)|dt\right) ^{2}+\left( \int_{0}^{T}|C_{2}^{\epsilon}(t)|dt\right) ^{2}+\int_{0 ^{T}|B_{2}^{\epsilon}(t)|^{2}dt+|D_{2}^{\epsilon}(T)|^{2}\right] . \end{array} \] We estimate term by term in the followings. (1) \ \begin{array} [c]{l \mathbb{E}\left[ \left( \int_{0}^{T}(\delta b_{z}(t,\Delta)(\zeta ^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t))dt\right) ^{2}\right] \\ \leq\mathbb{E}\left[ \left( \int_{E_{\epsilon}}|\delta b_{z}(t,\Delta )|\left( |\zeta^{2,\epsilon}(t)|+|K_{1}(t)X_{1}(t)|\right) dt\right) ^{2}\right] \\ \leq C\epsilon\mathbb{E}\left[ \int_{E_{\epsilon}}|\zeta^{2,\epsilon (t)|^{2}dt\right] +C\epsilon\mathbb{E}\left[ \sup\limits_{t\in\lbrack 0,T]}|X_{1}(t)|^{2}\int_{E_{\epsilon}}|K_{1}(t)|^{2}dt\right] \\ \leq C\epsilon\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|\zeta ^{2,\epsilon}(t)|^{2}dt\right) ^{2}\right] \right\} ^{\frac{1}{2 }+C\epsilon^{2}\left\{ \mathbb{E}\left[ \left( \int_{E_{\epsilon} |K_{1}(t)|^{2}dt\right) ^{2}\right] \right\} ^{\frac{1}{2}}\\ =o(\epsilon^{2}), \end{array} \ \ \begin{array} [c]{l \mathbb{E}\left[ \left( \int_{0}^{T}\left( \tilde{b}_{zz}(t)(\zeta ^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t))^{2}-b_{zz}(t)K_{1}(t)^{2 X_{1}(t)^{2}\right) dt\right) ^{2}\right] \\ \leq\mathbb{E}\left[ \left( \int_{0}^{T}\tilde{b}_{zz}(t)\left( (\zeta^{1,\epsilon}(t)-\Delta(t)I_{E_{\epsilon}}(t))^{2}-K_{1}(t)^{2 X_{1}(t)^{2}\right) dt\right) ^{2}+\left( \int_{0}^{T}\left\vert \tilde {b}_{zz}(t)-b_{zz}(t)\right\vert \left\vert K_{1}(t)X_{1}(t)\right\vert ^{2}dt\right) ^{2}\right] \\ \leq C\mathbb{E}\left[ \left( \int_{0}^{T}\left\vert \zeta^{2,\epsilon }(t)\right\vert ^{2}dt\right) ^{2}+\left( \int_{0}^{T}\left\vert \zeta^{2,\epsilon}(t)\right\vert \left\vert K_{1}(t)X_{1}(t)\right\vert dt\right) ^{2}\right] \\ \text{ \ }+\left\{ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left\vert X_{1}(t)\right\vert ^{6}\right] \right\} ^{\frac{2}{3}}\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}\left\vert \tilde{b}_{zz}(t)-b_{zz (t)\right\vert \left\vert K_{1}(t)\right\vert ^{2}dt\right) ^{6}\right] \right\} ^{\frac{1}{3}}\\ \leq C\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}|\zeta^{2,\epsilon }(t)|^{2}dt\right) ^{2}\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left\vert X_{1}(t)\right\vert ^{6}\right] \right\} ^{\frac{1}{3}}\left\{ \mathbb{E}\left[ \left( \int_{0}^{T}\left\vert K_{1}(t)\right\vert ^{2}dt\right) ^{6}\right] \right\} ^{\frac{1}{6}}+o(\epsilon^{2})\\ =o(\epsilon^{2}), \end{array} \] the other terms are similar. Then \[ \mathbb{E}\left[ \left( \int_{0}^{T}|A_{2}^{\epsilon}(t)|dt\right) ^{2}\right] =o(\epsilon^{2}). \] (2) The estimate of $C_{2}^{\epsilon}(t)$ is the same as $A_{2}^{\epsilon}(t)$. 
(3) \ \begin{array} [c]{l \mathbb{E}\left[ \int_{0}^{T}\left\vert \delta\sigma_{x}(t)\xi^{2,\epsilon }(t)\right\vert ^{2}I_{E_{\epsilon}}(t)dt\right] \leq C\epsilon\left\{ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left\vert \xi^{2,\epsilon }(t)\right\vert ^{4}\right] \right\} ^{\frac{1}{2}}=o(\epsilon^{2}), \end{array} \ \ \begin{array} [c]{l \mathbb{E}\left[ \int_{0}^{T}\left\vert \tilde{\sigma}_{yy}^{\epsilon (t)\eta^{1,\epsilon}(t)^{2}-\sigma_{yy}(t)Y_{1}(t)^{2}\right\vert ^{2}dt\right] \ \ \\ \leq\mathbb{E}\left[ \int_{0}^{T}\left\vert \tilde{\sigma}_{yy}^{\epsilon }(t)\eta^{2,\epsilon}(t)(\eta^{1,\epsilon}(t)+Y_{1}(t))\right\vert ^{2}dt\right] +\mathbb{E}\left[ \int_{0}^{T}\left\vert \tilde{\sigma _{yy}^{\epsilon}(t)-\sigma_{yy}(t)\right\vert ^{2}Y_{1}(t)^{4}dt\right] \\ \leq C\mathbb{E}\left[ \int_{0}^{T}\left\vert \eta^{2,\epsilon}(t)\right\vert ^{2}\left\vert \eta^{1,\epsilon}(t)+Y_{1}(t)\right\vert ^{2}dt\right] +\mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left\vert Y_{1}(t)\right\vert ^{4}\int_{0}^{T}\left\vert \tilde{\sigma}_{yy}^{\epsilon}(t)-\sigma _{yy}(t)\right\vert ^{2}dt\right] \\ \leq C\left\{ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left\vert \eta^{2,\epsilon}(t)\right\vert ^{4}\right] \right\} ^{\frac{1}{2}}\left\{ \mathbb{E}\left[ \sup\limits_{t\in\lbrack0,T]}\left\vert \eta^{1,\epsilon }(t)+Y_{1}(t)\right\vert ^{4}\right] \right\} ^{\frac{1}{2}}+o(\epsilon ^{2})\\ =o(\epsilon^{2}), \end{array} \] the other terms are similar. Thus \[ \mathbb{E}\left[ \int_{0}^{T}|B_{2}^{\epsilon}(t)|^{2}dt\right] =o(\epsilon^{2}). \] (4) \[ \mathbb{E}\left[ |D_{2}^{\epsilon}(T)|^{2}\right] \leq C\mathbb{E}\left[ |\tilde{\phi}_{xx}^{\epsilon}(T)-\phi_{xx}(\bar{X}(T))|^{2}|\xi^{1,\epsilon }(T)|^{4}+|\xi^{2,\epsilon}(T)|^{2}|\xi^{1,\epsilon}(T)+X_{1}(T)|^{2}\right] =o(\epsilon^{2}). \] Thus \[ \mathbb{E}\left\{ \sup\limits_{t\in\lbrack0,T]}[|\xi^{3,\epsilon (t)|^{2}+|\eta^{3,\epsilon}(t)|^{2}]+\int_{0}^{T}|\zeta^{3,\epsilon (t)|^{2}dt\right\} =o(\epsilon^{2}). \] \end{proof} Now we introduce the second-order adjoint equation \begin{equation} \left\{ \begin{array} [c]{rl -dP(t)= & \left\{ P(t)\left[ (D\sigma(t)^{\intercal}[1,p(t),K_{1 (t)]^{\intercal})^{2}+2Db(t)^{\intercal}[1,p(t),K_{1}(t)]^{\intercal +H_{y}(t)\right] \right. \\ & +2Q(t)D\sigma(t)^{\intercal}[1,p(t),K_{1}(t)]^{\intercal}+\left[ 1,p(t),K_{1}(t)\right] D^{2}H(t)\left[ 1,p(t),K_{1}(t)\right] ^{\intercal }\left. +H_{z}(t)K_{2}(t)\right\} dt\\ & -Q(t)dB(t),\\ P(T)= & \phi_{xx}(\bar{X}(T)), \end{array} \right. \label{eq-P-sigmaz0 \end{equation} where \ \begin{array} [c]{ll H(t,x,y,z,u,p,q)= & g(t,x,y,z,u)+pb(t,x,y,z,u)+q\sigma(t,x,y,z,u), \end{array} \ \ \begin{array} [c]{ll K_{2}(t)= & \left( 1-p(t)A(t)\right) ^{-1}\left\{ p(t)\sigma_{y (t)+2\left[ \sigma_{x}(t)+\sigma_{y}(t)p(t)+A(t)K_{1}(t)\right] \right\} P(t)\\ & +\left( 1-p(t)A(t)\right) ^{-1}\left\{ Q(t)+p(t)[1,p(t)]D^{2}\sigma _{1}(t)[1,p(t)]^{\intercal}\right\} . \end{array} \] (\ref{eq-P-sigmaz0}) is a linear BSDE with non-Lipschitz coefficient for $P(\cdot)$. Then, by Theorem \ref{q-exp-th} in appendix, (\ref{eq-P-sigmaz0}) has a unique pair of solution according to Theorem 5.21 in \cite{Pardoux-book . By the same analysis as in Lemma \ref{relation-y2}, we introduce the following auxiliary equation: \begin{equation \begin{array} [c]{l \hat{Y}(t)=\int_{t}^{T}\left\{ (H_{y}(s)+\sigma_{y}(s)g_{z (s)p(s)(1-p(s)A(s))^{-1})\hat{Y}(s)+\left( H_{z}(s)+\sigma_{z}(s)g_{z (s)p(s)(1-p(s)A(s))^{-1}\right) \hat{Z}(s)\right. \\ \ \ \ \ \ \ \ \ \ \ \left. 
+\left[ \delta H(s,\Delta)+\frac{1}{2 P(s)\delta\sigma(s,\Delta)^{2}\right] I_{E_{\epsilon}}(s)\right\} ds-\int_{t}^{T}\hat{Z}(s)dB(s), \end{array} \label{yhat-sigmaz0 \end{equation} where $\delta H(s,\Delta):=p(s)\delta b(s,\Delta)+q(s)\delta\sigma (s,\Delta)+\delta g(s,\Delta)$, and obtain the following relationship. \begin{lemma} \label{relation-second-order-q-unbound}Suppose Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-sig-small} hold. The \ \begin{array} [c]{rl Y_{2}(t) & =p(t)X_{2}(t)+\frac{1}{2}P(t)X_{1}(t)^{2}+\hat{Y}(t),\\ Z_{2}(t) & =\mathbf{I(t)}+\hat{Z}(t), \end{array} \] where $(\hat{Y}(\cdot),\hat{Z}(\cdot))$ is the solution to \eqref{yhat-sigmaz0} and \begin{align*} \mathbf{I(t)} & =K_{1}(t)X_{2}(t)+\frac{1}{2}K_{2}(t)X_{1}^{2 (t)+(1-p(t)A(t))^{-1}p(t)(\sigma_{y}(t)\hat{Y}(t)+A(t)\hat{Z}(t))+P(t)\delta \sigma(t,\Delta)X_{1}(t)I_{E_{\epsilon}}(t)\\ & \;+(1-p(t)A(t))^{-1}p(t)\left[ \delta\sigma_{x}(t,\Delta)X_{1 (t)+\delta\sigma_{y}(t,\Delta)p(t)X_{1}(t)\right] I_{E_{\epsilon}}(t). \end{align*} \end{lemma} \begin{proof} Applying the techniques in Lemma \ref{lemma-y1}, we can deduce the above relationship similarly. \end{proof} Combing the estimates in Lemma \ref{est-second-order-q-unbound} and the relationship in Lemma \ref{relation-second-order-q-unbound}, we deduce that \[ Y^{\epsilon}(0)-\bar{Y}(0)=Y_{1}(0)+Y_{2}(0)+o(\epsilon)=\hat{Y (0)+o(\epsilon)\geq0. \] Define \ \begin{array} [c]{l \mathcal{H}(t,x,y,z,u,p,q,P)=pb(t,x,y,z,u)+q\sigma(t,x,y,z,u)+\frac{1 {2}P(\sigma(t,x,y,z,u)-\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar {u}(t)))^{2}\\ \text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +g(t,x,y,z+p(t)(\sigma(t,x,y,z,u)-\sigma(t,\bar{X}(t),\bar{Y}(t),\bar {Z}(t),\bar{u}(t))),u). \end{array} \] By the same analysis as in Theorem \ref{Th-MP}, we obtain the following maximum principle. \begin{theorem} \label{th-mp-q-unboud}Suppose Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-sig-small} hold. Let $\bar{u}(\cdot)\in\mathcal{U}[0,T]$ be optimal and $(\bar{X}(\cdot),\bar{Y}(\cdot),\bar{Z}(\cdot))$ be the corresponding state processes of (\ref{state-eq}). Then the following stochastic maximum principle holds: \[ \mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),u,p(t),q(t),P(t))\geq \mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u (t),p(t),q(t),P(t)),\ \ \ \forall u\in U,\ a.e.,\ a.s.. \] \end{theorem} \subsection{The general case\label{sec-general}} When Brownian motion in \eqref{state-eq} is $d$-dimensional, by similar analysis as for $1$-dimensional case, we obtain the following results. The state equation becomes \begin{equation} \left\{ \begin{array} [c]{rl dX(t)= & b(t,X(t),Y(t),Z(t),u(t))dt+\sigma^{\intercal (t,X(t),Y(t),Z(t),u(t))dB(t)\\ dY(t)= & -g(t,X(t),Y(t),Z(t),u(t))dt+Z^{\intercal}(t)dB(t),\\ X(0)= & x_{0},\ Y(T)=\phi(X(T)), \end{array} \right. \label{state-eq-multi \end{equation} where \[ \sigma:[0,T]\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}^{d}\times U\rightarrow\mathbb{R}^{d}. \] The first-order adjoint equation is \begin{equation} \left\{ \begin{array} [c]{rl dp(t)= & -\left\{ g_{x}(t)+g_{y}(t)p(t)+\left\langle g_{z}(t),K_{1 (t)\right\rangle +(b_{x}(t)+b_{y}(t)p(t))p(t)\right. \\ & \left. +p(t)\langle b_{z}(t),K_{1}(t)\rangle+\langle(\sigma_{x (t)+\sigma_{y}(t)p(t)),q(t)\rangle+\langle q(t),\sigma_{z}(t)K_{1 (t)\rangle\right\} dt+q^{\intercal}(t)dB(t),\\ p(T)= & \phi_{x}(\bar{X}(T)), \end{array} \right. 
\label{eq-p-multi \end{equation} where \ \begin{array} [c]{rl K_{1}(t) & =(I-p(t)\sigma_{z}(t))^{-1}\left[ p(t)(\sigma_{x}(t)+\sigma _{y}(t)p(t))+q(t)\right] \in\mathbb{R}^{d},\\ g_{z}(t) & :=(g_{z^{1}}(t),g_{z^{2}}(t),...,g_{z^{d}}(t))^{\intercal},\text{ }\\ b_{z}(t) & :=(b_{z^{1}}(t),b_{z^{2}}(t),...,b_{z^{d}}(t))^{\intercal},\\ \sigma_{z}(t) & =\left( \begin{array} [c]{c \sigma_{z^{1}}^{1}(t),\sigma_{z^{2}}^{1}(t),...,\sigma_{z^{d}}^{1}(t)\\ \sigma_{z^{1}}^{2}(t),\sigma_{z^{2}}^{2}(t),...,\sigma_{z^{d}}^{2}(t)\\ \vdots\\ \sigma_{z^{1}}^{d}(t),\sigma_{z^{2}}^{d}(t),...,\sigma_{z^{d}}^{d}(t) \end{array} \right) \in\mathbb{R}^{d\times d}. \end{array} \] Denote the Hessian matrix of $\sigma^{i}(t)$ with respect to $(x,y,z^{1 ,z^{2},...,z^{d})$ by $D^{2}\sigma^{i}$. Se \[ \lbrack1,p(t),K_{1}^{\intercal}(t)]D^{2}\sigma(t)[1,p(t),K_{1}^{\intercal }(t)]^{\intercal}=\left( \begin{array} [c]{c \lbrack1,p(t),K_{1}^{\intercal}(t)]D^{2}\sigma^{1}(t)[1,p(t),K_{1}^{\intercal }(t)]^{\intercal}\\ \lbrack1,p(t),K_{1}^{\intercal}(t)]D^{2}\sigma^{2}(t)[1,p(t),K_{1}^{\intercal }(t)]^{\intercal}\\ \vdots\\ \lbrack1,p(t),K_{1}^{\intercal}(t)]D^{2}\sigma^{d}(t)[1,p(t),K_{1}^{\intercal }(t)]^{\intercal \end{array} \right) \in\mathbb{R}^{d}. \] Then, the second-order adjoint equation is \begin{equation} \left\{ \begin{array} [c]{rl -dP(t)= & \left\{ P(t)[(\sigma_{x}(t)+p(t)\sigma_{y}(t)+\sigma_{z (t)K_{1}(t))^{\intercal}(\sigma_{x}(t)+p(t)\sigma_{y}(t)+\sigma_{z (t)K_{1}(t))\right. \\ & +2(b_{x}(t)+b_{y}(t)p(t)+\langle b_{z}(t),K_{1}(t)\rangle)]+2\langle Q(t),(\sigma_{x}(t)+p(t)\sigma_{y}(t)+\sigma_{z}(t)K_{1}(t))\rangle\\ & +p(t)b_{y}(t)P(t)+p(t)[1,p(t),K_{1}^{\intercal}(t)]D^{2}b(t)[1,p(t),K_{1 ^{\intercal}(t)]^{\intercal}\\ & +\langle q(t),\mathbf{[}\sigma_{y}(t)P(t)+[1,p(t),K_{1}^{\intercal (t)]D^{2}\sigma(t)[1,p(t),K_{1}^{\intercal}(t)]^{\intercal}\mathbf{] \rangle+g_{y}(t)P(t)\\ & \left. +[I,p(t),K_{1}^{\intercal}(t)]D^{2}g(t)[I,p(t),K_{1}^{\intercal }(t)]^{\intercal}+\left\langle g_{z}(t)+b_{z}(t)p(t),K_{2}(t)\right\rangle +\langle q(t),\sigma_{z}(t)K_{2}(t)\rangle\right\} dt\\ & -Q^{\intercal}(t)dB(t),\\ P(T)= & \phi_{xx}(\bar{X}(T)), \end{array} \right. \label{eq-P-multi \end{equation} where \ \begin{array} [c]{ll K_{2}(t)= & (I-p(t)\sigma_{z}(t))^{-1}p(t)\left\{ \sigma_{y (t)P(t)+[1,p(t),K_{1}^{\intercal}(t)]D^{2}\sigma(t)[1,p(t),K_{1}^{\intercal }(t)]^{\intercal}\right\} \\ & +(I-p(t)\sigma_{z}(t))^{-1}\{Q(t)+2P(t)(\sigma_{x}(t)+\sigma_{y (t)p(t)+\sigma_{z}(t)K_{1}(t))\}\in\mathbb{R}^{d}. \end{array} \] Define \begin{equation \begin{array} [c]{ll \mathcal{H}(t,x,y,z,u,p,q,P)= & pb(t,x,y,z+\Delta(t),u)+\langle q,\sigma (t,x,y,z+\Delta(t),u)\rangle\\ & +\frac{1}{2}P(\sigma(t,x,y,z+\Delta(t),u)-\sigma(t,\bar{X}(t),\bar {Y}(t),\bar{Z}(t),\bar{u}(t)))^{\intercal}\\ & \cdot(\sigma(t,x,y,z+\Delta(t),u)-\sigma(t,\bar{X}(t),\bar{Y}(t),\bar {Z}(t),\bar{u}(t)))\\ & +g(t,x,y,z+\Delta(t),u), \end{array} \label{h-function-multi \end{equation} where $\Delta(t)\ $satisfies \begin{equation} \Delta(t)=p(t)(\sigma(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t)+\Delta(t),u)-\sigma (t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t))),\;t\in\lbrack0,T]. \label{def-delt-multi \end{equation} Thus, we obtain the following maximum principle. \begin{theorem} Suppose Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-q-bound} hold. Let $\bar{u}(\cdot)\in\mathcal{U}[0,T]$ be optimal and $(\bar{X}(\cdot ),\bar{Y}(\cdot),\bar{Z}(\cdot))$ be the corresponding state processes of (\ref{state-eq-multi}). 
Then the following stochastic maximum principle holds:
\[
\mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),u,p(t),q(t),P(t))\geq\mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t),p(t),q(t),P(t)),\ \ \ \forall u\in U,\ a.e.,\ a.s.,
\]
where $(p\left( \cdot\right) ,q\left( \cdot\right) )$, $\left( P\left( \cdot\right) ,Q\left( \cdot\right) \right) $ satisfy (\ref{eq-p-multi}), (\ref{eq-P-multi}) respectively, and $\Delta(\cdot)$ satisfies (\ref{def-delt-multi}).
\end{theorem}

\begin{remark}
The above theorem still holds under Assumptions \ref{assum-2}, \ref{assum-3} and \ref{assm-sig-small}.
\end{remark}

\section{A linear quadratic control problem}

In this section, we study a linear quadratic control problem using the results of Section 3. For simplicity of presentation, we suppose all the processes are one-dimensional. Consider the following linear forward-backward stochastic control system
\begin{equation}
\left\{
\begin{array}
[c]{rcl}
dX(t) & = & [A_{1}(t)X(t)+B_{1}(t)Y(t)+C_{1}(t)Z(t)+D_{1}(t)u(t)]dt\\
&  & +[A_{2}(t)X(t)+B_{2}(t)Y(t)+C_{2}(t)Z(t)+D_{2}(t)u(t)]dB(t),\\
dY(t) & = & -[A_{3}(t)X(t)+B_{3}(t)Y(t)+C_{3}(t)Z(t)+D_{3}(t)u(t)]dt+Z(t)dB(t),\\
X(0) & = & x_{0},\ Y(T)=FX(T)+J,
\end{array}
\right. \label{state-lq}
\end{equation}
and minimize the following cost functional
\begin{equation}
J(u(\cdot))=\mathbb{E}\left[ \int_{0}^{T}\left( A_{4}(t)X(t)^{2}+B_{4}(t)Y(t)^{2}+C_{4}(t)Z(t)^{2}+D_{4}(t)u(t)^{2}\right) dt+GX(T)^{2}+Y(0)^{2}\right] , \label{cost-lq}
\end{equation}
where $A_{i}$, $B_{i}$, $C_{i}$, $D_{i}$, $i=1,2,3,4$, are deterministic $\mathbb{R}$-valued functions, $F$, $G$ are deterministic constants and $J$ is an $\mathcal{F}_{T}$-measurable bounded random variable. Let $\bar{u}(\cdot)$ be the optimal control, and let the corresponding optimal state be $(\bar{X}(\cdot),\bar{Y}(\cdot),\bar{Z}(\cdot))$. The variational equation becomes
\[
\left\{
\begin{array}
[c]{l}
d\left( X_{1}(t)+X_{2}(t)\right) \\
\text{ }=[A_{1}(t)\left( X_{1}(t)+X_{2}(t)\right) +B_{1}(t)\left( Y_{1}(t)+Y_{2}(t)\right) +C_{1}(t)\left( Z_{1}(t)+Z_{2}(t)\right) +D_{1}(t)(u^{\epsilon}(t)-\bar{u}(t))]dt\\
\text{ \ \ }+[A_{2}(t)\left( X_{1}(t)+X_{2}(t)\right) +B_{2}(t)\left( Y_{1}(t)+Y_{2}(t)\right) +C_{2}(t)\left( Z_{1}(t)+Z_{2}(t)\right) +D_{2}(t)(u^{\epsilon}(t)-\bar{u}(t))]dB(t),\\
d\left( Y_{1}(t)+Y_{2}(t)\right) \\
\text{ }=-[A_{3}(t)\left( X_{1}(t)+X_{2}(t)\right) +B_{3}(t)\left( Y_{1}(t)+Y_{2}(t)\right) +C_{3}(t)\left( Z_{1}(t)+Z_{2}(t)\right) +D_{3}(t)(u^{\epsilon}(t)-\bar{u}(t))]dt\\
\ \ \ +\left( Z_{1}(t)+Z_{2}(t)\right) dB(t),\\
X_{1}(0)+X_{2}(0)=0,\ Y_{1}(T)+Y_{2}(T)=F\left( X_{1}(T)+X_{2}(T)\right) ,
\end{array}
\right.
\]
and the first-order adjoint equation is
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dp(t)= & -\left\{ A_{3}(t)+B_{3}(t)p(t)+C_{3}(t)K_{1}(t)+A_{1}(t)p(t)+B_{1}(t)p^{2}(t)\right. \\
& \left. +C_{1}(t)K_{1}(t)p(t)+A_{2}(t)q(t)+B_{2}(t)p(t)q(t)+C_{2}(t)K_{1}(t)q(t)\right\} dt+q(t)dB(t),\\
p(T)= & F,
\end{array}
\right. \label{eq-p-lq}
\end{equation}
where
\[
K_{1}(t)=(1-p(t)C_{2}(t))^{-1}\left[ A_{2}(t)p(t)+B_{2}(t)p^{2}(t)+q(t)\right] .
\]
This adjoint equation is a nonlinear backward stochastic differential equation with deterministic coefficients, and the solution to \eqref{eq-p-lq} is $(p(\cdot),0)$, where $p(\cdot)$ satisfies the following ODE
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dp(t)= & -\left\{ A_{3}(t)+B_{3}(t)p(t)+C_{3}(t)K_{1}(t)+A_{1}(t)p(t)+B_{1}(t)p^{2}(t)+C_{1}(t)K_{1}(t)p(t)\right\} dt,\\
p(T)= & F,
\end{array}
\right.
\label{eq-p-lq-ode}
\end{equation}
with
\[
K_{1}(t)=(1-p(t)C_{2}(t))^{-1}\left[ A_{2}(t)p(t)+B_{2}(t)p^{2}(t)\right] .
\]

\begin{remark}
It should be noted that in our context Assumption \ref{assm-q-bound} holds.
\end{remark}

Moreover, $\Delta(t)$ has the following explicit form
\[
\Delta(t)=(1-p(t)C_{2}(t))^{-1}p(t)D_{2}(t)(u(t)-\bar{u}(t)).
\]
Since $\bar{u}(\cdot)$ is the optimal control,
\begin{equation}
J(u^{\epsilon}(\cdot))-J(\bar{u}(\cdot))\geq0. \label{eq-lq-cost}
\end{equation}
By Lemma \ref{lemma-est-sup}, the following estimates hold:
\[
\begin{array}
[c]{lll}
X^{\epsilon}(t)-\bar{X}(t) & = & X_{1}(t)+X_{2}(t)+o(\epsilon),\\
Y^{\epsilon}(t)-\bar{Y}(t) & = & Y_{1}(t)+Y_{2}(t)+o(\epsilon),\\
Z^{\epsilon}(t)-\bar{Z}(t) & = & Z_{1}(t)+Z_{2}(t)+o(\epsilon).
\end{array}
\]
We can expand (\ref{eq-lq-cost}) term by term as follows:
\[
\begin{array}
[c]{l}
\mathbb{E}\left[ \int_{0}^{T}A_{4}(t)\left( X^{\epsilon}(t)^{2}-\bar{X}(t)^{2}\right) dt\right] \\
\text{ }=\mathbb{E}\left\{ \int_{0}^{T}\left[ 2A_{4}(t)\bar{X}(t)\left( X_{1}(t)+X_{2}(t)\right) +A_{4}(t)X_{1}(t)^{2}\right] dt\right\} +o(\epsilon).
\end{array}
\]
Similarly, one has
\[
\begin{array}
[c]{rl}
\mathbb{E}\left[ \int_{0}^{T}B_{4}(t)\left( Y^{\epsilon}(t)^{2}-\bar{Y}(t)^{2}\right) dt\right]  & =\mathbb{E}\left\{ \int_{0}^{T}\left[ 2B_{4}(t)\bar{Y}(t)\left( Y_{1}(t)+Y_{2}(t)\right) +B_{4}(t)Y_{1}(t)^{2}\right] dt\right\} +o(\epsilon);\\
\mathbb{E}\left[ G\left( X^{\epsilon}(T)^{2}-\bar{X}(T)^{2}\right) \right]  & =\mathbb{E}\left[ 2G\bar{X}(T)\left( X_{1}(T)+X_{2}(T)\right) +GX_{1}(T)^{2}\right] +o(\epsilon);\\
Y^{\epsilon}(0)^{2}-\bar{Y}(0)^{2} & =2\bar{Y}(0)\left( Y_{1}(0)+Y_{2}(0)\right) +Y_{1}(0)^{2}+o(\epsilon);\\
\mathbb{E}\left[ \int_{0}^{T}C_{4}(t)\left( Z^{\epsilon}(t)^{2}-\bar{Z}(t)^{2}\right) dt\right]  & =\mathbb{E}\left\{ \int_{0}^{T}\left[ 2C_{4}(t)\bar{Z}(t)\left( Z_{1}(t)+Z_{2}(t)\right) +C_{4}(t)Z_{1}(t)^{2}\right] dt\right\} +o(\epsilon).
\end{array}
\]
Thus
\[
\begin{array}
[c]{l}
J(u^{\epsilon}(\cdot))-J(\bar{u}(\cdot))\\
=\mathbb{E}\left\{ \int_{0}^{T}\left[ 2A_{4}(t)\bar{X}(t)\left( X_{1}(t)+X_{2}(t)\right) +2B_{4}(t)\bar{Y}(t)\left( Y_{1}(t)+Y_{2}(t)\right) +2C_{4}(t)\bar{Z}(t)\left( Z_{1}(t)+Z_{2}(t)\right) \right. \right. \\
\left. +A_{4}(t)X_{1}(t)^{2}+B_{4}(t)Y_{1}(t)^{2}+C_{4}(t)Z_{1}(t)^{2}+2D_{4}(t)\bar{u}(t)\left( u^{\epsilon}(t)-\bar{u}(t)\right) +D_{4}(t)\left( u^{\epsilon}(t)-\bar{u}(t)\right) ^{2}\right] dt\\
\left. +2G\bar{X}(T)\left( X_{1}(T)+X_{2}(T)\right) +GX_{1}(T)^{2}+2\bar{Y}(0)\left( Y_{1}(0)+Y_{2}(0)\right) +Y_{1}(0)^{2}\right\} +o(\epsilon).
\end{array}
\]
Introduce the adjoint equation for $X_{1}(t)+X_{2}(t)$, $Y_{1}(t)+Y_{2}(t)$, $Z_{1}(t)+Z_{2}(t)$ as
\[
\left\{
\begin{array}
[c]{rl}
dh(t)= & \left[ B_{3}(t)h(t)+B_{1}(t)m(t)+B_{2}(t)n(t)+2B_{4}(t)\bar{Y}(t)\right] dt\\
& +\left[ C_{3}(t)h(t)+C_{1}(t)m(t)+C_{2}(t)n(t)+2C_{4}(t)\bar{Z}(t)\right] dB(t),\\
h(0)= & 2\bar{Y}(0),\\
dm(t)= & -\left[ A_{3}(t)h(t)+A_{1}(t)m(t)+A_{2}(t)n(t)+2A_{4}(t)\bar{X}(t)\right] dt+n(t)dB(t),\\
m(T)= & 2G\bar{X}(T)+Fh(T).
\end{array}
\right.
\]
Applying It\^{o}'s formula to $m(t)\left( X_{1}(t)+X_{2}(t)\right) -h(t)\left( Y_{1}(t)+Y_{2}(t)\right) $, we get
\[
\begin{array}
[c]{l}
J(u^{\epsilon}(\cdot))-J(\bar{u}(\cdot))\\
=\mathbb{E}\left\{ \int_{0}^{T}\left[ \left( D_{1}(t)m(t)+D_{2}(t)n(t)+D_{3}(t)h(t)+2D_{4}(t)\bar{u}(t)\right) \left( u^{\epsilon}(t)-\bar{u}(t)\right) \right. \right. \\
\left. \left.
+A_{4}(t)X_{1}(t)^{2}+B_{4}(t)Y_{1}(t)^{2}+C_{4}(t)Z_{1}(t)^{2}+D_{4}(t)\left( u^{\epsilon}(t)-\bar{u}(t)\right) ^{2}\right] dt+GX_{1}(T)^{2}+Y_{1}(0)^{2}\right\} +o(\epsilon).
\end{array}
\]
Recalling the relationship between $X_{1}(t)$, $Y_{1}(t)$ and $Z_{1}(t)$,
\[
\begin{array}
[c]{ll}
Y_{1}(t)= & p(t)X_{1}(t),\\
Z_{1}(t)= & K_{1}(t)X_{1}(t)+\Delta(t)I_{E_{\epsilon}}(t),
\end{array}
\]
we thus have
\[
\begin{array}
[c]{l}
J(u^{\epsilon}(\cdot))-J(\bar{u}(\cdot))\\
=\mathbb{E}\left\{ \int_{0}^{T}\left[ \left( A_{4}(t)+B_{4}(t)p(t)^{2}+C_{4}(t)K_{1}(t)^{2}\right) X_{1}(t)^{2}+C_{4}(t)\Delta(t)^{2}I_{E_{\epsilon}}(t)+D_{4}(t)\left( u^{\epsilon}(t)-\bar{u}(t)\right) ^{2}\right. \right. \\
\text{ \ \ \ }\left. \left. +\left( D_{1}(t)m(t)+D_{2}(t)n(t)+D_{3}(t)h(t)+2D_{4}(t)\bar{u}(t)\right) \left( u^{\epsilon}(t)-\bar{u}(t)\right) \right] dt+GX_{1}(T)^{2}\right\} +o(\epsilon).
\end{array}
\]
Introducing the adjoint equation for $X_{1}(t)^{2}$,
\begin{equation}
\left\{
\begin{array}
[c]{rl}
-dP(t)= & \left[ R_{1}(t)P(t)+R_{2}(t)Q(t)+A_{4}(t)+B_{4}(t)p(t)^{2}+C_{4}(t)K_{1}(t)^{2}\right] dt-Q(t)dB(t),\\
P(T)= & G,
\end{array}
\right.
\end{equation}
where
\[
\begin{array}
[c]{ll}
R_{1}(t)= & 2\left( A_{1}(t)+B_{1}(t)p(t)+C_{1}(t)K_{1}(t)\right) +\left( A_{2}(t)+B_{2}(t)p(t)+C_{2}(t)K_{1}(t)\right) ^{2},\\
R_{2}(t)= & 2\left( A_{2}(t)+B_{2}(t)p(t)+C_{2}(t)K_{1}(t)\right) .
\end{array}
\]
Similar to $p(\cdot)$, the solution $(P(\cdot),Q(\cdot))$ is $(P(\cdot),0)$, where $P(\cdot)$ satisfies the following ODE,
\begin{equation}
\left\{
\begin{array}
[c]{rl}
-dP(t)= & \left[ R_{1}(t)P(t)+A_{4}(t)+B_{4}(t)p(t)^{2}+C_{4}(t)K_{1}(t)^{2}\right] dt,\\
P(T)= & G,
\end{array}
\right.
\end{equation}
where
\[
R_{1}(t)=2\left( A_{1}(t)+B_{1}(t)p(t)+C_{1}(t)K_{1}(t)\right) +\left( A_{2}(t)+B_{2}(t)p(t)+C_{2}(t)K_{1}(t)\right) ^{2}.
\]
We obtain
\[
\begin{array}
[c]{l}
J(u^{\epsilon}(\cdot))-J(\bar{u}(\cdot))\\
=\mathbb{E}\left\{ \int_{0}^{T}\left[ \left( D_{1}(t)m(t)+D_{2}(t)n(t)+D_{3}(t)h(t)+2D_{4}(t)\bar{u}(t)\right) \left( u^{\epsilon}(t)-\bar{u}(t)\right) \right. \right. \\
\left. \left. +P(t)D_{2}(t)^{2}(u^{\epsilon}(t)-\bar{u}(t))^{2}+D_{4}(t)\left( u^{\epsilon}(t)-\bar{u}(t)\right) ^{2}+C_{4}(t)\Delta(t)^{2}I_{E_{\epsilon}}(t)\right] dt\right\} +o(\epsilon).
\end{array}
\]
Thus, we obtain the following maximum principle for (\ref{state-lq})-(\ref{cost-lq}).

\begin{theorem}
\label{th-mp-lq}Suppose Assumptions \ref{assum-2} and \ref{assum-3} hold. Let $\bar{u}(\cdot)\in\mathcal{U}[0,T]$ be optimal and $(\bar{X}(\cdot),\bar{Y}(\cdot),\bar{Z}(\cdot))$ be the corresponding state processes of (\ref{state-lq}). Then the following stochastic maximum principle holds:
\[
\begin{array}
[c]{l}
\left( D_{1}(t)m(t)+D_{2}(t)n(t)+D_{3}(t)h(t)+2D_{4}(t)\bar{u}(t)\right) (u-\bar{u}(t))\\
+\left[ \frac{C_{4}(t)p(t)^{2}D_{2}(t)^{2}}{\left( 1-p(t)C_{2}(t)\right) ^{2}}+D_{4}(t)+P(t)D_{2}(t)^{2}\right] (u-\bar{u}(t))^{2}\geq0,\ \forall u\in U,\ a.e.,\ a.s..
\end{array}
\]
\end{theorem}

\begin{remark}
Using Theorem \ref{th-mp-q-unboud}, we can also consider the linear quadratic control problem with random coefficients.
\end{remark}

Now, we give an example to show the difference between the global and the local maximum principle.

\begin{example}
Consider the following linear forward-backward stochastic control system
\begin{equation}
\left\{
\begin{array}
[c]{rcl}
dX(t) & = & [aZ(t)+bu(t)]dB(t),\\
dY(t) & = & -cu(t)dt+Z(t)dB(t),\\
X(0) & = & 1,\ Y(T)=dX(T),
\end{array}
\right.
\end{equation} and minimizing the following cost functional \[ J(u(\cdot))=\mathbb{E}\left[ \int_{0}^{T}u(t)^{2}dt\right] +Y(0)^{2}, \] where $a$, $b$, $c$, $d$ are constants such that, $0<\left\vert 2cd\right\vert \leq1$ and $ad<1$, and $U=\left\{ -1,0,1\right\} $. Let $\bar{u}(\cdot)$ be the optimal control, and the corresponding optimal state is $(\bar{X (\cdot),\bar{Y}(\cdot),\bar{Z}(\cdot))$. In this case $p(t)=d$, $q(t)=0$, $P(t)=Q(t)=0$, $h(t)=2\bar{Y}(0)$, $m(t)=2d\bar{Y}(0)$, $n(t)=0$, for $t\in\left[ 0,T\right] $. The maximum principle by Theorem \ref{th-mp-lq} is \begin{equation} 2\left( c\bar{Y}(0)+\bar{u}(t)\right) (u-\bar{u}(t))+(u-\bar{u}(t))^{2 \geq0,\ \forall u\in U\ a.e.,\ a.s.. \label{mp-exp \end{equation} Noting that $\bar{Y}(0)=d$ for $\bar{u}(t)=0$ , then it is easy to check that $\bar{u}(t)=0$ satisfies the maximum principle \eqref{mp-exp}. Furthermore, we can prove $\bar{u}(t)=0$ is the optimal control. For each $u\left( \cdot\right) \in\mathcal{U}[0,T]$, \[ Y(0)=\mathbb{E}\left[ dX(T)+\int_{0}^{T}cu(t)dt\right] =d+\mathbb{E}\left[ \int_{0}^{T}cu(t)dt\right] . \] Then \ \begin{array} [c]{rl J(u(\cdot))-J(\bar{u}(\cdot))= & \mathbb{E}\left[ \int_{0}^{T}u(t)^{2 dt\right] +Y(0)^{2}-d^{2}\\ = & \left( \mathbb{E}\left[ \int_{0}^{T}cu(t)dt\right] \right) ^{2}+\mathbb{E}\left[ \int_{0}^{T}\left( u(t)^{2}+2cdu(t)\right) dt\right] \\ \geq & 0, \end{array} \] which implies $\bar{u}(\cdot)=0$ is optimal. \newline\ \ When the control domain is $U=[-1,1]$, similar to Corollary \ref{cor-mp-convex}, by Theorem \ref{th-mp-lq} we obtain the following maximum principle, \begin{equation} 2\left( c\bar{Y}(0)+\bar{u}(t)\right) (u-\bar{u}(t))\geq0,\ \forall u\in U\ a.e.,\ a.s.. \label{mp-exp-convex \end{equation} It is obvious that $\bar{u}(\cdot)=0$ does not satisfy the maximum principle \eqref{mp-exp-convex}. \end{example} \section{Appendix} \subsection{$L^{p}$-estimate of decoupled FBSDEs} The following Lemma is a combination of Theorem 3.17 and Theorem 5.17 in \cite{Pardoux-book}. \begin{lemma} \label{sde-bsde}For each fixed $p>1$ and a pair of adapted stochastic process $(y(\cdot),z(\cdot))$, consider the following system \begin{equation} \left\{ \begin{array} [c]{rl dX(t)= & b(t,X(t),y(t),z(t))dt+\sigma(t,X(t),y(t),z(t))dB(t),\\ dY(t)= & -g(t,X(t),Y(t),Z(t))dt+Z(t)dB(t),\\ X(0)= & x_{0},\ Y(t)=\phi(X(T)), \end{array} \right. \label{fbsde-pardoux \end{equation} where $b$, $\sigma$, $g$, $\phi$ are the same in equation (\ref{fbsde}). 
If the coefficients satisfy (i) $b(\cdot,0,y(\cdot),z(\cdot))$, $\sigma(\cdot,0,y(\cdot),z(\cdot))$, $g(\cdot,0,0,0)$ are $\mathbb{F}$-adapted processes and \[ \mathbb{E}\left\{ |\phi(0)|^{p}+\left( \int_{0}^{T}\left[ |b(t,0,y(t),z(t))|+|g(t,0,0,0)|\right] dt\right) ^{p}+\left( \int_{0 ^{T}|\sigma(t,0,y(t),z(t))|^{2}dt\right) ^{\frac{p}{2}}\right\} <\infty, \] (ii \ \begin{array} [c]{rl |\psi(t,x_{1},y,z)-\psi(t,x_{2},y,z)| & \leq L_{1}|x_{1}-x_{2}|,\ \ \text{for }\ \psi=b,\sigma;\\ |g(t,x_{1},y_{1},z_{1})-g(t,x_{2},y_{2},z_{2})| & \leq L_{1}(|x_{1 -x_{2}|+|y_{1}-y_{2}|+|z_{1}-z_{2}|), \end{array} \] then (\ref{fbsde-pardoux}) has a unique solution $(X(\cdot),Y(\cdot ),Z(\cdot))\in L_{\mathcal{F}}^{p}(\Omega;C([0,T],\mathbb{R}^{n}))\times L_{\mathcal{F}}^{p}(\Omega;C([0,T],\mathbb{R}^{m}))\times L_{\mathcal{F }^{2,p}([0,T];\mathbb{R}^{m\times d})$ and there exists a constant $C_{p}$ which only depends on $L_{1}$, $p$, $T$ such that \ \begin{array} [c]{l \mathbb{E}\left\{ \sup\limits_{t\in\lbrack0,T]}\left[ |X(t)|^{p +|Y(t)|^{p}\right] +\left( \int_{0}^{T}|Z(t)|^{2}dt\right) ^{\frac{p}{2 }\right\} \\ \ \leq C_{p}\mathbb{E}\left\{ \left[ \int_{0}^{T}\left( |b(t,0,y(t),z(t))|+|g(t,0,0,0)|\right) dt\right] ^{p}+\left( \int_{0 ^{T}|\sigma(t,0,y(t),z(t))|^{2}dt\right) ^{\frac{p}{2}}+|\phi(0)|^{p +|x_{0}|^{p}\right\} . \end{array} \] \end{lemma} \subsection{An estimate of $Z$ for some BSDEs} Consider the following BSDE\ \begin{equation} Y(t)=\xi+\int_{t}^{T}f(s,Y(s),Z(s))ds-\int_{t}^{T}Z(s)dB(s). \label{appen-eq-bsde \end{equation} \begin{theorem} \label{q-exp-th} Suppose that $(Y(\cdot),Z(\cdot))\in L_{\mathcal{F}}^{\infty }(0,T;\mathbb{R})\times L_{\mathcal{F}}^{2,2}([0,T];\mathbb{R}^{d})$ solves BSDE (\ref{appen-eq-bsde}), and \newline$\left\vert f(s,Y(s),Z(s))\right\vert \leq C_{1}\left( 1+|Z(s)|^{2}\right) $, where $C_{1}$ is a constant. Then there exists a $\delta>0$ such that for each $\lambda_{1}<\delta$, \begin{equation} \mathbb{E}\left[ \left. \exp\left( \lambda_{1}\int_{t}^{T}|Z(s)|^{2 ds\right) \right\vert \mathcal{F}_{t}\right] \leq C\text{ and \mathbb{E}\left[ \sup\limits_{0\leq t\leq T}\exp\left( \lambda_{1}\in _{0}^{t}Z(s)dB(s)\right) \right] \leq C\text{,} \label{appwn-eq-11 \end{equation} where $C$ depends on $C_{1}$, $\delta$, $T$ and $||\xi||_{\infty}$. Moreover, for each $\lambda_{2}>0$, \begin{equation} \mathbb{E}\left[ \exp\left( \lambda_{2}\int_{0}^{T}|Z(s)|ds\right) \right] <\infty. \label{appwn-eq-12 \end{equation} \end{theorem} \begin{proof} In the following, $C$ is a constant, and will be changed from line to line. Define \[ u(x)=\frac{1}{4C_{1}^{2}}\left( e^{2C_{1}x}-1-2C_{1}x\right) . \] It is easy to check that $x\rightarrow u(|x|)$ is $C^{2}$. Applying It\^{o}'s formula to $u(|Y(s)|)$, we get \ \begin{array} [c]{lll u\left( \left\vert Y(t)\right\vert \right) & = & u\left( \left\vert Y(T)\right\vert \right) +\int_{t}^{T}\{u^{^{\prime} (|Y(s)|)sgn(Y(s))f(s,Y(s),Z(s))-\frac{1}{2}u^{^{\prime\prime} (|Y(s)|)|Z(s)|^{2}\}ds\\ & & -\int_{t}^{T}u^{^{\prime}}(|Y(s)|)sgn(Y(s))Z(s)dB(s)\\ & \leq & u\left( \left\vert Y(T)\right\vert \right) +\int_{t}^{T \{u^{^{\prime}}(|Y(s)|)C_{1}(1+|Z(s)|^{2})-\frac{1}{2}u^{^{\prime\prime }(|Y(s)|)|Z(s)|^{2}\}ds\\ & & -\int_{t}^{T}u^{^{\prime}}(|Y(s)|)sgn(Y(s))Z(s)dB(s)\\ & \leq & C-\frac{1}{2}\int_{t}^{T}|Z(s)|^{2}ds-\int_{t}^{T}u^{^{\prime }(|Y(s)|)sgn(Y(s))Z(s)dB(s). \end{array} \] From the above inequality, we can deduce that, for each stopping time $\tau\leq T$ \[ \mathbb{E}\left[ \left. 
\int_{\tau}^{T}|Z\left( s\right) |^{2}ds\right\vert \mathcal{F}_{\tau}\right] \leq C,
\]
where $C$ is independent of $\tau$. Thus $(\int_{0}^{t}Z\left( s\right) dB(s))_{t\in\lbrack0,T]}$ is a BMO martingale. By the Nirenberg inequality (see Theorem 10.43 in \cite{HWY}), we obtain (\ref{appwn-eq-11}).

For each given $\lambda_{2}>0$, choose $\delta_{0}>0$ such that $\lambda_{2}\sqrt{\delta_{0}}<\delta$. Thus, by (\ref{appwn-eq-11}), we get
\[
\begin{array}
[c]{l}
\mathbb{E}\left[ \exp\left( \lambda_{2}\int_{0}^{T}|Z(s)|ds\right) \right] \\
=\mathbb{E}\left[ \exp\left( \lambda_{2}\int_{0}^{T-\delta_{0}}|Z(s)|ds\right) \exp\left( \lambda_{2}\int_{T-\delta_{0}}^{T}|Z(s)|ds\right) \right] \\
=\mathbb{E}\left[ \exp\left( \lambda_{2}\int_{0}^{T-\delta_{0}}|Z(s)|ds\right) \mathbb{E}\left[ \left. \exp\left( \lambda_{2}\int_{T-\delta_{0}}^{T}|Z(s)|ds\right) \right\vert \mathcal{F}_{T-\delta_{0}}\right] \right] \\
\leq\mathbb{E}\left[ \exp\left( \lambda_{2}\int_{0}^{T-\delta_{0}}|Z(s)|ds\right) \mathbb{E}\left[ \left. \exp\left( \lambda_{2}\sqrt{\delta_{0}}\left[ \int_{T-\delta_{0}}^{T}|Z(s)|^{2}ds\right]^{\frac{1}{2}}\right) \right\vert \mathcal{F}_{T-\delta_{0}}\right] \right] \\
\leq\mathbb{E}\left[ \exp\left( \lambda_{2}\int_{0}^{T-\delta_{0}}|Z(s)|ds\right) \mathbb{E}\left[ \left. e^{\lambda_{2}\sqrt{\delta_{0}}}+\exp\left( \lambda_{2}\sqrt{\delta_{0}}\int_{T-\delta_{0}}^{T}|Z(s)|^{2}ds\right) I_{\{\int_{T-\delta_{0}}^{T}|Z(s)|^{2}ds>1\}}\right\vert \mathcal{F}_{T-\delta_{0}}\right] \right] \\
\leq\left( e^{\delta}+C\right) \mathbb{E}\left[ \exp\left( \lambda_{2}\int_{0}^{T-\delta_{0}}|Z(s)|ds\right) \right] \\
\leq\left( e^{\delta}+C\right)^{[\frac{T}{\delta_{0}}]+1}<\infty.
\end{array}
\]
This completes the proof.
\end{proof}

\subsection{Solution to linear FBSDEs}

\label{sect-solu-linearfbsde}

Consider the following forward-backward stochastic differential equation
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dX(t)= & \left[ \alpha_{1}(t)X(t)+\beta_{1}(t)Y(t)+\gamma_{1}(t)Z(t)+L_{1}(t)\right] dt+\left[ \alpha_{2}(t)X(t)+\beta_{2}(t)Y(t)+\gamma_{2}(t)Z(t)+L_{2}(t)\right] dB(t),\\
dY(t)= & -\left[ \alpha_{3}(t)X(t)+\beta_{3}(t)Y(t)+\gamma_{3}(t)Z(t)+L_{3}(t)\right] dt+Z(t)dB(t),\\
X(0)= & x_{0},\ Y(T)=\kappa X(T),
\end{array}
\right. \label{appen-eq-xyz}
\end{equation}
where $\alpha_{i}(\cdot)$, $\beta_{i}(\cdot)$, $\gamma_{i}(\cdot)$, $i=1,2,3$, are bounded adapted processes, $L_{1}(\cdot)$, $L_{3}(\cdot)\in L_{\mathcal{F}}^{1,2}([0,T];\mathbb{R})$, $L_{2}(\cdot)\in L_{\mathcal{F}}^{2,2}([0,T];\mathbb{R})$ and $\kappa$ is an $\mathcal{F}_{T}$-measurable bounded random variable. Suppose that the solution to (\ref{appen-eq-xyz}) satisfies the relation
\[
Y(t)=p(t)X(t)+\varphi(t),
\]
where $p(t)$, $\varphi(t)$ satisfy
\begin{equation}
\left\{
\begin{array}
[c]{rl}
dp(t)= & -A(t)dt+q(t)dB(t),\\
d\varphi(t)= & -C(t)dt+\nu(t)dB(t),\\
p(T)= & \kappa,\ \varphi(T)=0,
\end{array}
\right. \label{appen-eq-pq}
\end{equation}
and $A(t)$, $C(t)$ are processes to be determined later. Applying It\^{o}'s formula to $p(t)X(t)+\varphi(t)$, we have
\begin{equation}
\begin{array}
[c]{ll}
d\left( p(t)X(t)+\varphi(t)\right) & =\left\{ p(t)\left[ \alpha_{1}(t)X(t)+\beta_{1}(t)Y(t)+\gamma_{1}(t)Z(t)+L_{1}(t)\right] -A(t)X(t)\right. \\
& \ \ \left. +q(t)\left[ \alpha_{2}(t)X(t)+\beta_{2}(t)Y(t)+\gamma_{2}(t)Z(t)+L_{2}(t)\right] -C(t)\right\} dt\\
& \ \ +\left\{ p(t)\left[ \alpha_{2}(t)X(t)+\beta_{2}(t)Y(t)+\gamma_{2}(t)Z(t)+L_{2}(t)\right] +q(t)X(t)+\nu(t)\right\} dB(t).
\end{array}
\end{equation}
Comparing with the equation satisfied by $Y(t)$, one has
\begin{equation}
Z(t)=p(t)\left[ \alpha_{2}(t)X(t)+\beta_{2}(t)Y(t)+\gamma_{2}(t)Z(t)+L_{2}(t)\right] +q(t)X(t)+\nu(t), \label{appen-eq-z}
\end{equation}
\begin{equation}
\begin{array}
[c]{ll}
-\left[ \alpha_{3}(t)X(t)+\beta_{3}(t)Y(t)+\gamma_{3}(t)Z(t)+L_{3}(t)\right] & =p(t)\left[ \alpha_{1}(t)X(t)+\beta_{1}(t)Y(t)+\gamma_{1}(t)Z(t)+L_{1}(t)\right] -A(t)X(t)\\
& \ \ +q(t)\left[ \alpha_{2}(t)X(t)+\beta_{2}(t)Y(t)+\gamma_{2}(t)Z(t)+L_{2}(t)\right] -C(t).
\end{array}
\label{appen-relation-generator}
\end{equation}
From equation \eqref{appen-eq-z}, we obtain the form of $Z(t)$:
\begin{align*}
Z(t) & =\left( 1-p(t)\gamma_{2}(t)\right)^{-1}\left\{ p(t)\left[ \alpha_{2}(t)X(t)+\beta_{2}(t)Y(t)+L_{2}(t)\right] +q(t)X(t)+\nu(t)\right\} \\
& =\left( 1-p(t)\gamma_{2}(t)\right)^{-1}\left[ \left( \alpha_{2}(t)p(t)+\beta_{2}(t)p(t)^{2}+q(t)\right) X(t)+p(t)\beta_{2}(t)\varphi(t)+p(t)L_{2}(t)+\nu(t)\right].
\end{align*}
From equation \eqref{appen-relation-generator}, utilizing the form of $Y(t)$ and $Z(t)$, we derive that
\begin{equation}
\begin{array}
[c]{rl}
A(t)= & \alpha_{3}(t)+\beta_{3}(t)p(t)+\gamma_{3}(t)K_{1}(t)+\alpha_{1}(t)p(t)+\beta_{1}(t)p^{2}(t)\\
& +\gamma_{1}(t)K_{1}(t)p(t)+\alpha_{2}(t)q(t)+\beta_{2}(t)p(t)q(t)+\gamma_{2}(t)K_{1}(t)q(t),
\end{array}
\label{new-eq-111}
\end{equation}
where
\[
K_{1}(t)=(1-p(t)\gamma_{2}(t))^{-1}\left[ \alpha_{2}(t)p(t)+\beta_{2}(t)p^{2}(t)+q(t)\right],
\]
and
\begin{equation}
\begin{array}
[c]{rl}
C(t)= & (\beta_{1}(t)p(t)+\beta_{2}(t)q(t)+\beta_{3}(t))\varphi(t)+p(t)L_{1}(t)+q(t)L_{2}(t)+L_{3}(t)\\
& +(\gamma_{1}(t)p(t)+\gamma_{2}(t)q(t)+\gamma_{3}(t))(1-p(t)\gamma_{2}(t))^{-1}(\beta_{2}(t)p(t)\varphi(t)+p(t)L_{2}(t)+\nu(t)).
\end{array}
\label{new-eq-112}
\end{equation}

\begin{theorem}
Assume \eqref{appen-eq-pq} has a solution $(p(\cdot),q(\cdot))$, $(\varphi(\cdot),\nu(\cdot))\in L_{\mathcal{F}}^{2}(\Omega;C([0,T],\mathbb{R}))\times L_{\mathcal{F}}^{2,2}([0,T];\mathbb{R})$, and $(\tilde{X}(\cdot),\tilde{Y}(\cdot),\tilde{Z}(\cdot))\in L_{\mathcal{F}}^{2}(\Omega;C([0,T],\mathbb{R}))\times L_{\mathcal{F}}^{2}(\Omega;C([0,T],\mathbb{R}))\times L_{\mathcal{F}}^{2,2}([0,T];\mathbb{R})$, where $\tilde{X}(\cdot)$ is the solution to
\begin{equation}
\left\{
\begin{array}
[c]{rl}
d\tilde{X}(t)= & \left\{ \alpha_{1}(t)\tilde{X}(t)+\beta_{1}(t)p(t)\tilde{X}(t)+\beta_{1}(t)\varphi(t)+L_{1}(t)+\gamma_{1}(t)\left( 1-p(t)\gamma_{2}(t)\right)^{-1}\right. \\
& \ \ \left. \cdot\left[ \left( \alpha_{2}(t)p(t)+\beta_{2}(t)p(t)^{2}+q(t)\right) \tilde{X}(t)+p(t)\beta_{2}(t)\varphi(t)+p(t)L_{2}(t)+\nu(t)\right] \right\} dt\\
& +\left\{ \alpha_{2}(t)\tilde{X}(t)+\beta_{2}(t)p(t)\tilde{X}(t)+\beta_{2}(t)\varphi(t)+L_{2}(t)+\gamma_{2}(t)\left( 1-p(t)\gamma_{2}(t)\right)^{-1}\right. \\
& \ \ \left. \cdot\left[ \left( \alpha_{2}(t)p(t)+\beta_{2}(t)p(t)^{2}+q(t)\right) \tilde{X}(t)+p(t)\beta_{2}(t)\varphi(t)+p(t)L_{2}(t)+\nu(t)\right] \right\} dB(t),\\
\tilde{X}(0)= & x_{0},
\end{array}
\right.
\end{equation}
and
\begin{equation}
\begin{array}
[c]{rl}
\tilde{Y}(t)= & p(t)\tilde{X}(t)+\varphi(t),\\
\tilde{Z}(t)= & \left( 1-p(t)\gamma_{2}(t)\right)^{-1}\left[ \left( \alpha_{2}(t)p(t)+\beta_{2}(t)p(t)^{2}+q(t)\right) \tilde{X}(t)\right. \\
& \ \ \left. +p(t)\beta_{2}(t)\varphi(t)+p(t)L_{2}(t)+\nu(t)\right].
\end{array}
\end{equation}
Then $(\tilde{X}(\cdot),\tilde{Y}(\cdot),\tilde{Z}(\cdot))$ solves \eqref{appen-eq-xyz}. Moreover, if
\[
p(t)L_{1}(t)+q(t)L_{2}(t)+L_{3}(t)+(\gamma_{1}(t)p(t)+\gamma_{2}(t)q(t)+\gamma_{3}(t))(1-p(t)\gamma_{2}(t))^{-1}p(t)L_{2}(t)=0,
\]
then $(\varphi(\cdot),\nu(\cdot))=(0,0)$ is a solution to \eqref{appen-eq-pq} and $(\tilde{X}(t),p(t)\tilde{X}(t),\left( 1-p(t)\gamma_{2}(t)\right)^{-1}\left[ \left( \alpha_{2}(t)p(t)+\beta_{2}(t)p(t)^{2}+q(t)\right) \tilde{X}(t)+p(t)L_{2}(t)\right] )_{t\in\lbrack0,T]}$ solves \eqref{appen-eq-xyz}.
\label{appen-th-linear-fbsde}
\end{theorem}

\begin{proof}
The results follow by applying It\^{o}'s formula.
\end{proof}

Now we study the uniqueness of $(p(\cdot),q(\cdot))$ in (\ref{appen-eq-pq}). It is important to note that the form of $(p(\cdot),q(\cdot))$ in (\ref{appen-eq-pq}) does not depend on $x_{0}$, $L_{1}$, $L_{2}$, $L_{3}$. So we set $x_{0}=1$, $L_{1}=L_{2}=L_{3}=0$ in what follows. In this case $(\varphi(\cdot),\nu(\cdot))=(0,0)$, as in the above theorem.

\begin{theorem}
\label{unique-pq} Assume (\ref{appen-eq-xyz}) has a unique solution $(X(\cdot),Y(\cdot),Z(\cdot))\in L_{\mathcal{F}}^{2}(\Omega;C([0,T],\mathbb{R}))\times L_{\mathcal{F}}^{2}(\Omega;C([0,T],\mathbb{R}))\times L_{\mathcal{F}}^{2,2}([0,T];\mathbb{R})$. Then the following statements hold.
\begin{description}
\item[(i)] If $\gamma_{2}(\cdot)$ is small enough and $(p_{i}(\cdot),q_{i}(\cdot))\in L_{\mathcal{F}}^{\infty}(0,T;\mathbb{R})\times L_{\mathcal{F}}^{2,4}([0,T];\mathbb{R})$, $i=1,2$, are two solutions to (\ref{appen-eq-pq}), then $(p_{1}(\cdot),q_{1}(\cdot))=(p_{2}(\cdot),q_{2}(\cdot))$;

\item[(ii)] If $(p_{i}(\cdot),q_{i}(\cdot))\in L_{\mathcal{F}}^{\infty}(0,T;\mathbb{R})\times L_{\mathcal{F}}^{\infty}(0,T;\mathbb{R})$, $i=1,2$, are two solutions to (\ref{appen-eq-pq}), then $(p_{1}(\cdot),q_{1}(\cdot))=(p_{2}(\cdot),q_{2}(\cdot))$.
\end{description}
\end{theorem}

\begin{proof}
We only prove (i); the proof of (ii) is similar. Consider the following SDEs
\begin{equation}
\left\{
\begin{array}
[c]{ll}
d\tilde{X}_{i}(t)= & \left[ \alpha_{1}(t)+\beta_{1}(t)p_{i}(t)+\gamma_{1}(t)K_{i,1}(t)\right] \tilde{X}_{i}(t)dt\\
& +\left[ \alpha_{2}(t)+\beta_{2}(t)p_{i}(t)+\gamma_{2}(t)K_{i,1}(t)\right] \tilde{X}_{i}(t)dB(t),\\
\tilde{X}_{i}(0)= & 1,\text{ }i=1,2.
\end{array}
\right.
\end{equation}
Then $\tilde{X}_{i}(\cdot)$ has the explicit form
\[
\tilde{X}_{i}(t)=\exp\left\{ \int_{0}^{t}\left( N_{i,1}(s)-\frac{1}{2}(N_{i,2}(s))^{2}\right) ds+\int_{0}^{t}N_{i,2}(s)dB(s)\right\},
\]
where
\[
\begin{array}
[c]{rl}
K_{i,1}(s) & =(1-p_{i}(s)\gamma_{2}(s))^{-1}\left[ \alpha_{2}(s)p_{i}(s)+\beta_{2}(s)p_{i}^{2}(s)+q_{i}(s)\right],\\
N_{i,1}(s) & =\alpha_{1}(s)+\beta_{1}(s)p_{i}(s)+\gamma_{1}(s)K_{i,1}(s),\\
N_{i,2}(s) & =\alpha_{2}(s)+\beta_{2}(s)p_{i}(s)+\gamma_{2}(s)K_{i,1}(s).
\end{array}
\]
By Theorem \ref{q-exp-th}, it is easy to check that when $\gamma_{2}(\cdot)$ is small enough,
\[
\mathbb{E}\left[ \underset{0\leq t\leq T}{\sup}\left\vert \tilde{X}_{i}(t)\right\vert^{4}\right] <\infty.
\]
Thus we have $\left( K_{i,1}(t)\tilde{X}_{i}(t)\right)_{t\in\lbrack0,T]}\in L_{\mathcal{F}}^{2,2}([0,T];\mathbb{R})$. Since (\ref{appen-eq-xyz}) has a unique solution, by Theorem \ref{appen-th-linear-fbsde}, we get, for $t\in\lbrack0,T]$,
\[
(\tilde{X}_{1}(t),p_{1}(t)\tilde{X}_{1}(t),K_{1,1}(t)\tilde{X}_{1}(t))=(\tilde{X}_{2}(t),p_{2}(t)\tilde{X}_{2}(t),K_{2,1}(t)\tilde{X}_{2}(t)).
\]
Note that $\tilde{X}_{1}(\cdot)>0$; therefore $(p_{1}(\cdot),q_{1}(\cdot))=(p_{2}(\cdot),q_{2}(\cdot))$.
\end{proof} \section*{Acknowledgement} We are highly grateful to Dr. Falei Wang for his helpful suggestions and comments.
\section{Introduction}

Models with extra dimensions \cite{ArkaniHamed:1998rs} provide one of the most promising solutions to the hierarchy problem, namely, the huge difference between the scale of gravity $M_P = G_N^{-1/2} \sim 10^{19}$ GeV and the electroweak (EW) scale $M_{EW} \sim 100$ GeV. In these models $M_P$ appears as an effective scale related to the fundamental one, $M_D \sim$ 1--10 TeV, by the volume of the compact space or by an exponential warp factor. The difference between $M_{EW}$ and $M_D$ would then just define a {\it little} hierarchy problem that should be easier to solve consistently with all collider data. The phenomenological consequences of this framework are quite {\it intriguing}: the fundamental scale would be at accessible energies, and processes with $\sqrt{s} \gg M_D$ would probe a {\it transplanckian} regime where gravity is expected to dominate over the other interactions \cite{Emparan:2001ce}. The spin two of the graviton then implies gravitational cross sections that grow fast with $\sqrt{s}$ and become long-distance interactions. As a consequence, quantum gravity or other short distance effects become irrelevant, as they are screened by black hole (BH) horizons \cite{Giddings:2001bu}. One of the scenarios in which TeV gravity effects could play a significant role is provided by cosmic-ray physics. The Earth is constantly hit by a flux of protons with energies of up to $10^{11}$ GeV and, associated with that flux, a flux of cosmogenic neutrinos (still unobserved) with a typical energy peaked around $10^{10}$ GeV is also expected \cite{Semikoz:2003wv}. These are energies much larger than the ones to be explored at the LHC, where there would be no evidence for gravitational interactions if the scale $M_D$ is above a few TeV. In addition, notice that the new physics should be more relevant in collisions of particles with a small SM cross section, as is expected for the interaction of a proton with a dark matter particle $\chi$ if it is taken to be a weakly interacting massive particle (WIMP). We will discuss here the interaction of ultra-high-energy cosmic rays (UHECR) with dark matter particles $\chi$ in our galactic halo. No property of $\chi$ other than its mass, which defines the center-of-mass energy $\sqrt{s}=\sqrt{2m_\chi E}$ of the collision, is going to be significant for the present analysis. We also consider collisions of UHECR with other cosmic rays. These are arguably the most energetic elementary processes known to occur in nature at the present time, and they would produce mini BHs significantly colder and longer-lived than the ones usually considered in the literature. We will focus just on BH production and evaporation, this analysis being a necessary first step towards understanding the full effects of TeV gravity on UHECR phenomenology.

\section{Cosmogenic black hole production}

BH production processes are the most widely and thoroughly discussed aspect of TeV-gravity phenomenology \cite{D'Eath:1992hb}, and they have been considered both in the LHC \cite{Dimopoulos:2001en} and in the UHECR context \cite{Feng:2001ib}. Here we will assume a scenario with $n$ flat extra dimensions of common length where gravity is free to propagate, while matter fields are trapped on a (non-compact) four-dimensional brane.
We will use the basic estimate that the collision of two pointlike particles at impact parameters smaller than the Schwarzschild radius $r_H$ of the system leads to the production of a BH whose mass is given by $M=\sqrt{s}$. The BHs that we are considering ($M < 10^{11}$ GeV) will be described by a ($4+n$)-dimensional metric (they are smaller than the size of the compact space), their radius being
\begin{equation}
r_{H}=\left({2^n\pi^{n-3\over 2}\Gamma\left({n+3\over 2}\right) \over n+2}\right)^{1\over n+1} \left({M\over M_D}\right)^{1\over n+1} {1\over M_D}\;.
\label{rh}
\end{equation}
For two pointlike particles, the cross section $\sigma (s)= \sigma_{\nu \nu}= \sigma_{\nu \chi}$ to produce a BH is then written as
\begin{equation}
\sigma = \pi \, r_H^2 \; .
\end{equation}
If the collision involves non-elementary (at the scale $\mu=1/r_H$) protons, then their partonic structure has to be included in order to find the total cross section, as is usually done in analyses of BH production at the LHC \cite{Dimopoulos:2001en}. The p--$\chi$ (or p--$\nu$) cross section may therefore be written as
\begin{equation}
\sigma_{p\nu}(s)=\int_{M_D^2/s}^1 {\rm d}x \left(\sum_i f_i(x,\mu) \right) \hat \sigma(xs)\;.
\label{sigmapnu}
\end{equation}
This formula expresses the cross section as the sum of partial contributions $\hat \sigma(xs)$ to produce a BH of mass $M= \sqrt{xs}$ resulting from the collision of a parton $i$ that carries a fraction $x$ of momentum with a pointlike target. It is crucial to notice that the scale $\mu$ in the collision is fixed by the inverse Schwarzschild radius, rather than by the BH mass \cite{Giddings:2001bu,Emparan:2001kf}, since the scattering is probing a length scale that grows (not decreases!) with $s$. Actually, we expect that for large enough $s$ the length scale that we are probing exceeds the proton radius, and a pointlike behaviour of the proton will emerge. In contrast with QED scattering, here at lower energies ($\approx 10^3$ GeV) we can {\it see} the composite structure of the proton, while at higher energies ($\approx 10^9$ GeV) the proton will scatter coherently as a whole. Since Eq.~(\ref{sigmapnu}) does not reproduce this behaviour, it is necessary to include matching corrections between the two energy regions. The cross section in Eq.~(\ref{sigmapnu}) describes the low-energy regime, and it is dominated by the large number of partons of low $x$ that may produce a BH of mass near the threshold $M_D$. This scheme explains why $\sigma_{p\nu} > \sigma_{\nu\nu}$. When the cross section $\sigma_{p \chi}$ approaches the proton size ($\approx 20$ mbarn), the density of partons with enough energy to produce a BH is so large that the parton cross sections overlap, and the BHs produced are big enough to trap other {\it spectator} partons. This overlapping reduces the total cross section and increases the average mass of the produced BHs. In this regime $\sigma_{p\nu}$ is basically constant with $s$ until it matches the pointlike behaviour in $\sigma_{\nu\nu}$. A similar behaviour is also expected in $p$--$p$ collisions, where the partonic enhancement of the cross section is even more important at lower energies (in this regime $\sigma_{pp} > \sigma_{p\nu} > \sigma_{\nu\nu}$) and the intermediate regime of constant total cross section is reached at lower energies. The smooth transition between these regimes can be modelled numerically by discounting the contributions from spectator partons, and is summarized in Fig.~\ref{fig1}.
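As a rough numerical illustration of Eq.~(\ref{rh}) and of the pointlike cross section $\sigma=\pi r_H^2$ (a minimal sketch written for this discussion, {\it not} the code used to produce Fig.~\ref{fig1}; the unit conversion $1~{\rm GeV}^{-2}\simeq 0.3894$ mbarn and the sample values of $E$, $m_\chi$, $n$ and $M_D$ are choices made here for definiteness):
\begin{verbatim}
import math

def r_horizon(M, MD, n):
    """(4+n)-dimensional Schwarzschild radius of Eq. (rh), in GeV^-1."""
    pref = (2.0**n * math.pi**((n - 3) / 2.0)
            * math.gamma((n + 3) / 2.0) / (n + 2.0))**(1.0 / (n + 1))
    return pref * (M / MD)**(1.0 / (n + 1)) / MD

def sigma_pointlike_mb(M, MD, n):
    """Geometric BH production cross section pi*r_H^2, in mbarn."""
    GEV2_TO_MB = 0.3894            # (hbar c)^2 in GeV^2 mbarn
    return math.pi * r_horizon(M, MD, n)**2 * GEV2_TO_MB

E, m_chi = 1.0e10, 100.0           # sample UHECR energy and WIMP mass [GeV]
MD, n = 1000.0, 2                  # sample fundamental scale [GeV], extra dimensions
M = math.sqrt(2.0 * m_chi * E)     # BH mass M = sqrt(s)
print("sqrt(s) = %.1e GeV, sigma = %.1e mbarn" % (M, sigma_pointlike_mb(M, MD, n)))
\end{verbatim}
The parton-level formula of Eq.~(\ref{sigmapnu}) proceeds along the same lines once $\hat\sigma(xs)$ is folded with a PDF set.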
In Fig.~\ref{fig1} we plot the BH production cross section for different kinds of particles\footnote{We assumed the CTEQ6M set of PDFs \cite{Pumplin:2002vw}.}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{fig1.eps}
\caption{ Cross sections to produce a BH for $n=2$ and $M_D=1$ TeV. \label{fig1}}
\end{center}
\end{figure}
We will analyze two processes that can lead to BH production (see \cite{Draggiotis:2008jz} for the proton and cosmogenic neutrino fluxes and for the dark matter density). {\it (i)} A cosmic ray of energy $E$ colliding with a dark matter particle $\chi$ at rest in the frame of reference of our galaxy. The average number of BHs produced per unit time and volume depends on the density $\rho_\chi$, the cross section $\sigma_{i\chi} $ and the differential flux of cosmic rays ${\frac{d\phi_i}{dE}}$ (with $i={\rm p},\nu$):
\begin{equation}
{{\rm d}^2N\over {\rm d}t\; {\rm d} V}= 4 \pi \int {\rm d}E \;\sigma_{i \chi} (s)\; \frac{{\rm d}\phi_i}{{\rm d}E}\; \rho_\chi \;.
\label{eqDMCR}
\end{equation}
Here the center-of-mass energy $\sqrt{s}=\sqrt{2 m_\chi E}$ can run from $M_D$ to $10^7$ GeV. {\it (ii)} A cosmic ray of energy $E_1$ colliding with a cosmic ray of energy $E_2$. In this case the center-of-mass energy depends upon the relative angle $\theta$ and is given by $\sqrt{s}=\sqrt{2 E_1 E_2(1-\cos{\theta})}$. The interaction rate per unit time and volume is expressed by:
\begin{equation}
{{\rm d}^2N\over {\rm d}t\; {\rm d} V}= 16 \pi^2 \int {\rm d}E_1\; {\rm d}E_2\; {\rm d}\cos\theta \;\sigma_{ij} (s)\;\sin\theta/2\; \frac{{\rm d}\phi_i}{{\rm d}E_1} \; \frac{{\rm d}\phi_j}{{\rm d}E_2}\;.
\label{eqCRCR}
\end{equation}
These processes generate BH masses $M=\sqrt{s}$ that can reach $\sim 10^{12}$ GeV. In Fig.~\ref{fig2} we plot the production rate of BHs from both types of collisions.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{fig2.eps}
\caption{ Spectrum of BHs produced by collisions of cosmic rays (protons and cosmogenic neutrinos) when $n=2$, $M_D=1$ TeV, $m_\chi =100$ GeV. \label{fig2}}
\end{center}
\end{figure}

\section{Black hole evaporation}

To understand what kind of signal one could observe from such an event, it is necessary to estimate how the BH evolves after its production. It is expected that initially the BH undergoes a quick {\it balding} phase, in which it loses its gauge hair and asymmetries. Then it experiences a {\it spin down} phase, where its angular momentum is radiated away while it loses just a small fraction of its mass \cite{Winstanley:2007hj}. Finally, during most of its life the BH is in a Schwarzschild phase, losing mass through spherically symmetric Hawking radiation \cite{Hawking:1974rv}. The spectrum is, to a first approximation, that of a black body of temperature \cite{Myers:1986un}
\begin{equation}
T={n+1\over 4\pi r_{H}}\;.
\label{eqT}
\end{equation}
This means that the scale of emission is fixed by the inverse Schwarzschild radius. This formula has important corrections arising from the gravitational barrier that the particles have to cross once emitted. These corrections are usually expressed in terms of the so-called {\it greybody factors}, effective emission areas $\sigma_n^{(i)} (\omega)$ that depend on the dimensionality $(4+n)$ of the space-time, the spin of the particle emitted, and its energy $\omega$ \cite{Page:1976df}.
These factors give corrections of order 1 to the black-body emission rates for all particle species except for the graviton, which can receive a stronger correction depending upon the number of extra dimensions. We will adopt here the numerical greybody factors given in \cite{Harris:2003eg}. The number of particles of the species $i$ emitted with ($4+n$)-dimensional momenta between $k$ and $k+{\rm d}k$ in a time interval ${\rm d} t$ can be written as
\begin{equation}
{\rm d}N_i(\omega) = g_i \, \sigma_n^{(i)}(\omega) \left( \frac{1} {\exp{(\omega/T_{BH})} \pm 1} \right) \frac{{\rm d}^{n+3} k}{(2\pi)^{n+3}} \, {\rm d}t\;,
\label{eqnumber}
\end{equation}
while the radiated energy is given by
\begin{equation}
{\rm d}E_i(\omega) = g_i \, \sigma_n^{(i)}(\omega) \left( \frac{\omega} {\exp{(\omega/T_{BH})} \pm 1} \right) \frac{{\rm d}^{n+3} k}{(2\pi)^{n+3}} \, {\rm d}t\;.
\label{eqenergy}
\end{equation}
Some remarks are in order here. {\it (i)} On dimensional grounds $\dot{E}\sim A_{2+n} T^{4+n}\sim 1/r_{H}^2\sim T^2$ and $\dot{N} \sim T$, so each degree of freedom should contribute equally (up to order one geometric and greybody factors) to the total emission, independently of its bulk or brane localization \cite{Emparan:2000rs}. {\it (ii)} We are considering BH temperatures above $\Lambda_{QCD}$ ($M \lsim 10^{11}$ GeV leads to $T \gsim 1$ GeV), so QCD degrees of freedom (quarks and gluons) are also radiated and dominate the total emission. Once the instant spectrum is known, we integrate it over time to get the BH lifetime. On dimensional grounds $\tau \sim M_D^{-1} \left( M/M_D \right)^{\frac{n+3}{n+1}}$, although the dependence upon the number of radiated degrees of freedom at different temperatures may be significant. In Fig.~\ref{fig3} we plot the correlation between lifetime, mass and initial temperature for BHs of mass ranging from $10$ TeV to $10^{11}$ GeV, $n=2,6$ and $M_D=1$ TeV; it shows that lifetimes go from a maximum of $10^{-14}$ s for the heaviest BHs to a minimum around $10^{-26}$ s for LHC-like BHs.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{fig3.eps}
\caption{ Correlation between mass, temperature, and lifetime of a BH for $M_D=1$ TeV and $n=2,6$. \label{fig3}}
\end{center}
\end{figure}

\section{Thermal properties of the radiation}

An important issue about the evolution of the radiation from a BH is the debated question of its thermalization. It has been argued \cite{Heckler:1995qq} that the emitted particles should produce a thick shell of almost-thermal plasma of QED (QCD) particles, usually called photosphere (chromosphere). This would occur for BHs above a critical temperature $T_{QED}$ ($T_{QCD}$), and would change the average energy of the emitted particles from $E_{av} \sim T$ to $E_{av} \approx m_e$ (or $E_{av} \approx \Lambda_{QCD}$). The argument leading to these shells is based on the average number $\Gamma $ of interactions of the particles exiting the BH, so $\Gamma\gg 1$ should suffice to confirm the presence of the plasma shell. Initial estimates \cite{Heckler:1995qq} used the expression
\begin{equation}
\Gamma = \langle \sigma v \rho \rangle \,
\label{gamma}
\end{equation}
which describes the case of particles scattering against a fixed target. Recently, however, it has been noticed that the kinematic differences between that case and the case of particles exiting radially from a BH are so significant that they lead to a complete suppression of the interaction rate \cite{MacGibbon:2007yq}.
We will show, following the approach of Carr, MacGibbon and Page, that their arguments\footnote{These arguments are supported by the numerical analysis in \cite{Alig:2006up}.} (formulated for ordinary BHs) hold also for BHs in TeV-gravity models. The first kinematic effect is due to {\it causality}, and depends on the fact that the scattered particles do not come from infinity (as in a regular collision); they are created at definite points of space-time. This introduces a {\it minimal} separation between successively emitted particles, both in time and in space, and induces via Heisenberg's uncertainty principle a UV cutoff in the scale of the exchanged momenta. The scattering cross section is reduced because not all energies can be exchanged. In particular, in QED (QCD) Bremsstrahlung and pair production the momenta dominating the collision are of order $Q^2 \sim m_e^2$ (or right above $\sim \Lambda_{QCD}^2$). If the particle wave functions do not overlap, and their minimum distance $\nu^{-1}$ (in units of their Compton wavelength) is larger than the dominant inverse momenta, then the process will be suppressed. Checking the parameter $\nu$ is then sufficient to decide whether the emitted particles are effectively connected, and eventually to exclude thermal interactions. In \cite{Draggiotis:2008jz} we have shown that this argument excludes the presence of a photosphere for any number of extra dimensions, but not of a chromosphere when $n>2$. The second suppression effect is based on the fact that the interaction between two particles is not instantaneous; it takes a finite time to complete. It is easy to see that by the time the interaction is completed the particles are already far away from each other, so that they cannot interact again. In particular, after completing a QCD interaction, partons will be at distances larger than $\Lambda_{QCD}^{-1}$, where QCD is already ineffective. To fully understand this point, one first has to notice that the interaction between particles moving radially in the same direction (within the {\it exclusion cone}) is negligible, as the density in such a region is low. Also, particles moving radially keep moving radially, as the average angular deviation due to Bremsstrahlung-like processes is small. This implies that the distance of a particle to the particles outside the exclusion cone will always increase (they {\it never} approach each other), and once the particle reaches a radius $r_{brem}$ this distance becomes bigger than $m_e^{-1}$ (or $\Lambda^{-1}_{QCD}$) and the particle is no longer able to interact. If the BH temperature is above $T \sim \Lambda_{QCD}$, as is the case for the BHs under study here, it is easy to see that after the particle has completed one interaction it will have already crossed $r_{brem}$.

\section{Stable particle spectrum}

Once the greybody spectrum of emission has been established, it is necessary to study how it evolves at astrophysical distances: unstable particles will decay, and colored particles (which dominate the spectrum) will fragment into hadrons and then shower into stable species. We present our results following the approach of \cite{MacGibbon:1990zk}, who first studied this issue for primordial BHs. The main difference with their analysis is that while the authors in \cite{MacGibbon:1990zk} compute the stationary spectrum at a given $T$ (which only changes on astrophysical time scales), we need here to evaluate the spectrum integrated over the whole (very short) BH lifetime.
In any case, our results will be analogous, since the temperature of a BH varies little during most of its lifetime. Of course, our framework also deals with a different scale of gravity $M_D \ll M_{Pl}$ and extra dimensions where gravitons propagate. This implies emission into the bulk and different greybody factors for all the species. Notice finally that the spectrum that we are discussing is in the BH rest frame; it is {\it not} the one to be observed at the Earth, as the BHs produced in cosmic-ray collisions will be highly boosted. We will assume that the evolution of the species $i$ emitted by a BH at rest coincides with the one in $e^+ e^- \rightarrow i \bar i$ in the center-of-mass frame, so we will use the Monte Carlo jet code HERWIG6 \cite{Corcella:2000bw} to evolve the greybody spectrum described before. Namely, we compute the convolution
\begin{equation}
\frac{{\rm d}N_j}{{\rm d}t {\rm d}\omega} = \sum_i \int {\rm d}\omega^\prime \left( \frac{{\rm d}N_i}{{\rm d}t {\rm d}\omega^\prime} (\omega^\prime)\right) \, \left( \frac{{\rm d} g_{ji}}{{\rm d}\omega} (\omega, \omega^\prime) \right) \,,
\end{equation}
to obtain the number ${\rm d}N_j$ of stable particles of species $j$ with energy between $\omega$ and $\omega+{\rm d}\omega$ emitted in a time ${\rm d}t$. The first term in parentheses stands for the greybody emission spectrum of particle species $i$, while the second encodes the probability for a particle of species $i$ and energy $\omega^\prime$ to give a particle of species $j$ with energy $\omega$. For a given $T$, this has been implemented via Monte Carlo, including all particles of mass $m_i < T$ (leptons, quarks and gauge bosons, neglecting the Higgs and the dark matter particle), and has resulted in a final spectrum of neutrinos, electrons, photons and protons. The spectrum includes the same number of particles and antiparticles (they are generated at the same rate), and the three families of neutrinos (their flavor oscillates at astrophysical length scales). In Fig.~\ref{fig4} we plot the spectrum at fixed temperature $T=10$ GeV, whereas in Fig.~\ref{fig5} we give the complete spectrum for initial masses of $M=10^4$ GeV and $10^{10}$ GeV and $n=2$.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.5\linewidth]{fig4a.eps} &
\includegraphics[width=0.5\linewidth]{fig4b.eps}
\end{tabular}
\end{center}
\caption{Instant spectrum of stable particles and bulk gravitons (dashed) emitted by a BH of temperature $T=10$ GeV for $M_D=1$ TeV and $n=2$ (left) and $n=6$ (right). \label{fig4}}
\end{figure}
The results can be summarized as follows. {\it (i)} The main product of the emission is constituted by particles resulting from the showering of QCD species; this explains the primary peak at $\approx 0.2$ GeV observed in the spectrum. It is also possible to detect at $E \sim T$ the direct greybody emission as a secondary peak. Gravitons {\it decouple}, since they are not produced by the decay of unstable species. {\it (ii)} The relative emissivities of Standard Model particles are approximately $43\%$ neutrinos, $28\%$ electrons, $16\%$ photons and $13\%$ protons. This is only mildly sensitive to the BH mass or $M_D$, as it is determined by the showering of colored particles. {\it (iii)} Emission into the bulk goes from $0.4\%$ of the total number of particles ($16\%$ of the total energy) emitted if $n=6$ and $T= 1.2$ GeV to $0.02\%$ of the particles ($1.4\%$ of the energy) emitted for $n=2$ and $T=120$ GeV.
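The convolution defining ${\rm d}N_j/({\rm d}t\,{\rm d}\omega)$ above is straightforward to discretize once the greybody spectra and the fragmentation probabilities are tabulated. The following sketch is only meant to fix the bookkeeping (it is our own illustration, not the pipeline used here; the inputs \texttt{omega}, \texttt{greybody} and \texttt{frag} stand for tabulated NumPy arrays, e.g.\ from \cite{Harris:2003eg} and from HERWIG runs, and are not provided):
\begin{verbatim}
import numpy as np  # inputs below are assumed to be NumPy arrays

def stable_spectrum(omega, greybody, frag):
    """Riemann-sum version of
       dN_j/(dt dw) = sum_i int dw' (dN_i/(dt dw'))(w') (dg_ji/dw)(w, w').

    omega    : (K,) uniform energy grid
    greybody : dict  i -> (K,) instant emission spectrum dN_i/(dt dw')
    frag     : dict (j, i) -> (K, K) matrix of dg_ji/dw, indexed [w, w']
    returns  : dict  j -> (K,) spectrum of the stable species j
    """
    dw = omega[1] - omega[0]
    out = {}
    for (j, i), g in frag.items():
        out[j] = out.get(j, 0.0) + g @ (greybody[i] * dw)
    return out
\end{verbatim}
Summing the result over the temperatures swept by the BH during its (very short) lifetime gives total spectra of the type shown in Fig.~\ref{fig5}.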
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.5\linewidth]{fig5a.eps} &
\includegraphics[width=0.5\linewidth]{fig5b.eps}
\end{tabular}
\end{center}
\caption{Total spectrum of stable particles and bulk gravitons (dashed) produced by a BH of mass $M=10$ TeV (left) and $M=10^{10}$ GeV (right) for $M_D=1$ TeV and $n=2$. \label{fig5}}
\end{figure}

\section{Summary and outlook}

The head-to-head collision of two cosmic rays provides center-of-mass energies of up to $10^{11}$ GeV. In models with extra dimensions and a fundamental scale of gravity at the TeV scale, such a collision should result in the formation of a mini BH. Its evaporation and showering into stable particles could provide an observable signal. We have estimated the production rate of these BHs (Fig.~\ref{fig2}) via collisions of two cosmic rays or, more frequently, in the collision of a cosmic ray and a dark matter particle. In particular, it seems worthwhile to analyze the possibility that {\it (i)} extragalactic cosmic rays crossing the galactic DM halo produce a flux of secondary particles with a characteristic shape that depends strongly on galactic latitude; {\it (ii)} a fraction of the flux of cosmic rays with energies up to $\sim 10^8$ GeV trapped in our galaxy by $\mu$G magnetic fields can be processed by TeV interactions into a secondary flux peaked at smaller energies. Notice that the physics proposed in this talk is expected to become relevant only at center-of-mass energies above $\sqrt{s}\sim \sqrt{2 E m_\chi} \sim 1$ TeV, {\it i.e.}, at cosmic-ray energies around the cosmic-ray {\it knee}. These considerations will be worked out in \cite{Masip:2008mk}, where the additional effects of gravitational elastic interactions will also be included. Here we have discussed the properties of BHs with masses between $10^4$ and $10^{11}$ GeV. Such objects have a proper lifetime between $10^{-14}$ and $10^{-26}$ s (Fig.~\ref{fig3}), and their disintegration products are mainly determined by the fragmentation of QCD species produced via Hawking radiation. Interactions among the emitted particles are not able to produce a thermal shell of radiation, so the fundamental species exit the BH with basically the black-body spectrum described by Hawking. The final spectrum of stable particles at large distances, however, is peaked around $\Lambda_{QCD}$, and exhibits features that depend only weakly upon the number of extra dimensions or the BH mass. Standard Model modes consist of approximately $43\%$ neutrinos, $28\%$ electrons, $16\%$ photons and $13\%$ protons. The gravitons produced are a fraction that goes from $0.4\%$ of the total number of particles ($16\%$ of the energy) for $M=10^{10}$ GeV and $n = 6$ to $0.02\%$ ($1.4\%$ of the energy) for $M=10$ TeV and $n=2$. This work is a preliminary analysis, with results that can be useful for future searches for effects of TeV gravity on cosmic-ray physics.
\section{Introduction} In this article we study improvement of stability effects in Runge approximation originating from the interplay of geometry and an increasing frequency parameter for the acoustic Helmholtz equation. These effects were first observed in \cite{HI04} and have subsequently been the object of intensive study, both in the context of unique continuation \cite{IK11,I19,I07,IS10,IS07} and with regard to their effects on inverse problems \cite{CI16,I18,I11,EI18,EI20,IN14,IN14a,ILW26,I13,IW20,BLT10, NUW13,BLZ20,INUW14}. Due to the notorious instability in many inverse problems, these improved stability estimates are of great significance, both from a theoretical and practical point of view \cite{BNO19}. We refer to Section ~\ref{sec:literature} for an (incomplete) overview of the history and background of this type of result. Building on the observation that Runge approximation properties are qualitatively and quantitatively dual to unique continuation \cite{RS17a} (see also \cite{L88,Z07} and the references therein for analogous results in the control theory community), in this article we seek to study the effects of geometry and increasing frequency $k$ for acoustic Helmholtz equations
\begin{equation} \label{eq:generaleq} \big(\Delta+k^2q(x)+V(x)\big)u(x) = 0 \end{equation}
on the associated Runge approximation properties under suitable conditions on the geometry and the potentials (see Section ~\ref{sec:setting}). For the special case of the (pure) Helmholtz equation ($q=1$ and $V=0$) quantitative Runge approximation had been deduced in \cite[Lemma 2.1]{EPS19} in the context of approximation properties for dispersive equations. Relying on the duality between Runge approximation and unique continuation, we here prove quantitative unique continuation properties for the acoustic Helmholtz equation \eqref{eq:generaleq} in suitable, adapted function spaces, carefully tracking the parameter dependence. As two of our main results, we deduce Runge approximation properties with \emph{exponential} $k$-dependence \emph{without} geometric assumptions (Theorems ~\ref{thm:Rungeboundary} and ~\ref{thm:Rungeinterior}) and \emph{improved, polynomial} behaviour under \emph{convexity} conditions on the domain (Theorem ~\ref{thm:Rungeinterior_improv}). In order to illustrate the importance, robustness and usefulness of these estimates, we consider the partial data inverse problem for the equation \eqref{eq:generaleq}. Using the systematic duality strategy from \cite{RS17b}, we prove improved stability results for this nonlinear inverse problem (Proposition ~\ref{prop:stability}). This generalizes the results from \cite{KU19} to the case of acoustic Helmholtz equations. \subsection{Setting} \label{sec:setting} In the following, we outline the precise geometric and functional assumptions under which our results are valid. Here, as a model system, we focus on generalizations of Helmholtz-type equations with homogeneous Dirichlet conditions under the assumption that the increasing parameter is chosen at some distance from the spectrum.
More precisely, for $n\ge2$, $k\ge 1$ and $\Omega\subseteq \R^n$ a bounded, connected, open set with Lipschitz boundary we consider the acoustic Helmholtz equation \eqref{eq:generaleq} in $\Omega$, where
\begin{enumerate}[leftmargin=*]
\item[(i)] \label{assV}$V\in L^\infty(\Omega)$ and zero is not a Dirichlet eigenvalue of $\D + V$ in $\Omega$,
\item[(ii)] \label{assq}$q\in C^1(\Omega)$ and $q$ is strictly positive, that is $0<\kappa^{-1}<q<\kappa$ for some $\kappa>1$.
\end{enumerate}
Let us comment on these conditions: The assumption that $V\in L^{\infty}(\Omega)$ ensures that the potential $V$ is subcritical in terms of scaling and that it is well within the regime in which unique continuation results are available. The second condition in \textnormal{(\hyperref[assV]{i})} is a (technical) solvability condition. By domain perturbation arguments, this is generically satisfied \cite{Kato}. The conditions formulated in \textnormal{(\hyperref[assq]{ii})} on $q$ are twofold: The sign condition and bounds on $q$ ensure that the acoustic equation \eqref{eq:generaleq} is of Helmholtz type; $q=1$ corresponds to the Helmholtz equation with potential. The regularity condition $q\in C^1(\Omega)$ will be used in order to treat the term $k^2 q(x)$ as part of the principal symbol of the operator. In Sections ~\ref{sec:qUCP_without_geo} and ~\ref{sec:Runge} this will be a consequence of introducing an auxiliary new dimension in order to reduce the parameter-dependent equation to a non-parameter-dependent equation with $C^1$ principal symbol. In Section ~\ref{sec:improved_results_convexity} we will directly treat the parameter-dependent term $k^2 q$ as part of the principal symbol of the Carleman estimate, for which we again require some regularity on $q$. In order to do so, we will complement the condition \textnormal{(\hyperref[assq]{ii})} with an additional radial monotonicity assumption (see \textnormal{(\hyperref[assq2]{ii'})} in Section ~\ref{sec:improved_results_convexity_a}). We consider \eqref{eq:generaleq} with homogeneous Dirichlet boundary conditions. In order to avoid solvability issues or a priori estimates without control on $k$, we make an additional technical assumption: We suppose that the real parameter $k\geq 1$ is chosen such that zero is not a Dirichlet eigenvalue of the operator in $\Omega$. More precisely, let $\Sigma_{V,q}$ denote the set of inverses of the eigenvalues of the operator $T:=(-\D-V)^{-1} M_q$, where $M_q$ denotes the multiplication operator with $q$. We will then assume that
\begin{enumerate}[leftmargin=.75cm]
\item[(a1)] \label{assSpec} $\dist(k^2,\Sigma_{V,q})>ck^{2-n}$, for some $c\ll 1$.
\end{enumerate}
We remark that generically this does not pose major restrictions, as it is always possible to find arbitrarily large values of $k$ such that the condition \textnormal{(\hyperref[assSpec]{a1})} is fulfilled: Indeed, the operator $T$ is a classical pseudodifferential operator of order $-2$. We denote the spectrum of $T^{-1}$ by $\Sigma_{V,q}=\{\lambda_n\}_{n\in\N}$. By Weyl's law \cite{HormIV, S87}
$$\#\{\lambda_n\leq E\}\sim C E^{\frac n 2} \mbox{ as } E\to\infty.$$
Thus, on average, the distance between consecutive eigenvalues is $C'E^{1-\frac n 2}$ as $E\to\infty$. In this case, \textnormal{(\hyperref[assSpec]{a1})} ensures that it is possible to find admissible frequencies in essentially all frequency ranges with $c\in(0,\frac {C'}{3})$. We remark that other boundary conditions would also have been feasible.
An alternative, natural condition would have been impedance conditions (including the potential $q$) which in the limit of growing domains would have approximated a Sommerfeld type radiation condition. This would also have had the advantage of avoiding the eigenvalue assumptions and discussions. Since we are working in finite domains, for simplicity, we here restrict our attention to the Dirichlet setting. \subsection{Runge approximation without convexity conditions} With the conditions stated above, we first address Runge approximation results \emph{without} additional convexity assumptions on the domain. Our main results then provide quantitative Runge approximation results with a quantified dependence on the parameter $k$. Since these properties are dual to unique continuation properties for which \emph{exponential} dependences on $k$ are unavoidable without additional geometric assumptions \cite{BM20}, the dependences on $k$ are expected to be exponential. For the case of approximation in the domain in which a solution is prescribed we thus obtain the following result: \begin{thm}\label{thm:Rungeboundary} Let $\Omega_1, \Omega_2 \subset\R^n$ be open, bounded, connected Lipschitz domains such that $\Omega_1\Subset \Omega_2$ and such that $\Omega_2\backslash\overline\Omega_1$ is connected. Let $\Gamma$ be a non-empty, open subset of $\p \Omega_2$. Let $V$ and $q$ satisfy \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq]{ii})} in $\Omega_2$. There exist constants $\mu>1$, $s\geq n+6$ and $C>1$ depending on $n, \Omega_2, \Omega_1, \|V\|_{L^\infty(\Omega_2)}, \kappa$ and $\|q\|_{C^1(\Omega_2)}$ such that for any solution $v \in H^1(\Omega_1)$ of \begin{align*} (\Delta+k^2q+V)v = 0 \quad \mbox{in} \quad\Omega_1, \end{align*} with $k\geq 1$ satisfying \textnormal{(\hyperref[assSpec]{a1})}, and any $\epsilon>0$, there exists a solution $u$ to \begin{align*} (\Delta+k^2q+V)u = 0 \quad \mbox{in} \quad\Omega_2, \end{align*} with $ u|_{\p\Omega_2}\in \widetilde{H}^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)$ such that \begin{align}\label{eq:Rungeboundary2} \|u-v\|_{L^2(\Omega_1)} \leq \epsilon \|v\|_{H^1(\Omega_1)}, \qquad \|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\partial \Omega_2)} \leq C e^{Ck^s \epsilon^{-\mu}} \|v\|_{L^2(\Omega_1)}. \end{align} \end{thm} In Section ~\ref{sec:optimality} we show that in the full data case and for $q=1$, up to the precise values of $\mu$, $s$ and $C$, the bound in $\epsilon$ is optimal, see \cite[Section 5]{RS17a} for the analogous result for the Laplacian without a large parameter. If $v$ is a solution in a slightly larger domain than the one for which we seek to find a good approximation, the exponential dependence in $\epsilon$ changes to a polynomial dependence while the $k$-dependence remains exponential: \begin{thm}\label{thm:Rungeinterior} Let $\Omega_1, \Omega_2, \Gamma, V$ and $q$ be as in Theorem ~\ref{thm:Rungeboundary}. Further, let $\tilde\Omega_1$ be a bounded, Lipschitz domain such that $\Omega_1 \Subset \tilde\Omega_1\Subset\Omega_2$. 
There exist constants $\nu>1$ and $C>1$ depending on $n, \Omega_2, \Omega_1, \|V\|_{L^\infty(\Omega_2)}, \kappa$ and $\|q\|_{C^1(\Omega_2)}$ such that for any solution $\tilde v \in H^1(\tilde \Omega_1)$ of
\begin{align*}
(\Delta+k^2q+V) \tilde v = 0\; \mbox{ in } \tilde\Omega_1,
\end{align*}
with $k\geq 1$ satisfying \textnormal{(\hyperref[assSpec]{a1})}, there exists a solution $u$ to
\begin{align*}
(\Delta+k^2q+V)u = 0 \; \mbox{ in } \Omega_2
\end{align*}
with $ u|_{\p\Omega_2}\in \widetilde{H}^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)$ such that
\begin{align}\label{eq:Rungeinterior2}
\|u-\tilde v\|_{L^2(\Omega_1)} \leq \epsilon \|\tilde v\|_{H^1(\tilde\Omega_1)}, \qquad \|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)}\leq C e^{Ck}\epsilon^{-\nu}\|\tilde v\|_{L^2(\Omega_1)}.
\end{align}
\end{thm}

As an application of these results we prove a partial data uniqueness result for the Calder\'on problem with stability improvement in $k$ under a priori assumptions on the potential in a neighbourhood of the boundary. This generalizes the results from \cite{KU19} to acoustic equations. In particular, it thus combines ideas from \cite{AU04,RS17} with the observations from \cite{HI04} (see also the references above and below). We further refer to \cite{I11} for similar results for different ranges of $k$.

\begin{prop}\label{prop:stability}
Let $n\geq 3$, let $\Omega\subset\R^n$ be a bounded, connected, smooth open set and let $\Gamma\subset\p\Omega$ be a nonempty open subset. Let $\Omega'\Subset\Omega$ be an open, Lipschitz subset such that $\Omega\backslash\Omega'$ is connected. Let $q_1, q_2, V_1, V_2$ verify \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq]{ii})} in $\Omega$ and be such that
\begin{align*}
\|q_j\|_{L^\infty(\Omega)}+\|V_j\|_{L^\infty(\Omega)}\leq B,\\
q_1=q_2,\; V_1=V_2 \mbox{ in } \Omega\backslash\Omega'.
\end{align*}
Then, there exists a constant $C>1$ depending on $n, \Omega, \Omega',\Gamma $, $\|q_j\|_{C^1(\Omega)}$ and $B$ such that for all $k\geq 1$ such that $\dist(k^2, \Sigma_{V_j,q_j}) > ck^{2-n}$ and $\delta=\|\Lambda_{V_1,q_1}^\Gamma(k)-\Lambda_{V_2,q_2}^\Gamma(k)\|_{\tilde H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)\to H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)}<1$, we have
\begin{align*}
\|k^2(q_2-q_1)+(V_2-V_1)\|_{H^{-1}(\Omega)} \leq C\Bigg(e^{Ck^{n+3}} \delta+\frac{1}{\left(k+|\log\delta|^{\frac{1}{n+3}}\right)^{\frac 2 n}}\Bigg).
\end{align*}
\end{prop}

\begin{rmk}
\label{rmk:two_meas} We remark that if in Proposition ~\ref{prop:stability} two measurements for different values of $k$ are available, it is possible to provide stability both for $q_j$ and $V_j$ separately.
\end{rmk}

Earlier improvement of stability results had been obtained for the corresponding \emph{full data} inverse problems in \cite{NUW13, INUW14}. While in the case of the Helmholtz equation ($q=1$) with potential, the $k$-dependence of the Lipschitz contribution was proved to be \emph{polynomial} instead of exponential, already in the full data acoustic case ($q\neq 1$) exponential $k$-dependences emerged.
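To orient the reader on how the two contributions in Proposition ~\ref{prop:stability} interact (this is merely a reading of the displayed estimate, not an additional result; $C$ denotes the constant from Proposition ~\ref{prop:stability}), note that for $\delta\in(0,1)$ and any admissible frequency with
\begin{align*}
1\leq k \leq \left(\frac{|\log\delta|}{2C}\right)^{\frac{1}{n+3}}
\end{align*}
one has $e^{Ck^{n+3}}\delta\leq \delta^{\text{\normalfont\sfrac{1}{2}}}$, so that the bound turns into
\begin{align*}
\|k^2(q_2-q_1)+(V_2-V_1)\|_{H^{-1}(\Omega)} \leq C\Bigg(\delta^{\text{\normalfont\sfrac{1}{2}}}+\frac{1}{\left(k+|\log\delta|^{\frac{1}{n+3}}\right)^{\frac 2 n}}\Bigg),
\end{align*}
i.e.\ within this frequency range the logarithmic modulus of continuity is attenuated as $k$ increases, in line with the improvement of stability phenomenon discussed above.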
\subsection{Improvements of the Runge approximation results in convex geometries}
\label{sec:improved_results_convexity_a}
Last but not least, in our final section, in line with the observations from \cite{HI04} (and the literature building on this), which show that in convex domain geometries the $k$-dependences in quantitative unique continuation improve in the presence of a large parameter, we also obtain improved Runge approximation results in the interior. Here we impose an additional monotonicity condition on the potential $q$, which is well known in the context of the study of embedded eigenvalues \cite{KT06}:
\begin{itemize}
\item[(ii')] \label{assq2} $q\in C^1(\Omega)$, $\kappa^{-1}\leq q\leq \kappa$ for some $\kappa>1$ and $\nabla q\cdot x\geq 0$.
\end{itemize}
With this assumption we deduce improved (in $k$) dependences in the Runge approximation results in the interior for convex geometries:

\begin{thm}\label{thm:Rungeinterior_improv}
Let $V$ and $q$ be as in \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq2]{ii'})} in $\Omega_2=B_2 \backslash \overline{B_{\text{\normalfont\sfrac{1}{2}}}}$ and let $\Omega_1=B_1 \backslash \overline{B_{\text{\normalfont\sfrac{1}{2}}}}$ and $\tilde \Omega_1=B_{1+\delta}\backslash \overline{B_{\text{\normalfont\sfrac{1}{2}}}}$, for some $\delta\in(0,1)$. There exist parameters $\nu>1$, $s>3$ and a constant $C>1$ depending on $n, \|V\|_{L^\infty(\Omega_2)}, \kappa$ and $\|q\|_{C^1(\Omega_2)}$ such that for any solution $\tilde v \in H^1(\tilde \Omega_1)$ of
\begin{align*}
(\Delta+k^2q+V) \tilde v &= 0 \; \mbox{ in } \tilde \Omega_1,\\
\tilde v&=0 \; \mbox{ on } \p B_{\text{\normalfont\sfrac{1}{2}}},
\end{align*}
with $k\geq 1$ satisfying \textnormal{(\hyperref[assSpec]{a1})}, there exists a solution $u$ to
\begin{align*}
(\Delta+k^2q+V)u &= 0 \; \mbox{ in } \Omega_2,\\
u&=0 \; \mbox{ on } \p B_{\text{\normalfont\sfrac{1}{2}}},
\end{align*}
such that
\begin{align*}
\|u-\tilde v\|_{L^2(\Omega_1)} \leq \epsilon \|\tilde v\|_{H^1(\tilde \Omega_1)}, \qquad \|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\p B_2)}\leq C k^s \epsilon^{-\nu}\| \tilde v\|_{L^2(\Omega_1)}.
\end{align*}
\end{thm}

These results will be derived by duality from improved unique continuation estimates for the dual equations. The choice of the specific geometry here should be viewed as a sample result which -- based on the known unique continuation properties -- is expected to extend to a larger class of convex domains.

\subsection{Connection with the literature}
\label{sec:literature}
In order to put our results into a proper context, we recall some of the earlier literature on improved stability properties. Due to their ability to stabilize notoriously ill-posed inverse problems, the stabilization effects at high frequency, which were first established in \cite{HI04} in the context of improved (interior) unique continuation properties, were subsequently extended to improved unique continuation properties in various other geometric settings and other model equations \cite{IK11,I19,I07,IS10,IS07,CI16,I18,I11,EI18,EI20,IN14,IN14a,ILW26,I13,IW20,BLT10, NUW13,BLZ20,INUW14}. The optimality of exponential $k$-dependences in unique continuation (in the form of three balls inequalities) was further established recently in \cite{BM20} for the exact Helmholtz equation (which can be studied by investigating the explicit behaviour of Bessel functions).
Earlier, in \cite{J60}, the role of the geometry had already been highlighted for the closely connected wave equation, see also \cite{KRS20} for a systematic, microlocal argument for this. Relying on these ideas further stability improvement results were also obtained for nonlinear inverse problems such as various variants of the Calder\'on problem. In this context, full data results were established in \cite{INUW14} for the Helmholtz equation with potential and in \cite{NUW13} for the acoustic equation. In recent work \cite{KU19}, this was extended to a partial data result for the Helmholtz equation with potential and impedance boundary conditions. Optimality of the improved stability estimates was discussed in a series of articles \cite{Isaev13,IN12,Isaev13a}. \subsection{Outline of the remaining article} The remaining article is organised as follows: After briefly recalling some auxiliary results in Section \ref{sec:not}, we turn to the quantitative unique continuation results without geometric assumptions in Section \ref{sec:qUCP_without_geo}. In Section \ref{sec:Runge} a duality argument is used to transfer these into quantitative Runge approximation results. As an application we prove partial data stability for the Calder\'on problem for the acoustic equation with a priori information in a boundary layer in Section \ref{sec:stability}. Finally, in Section \ref{sec:improved_results_convexity} we discuss improvements arising from convex geometries. \subsection{Notation and preliminaries} \label{sec:not} Before turning to the proofs of our main results we recall a number of auxiliary arguments and summarize our notation. \subsubsection{On spectral estimates} The following result contains a global estimate for the homogeneous Dirichlet problem depending on $\dist(k^2,\Sigma_{V,q})$. It generalizes \cite[Proposition 2]{Beretta} and together with the assumption \textnormal{(\hyperref[assSpec]{a1})} allows us to invert the operator under consideration. \begin{lem}\label{lem:apriori} Let $\Omega\subset\R^n$ be a bounded Lipschitz domain. Let $V$ and $q$ be as in \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq]{ii})} in $\Omega$. Then there is a discrete set $\Sigma_{V,q}\subset\R$ such that for every $k^2\notin\Sigma_{V,q}$, $k\geq 1$, there exists a unique solution $u\in H^1(\Omega)$ of \begin{align} \label{eq:beretta} \begin{split} \big(\Delta+k^2q+V\big)u &= f \;\mbox{ in } \Omega,\\ u &= 0 \; \mbox{ on }\p\Omega, \end{split} \end{align} where $f\in L^2(\Omega)$. In addition, there is a constant $C>0$ depending on $\Omega$, $\kappa$ and $\|V\|_{L^\infty(\Omega)}$ such that \begin{align*} \|u\|_{H^1(\Omega)}\leq C\left(1+\frac{k^3}{\dist(k^2,\Sigma_{V,q})}\right)\|f\|_{L^2(\Omega)}. \end{align*} \end{lem} \begin{proof} Recalling that zero is not a Dirichlet eigenvalue of $\D + V$ in $\Omega$, we consider the operator $T=(-\Delta-V)^{-1}M_q:H_0^1(\Omega)\to H_0^1(\Omega)$, where $M_q$ denotes the multiplication operator $M_qu=qu$. Then $T$ has eigenvalues $\alpha_n \in \R$ with $\alpha_n\to 0$ as $n\to\infty$. Let $\Sigma_{V,q}=\{\lambda_n=\alpha_n^{-1}\}_{n\in\N}$ and let $\{e_n\}_{n\in\N}$ be an orthonormal basis of $L^2(\Omega)$ with $Te_n=\alpha_n e_n$. Notice that \eqref{eq:beretta} is equivalent to $(I-k^2 T)u=(-\Delta-V)^{-1}f=:h$, $u\in H^1_0(\Omega)$. If $k^2\notin\Sigma_{V,q}$, by the Fredholm alternative, there is a unique solution $u$ to this problem. 
Moreover, we can write $u=\sum_{n\in\N} u_n e_n$, where $u_n=( u,e_n )_{L^2(\Omega)}$ is given by
\begin{align*}
(1-k^2\alpha_n)u_n=h_n= ( h,e_n )_{L^2(\Omega)} \quad \mbox{ i.e.} \quad u_n= \frac{1}{1-{k^2}{\alpha_n}}h_n=\left(1+\frac{k^2}{\lambda_n-k^2}\right)h_n.
\end{align*}
Therefore,
\begin{align*}
\|u\|_{L^2(\Omega)}^2=\sum_{n\in\N}|u_n|^2 \leq \left(1+\frac{k^2}{\dist(k^2,\Sigma_{V,q})}\right)^2\sum_{n\in\N}|h_n|^2 =\left(1+\frac{k^2}{\dist(k^2,\Sigma_{V,q})}\right)^2\|h\|^2_{L^2(\Omega)}.
\end{align*}
Taking into account that $\|h\|_{L^2(\Omega)}\leq C\|f\|_{L^2(\Omega)}$, we conclude
\begin{align*}
\|u\|_{L^2(\Omega)} \leq C\left(1+\frac{k^2}{\dist(k^2,\Sigma_{V,q})}\right)\|f\|_{L^2(\Omega)}.
\end{align*}
Finally, testing the equation with $u$, we obtain $\|\nabla u\|_{L^2(\Omega)}\leq C(1+k)\|u\|_{L^2(\Omega)}+\|f\|_{L^2(\Omega)}$, which in combination with the previous estimate yields the desired result.
\end{proof}

\subsubsection{Notation} For $s\in \R$, the whole space Sobolev spaces are denoted by
\begin{align*}
H^s(\R^n):=\big\{f\in \mathcal{S'}(\R^n): \|(1+|\cdot|^2)^{\frac s 2}\mathcal F f \|_{L^2(\R^n)} < \infty\big\},
\end{align*}
where
\begin{align*}
\mathcal F f(\xi)=\int_{\R^n} f(x)e^{-ix\cdot\xi}dx
\end{align*}
denotes the Fourier transform. Let $\Omega\subset \R^n$ be an open set; then we define
\begin{align*}
H^s(\Omega)&:=\big\{f|_\Omega: f\in H^s(\R^n)\big\}, \mbox{ equipped with the quotient topology},\\
\tilde H^s(\Omega)&:=\mbox{ closure of } C^\infty_c(\Omega) \mbox{ in } H^s(\R^n).
\end{align*}
For any $s\in\R$ these spaces satisfy
\begin{align*}
\big(H^s(\Omega)\big)^*=\tilde H^{-s}(\Omega), \ \ \big(\tilde H^s(\Omega)\big)^*= H^{-s}(\Omega).
\end{align*}
In addition, for $\Gamma\subset\p\Omega$, we set
\begin{align*}
\tilde H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)&:=\big\{ f\in H^{\text{\normalfont\sfrac{1}{2}}}(\p\Omega): \supp f\subseteq\Gamma\big\},
\end{align*}
which is a closed subspace of $H^{\text{\normalfont\sfrac{1}{2}}}(\p\Omega)$ and its dual space may be identified with $ H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)$. We denote by $(\cdot,\cdot)_{L^2(\Omega)}$ the inner product in $L^2(\Omega)$ and also use the abbreviation $(\cdot, \cdot)_{\partial \Omega}$ to denote $(\cdot, \cdot)_{L^2(\partial \Omega)}$. Furthermore, for $r>0$ and $x_0\in\R^n$, we denote the $n-$dimensional ball by $B_r(x_0)\subset\R^n$ and we define the cylindrical $(n+1)$-dimensional domain $Q_r(x_0):=B_r(x_0)\times(-r, r)\subset \R^{n+1}$. In addition, given an open set $\Omega\subset\R^n$, $B_r^+(x_0):=B_r(x_0)\cap\Omega$.

\section{Quantitative Unique Continuation}
\label{sec:qUCP_without_geo}
In this section we begin our analysis of the Runge approximation properties for the acoustic Helmholtz equation by proving quantitative unique continuation results without geometric assumptions on the underlying domains. Here we only assume the validity of the conditions \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq]{ii})} (not necessarily the condition \textnormal{(\hyperref[assSpec]{a1})}). The Runge approximation properties will be deduced as dual results in the next section. Since in this case exponential losses in $k$ are expected to be unavoidable (see \cite{BM20} for a proof of this in the closely related three balls inequalities), we do not prove these estimates by carefully tracking the $k$-dependence in the original equations but by embedding these equations into a family of elliptic equations without a large parameter but in an additional dimension.
This is achieved by passing from $u(x)$ to $\tilde{u}(x,t) = e^{kt}u(x)$. We emphasize that this is a well-known procedure (see for instance \cite{LL12} and the references therein). The corresponding unique continuation properties follow from well-known results in the literature (e.g. \cite{ARRV09}). The main novelty of this first part of our article -- in which we do not pose geometric assumptions on our domains -- lies in the quantitative (in $k$) Runge approximation results and the application of these to the stability of the partial data inverse problem, which are deduced in the next sections. Formulated for the original function $u$, the unique continuation properties read as follows:

\begin{prop}
\label{prop:quantitative_UCP}
Let $\Omega$ be an open, bounded, connected Lipschitz domain and let $\Gamma\subset \p\Omega$ be a non-empty relatively open subset. Let $V$ and $q$ be as in \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq]{ii})} in $\Omega$. Let $u\in H^1(\Omega)$ be a solution to
\begin{align}\label{eq:eqinOm}
\begin{split}
\Delta u+k^2qu+Vu&=0 \;\mbox{ in } \Omega,
\end{split}
\end{align}
and let $M$, $\eta$ be such that
\begin{align*}
\|u\|_{H^1(\Omega)} &\leq M,\\
\|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)}+\|\p_{\nu} u \|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)}&\leq \eta.
\end{align*}
Then there exist a parameter $\mu\in(0,1)$ and a constant $C>1$ depending on $n, \Omega, \Gamma, \|V\|_{L^\infty(\Omega)}, \kappa$ and $\|q\|_{C^1(\Omega)}$ such that
\begin{align}\label{eq:QUCPboundary}
\|u\|_{L^2(\Omega)}\leq C k\left|\log\left(\frac{\eta}{M+\eta}\right)\right|^{-\mu}({M+\eta}).
\end{align}
In addition, if $G$ is a bounded Lipschitz domain with $G\Subset \Omega$, then there exist a parameter $\nu\in(0,1)$ and a constant $C>1$ (depending on $n, \Omega, G, \Gamma, \|V\|_{L^\infty(\Omega)}, \kappa$ and $\|q\|_{C^1(\Omega)}$) such that
\begin{align}\label{eq:QUCPinterior}
\|u\|_{L^2(G)}\leq Ce^{C k}\left(\frac{\eta}{M+\eta}\right)^\nu (M+\eta).
\end{align}
\end{prop}

As an auxiliary ingredient, the proof of Proposition ~\ref{prop:quantitative_UCP} uses the following three-balls (boundary-bulk) inequalities derived from \cite{ARRV09}:

\begin{lem}\label{lem:3balls}
Under the same assumptions as in Proposition ~\ref{prop:quantitative_UCP}, there exist a parameter ${\alpha \in (0,1)}$ and a constant $C>1$ depending on $\Omega, \|V\|_{L^{\infty}(\Omega)}, \kappa$ and $\|q\|_{C^1(\Omega)}$ such that
\begin{align}\label{eq:3balls}
\|u\|_{L^2(B_{r}(x_0))} \leq C e^{C k} \|u\|_{L^2(B_{2r}(x_0))}^{1-\alpha} \|u\|_{L^2(B_{r/2}(x_0))}^{\alpha},
\end{align}
where $x_0 \in \Omega$ and $r>0$ are such that $B_{4r}(x_0)\subset \Omega $. \\
In addition, there exist a parameter $\alpha_0 \in (0,1)$ and a constant $C>1$ depending on $\Omega, \Gamma, \|V\|_{L^{\infty}(\Omega)}, \kappa$ and $\|q\|_{C^1(\Omega)}$ such that
\begin{align}\label{eq:3ballsboundary}
\|u\|_{L^2(B_{r}^+(x_0))} \leq C e^{C k} \big(\|u\|_{L^2(B_{2r}^+(x_0))}+\eta\big)^{1-\alpha_0} \eta^{\alpha_0},
\end{align}
where $x_0 \in \Gamma$ and $r>0$ are such that $B_{4r}(x_0)\cap (\partial \Omega \backslash\Gamma) = \emptyset$ and $B^+_r(x_0)=B_r(x_0)\cap\Omega$.
\end{lem}

In order to invoke the quantitative uniqueness results for elliptic equations without a large parameter, we pass to equations in an additional dimension, which is a well-known method in quantitative uniqueness for eigenfunctions \cite{LL12, L18}.
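For the convenience of the reader we record the elementary computation behind this reduction (a short addition for orientation; it is used implicitly in the proofs below): since $\p_t^2 e^{kt}=k^2 e^{kt}$, for $\tilde u(x,t):=e^{kt}u(x)$ one has
\begin{align*}
\big(\Delta+q(x)\p_t^2+V(x)\big)\tilde u(x,t)=e^{kt}\big(\Delta u(x)+k^2q(x)u(x)+V(x)u(x)\big),
\end{align*}
where $\Delta$ acts in the $x$-variables, so that $\tilde u$ solves the $(n+1)$-dimensional equation without large parameter whenever $u$ solves \eqref{eq:eqinOm}. The price to pay is the factor $e^{kt}$ itself: on a bounded $t$-interval it relates $\|\tilde u\|_{L^2}$ and $\|u\|_{L^2}$ only up to factors of size $e^{Ck}$, which is the source of the exponential losses in $k$ below.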
We remark that in the setting of Helmholtz equations where $q=1$, the $k$-dependence in this result is optimal as proved in \cite{BM20}.

\begin{proof}[Proof of Lemma ~\ref{lem:3balls}]
Let $\tilde \Omega :=\Omega\times(-d,d)$, with $d=\diam(\Omega)$. Let $\tilde u\in H^1(\tilde\Omega)$ be a solution to
\begin{align*}
\big(\D+q(x)\p_t^2+V(x)\big)\tilde u(x,t)=0 \quad \mbox{for}\quad (x,t)\in\tilde \Omega,
\end{align*}
where $\D$ denotes the Laplacian in $x$. Recalling the assumption \textnormal{(\hyperref[assq]{ii})}, we observe that the operator $\D + q(x)\p_t^2$ is elliptic with $C^1$ coefficients. Hence, the results from \cite{ARRV09} are applicable. Using the notation $Q_r(x_0)=B_r(x_0)\times(-r,r)$, by \cite[Theorem 1.10]{ARRV09}, there exist $C>1$ and $\alpha\in(0,1)$ depending on $\Omega, \|V\|_{L^\infty(\Omega)}, \kappa$ and $\|q\|_{C^1(\Omega)}$ such that
\begin{align}\label{eq:3cylinders}
\|\tilde u\|_{L^2(Q_r(x_0))} \leq C \|\tilde u\|_{L^2(Q_{2r}(x_0))}^{1-\alpha}\|\tilde u\|_{L^2(Q_{\sfrac{r}{2}}(x_0))}^\alpha,
\end{align}
where $B_{4r}(x_0)\subset\Omega$. We now consider the particular solution $\tilde u(x,t)=e^{kt}u(x)$, with $u(x)$ satisfying \eqref{eq:eqinOm}. Then \eqref{eq:3balls} follows from \eqref{eq:3cylinders} together with the observation that
\begin{align*}
\sqrt{2r}\, e^{-kd} \|u\|_{L^2(B_r(x_0))}\leq \|\tilde u\|_{L^2(Q_r(x_0))} \leq \sqrt{2r}\, e^{kd} \|u\|_{L^2(B_r(x_0))}.
\end{align*}
Inserting this into \eqref{eq:3cylinders} concludes the proof of \eqref{eq:3balls}. In order to obtain \eqref{eq:3ballsboundary}, we observe that similarly, by \cite[Theorem 1.7]{ARRV09}, there exist $C>1$ and $\alpha_0\in(0,1)$ depending on $\Omega, \Gamma, \|V\|_{L^\infty(\Omega)}, \kappa$ and $\|q\|_{C^1(\Omega)}$ such that
\begin{align*}
\|\tilde u\|_{L^2(Q_r(x_0)\cap \tilde\Omega)} \leq C \big(\|\tilde u\|_{L^2(Q_{2r}(x_0) \cap \tilde{\Omega})}+\tilde\eta\big)^{1-\alpha_0} \tilde\eta^{\alpha_0}.
\end{align*}
Here $\tilde \Gamma=\Gamma\times(-d,d)$ and
\begin{align*}
\|\tilde u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\tilde\Gamma)}+\|\p_{\nu} \tilde u \|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\tilde\Gamma)}\leq\tilde\eta.
\end{align*}
The requirement $\dist(Q_r(x_0)\cap \tilde\Omega, \p\tilde\Omega\backslash\tilde \Gamma)>0$ is satisfied since $B_{4r}(x_0)\cap (\partial \Omega \backslash\Gamma) = \emptyset$. Notice that on the lateral boundary $\tilde{\Gamma}$ the normal derivative does not have any contribution in the $t$ direction. Therefore, using the definition of the weak form of $\p_{\nu}\tilde{u}$ in terms of the bilinear form associated with \eqref{eq:eqinOm}, choosing $\tilde u(x,t)=e^{kt}u(x)$ and $\tilde\eta=Ce^{Ck}\eta$, the estimate \eqref{eq:3ballsboundary} follows as above.
\end{proof}

With Lemma ~\ref{lem:3balls} available, we next address the proof of Proposition ~\ref{prop:quantitative_UCP}.

\begin{proof}[Proof of Proposition ~\ref{prop:quantitative_UCP}]
Let us define
\begin{align}\label{eq:sets}
\begin{split}
W_\epsilon&:=\{x\in \Omega: \dist(x, \p\Omega)<\epsilon\},\\
\Omega_\epsilon&:=\{x\in \Omega: \dist(x, \p\Omega)\geq\epsilon\},
\end{split}
\end{align}
for $\epsilon\in (0, \epsilon_0)$, for some $\epsilon_0<1$ such that $\Omega_{\epsilon_0}$ is connected. We argue in three steps, estimating $u$ separately on $W_\epsilon $ and on $\Omega_\epsilon$ and combining these bounds by means of a final optimization step (in $\epsilon$).
\textit{Step 1: Estimate on $W_\epsilon$.} By the H\"older and Sobolev inequalities we have \begin{align}\label{eq:QUCstep1} \|u\|_{L^2(W_\epsilon)} \leq C\epsilon^{\frac{1}{p}}\|u\|_{L^q(W_\epsilon)} \leq C \epsilon^{\frac{1}{p}}\|u\|_{H^1(\Omega)} \leq C \epsilon^{\frac{1}{p}}M, \end{align} with $\frac{1}{p}+\frac{1}{q}=\frac{1}{2}$ and the constant $C>0$ depending on $\Omega$. \textit{Step 2: Estimate on $\Omega_\epsilon$.} We use Lemma ~\ref{lem:3balls} to propagate the smallness of $\eta$ to $\|u\|_{L^2(\Omega_\epsilon)}$. Firstly, we transport the information from the boundary to the interior of $\Omega_\epsilon$. Let $x_0 \in \Gamma$ and $r_0>0$ such that $B_{4r_0}(x_0)\cap (\p\Omega\backslash\Gamma)=\emptyset.$ Then by \eqref{eq:3ballsboundary} it holds \begin{align*} \|u\|_{L^2(B_{r_0}^+(x_0))}\leq C e^{Ck} (M+\eta)^{1-\alpha_0} \eta^{\alpha_0}. \end{align*} Once we have reached the interior, we iterate \eqref{eq:3balls} along a chain of balls which cover $\Omega_\epsilon$ and such that $B_{4r}(x)\subset \Omega$. This implies that it is necessary to iterate \eqref{eq:3balls} roughly $N\sim N_0-C\log \epsilon$ times, where $N_0$ and $C$ depend on $\Omega$. Therefore, we obtain \begin{align}\label{eq:QUCstep2} \|u\|_{L^2(\Omega_\epsilon)} \leq C e^{\frac{C}{1-\alpha}k}(M+\eta)^{1-\alpha_0\alpha^N} \eta^{\alpha_0\alpha^N}. \end{align} \textit{Step 3: Optimization.} Combining \eqref{eq:QUCstep1} and \eqref{eq:QUCstep2}, we obtain \begin{align*} \|u\|_{L^2(\Omega)} \leq C\left(\epsilon^{\frac{1}{p}}+e^{Ck}\left(\frac{\eta}{M+\eta}\right)^{C_1\epsilon^{C_2}}\right)(M+\eta), \end{align*} where the constants depend on $\Omega, \Gamma, \|V\|_{L^\infty(\Omega)}, \kappa$ and $\|q\|_{C^1(\Omega)}$. Abbreviating \begin{align}\label{eq:renaming} \tilde{\epsilon}:= \epsilon^{C_2}, \quad \tilde{\eta}:= \left(\frac{\eta}{\eta + M} \right)^{C_1}, \quad \gamma:= \frac{1}{C_2 p}, \end{align} we thus seek to optimize the expression \begin{align*} F(\tilde{\epsilon}, \tilde{\eta}):= \tilde{\epsilon}^{\gamma} + e^{Ck} \tilde{\eta}^{\tilde{\epsilon}} \end{align*} by choosing $\tilde{\epsilon} = \tilde{\epsilon}(\tilde{\eta})>0$ appropriately. Setting $\tilde{\epsilon}:= \frac{1}{(-\log\tilde{\eta})^{\beta}} + \frac{C k}{(-\log\tilde{\eta})}>0 $ for some $\beta \in (0,1)$, we obtain \begin{align*} |F(\tilde{\epsilon}, \tilde{\eta}) | \leq \left( \frac{1}{(-\log\tilde{\eta})^{\beta}} + \frac{k}{(-\log\tilde{\eta})} \right)^{\gamma} + e^{-|\log\tilde{\eta}|^{1-\beta}} \leq \frac{C k^{\gamma}}{|\log\tilde{\eta}|^{\beta \gamma}}+\frac{1}{|\log\tilde{\eta}|^{1-\beta}}. \end{align*} By \eqref{eq:renaming} we have $\gamma<1$ provided $p>2$ in \eqref{eq:QUCstep1} is chosen big enough. Then, for $k\geq 1$, in particular $k^\gamma<k$. Choosing $\beta=\frac{1}{1+\gamma}$ we infer \eqref{eq:QUCPboundary} with $\mu=\beta\gamma<1$. The bound \eqref{eq:QUCPinterior} follows directly from Step 2 for a suitable choice of $\epsilon$ with $\nu=\alpha_0\alpha^{N{(\epsilon)}}$. \end{proof} \section{Proof of the Runge Approximation Theorems ~\ref{thm:Rungeboundary} and ~\ref{thm:Rungeinterior}} \label{sec:Runge} This section is devoted to the proofs of the (in $k$) quantitative Runge approximation results of Theorems ~\ref{thm:Rungeboundary} and ~\ref{thm:Rungeinterior}. This relies on duality arguments and the quantitative unique continuation results from the previous section. 
In addition to the assumptions \textnormal{(\hyperref[assV]{i})} and \textnormal{(\hyperref[assq]{ii})}, we will from now on also assume the condition \textnormal{(\hyperref[assSpec]{a1})} in $\Omega_2$ throughout this section in order to avoid solvability issues. \begin{prop}\label{prop:QUCH1} Let $\Omega_1\Subset \Omega_2$ and $\Gamma\subset\p\Omega_2$ be as in Theorem ~\ref{thm:Rungeboundary}. Let $V$ and $q$ satisfy the assumptions \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq]{ii})} in $\Omega_2$. Let $u\in H^1(\Omega_2)$ be the unique solution to \begin{align*} \begin{split} \Delta u+k^2qu+Vu&=v\mathbb{1}_{\Omega_1} \; \mbox{ in } \Omega_2,\\ u&=0 \!\quad\quad\mbox{ on } \p\Omega_2, \end{split} \end{align*} with $v\in L^2(\Omega_1)$ and $k\geq 1$ satisfying the condition \textnormal{(\hyperref[assSpec]{a1})}. Then there exist a parameter $\mu_0\in(0,1)$ and a constant $C>1$ depending on $n, \Omega_2, \Omega_1, \Gamma, \|V\|_{L^\infty(\Omega_2)}, \kappa$ and $\|q\|_{C^1(\Omega_2)}$ such that \begin{align}\label{eq:QUCPgradboundary} \|u\|_{H^1(\Omega_2\backslash\overline\Omega_1)}\leq C k^{n+4}\left|\log\left(C\frac{\|\p_\nu u\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)}}{\|v\|_{L^2(\Omega_1)}}\right)\right|^{-\mu_0}\|v\|_{L^2(\Omega_1)}. \end{align} In addition, if $G$ is a bounded Lipschitz domain with $G\Subset \Omega_2\backslash\Omega_1$, then there exist a parameter $\nu_0\in(0,1)$ and a constant $C>1$ depending on $n, \Omega_2, \Omega_1, G, \Gamma, \|V\|_{L^\infty(\Omega_2)}, \kappa$ and $\|q\|_{C^1(\Omega_2)}$ such that \begin{align}\label{eq:QUCPgradinterior} \|u\|_{H^1(G)}\leq Ce^{C k}\left(\frac{\|\p_\nu u\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)}}{\|v\|_{L^2(\Omega_1)}}\right)^{\nu_0} \|v\|_{L^2(\Omega_1)}. \end{align} \end{prop} \begin{proof} We start by estimating $\|u\|_{H^1(\Omega_2)}$ in terms of $v$. By Lemma ~\ref{lem:apriori}, there is a constant $C>1$ such that \begin{align*} \|u\|_{H^1(\Omega_2)}\leq C\left(1+\frac{k^{3}}{\dist(k^2, \Sigma_{V,q})}\right)\|v\|_{L^2(\Omega_1)} \leq Ck^{n+1} \|v\|_{L^2(\Omega_1)}, \end{align*} where for the last inequality we have used the assumption \textnormal{(\hyperref[assSpec]{a1})}. Since $u$ satisfies \eqref{eq:generaleq} in $\Omega=\Omega_2\backslash\overline\Omega_1$, which is connected, the results of Proposition ~\ref{prop:quantitative_UCP} hold with \begin{align}\label{eq:selectionMeta} \begin{split} \eta&=\|\p_\nu u\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)}, \quad M=Ck^{n+1}\|v\|_{L^2(\Omega_1)}. \end{split} \end{align} In order to promote \eqref{eq:QUCPboundary} and \eqref{eq:QUCPinterior} to the gradient, we argue similarly to the proof of Proposition ~\ref{prop:quantitative_UCP}. We consider the subsets $W_\epsilon$ and $\Omega_\epsilon$ defined in \eqref{eq:sets} with $\Omega=\Omega_2\backslash\overline\Omega_1$. \textit{Step 1': Estimate on $W_\epsilon$.} By the H\"older inequality \begin{align*} \|\nabla u\|_{L^2(W_\epsilon)}\leq C \epsilon^{\frac{1}{p}}\|\nabla u\|_{L^{q_0}(\Omega_2)}, \end{align*} where $\frac 1 p+\frac 1 {q_0}=\frac 1 2$ (we write $q_0$ for the integrability exponent in order to avoid confusion with the coefficient $q$). By \cite[Theorem 1]{M63} (together with \cite[Theorem 0.5]{JK95} for the admissibility of the Lipschitz domain) there exists $q_0>2$ such that \begin{align*} \|\nabla u\|_{L^{q_0}(\Omega_2)}\leq C\|v\mathbb{1}_{\Omega_1}-k^2 qu-Vu\|_{L^2(\Omega_2)} \leq C\Big(\|v\|_{L^2(\Omega_1)}+(k^2 \kappa+\|V\|_{L^\infty(\Omega_2)})\|u\|_{L^2(\Omega_2)}\Big).
\end{align*} In addition, testing the weak version of the equation with itself, we have \begin{align}\label{eq:kL2} k\|u\|_{L^2(\Omega_2)}\leq C(\|u\|_{H^1(\Omega_2)}+\|v\|_{L^2(\Omega_1)})\leq CM, \end{align} with $C$ depending on $\kappa$ and $\|V\|_{L^\infty(\Omega_2)}$. Therefore, \begin{align*} \|\nabla u\|_{L^2(W_\epsilon)}\leq C \epsilon^{\frac{1}{p}}k M. \end{align*} \textit{Step 2': Estimate on $\Omega_\epsilon $.} Let $\chi$ be a smooth cut-off function supported in $\Omega_{\epsilon/2}$ with $\chi=1$ in $\Omega_\epsilon$ and $|\nabla \chi|\leq c\epsilon^{-1}$. We then obtain the following Caccioppoli inequality by testing the equation with $\chi^2 u$: \begin{align*} \|\nabla u\|_{L^2(\Omega_\epsilon)} &\leq C(\epsilon^{-1}+\|V\|_{L^\infty(\Omega_2)}^{\frac 1 2}+k\kappa)\|u\|_{L^2(\Omega_{\epsilon/2})} \\&\leq C k\epsilon^{-1}\|u\|_{L^2(\Omega_{\epsilon/2})}. \end{align*} Inserting the estimate \eqref{eq:QUCPinterior} with explicit $\epsilon$ dependence coming from the Step 2 in the proof of Proposition ~\ref{prop:quantitative_UCP}, we infer \begin{align*} \|\nabla u\|_{L^2(\Omega_\epsilon)} \leq Ck\epsilon^{-1} e^{Ck} \left(\frac{\eta}{M+\eta}\right)^{C_1'\epsilon^{C_2}}(M+\eta). \end{align*} \textit{Step 3': Optimization.} Combining the previous two steps we obtain \begin{align*} \|\nabla u\|_{L^2(\Omega_2\backslash\overline\Omega_1)}\leq C k \left(\epsilon^{\frac{1}{p}}+\epsilon^{-1} e^{Ck}\left(\frac{\eta}{M+\eta}\right)^{C_1'\epsilon^{C_2}}\right)(M+\eta). \end{align*} Optimizing in $\epsilon$ as in the Step 3 in the proof of Proposition ~\ref{prop:quantitative_UCP} yields \begin{align*} \|\nabla u\|_{L^2(\Omega_2\backslash\overline\Omega_1)} \leq C k^2\left|\log\left(\frac{\eta}{M+\eta}\right)\right|^{-\mu_0}(M+\eta) \end{align*} for a suitable $\mu_0\in(0,1)$. Introducing \eqref{eq:selectionMeta} and taking into account that by \eqref{eq:kL2} \begin{align}\label{eq:estetaM} \begin{split} \eta=\|\p_\nu u\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)} &\leq C(k^2 \|u\|_{L^2(\Omega_2)}+\|\nabla u\|_{L^2(\Omega_2)}+\|v\|_{L^2(\Omega_1)}) \\ &\leq CkM \leq Ck^{n+2}\|v\|_{L^2(\Omega_1)}, \end{split} \end{align} we infer \eqref{eq:QUCPgradboundary}. Estimate \eqref{eq:QUCPgradinterior} follows from the Caccioppoli inequality in Step 2' for suitable choice of $\epsilon$ together with the previous estimate for $\eta$. \end{proof} Using the results from Proposition ~\ref{prop:QUCH1} we now address the proof of Theorem ~\ref{thm:Rungeboundary}: \begin{proof}[Proof of Theorem ~\ref{thm:Rungeboundary}] We seek to show that for any $\alpha >0$ there exists a solution $u_\alpha$ to \begin{align*} (\Delta+k^2q+V)u_\alpha = 0 \; \mbox{ in } \Omega_2 \end{align*} with \begin{align*} \|u_\alpha-v\|_{L^2(\Omega_1)} \leq C(\alpha, k)\|v\|_{H^1(\Omega_1)}, \qquad \|u_\alpha\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\p \Omega_2)} \leq \frac{1}{\alpha}\| v\|_{L^2(\Omega_1)}. \end{align*} Let $X$ be the closure of $\{u\in H^1(\Omega_1)\mid(\Delta + k^2q+V)u=0 \text{ in } \Omega_1\}$ in $L^2(\Omega_1)$. We then define \begin{align*} A \colon \tilde H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)&\to X, \\ g \quad&\mapsto Ag = u|_{\Omega_1}, \end{align*} where $u\in H^1(\Omega_2)$ is the solution to \eqref{eq:generaleq} in $\Omega_2$ satisfying the boundary condition $u|_{\p\Omega_2}= g\in \tilde H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)$. 
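We note that the form of the adjoint of $A$, which we use next, follows from the usual duality computation: for $g\in \tilde H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)$ and $h\in X$ (here $h$ and $u_g$ are auxiliary notation for this remark only), let $u_g\in H^1(\Omega_2)$ denote the solution with boundary data $g$ as above and let $w\in H^1(\Omega_2)$ solve $(\Delta+k^2q+V)w=\mathbb{1}_{\Omega_1}h$ in $\Omega_2$ with $w=0$ on $\p\Omega_2$. Then, at least formally, Green's identity yields
\begin{align*}
(Ag, h)_{L^2(\Omega_1)}=\int_{\Omega_1} u_g\, h \, dx=\int_{\Omega_2} u_g\, (\Delta+k^2q+V)w \, dx = \int_{\p\Omega_2}\big(u_g\, \p_\nu w - (\p_\nu u_g)\, w\big)\, d\mathcal H^{n-1} = \int_{\Gamma} g\, \p_\nu w \, d\mathcal H^{n-1},
\end{align*}
where we used that $(\Delta+k^2q+V)u_g=0$ in $\Omega_2$, that $w$ vanishes on $\p\Omega_2$ and that $u_g|_{\p\Omega_2}=g$ is supported in $\Gamma$. This is the computation behind the formula for the adjoint $A^*$ stated next.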
We denote by $A^*$ the Hilbert space adjoint of $A$, which maps \begin{align*} A^* \colon X &\to \tilde H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma), \\ u &\mapsto A^*u = R(\p_\nu w|_\Gamma), \end{align*} where $R$ is the Riesz isomorphism $R: H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)\to \tilde H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)$ and $w\in H^1(\Omega_2)$ satisfies \begin{align*} (\Delta+k^2q+V)w&=\mathbb{1}_{\Omega_1}u \;\mbox{ in } \Omega_2,\\ w&=0 \qquad\mbox{ on } \p \Omega_2. \end{align*} By \cite[Lemma 4.1]{RS17}, $A$ is a compact, injective operator with dense range in $X$ and applying the spectral theorem to $A^*A$ yields an orthonormal basis of eigenvectors $\{\phi_j\}_{j=1}^\infty$ for $ \tilde H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)$ and a sequence of positive, decreasing eigenvalues $\{\mu_j\}_{j=1}^\infty$ with \[A^*A\phi_j=\mu_j\phi_j.\] Then, setting $\psi_j:=\mu_j^{-\text{\normalfont\sfrac{1}{2}}} A\phi_j$ yields an orthonormal basis $\{\psi_j\}_{j=1}^\infty$ of $X$. In particular, we have \begin{align}\label{eq:adjAbasis} \|A^*\psi_j\|_{ H^{\text{\normalfont\sfrac{1}{2}}}(\partial \Omega_2)}\leq \mu_j^{\text{\normalfont\sfrac{1}{2}}}. \end{align} Returning to our setting, we notice that $v\in X$, hence it admits a unique decomposition in the orthonormal basis $v=\sum_{j=1}^\infty \beta_j\psi_j$. For $\alpha>0$, we define $$v_\alpha:=\sum_{\alpha \geq \mu_j^{\text{\normalfont\sfrac{1}{2}}}}\beta_j\psi_j$$ and let $w_\alpha$ be the solution of \begin{align*} (\Delta +k^2q+V)w_\alpha&=\mathbb{1}_{\Omega_1}v_\alpha \;\mbox{ in } \Omega_2,\\ w_\alpha&=0 \;\qquad \mbox{ on } \partial \Omega_2. \end{align*} Here the notation $\alpha \geq \mu^{\text{\normalfont\sfrac{1}{2}}}_{j}$ is an abbreviation for the set $\{j\in \N: \ \alpha \geq \mu^{\text{\normalfont\sfrac{1}{2}}}_{j}\}$. By \eqref{eq:adjAbasis}, it holds \begin{align} \label{eq:partial boundary bd2} \|\partial_\nu w_\alpha\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)}=\|A^*v_\alpha\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\partial\Omega_2)}\leq \alpha \|v_\alpha\|_{L^2(\Omega_1)}. \end{align} Now, we define $u_\alpha$ as the solution to \eqref{eq:generaleq} on $\Omega_2$ satisfying the boundary condition $ u_\alpha = g_\alpha$ on $\p\Omega_2$, with $g_\alpha=\sum_{\alpha\le\mu_j^{\text{\normalfont\sfrac{1}{2}}}}\beta_j\mu_j^{-\text{\normalfont\sfrac{1}{2}}}\phi_j$. Note that at the boundary we have by the previous considerations \begin{align*} \|u_\alpha\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\p\Omega_2)}^2=\|g_\alpha\|_{ H^{\text{\normalfont\sfrac{1}{2}}}(\p\Omega_2)}^2=\bigg\|\sum_{\alpha\le\mu_j^{\text{\normalfont\sfrac{1}{2}}}}\beta_j\mu_j^{-\text{\normalfont\sfrac{1}{2}}}\phi_j\bigg\|^2_{ H^{\text{\normalfont\sfrac{1}{2}}}(\partial \Omega_2)}=\sum_{\alpha\le\mu_j^{\text{\normalfont\sfrac{1}{2}}}}\frac{\beta_j^2}{\mu_j} \leq \frac{1}{\alpha^2} \| v \|_{L^2(\Omega_1)}^2. \end{align*} In addition, notice that \begin{align*} u_\alpha|_{\Omega_1}=Ag_\alpha=A\left(\sum_{\alpha\le\mu_j^{\text{\normalfont\sfrac{1}{2}}}}\beta_j\mu_j^{-\text{\normalfont\sfrac{1}{2}}}\phi_j\right)=\sum_{\alpha\le\mu_j^{1/2}}\beta_j\psi_j=v-v_\alpha. \end{align*} Thus, it remains to obtain an explicit dependence on $\alpha$ and $k$ in \begin{align*} \|u_\alpha-v\|_{L^2(\Omega_1)}=\|v_\alpha\|_{L^2(\Omega_1)}\leq C(\alpha, k) \|v\|_{H^1(\Omega_1)}. 
\end{align*} Orthogonality considerations show \begin{align}\label{eq:valpha1} \|v_\alpha\|^2_{L^2(\Omega_1)}=(v,\partial_\nu w_\alpha)_{\partial\Omega_1}-(\partial_\nu v,w_\alpha)_{\partial \Omega_1}. \end{align} Using trace estimates for the solutions we find \begin{align} \label{eq:auxw} \|v_\alpha\|^2_{L^2(\Omega_1)} &\leq C (1+\|V\|_{L^\infty(\Omega_2)}+k^2\kappa) \|v\|_{H^1(\Omega_1)}\|w_\alpha\|_{H^1(\Omega_2\backslash\overline\Omega_1)} \end{align} with some constant $C>0$ depending on $\Omega_1, \Omega_2\backslash \Omega_1$. Using \eqref{eq:QUCPgradboundary} to estimate the norm of $w_{\alpha}$ in \eqref{eq:auxw} yields \begin{align*} \|v_\alpha\|_{L^2(\Omega_1)}^2 &\leq C k^{n+6}\left|\log\left(C\frac{\|\p_\nu w_{\alpha}\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)}}{\|v_{\alpha}\|_{L^2(\Omega_1)}}\right)\right|^{-\mu_0}\|v_{\alpha}\|_{L^2(\Omega_1)}\|v\|_{H^1(\Omega_1)}. \end{align*} Finally, dividing by $\|v_{\alpha}\|_{L^2(\Omega_1)}$, recalling \eqref{eq:partial boundary bd2} and using monotonicity, we arrive at \begin{align*} \|v_\alpha\|_{L^2(\Omega_1)} & \leq C k^{n+6}\left|\log (C\alpha) \right|^{-\mu_0}\|v\|_{H^1(\Omega_1)}. \end{align*} We choose $\alpha<1$ so that $C k^{n+6}{\left|\log (C\alpha) \right|^{-\mu_0}}=\epsilon$, i.e. \begin{align*} \frac{1}{\alpha}= C e^{ Ck^{s} \epsilon^{-\mu}} \end{align*} with $s=\frac{n+6}{\mu_0}$ and $\mu=\mu_0^{-1}$. This concludes the proof. \end{proof} Relying on similar ideas, we also obtain the bounds from Theorem ~\ref{thm:Rungeinterior}: \begin{proof}[Proof of Theorem ~\ref{thm:Rungeinterior}] We define $v_\alpha$ and $w_\alpha$ as in the proof of Theorem ~\ref{thm:Rungeboundary} with $v=\tilde v|_{\Omega_1}$. Equations ~\eqref{eq:valpha1} and \eqref{eq:auxw} can be slightly modified to read \begin{align*} \|v_\alpha\|^2_{L^2(\Omega_1)}=(\tilde v,\partial_\nu w_\alpha)_{\partial\tilde \Omega_1}-(\partial_\nu \tilde v,w_\alpha)_{\partial\tilde \Omega_1}\leq Ck^2\|\tilde v\|_{H^1(\tilde\Omega_1)}\|w_\alpha\|_{H^1(G)}, \end{align*} where $G=\Omega_2'\backslash \tilde\Omega_1\Subset\Omega_2\backslash\Omega_1$ with $\Omega_2'\Subset\Omega_2$. Arguing as above and using the quantitative unique continuation result \eqref{eq:QUCPgradinterior}, we obtain \begin{align*} \|u_\alpha-u\|_{L^2(\Omega_1)}=\|v_\alpha\|_{L^2(\Omega_1)} &\leq C e^{C k}\left(\frac{\|\partial_\nu w_\alpha\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\partial \Omega_2)}}{\|v_\alpha\|_{L^2(\Omega_1)}}\right)^{\nu_0} \|\tilde v\|_{H^1(\tilde \Omega_1)} \\ &\leq C e^{C k}\alpha^{\nu_0} \|\tilde v\|_{H^1(\tilde \Omega_1)}. \end{align*} Choosing $\alpha$ so that $ e^{C k}\alpha^{\nu_0}=\epsilon$, i.e. $\frac{1}{\alpha}= C\Big(\frac{e^{C k}}{\epsilon}\Big)^{\frac1{\nu_0}}$, the result follows with $\nu={\nu_0}^{-1}$. \end{proof} \subsection{Optimality of the estimates in Theorem ~\ref{thm:Rungeinterior}} \label{sec:optimality} In order to infer the optimality of the quantitative Runge approximation results in the parameter $\epsilon$, we consider the case $q=1$ (i.e. the case of the Helmholtz equation). We remark that optimality results in $k$ for three balls inequalities were recently obtained in \cite{BM20}. \begin{lem} Let $\Omega_2=B_1, \Omega_1=B_{\text{\normalfont\sfrac{1}{2}}}$ and $\Gamma=\p B_1$. 
For fixed $k\geq 1$, there exists $N=N(k)\in\N$ and a sequence $(v_\ell)_{\ell\geq N}$ of solutions to $(\Delta+k^2)v_\ell=0$ in $\Omega_1$ with $\|v_\ell\|_{H^1(\Omega_1)}=1$ such that for any solution $u$ of $(\Delta+k^2)u=0$ in $\Omega_2$ with $\|v_\ell-u\|_{L^2(\Omega_1)}\leq(2^{\frac n2+4}\ell)^{-1}$ we have $\|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)}\geq ce^{C\ell}$. \end{lem} \begin{proof} Arguing by separation of variables, we obtain that any solution $u\in H^1(B_1)$ of ${(\Delta+k^2)u=0}$ can be written with respect to the variables $r=|x|$, $\theta= \frac{x}{|x|}\in \mathbb{S}^{n-1}$ as \begin{align*} u(x)=u(r,\theta)=\sum_{\ell=0}^\infty \sum_{m=0}^{N_\ell} c_{\ell m} R_\ell(k r) \psi_{\ell m}(\theta), \end{align*} where $\{\psi_{\ell m}\}_{m=0}^{N_\ell}$ is an orthonormal basis of $L^2(\mathbb{S}^{n-1})$ consisting of the spherical harmonics of degree $\ell$ and $$R_\ell(r)= r^{1-\frac n 2} J_{\ell+\frac n 2 -1}(r),$$ with $J_\alpha$ denoting the Bessel functions. We consider $g_\ell(x)=R_\ell(k r) \psi_{\ell,1}(\theta)$ and define $v_\ell=\alpha_\ell g_\ell$ with $\alpha_\ell=\|g_\ell\|_{H^1(\Omega_1)}^{-1}$. Then, we may write $u=cv_\ell+w$, where $(w, g_\ell)_{L^2(B_1)}=0$ and $c\alpha_\ell=\|g_\ell\|_{L^2(B_1)}^{-2}(u, g_\ell)_{L^2(B_1)}$. Therefore, $$u(x)|_{\p B_1}=c\alpha_\ell R_\ell(k)\psi_{\ell,1}(\theta)+\omega(\theta)$$ with $(\omega, \psi_{\ell,1})_{L^2(\p B_1)}=0$ and $\omega(\theta) = w(x)|_{\partial B_1}$. We are interested in estimating $\|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)}$ from below. If we assume $\|u-v_\ell\|_{L^2(\Omega_1)}<\epsilon$, then $|c-1|\alpha_\ell\|g_\ell\|_{L^2(\Omega_1)}<\epsilon$. Therefore, \begin{align}\label{eq:optimalitybdnorm} \begin{split} \|u\|_{H^\text{\normalfont\sfrac{1}{2}}(\Gamma)} &\geq (1+\lambda_\ell^{\frac 1 2})^{\frac 1 2} |c\alpha_\ell||R_\ell(k)| \geq \ell^{\frac{1}{2}}\big(|\alpha_\ell| -\epsilon\|g_\ell\|_{L^2(\Omega_1)}^{-1}\big) |R_\ell(k)| \\ &\geq \ell^{\frac{1}{2}}\big(\|g_\ell\|_{H^1(\Omega_1)}^{-1}-\epsilon\|g_\ell\|_{L^2(\Omega_1)}^{-1}\big) |R_\ell(k)|, \end{split} \end{align} where $\lambda_\ell=\ell(\ell+n-2)$. Using \cite[10.22.27]{NIST}, we can estimate the $L^2$ norm of $g_{\ell}$ as follows: \begin{align*} \|g_\ell\|_{L^2(\Omega_1)}^2 &=\int_0^{\frac 1 2} R_\ell^2(kr)r^{n-1} dr =k^{-n}\int_0^{\frac k 2} t J_{\ell+\frac n 2-1}^2(t)dt \\ &=2k^{-n}\sum_{m=0}^\infty \Big(\ell+\frac n 2+2m\Big) J_{\ell+\frac n 2+2m}^2\Big(\frac k 2 \Big) \geq k^{-n}(2\ell+n) J_{\ell+\frac n 2}^2\Big(\frac k 2 \Big). \end{align*} For the norm of the gradient, using integration by parts and the equation, we obtain \begin{align*} \|\nabla g_\ell\|_{L^2(\Omega_1)}^2 &=\int_{\p B_{\text{\normalfont\sfrac{1}{2}}}} g_\ell \partial_r g_\ell d\mathcal H^{n-1}(\theta)-\int_{B_{\text{\normalfont\sfrac{1}{2}}}}g_\ell \Delta g_\ell dx \\&= R_\ell\Big(\frac{k}{2}\Big)\partial_r R_\ell\Big(\frac{k}{2}\Big)+k^2\|g_\ell\|^2_{L^2(\Omega_1)}. \end{align*} We next collect some properties of Bessel functions from \cite{Paris84} for $x\in (0,1)$ and $\alpha>0$: \begin{align}\label{eq:Besselprop} 1\leq \frac{J_{\alpha}(\alpha x)}{x^\alpha J_{\alpha}(\alpha)}\leq e^{\alpha(1-x)}, \qquad 0<\frac 1 x-\frac{J'_{\alpha}(\alpha x)}{J_{\alpha}(\alpha x)} <1, \qquad \frac{J_\alpha(\alpha x)}{J_{\alpha+1}(\alpha x)}<\frac{2\alpha+2}{\alpha x}. \end{align} By the second estimate in \eqref{eq:Besselprop} and the fact that $J_\alpha(\alpha x)>0$ (e.g. 
\cite[10.14.2]{NIST} together with the first estimate in \eqref{eq:Besselprop} or \cite[10.14.7]{NIST}), we have that $J_\alpha(\alpha x)$ is monotonously increasing for $x\in(0,1)$. Moreover, we know \cite[10.19.1]{NIST} that for $z\neq 0$ fixed \begin{align} \label{eq:Besselasym} J_\alpha(z)\sim \frac{1}{\sqrt{2\pi\alpha}}\left(\frac{ez}{2\alpha}\right)^\alpha, \qquad \alpha\to\infty. \end{align} We assume from now on that $\ell+\frac n 2-1>k$, so the previous estimates \eqref{eq:Besselprop} can be applied. In particular, due to the monotonicity \begin{align*} \|g_\ell\|_{L^2(\Omega_1)} =\left(\int_0^{\frac 1 2} R_\ell^2(kr)r^{n-1} dr\right)^{\frac 1 2} \leq \frac{1}{\sqrt{2}} R_\ell\Big(\frac{k}{2}\Big), \end{align*} Inserting the previous estimates on $g_\ell$ into \eqref{eq:optimalitybdnorm} yields \begin{align*} \|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)} &\geq \ell^{\frac 1 2}\left(\frac{\frac{R_\ell(k)}{R_\ell(\frac{k}{2})}}{1+k+ \Big(\frac{\p_rR_\ell(\frac{k}{2})}{R_\ell(\frac{k}{2})}\Big)^{\frac 1 2}}-\epsilon\frac{R_\ell(k)k^{\frac n 2}}{(2\ell+n)^{\frac 12}J_{\ell+\frac n2}\big(\frac k2\big)}\right) \\ &= \ell^{\frac 1 2}\frac{J_\alpha(k)}{J_\alpha\big(\frac{k}2\big)} \left( \frac{2^{1-\frac n2}}{1+k+ \big(\frac{\p_rR_\ell(\frac{k}{2})}{R_\ell(\frac{k}{2})}\big)^{\frac 1 2}}-\epsilon\frac{k}{(2\ell+n)^{\frac 1 2}}\frac{J_\alpha\big(\frac k2\big)}{J_{\alpha+1}\big(\frac k2\big)}\right), \end{align*} where $\alpha=\ell+\frac n 2-1>k$. Using the different estimates in \eqref{eq:Besselprop}, we deduce \begin{align*} \frac{\p_rR_\ell\big(\frac{k}{2}\big)}{R_\ell\big(\frac{k}{2}\big)} &=\frac{J'_{\alpha}\big(\frac k 2\big)}{J_\alpha\big(\frac k 2\big)}-\frac{n-2}{k}<\frac{2\alpha}{k}-\frac{n-2}{k}=\frac{2\ell}{k}, \\ \frac{k}{(2\ell+n)^{\frac 1 2}}\frac{J_\alpha\big(\frac k2\big)}{J_{\alpha+1}\big(\frac k2\big)} &\leq\frac{k}{(2\ell+n)^{\frac 1 2}}\frac{2\alpha+2}{\frac k 2} = 2(2\ell+n)^{\frac 1 2}.\end{align*} Therefore, \begin{align}\label{eq:optimalitybdnorm2} \|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)} &\geq 2\ell^{\frac 1 2}\frac{J_\alpha(k)}{J_\alpha\big(\frac{k}2\big)} \left(\frac{2^{-\frac n2}}{1+k+\big(\frac{2\ell}{k}\big)^{\frac 1 2}}-\epsilon (2\ell+n)^{\frac 12}\right). \end{align} In order to finally obtain the optimality in $\epsilon$, we consider $\ell\gg\max\{k^2, n\}$ and $\epsilon=(2^{\frac n 2+4}\ell)^{-1}$. Then, by \eqref{eq:Besselasym} \begin{align*} \|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma)} &\geq C2^{-\frac n2} 2^\alpha \ell^{\frac 1 2} \left(\frac{1}{3\ell^{\frac 1 2}}-\frac{3}{16\ell^{\frac 1 2}}\right) >c2^\ell. \qedhere \end{align*} \end{proof} \section{Stability for the Calder\'on Problem for the Helmholtz Equation with Potential} \label{sec:stability} As an application of the Runge approximation results from above, we present the proof of the stability estimate from Proposition ~\ref{prop:stability} for a partial data Calder\'on problem with stability improvement for an increasing parameter $k$. For the Helmholtz setting with impedance boundary conditions this had earlier been deduced in \cite{KU19}. More precisely, we assume the following set-up: We consider $n\ge 3$, $\Omega\subseteq \R^n$ a bounded connected open set with $C^\infty$ boundary and $V$ and $q$ as in \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq]{ii})} and $k\geq 1$ satisfying \textnormal{(\hyperref[assSpec]{a1})}. Let $\Gamma$ be a non-empty open subset of $\p\Omega$. 
We study the local Dirichlet-to-Neumann map \begin{align*} \Lambda_{V,q}^\Gamma(k): \tilde H^{\text{\normalfont\sfrac{1}{2}}}(\Gamma) &\to H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma),\\ g & \mapsto \p_\nu u|_{\Gamma}, \end{align*} where $u\in H^1(\Omega)$ is the solution to \begin{align*} (\Delta+k^2 q+V)u&=0 \;\mbox{ in }\Omega,\\ u&=g \;\mbox{ on }\p\Omega. \end{align*} Theorem ~\ref{thm:Rungeinterior} allows us to obtain stability results for the inverse problem by using the strategy from \cite[Proposition 6.1]{RS17}. In particular, this reproves the result of \cite[Theorem 1.2]{KU19} in the case of Dirichlet boundary conditions and our spectral assumption \textnormal{(\hyperref[assSpec]{a1})}. \begin{proof}[Proof of Proposition ~\ref{prop:stability}] We will use the shorthand notation $b_j=k^2q_j+V_j$ for $j=1,2$, for which $\|b_j\|_{L^2(\Omega)}\leq k^2B$ and $b_1=b_2$ in $\Omega\backslash\Omega'$. We start with the construction of complex geometrical optics solutions $u_j$ solving $(\Delta+b_j)u_j = 0$ in $\Omega$, following \cite{SU87}. We fix $\omega\in \mathbb S^{n-1}$ and choose $\omega^\perp, \tilde\omega^\perp\in \mathbb S^{n-1}$ such that \[\omega \cdot \omega^{\bot} = \omega \cdot \widetilde{\omega}^{\bot} = \omega^{\bot} \cdot \widetilde{\omega}^{\bot} = 0.\] We set for $\tau, r\in \R$ with $\tau\geq \frac{|r|}{2}$ \begin{align*} \xi_1=\tau\omega^\perp+i\left(-\frac{r}{2}\omega+\sqrt{\tau^2-\frac{r^2}{4}}\tilde\omega^\perp\right), \quad \xi_2&=-\tau\omega^\perp+i\left(-\frac{r}{2}\omega-\sqrt{\tau^2-\frac{r^2}{4}}\tilde\omega^\perp\right). \end{align*} By \cite{SU87}, if $\tau\geq \max\{C_0k^2B,1\}$, there are solutions $u_j$ for $j\in\{1,2\}$ of the form \begin{align*} u_j(x)=e^{\xi_j\cdot x}\big(1+\psi_j(x)\big), \end{align*} where \begin{align*} \|\psi_j\|_{L^2(\Omega)}\leq\frac{Ck^2B}{\tau}, \qquad \|\psi_j\|_{H^1(\Omega)}\leq Ck^2B. \end{align*} This implies the following estimates for the solutions: \begin{align*} \|u_j\|_{L^2(\Omega)}\leq Ce^\tau, \qquad \|u_j\|_{H^1(\Omega)}\leq C\tau e^{\tau}. \end{align*} Now we seek to approximate $u_j$ up to an error $\epsilon>0$, which will be chosen later. We apply Theorem ~\ref{thm:Rungeinterior} with $\Omega_2=\Omega$, $\Omega_1=\Omega'$ and $\tilde\Omega_1=\tilde \Omega'$, where the latter is a slightly larger domain containing $\Omega'$. This yields solutions $\tilde u_j$ to $ (\Delta+b_j)\tilde{u}_j = 0 $ in $\Omega$ with $\tilde{u}_j|_{\p\Omega}$ supported in $\Gamma$ and \begin{align*} \|u_j-\tilde u_j\|_{L^2(\Omega')}\leq \epsilon\|u_j\|_{H^1(\tilde\Omega')}, \qquad \|\tilde u_j\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\p\Omega)}\leq Ce^{Ck}\epsilon^{-\nu}\|u_j\|_{L^2(\Omega')}. \end{align*} In addition, since $b_1=b_2$ in $\Omega\backslash\Omega'$ and using integration by parts, we obtain the following analog of Alessandrini's identity \cite{A88}: \begin{equation*} \int_{\Omega'} (b_2-b_1) \tilde u_{1} \tilde u_{2} \, d x =\int_{\Omega} (b_2-b_1) \tilde u_{1} \tilde u_{2} \, d x =\Big(\big(\Lambda_{V_1,q_1}^\Gamma(k)-\Lambda_{V_2,q_2}^\Gamma(k)\big)\tilde u_1,\tilde u_2\Big)_{L^2(\partial \Omega)}.
\end{equation*} Abbreviating $\delta:= \|\Lambda_{V_1,q_1}^\Gamma(k)-\Lambda_{V_2,q_2}^\Gamma(k)\|_{\tilde{H}^{\text{\normalfont\sfrac{1}{2}}}(\Gamma) \rightarrow H^{-\text{\normalfont\sfrac{1}{2}}}(\Gamma)}$ and applying the previous estimates leads to \begin{equation*} \Big|\int_{\Omega'} (b_2-b_1) \tilde u_{1} \tilde u_{2} \, d x\Big| \leq \delta \|\tilde u_1\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\p\Omega)}\|\tilde u_2\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\p\Omega)}\leq C \delta e^{Ck}\epsilon^{-2\nu}\|u_1\|_{L^2(\Omega)}\|u_2\|_{L^2(\Omega)}. \end{equation*} We extend $b_j$ by zero to $\R^n$. Now we seek to apply the previous steps to estimate \begin{align*} |\mathcal F(b_2-b_1)(r\omega)| &=\left|\int_{\Omega'} (b_2-b_1) e^{-ir\omega\cdot x}dx \right| \end{align*} for any $|r|\leq 2\tau$ and $\omega\in \mathbb S^{n-1}$. Notice that \begin{align*} e^{-ir\omega\cdot x} &=u_1u_2-e^{-ir\omega\cdot x}(\psi_1+\psi_2+\psi_1\psi_2) \\ &=-e^{-ir\omega\cdot x}(\psi_1+\psi_2+\psi_1\psi_2) +(u_1-\tilde u_1)u_2+(u_2-\tilde u_2)\tilde u_1+\tilde u_1\tilde u_2. \end{align*} Thus, using the Runge approximation bounds again and invoking the estimates for the functions $u_j$ and $\psi_j$, we obtain \begin{align*} |\mathcal F(b_2-b_1)(r\omega)| &\leq Ck^2B\left(\frac{k^2B} {\tau}+\epsilon \tau^2e^{2\tau} \right)+C\delta e^{Ck}\epsilon^{-2\nu} e^{2\tau}. \end{align*} In order to estimate $\|b_2-b_1\|_{H^{-1}(\Omega)}$, we notice that for any $\rho<2\tau$ \begin{align*} \|b_2-b_1\|_{H^{-1}(\Omega)}^2 &=\int_{\R^n}|\F(b_2-b_1)(\zeta)|^2(1+|\zeta|^2)^{-1}d\zeta \\ &\leq \int_{|\zeta|<\rho}|\F(b_2-b_1)(\zeta)|^2(1+|\zeta|^2)^{-1}d\zeta +\frac{1}{1+\rho^2}\|b_1-b_2\|_{L^2(\Omega)}^2 \\ &\leq C\rho^{n-2}\left(\frac{(k^2B)^4} {\tau^2}+(k^2B)^2\epsilon^2 e^{5\tau} +\delta^2 e^{Ck}\epsilon^{-4\nu} e^{5\tau}\right)+C\rho^{-2}(k^2 B)^2. \end{align*} Choosing $\rho=\left(\frac{\tau}{k^2B}\right)^{\frac 2n}$ and $\epsilon=\delta^{\frac{1}{1+2\nu}}$ yields \begin{align*} \|b_2-b_1\|_{H^{-1}(\Omega)}^2 &\leq C\left((k^2B)^{2+\frac 4 n} \tau^{-\frac 4n}+ e^{C\tau}e^{Ck}\delta^{\frac{2}{1+2\nu}}\right). \end{align*} Now we assume $\delta<1$, recall that $\tau \geq \max\{C_0k^2 B, 1\}$ and choose \begin{align*} C\tau=CC_0k^{n+3}B-\left(\frac{1}{1+2\nu}\right)\log \delta, \end{align*} which results in \begin{align*} \|b_2-b_1\|_{H^{-1}(\Omega)}^2 &\leq C\frac{1}{(k+k^{-(n+2)}|\log\delta|)^{\frac 4 n}}+e^{Ck^{n+3}}\delta^{\frac{1}{1+2\nu}}, \end{align*} where the constant $C>0$ now includes the $B$-dependence. Applying Young's inequality we can estimate the last term as follows: \begin{align*} e^{Ck^{n+3}} \delta^{\frac{1}{1+2\nu}} \leq C \left( e^{Ck^{n+3}}\delta^{2} +\frac{\delta^{\frac{2}{3+8\nu}}}{k^{\frac 4 n}}\right). \end{align*} Taking into account that ${\delta^{\alpha}}{k^{-\frac 4 n}}\leq (k+\frac n 4\alpha |\log\delta|)^{-\frac 4 n}$, then \begin{align*} \|b_2-b_1\|_{H^{-1}(\Omega)} &\leq C\left(\frac{1}{(k+k^{-(n+2)}|\log\delta|)^{\frac 2 n}}+e^{Ck^{n+3}}\delta\right). \end{align*} In order to infer the desired result, we finally notice that \begin{align*} \frac{1}{k+k^{-(n+2)}|\log\delta|} \leq C\frac{1}{k+|\log\delta|^{\frac{1}{n+3}}}. \end{align*} Indeed, applying again Young's inequality, we have \begin{align*} |\log\delta|^{\frac{1}{n+3}}\leq \frac{1}{\min\{p,q\}}\left({(k^{-\frac{n+2}{n+3}}|\log\delta|^{\frac{1}{n+3}})^p}+{k^{\frac{n+2}{n+3}q}}\right). \end{align*} Choosing $p=n+3$ and $q=\frac{n+3}{n+2}$, the previous claim follows. 
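For completeness, we spell out the arithmetic behind this choice of exponents: with $p=n+3$ and $q=\frac{n+3}{n+2}$ we have
\begin{align*}
\big(k^{-\frac{n+2}{n+3}}|\log\delta|^{\frac{1}{n+3}}\big)^{p}=k^{-(n+2)}|\log\delta|, \qquad k^{\frac{n+2}{n+3}q}=k,
\end{align*}
so that the previous inequality gives $|\log\delta|^{\frac{1}{n+3}}\leq C\big(k+k^{-(n+2)}|\log\delta|\big)$ and hence $k+|\log\delta|^{\frac{1}{n+3}}\leq C\big(k+k^{-(n+2)}|\log\delta|\big)$, which is the claimed bound.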
\end{proof} \section{Improved Carleman Estimates and Three Balls Inequalities in the Presence of Convexity} \label{sec:improved_results_convexity} In this final section, we show how the Runge approximation results can be improved in the presence of convexity of the domains. This provides the Runge approximation counterpart to the improved stability estimates for unique continuation. Since we need quantitative unique continuation estimates in the natural trace spaces, we also provide the relevant Carleman estimates. In other functional settings, similar results have been proved earlier in the literature; see for instance \cite{HI04,IS07}. In order to illustrate the effect, we consider the geometric setting of concentric balls, but remark that this could also be extended to other convex geometries. \subsection{Improved unique continuation results} \label{sec:UCP_improve} We seek to deduce improved unique continuation estimates in $k$. To this end, we first derive a Carleman estimate with improvements in $k$ for the model case of the acoustic equation without potential \begin{align*} (\D + k^2 q)u & = 0 \; \mbox{ in } \Omega. \end{align*} Here we differ slightly from the argument by Isakov and use ideas from results on excluding embedded eigenvalues instead (see for instance \cite{KT06}). \begin{prop} \label{prop:Carl_eigenval} Let $u: \R^n \rightarrow \R$ be compactly supported in $B_2 \backslash \overline{B_1} \subset \R^n\backslash \{0\}$ and solve \begin{align}\label{eq:eqCarl} (\D + k^2 q) u & = f + \sum_{j=1}^n\p_j F^j\; \mbox{ in } \R^n, \end{align} where $q$ satisfies \textnormal{(\hyperref[assq2]{ii'})} in $\R^n$ and $f,\,F^j \in L^2(\R^n)$ with $\supp f, \,\supp F^j\subset B_2\backslash\overline{B_1}$. Let $\phi(x) := \tau \log(|x|)$. Then there exist $\tau_0>0$ and a constant $C>0$ depending on $n$ and $\kappa$ such that for any $\tau\geq \tau_0$ we have \begin{align}\label{eq:Carleman} \begin{split} &\tau \| e^{\phi} u\|_{L^2(\R^n)} + \|e^{\phi} |x| \nabla u\|_{L^2(\R^n)} + \tau^{\text{\normalfont\sfrac{1}{2}}} k \| {q^{\text{\normalfont\sfrac{1}{2}}}|x|}e^{ \phi} u\|_{L^2(\R^n)}\\ & \quad \leq C \big( \|e^{ \phi} |x|^2 f\|_{L^2(\R^n)} + {\max\{\tau,k\}} \sum_{j=1}^n\|e^{\phi}|x| F^j\|_{L^2(\R^n)}\big). \end{split} \end{align} \end{prop} \begin{rmk} As is common in stability improvement results, the main feature of the Carleman estimate from Proposition ~\ref{prop:Carl_eigenval} is that the frequency is kept in the main part of the operator instead of being treated as a right hand side contribution, and that there is an improvement depending on $k$ on the left hand side of the estimate. Such ideas also hold for more general operators (see, for instance, \cite{KT06} or \cite{I19}). \end{rmk} \begin{proof} We argue in three steps: first, we pass to conformal polar coordinates; then we invoke a splitting strategy in which we split the conjugated equation into an elliptic and a subelliptic contribution and deduce the corresponding estimates for each part separately; finally, we combine these two estimates into the desired overall bound.\\ \emph{Step 1: Coordinate transformation.} We pass to conformal polar coordinates, i.e. we set $x = \psi (t,\theta)$ for \begin{align*} \psi: \R \times {\mathbb{S}^{n-1}} &\to \R^n\backslash \{0\}, \\ (t,\theta) &\mapsto e^{t} \theta.
\end{align*} For any $\varphi\in L^1(\R^n, |x|^{-n}dx)$, the area formula yields \begin{align*} \int_\R \int_{\mathbb{S}^{n-1}} \varphi \circ \psi (t,\theta) d \mathcal H^{n-1}(\theta) dt = \int_{\R^n}\frac{1}{|x|^{n}}\varphi(x) dx, \end{align*} where $\mathcal H^{n-1}$ denotes the $(n-1)$-dimensional Hausdorff measure on $\mathbb{S}^{n-1}$. Thus, at least formally, $d\mathcal H^{n-1}(\theta)dt=d\theta dt = |x|^{-n}dx$. A standard calculation shows that in the new coordinates \begin{align*} (|x|^2\D u) \circ \psi = \big(\p_t^2 +(n-2)\p_t + \D_{\mathbb{S}^{n-1}}\big)(u \circ \psi ). \end{align*} We can discard the first order term by conjugating the operator above with $|x|^{-\frac{n-2}{2}}$, that is $e^{-\frac{n-2}{2}t}$ in the new coordinates. Therefore \eqref{eq:eqCarl} becomes the following equation for $\tilde u(t,\theta)=e^{\frac{n-2}{2}t} u \circ \psi(t,\theta)$: \begin{align}\label{eq:eqfortildeu} \left(\p_t^2 + \D_{\mathbb S^{n-1}}+k^2e^{2t}\tilde q-c_n\right)\tilde u= \tilde f+\p_t \tilde F^t+\mbox{div}_{\mathbb S^{n-1}} \tilde F^\theta, \end{align} where \begin{align*} \tilde q(t,\theta) &=q\circ \psi (t,\theta), \qquad c_n=\left(\frac{n-2}{2}\right)^2, \\ \tilde f(t,\theta) &=e^{2t}e^{\frac{n-2}{2}t}f\circ\psi(t, \theta)+\left(\frac{n}{2}-1\right) \tilde F^t(t,\theta), \\ \tilde F^t(t,\theta) &=e^{\frac n 2 t}\left(\sum_{j=2}^n \theta_{j-1} F^j\circ\psi(t, \theta)\pm \Big({1-\sum_{i=1}^{n-1}\theta_i^2}\Big)^{\frac 1 2} F^1\circ\psi(t, \theta)\right), \\ \tilde F^{\theta_i}(t,\theta) &=e^{\frac n 2 t}F^{i+1}\circ\psi(t, \theta)-\theta_i \tilde F^t(t,\theta), \quad i\in\{1, \dots, n-1\}, \\ \tilde F^{\theta}(t,\theta) &=\big(\tilde F^{\theta_1}(t,\theta), \dots, \tilde F^{\theta_{n-1}}(t,\theta)\big), \\ \theta_i &=\frac{x_{i+1}}{|x|}, \quad i\in\{1, \dots, n-1\}. \end{align*} Here the sign $\pm$ in the definition of $\tilde F^t$ is the sign of $x_1$, since $\big(1-\sum_{i=1}^{n-1}\theta_i^2\big)^{\frac 1 2}=\frac{|x_1|}{|x|}$. For ease of notation and later reference, we denote the operator on the left hand side of \eqref{eq:eqfortildeu} by $L$. In addition, for some given weight $\Phi=\Phi(t)$, we denote by $L_\Phi$ the conjugated operator given by \begin{align}\label{eq:LPhi} L_{\Phi}= e^\Phi L e^{-\Phi}= \p_t^2 +\D_{\mathbb S^{n-1}}-2\Phi' \p_t+k^2e^{2t} \tilde q +{\Phi'^2-\Phi''}- c_n. \end{align} \emph{Step 2: Splitting strategy.} In order to deal with the divergence contributions, we use a splitting strategy and set $u= u_1 + u_2$, where $\tilde u_1(t,\theta):=e^{\frac{n-2}{2}t} u_1 \circ \psi(t,\theta)$ is a weak solution to \begin{align*} \left(L-D \max\{\tau^2, k^2 e^{2t} \tilde q \} \right)\tilde u_1= \tilde f+\p_t \tilde F^t+\mbox{div}_{\mathbb S^{n-1}} \tilde F^\theta, \end{align*} for $D>0$ large. A solution to this exists by the Lax-Milgram theorem. Indeed, this follows by considering the bilinear form \begin{align*} B(h_1, h_2)= \int_{\mathbb S^{n-1}}\!\int_\R \Big( \p_t h_1\p_t h_2+\nabla_{\mathbb S^{n-1}} h_1\cdot\nabla_{\mathbb S^{n-1}} h_2+ bh_1h_2 \Big)dtd\theta \end{align*} with $b=D \max\{\tau^2, k^2 e^{2t}\tilde q\}-k^2 e^{2t}\tilde q+c_n>0$. An application of the Lax-Milgram theorem with this bilinear form then yields a solution $\tilde u_1\in H^1(\R\times\mathbb S^{n-1})$ and associated energy bounds in terms of the $L^2$ norms of $\tilde{f}$, $\tilde{F}^t, \tilde{F}^{\theta}$. The equation for the function $\tilde u_2(t,\theta)=e^{\frac{n-2}{2}t} u_2 \circ \psi(t,\theta)$ is determined by considering the difference $\tilde u-\tilde u_1$.\\ \emph{Step 2a: Energy estimates for $u_1$.} We seek to complement the existence result for $\tilde{u}_1$ with exponentially weighted energy estimates.
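Before turning to these estimates, we note that the coercivity needed for the Lax-Milgram argument above is immediate: choosing for instance $D\geq 2$ and using that $c_n\geq 0$, we have
\begin{align*}
b=D \max\{\tau^2, k^2 e^{2t}\tilde q\}-k^2 e^{2t}\tilde q+c_n \geq (D-1)\max\{\tau^2, k^2 e^{2t}\tilde q\} \geq \tau^2,
\end{align*}
so that $B(h,h)\geq \min\{1,\tau^2\}\|h\|_{H^1(\R\times\mathbb S^{n-1})}^2$ for all $h\in H^1(\R\times\mathbb S^{n-1})$.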
Since the support of $\tilde{u}_1$ is in general not bounded, we first consider the conjugated equation with a truncated weight. Energy estimates which are uniform in the truncation parameter and a limiting argument then allow us to pass to the desired weight. To this end, we consider a smooth weight $\Phi_R$ for $R\geq 2$ such that $\Phi_R(t)=(1+\tau)t$ for $t\leq R$ and $\Phi_R(t)=(1+\tau)\frac{3R}{2}$ for $t\geq 2R$. In addition, $\Phi'_R\leq {1+\tau}$ and $\Phi_R''\leq \frac{1+\tau}{R}$ in their corresponding supports. Let $w_R=e^{\Phi_R}\tilde u_1$, then it satisfies the equation \begin{align*} \big(L_{\Phi_R}-D\max\{\tau^2, k^2 e^{2t}\tilde q\}\big)w_R =e^{\Phi_R}\left(\tilde f+\p_t \tilde F^t+\mbox{div}_{\mathbb S^{n-1}} \tilde F^\theta\right), \end{align*} where $L_{\Phi_R}$ is given by \eqref{eq:LPhi}. Testing the equation for $w_R$ with itself yields \begin{align*} &\|\p_t w_R\|_{L^2(\R^n\times\mathbb S^{n-1})}^2 +\|\nabla_{\mathbb S^{n-1}}w_R\|_{L^2(\R\times\mathbb S^{n-1})}^2 \\ &\qquad +\int_{\mathbb S^{n-1}}\!\int_\R \left(D\max\{\tau^2,k^2 e^{2t}\tilde q \}-k^2 e^{2t}\tilde q+c_n\right) w_R^2 dtd\theta \\ & \qquad+\int_{\mathbb S^{n-1}}\!\int_\R (\Phi_R''-\Phi_R'^2)w_R^2 dtd\theta +2\int_{\mathbb S^{n-1}}\!\int_\R \Phi_R'w_R\p_t w_R dtd\theta \\ &=-\int_{\mathbb S^{n-1}}\!\int_\R e^{\Phi_R}\left(\tilde f+\p_t \tilde F^t+\mbox{div}_{\mathbb S^{n-1}} \tilde F^\theta\right)w_R dtd\theta. \end{align*} Applying integration by parts and Young's inequality, together with the fact that $\supp \tilde f, \, \supp \tilde F^j\subset (0,\log 2)\times\mathbb S^{n-1}=:I \times \mathbb{S}^{n-1}$, we obtain the following estimates for $\tau>1$ \begin{align*} \left|\int_{\mathbb S^{n-1}}\!\int_\R (\Phi_R''-\Phi_R'^2)w_R^2 dtd\theta\right| &\leq 6\tau^2 \|w_R\|_{L^2(\R\times\mathbb S^{n-1})}^2, \\ \left|2\int_{\mathbb S^{n-1}}\!\int_\R \Phi_R'w_R\p_t w_R dtd\theta\right| &\leq 16\tau^2\|w_R\|_{L^2(\R\times\mathbb S^{n-1})}^2 +\frac{1}{4}\|\p_t w_R\|_{L^2(\R\times\mathbb S^{n-1})}^2 , \\ \left|\int_{\mathbb S^{n-1}}\!\int_\R e^{\Phi_R} \tilde fw_R dtd\theta\right| &\leq \frac{\kappa}{\max\{\tau^2, k^2\}}\|e^{\Phi_R}\tilde f\|_{L^2(\R\times\mathbb S^{n-1})}^2 + \frac 1 4 \frac{1}{\kappa}\max\{\tau^2, k^2 \}\|w_R\|_{L^2(I\times\mathbb S^{n-1})}^2 \\ &\leq \frac{\kappa}{\max\{\tau^2, k^2\}}\|e^{(1+\tau)t}\tilde f\|_{L^2(\R\times\mathbb S^{n-1})}^2\\ & \quad + \frac 1 4 \|\max\{\tau^2, k^2 e^{2t}\tilde q\}^{\text{\normalfont\sfrac{1}{2}}}w_R\|_{L^2(I\times\mathbb S^{n-1})}^2 \\ &\leq \frac{\kappa}{\max\{\tau^2, k^2\}}\|e^{(1+\tau)t}\tilde f\|_{L^2(\R\times\mathbb S^{n-1})}^2\\ & \quad + \frac 1 4 \|\max\{\tau^2, k^2 e^{2t}\tilde q\}^{\text{\normalfont\sfrac{1}{2}}}w_R\|_{L^2(\R\times\mathbb S^{n-1})}^2, \\ \left|\int_{\mathbb S^{n-1}}\!\int_\R e^{\Phi_R}\p_t \tilde F^tw_R dtd\theta\right| & \leq C\|e^{(1+\tau)t}\tilde F^t\|_{L^2(\R\times\mathbb S^{n-1})}^2+ \frac{\tau^2}{4}\|w_R\|_{L^2(\R\times\mathbb S^{n-1})}^2 +\frac 1 4\|\p_tw_R\|_{L^2(\R\times\mathbb S^{n-1})}^2, \\ \left|\int_{\mathbb S^{n-1}}\!\int_\R e^{\Phi_R} \mbox{div}_{\mathbb S^{n-1}} \tilde F^\theta w_R dtd\theta\right| & \leq \|e^{(1+\tau)t}\tilde F^\theta\|_{L^2(\R\times\mathbb S^{n-1})}^2 +\frac 1 4\|\nabla_{\mathbb S^{n-1}}w_R\|_{L^2(\R\times\mathbb S^{n-1})}^2. 
\end{align*} Absorbing the terms with $w_R$ and the non-positive terms for $D$ sufficiently large, we obtain \begin{align*} &\tau\|w_R\|_{L^2(\R\times\mathbb S^{n-1})} +\|\p_t w_R\|_{L^2(\R\times\mathbb S^{n-1})} +\|\nabla_{\mathbb S^{n-1}}w_R\|_{L^2(\R\times\mathbb S^{n-1})} +k\| e^t\tilde q^{\text{\normalfont\sfrac{1}{2}}}w_R\|_{L^2(\R\times\mathbb S^{n-1})} \\ &\quad\leq C\left( \frac{1}{\max\{\tau, k\}}\|e^{(1+\tau)t}\tilde f\|_{L^2(\R\times\mathbb S^{n-1})} +\|e^{(1+\tau)t}\tilde F^t\|_{L^2(\R\times\mathbb S^{n-1})} + \|e^{(1+\tau)t}\tilde F^\theta\|_{L^2(\R\times\mathbb S^{n-1})}\right). \end{align*} Notice that the right hand side is finite and does not depend on $R$, so taking $R\to \infty$, we obtain similar estimates for $w=e^{(1+\tau)t} \tilde u_1$. Multiplying the whole expression by $\max\{\tau, k\}$ and returning to the original coordinates we arrive at \begin{align}\label{eq:estimateu1i} \begin{split} &\tau^2\|e^\phi u_1\|_{L^2(\R^n)} +\max\{\tau, k\}\|e^\phi |x|\nabla u_1\|_{L^2(\R^n)} +\max\{\tau,k\}k\|q^{\text{\normalfont\sfrac{1}{2}}} e^\phi |x| u_1\|_{L^2(\R^n)} \\ &\qquad\leq C\Big(\|e^\phi|x|^2f\|_{L^2(\R^n)}+ \max\{\tau,k\}\sum_{j=1}^n\|e^\phi |x|F^j\|_{L^2(\R^n)}\Big). \end{split} \end{align} Arguing similarly for $\tilde \Phi_R$ with $\tilde \Phi_R=(2+\tau)t$ if $t\leq R$, we also deduce \begin{align}\label{eq:estimateu1ii} \begin{split} k^2\|e^\phi|x|^2q^{\text{\normalfont\sfrac{1}{2}}}u_1\|_{L^2(\R^n)} \leq C\Big(\|e^\phi|x|^3f\|_{L^2(\R^n)}+ \max\{\tau,k\}\sum_{j=1}^n\|e^\phi |x|^2F^j\|_{L^2(\R^n)}\Big). \end{split} \end{align} Combining \eqref{eq:estimateu1i}-\eqref{eq:estimateu1ii} and exploiting again the compact support of $f$ and $F^j$ and \textnormal{(\hyperref[assq2]{ii'})}, we infer that \begin{align}\label{eq:estimateu2RHS} \begin{split} \|D\max\{\tau^2, {k^2|x|^2 q}\} e^\phi u_1\|_{L^2(\R^n)} &\leq D \tau^2\|e^\phi u_1\|_{L^2(\R^n)} +D\kappa^{\text{\normalfont\sfrac{1}{2}}} k^2\||x|^2 q^{\text{\normalfont\sfrac{1}{2}}} e^\phi u_1\|_{L^2(\R^n)} \\ &\leq C \Big(\|e^\phi|x|^2f\|_{L^2(\R^n)}+ \max\{\tau,k\}\sum_{j=1}^n\|e^\phi |x|F^j\|_{L^2(\R^n)}\Big), \end{split} \end{align} where now $C$ also depends on $\kappa$.\\ \emph{Step 2b: Carleman estimates for $u_2$.} We now consider the estimate for $u_2$. To this end, we note that $\tilde u_2$ solves the equation \begin{align*} L \tilde u_2 = D \max\{\tau^2, k^2 e^{2t}\tilde q\} \tilde u_1 \; \mbox{ in } \R \times \mathbb{S}^{n-1}. \end{align*} We now carry out the conjugation with $e^{\Phi}$ for $\Phi(t)=(1+\tau) t$ and split the operator $L_{\Phi}$ given in \eqref{eq:LPhi} into its symmetric and antisymmetric parts (with respect to the $L^2(\R \times {\mathbb{S}^{n-1}})$ scalar product) \begin{align*} S_{\Phi} = \p_t^2 + \D_{\mathbb S^{n-1}} + k^2e^{2t} \tilde q +{(1+\tau)^2}- c_n,\quad A_{\Phi} = -2(1+\tau)\p_t. \end{align*} Let us set $v=e^{\Phi} \tilde u_2$. Expanding the right hand side of the last equality, we obtain \begin{align*} \|L_{\Phi} v\|_{L^2(\R \times \mathbb S^{n-1})}^2 = \|S_{\Phi} v\|_{L^2(\R \times \mathbb S^{n-1})}^2 + \|A_{\Phi} v\|_{L^2(\R \times \mathbb S^{n-1})}^2 + ([S_{\Phi},A_{\Phi}]v ,v)_{L^2(\R \times \mathbb S^{n-1})}. \end{align*} In addition, by the definition of $L_\Phi$ and $v$, \begin{align*} \|L_{\Phi} v \|_{L^2(\R \times \mathbb S^{n-1})}=\|e^{\Phi}L\tilde u_2\|_{L^2(\R \times \mathbb S^{n-1})}= \|D\max\{\tau^2, {k^2|x|^2 q}\} e^\phi u_1\|_{L^2(\R^n)}. \end{align*} We begin with a lower bound on the commutator. 
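Before doing so, we record for completeness the elementary identity behind this expansion: since $S_\Phi$ is symmetric and $A_\Phi$ is antisymmetric with respect to the $L^2(\R \times \mathbb{S}^{n-1})$ scalar product, one has, at least formally,
\begin{align*}
2(S_{\Phi} v, A_{\Phi} v)_{L^2(\R \times \mathbb S^{n-1})} = (S_{\Phi}A_{\Phi} v, v)_{L^2(\R \times \mathbb S^{n-1})} - (A_{\Phi}S_{\Phi} v, v)_{L^2(\R \times \mathbb S^{n-1})} = ([S_{\Phi},A_{\Phi}]v ,v)_{L^2(\R \times \mathbb S^{n-1})},
\end{align*}
so that expanding $\|(S_{\Phi}+A_{\Phi})v\|_{L^2(\R \times \mathbb S^{n-1})}^2$ indeed produces the commutator contribution above.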
We calculate \[ [S_{\Phi},A_{\Phi}]v =[ e^{2t } k^2\tilde q, A_{\Phi}]v=2 (1+\tau)\p_t(k^2e^{2t}\tilde q) v. \] As $\p_t \tilde q= (\nabla q\cdot x) \circ \psi \ge 0 $ and $\tilde{q}> 0$ by assumption, we thus find after returning to the standard coordinates \[ ([S_{\Phi},A_{\Phi}]v,v)_{L^2(\R \times \mathbb S^{n-1})}\ge 4(1+\tau)(e^{2t}k^2\tilde q v,v)_{L^2(\R \times \mathbb S^{n-1})}=4(1+\tau)k^2\int_{\R^n}q|x|^2e^{2\phi}u^2_2 \,dx. \] Therefore, we conclude \begin{equation} \label{eq:1} 4(1+\tau)k^2\int_{\R^n}q|x|^2e^{2\phi}u^2_2 \,dx\leq \|D\max\{\tau^2, {k^2|x|^2 q}\} e^\phi u_1\|_{L^2(\R^n)}^2. \end{equation} Now we seek to estimate $\|v\|_{L^2(\R\times\mathbb S^{n-1})}$ in terms of $\|A_{\Phi}v\|_{L^2(\R \times \mathbb{S}^{n-1})}=2(1+\tau)\|\p_tv\|_{L^2(\R \times \mathbb{S}^{n-1})}$. Using the compact support of $\tilde{u}$, we can apply the Poincar\'e inequality to the function $e^\Phi \tilde u(\,\cdot\,, \theta)$ for almost every $\theta\in \Sn$ as follows \begin{align*} \|v\|_{L^2(\R \times \mathbb{S}^{n-1})} &\leq \|e^\Phi \tilde u\|_{L^2(\R \times \mathbb{S}^{n-1})}+\|e^\Phi \tilde u_1\|_{L^2(\R \times \mathbb{S}^{n-1})} \\&\leq C\big(\|\p_t(e^\Phi \tilde u)\|_{L^2(\R \times \mathbb{S}^{n-1})}+\|e^\Phi \tilde u_1\|_{L^2(\R \times \mathbb{S}^{n-1})}\big) \\&\leq C\big(\|\p_tv\|_{L^2(\R \times \mathbb{S}^{n-1})}+\|\p_t(e^\Phi \tilde u_1)\|_{L^2(\R \times \mathbb{S}^{n-1})}+\|e^\Phi \tilde u_1\|_{L^2(\R \times \mathbb{S}^{n-1})}\big) \\&\leq C\big(\|\p_tv\|_{L^2(\R \times \mathbb{S}^{n-1})}+\|e^\Phi \p_t\tilde u_1\|_{L^2(\R \times \mathbb{S}^{n-1})}+\tau\|e^\Phi \tilde u_1\|_{L^2(\R \times \mathbb{S}^{n-1})}\big), \end{align*} where $C$ depends on $n$ (and the support of $\tilde u$). Multiplying the whole inequality by $(1+\tau)$ we can write \begin{align*} (1+\tau)\|v\|_{L^2(\R \times \mathbb{S}^{n-1})} &\leq C\big(\|A_\Phi v\|_{L^2(\R \times \mathbb{S}^{n-1})}+\tau\|e^\Phi \p_t\tilde u_1\|_{L^2(\R \times \mathbb{S}^{n-1})}+\tau^2\|e^\Phi \tilde u_1\|_{L^2(\R \times \mathbb{S}^{n-1})}\big). \end{align*} Returning to Euclidean coordinates yields \begin{align} \begin{split} \label{eq:2} (1+\tau)\| e^{\phi} u_2 \|_{L^2(\R^n)} &\leq C \big(\|D\max\{\tau^2, {k^2|x|^2 q}\} e^\phi u_1\|_{L^2(\R^n)} + \tau^2 \|e^{\phi} u_1\|_{L^2(\R^n)}\\ &\qquad \quad +\tau\|e^{\phi} |x| \nabla u_1 \|_{L^2(\R^n)}\big). \end{split} \end{align} Lastly, we deduce a gradient bound on $\tilde u_2$ using the symmetric part of the operator. Testing $S_\Phi v$ with $v$ and integrating by parts we obtain \begin{align*} (S_\Phi v,v)_{L^2(\R\times \mathbb{S}^{n-1})} &=-\|\p_t v\|_{L^2(\R\times \mathbb{S}^{n-1})}^2 -\|\nabla_{\mathbb S^{n-1}}v\|_{L^2(\R\times \mathbb{S}^{n-1})}^2 \\ &\quad+k^2(e^{2t}\tilde q v,v)_{L^2(\R\times \mathbb{S}^{n-1})} +\left((1+\tau)^2-c_n\right)\|v\|^2_{L^2(\R\times \mathbb{S}^{n-1})}. \end{align*} Therefore, \begin{align*} \|\p_t v\|_{L^2(\R\times \mathbb{S}^{n-1})}^2 +\|\nabla_{\mathbb S^{n-1}}v\|_{L^2(\R\times \mathbb{S}^{n-1})}^2 &\leq \|S_\Phi v\|_{L^2(\R\times \mathbb{S}^{n-1})}^2 +k^2(e^{2t}\tilde q v,v)_{L^2(\R\times \mathbb{S}^{n-1})}\\ & \quad +2(1+\tau)^2\|v\|^2_{L^2(\R\times \mathbb{S}^{n-1})}. 
\end{align*} Returning to the original coordinates and using \eqref{eq:1}-\eqref{eq:2} to estimate the right hand side yields \begin{align}\label{eq:3} \begin{split} \|e^\phi|x|\nabla u_2\|_{L^2(\R^n)} &\leq C \big(\|D\max\{\tau^2, {k^2|x|^2 q}\} e^\phi u_1\|_{L^2(\R^n)} + \tau^2 \|e^{\phi} u_1\|_{L^2(\R^n)}\\ &\qquad \quad + \tau\|e^{\phi} |x| \nabla u_1 \|_{L^2(\R^n)}\big).\end{split} \end{align} Finally, combining \eqref{eq:1},\eqref{eq:2} and\eqref{eq:3} with \eqref{eq:estimateu1i} and \eqref{eq:estimateu2RHS}, we obtain for $\tau>1$ \begin{align}\label{eq:estimateu2} \begin{split} &\tau \| e^{\phi} u_2\|_{L^2(\R^n)} + \|e^{\phi} |x| \nabla u_2\|_{L^2(\R^n)} + \tau^{\text{\normalfont\sfrac{1}{2}}} k \| {q^{\text{\normalfont\sfrac{1}{2}}}|x|}e^{ \phi} u_2\|_{L^2(\R^n)} \\&\quad\leq C\Big(\|e^\phi|x|^2f\|_{L^2(\R^n)}+ \max\{\tau,k\}\sum_{j=1}^n\|e^\phi |x|F^j\|_{L^2(\R^n)}\Big). \end{split} \end{align} \emph{Step 3: Conclusion.} The final estimate \eqref{eq:Carleman} follows by an application of the triangle inequality and the estimates \eqref{eq:estimateu1i} and \eqref{eq:estimateu2} for $u_1$ and $u_2$, respectively. \end{proof} Next, using the previous Carleman estimate, we deduce a quantitative unique continuation result which does not suffer from the losses in $k$. \begin{thm} \label{prop:UCP_improved_ink} Let $V$ and $q$ be as in \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq2]{ii'})} in $\Omega=B_2 \backslash \overline{B_1}$. Let $u\in H^1(\Omega)$ be a solution to \begin{align}\label{eq:eqinOm1} \begin{split} (\Delta+k^2q+V)u&=0 \, \mbox{ in } \, \Omega, \end{split} \end{align} and let $M$, $\eta$ be such that \begin{align*} \|u\|_{H^1(\Omega)} &\leq M,\\ \|u\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\p B_2)}+\|\p_{\nu} u \|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\p B_2)}&\leq \eta. \end{align*} Assume further that $0<{k^3} \eta \leq M$. Then there exist a parameter $\mu\in(0,1)$ and a constant $C>1$ depending on $n, \|V\|_{L^\infty(\Omega)}, \kappa$ and $\|q\|_{C^1(\Omega)}$ (but not on $k$) such that \begin{align}\label{eq:QUCPboundary_improv} \|u\|_{L^2(\Omega)}\leq C \left|\log\left(\frac{k^3\eta}{M}\right)\right|^{-\mu}M. \end{align} In addition, if $G=B_2 \backslash B_{1+\delta}$ for some $\delta \in (0,1)$, then there exist a parameter $\nu\in(0,1)$ and a constant $C>1$ (depending on $n, \delta, \|V\|_{L^\infty(\Omega)}, \kappa$ and $\|q\|_{C^1(\Omega)}$ but not on $k$) such that \begin{align}\label{eq:QUCPinterior_improv} \|u\|_{L^2(G)}\leq C \left({k^3\eta}\right)^\nu M^{1-\nu}. \end{align} \end{thm} We will prove Theorem ~\ref{prop:UCP_improved_ink} in several steps. First we prove a corresponding propagation of smallness result from the interior for divergence form equations. Combined with an extension argument this will then lead to the desired claim of Theorem ~\ref{prop:UCP_improved_ink}. \begin{prop} \label{prop:div_form_rhs} Let $V$ and $q$ be as in \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq2]{ii'})} in $\Omega=B_2 \backslash \overline{B_1}$. Let $u \in H^1(\Omega)$ with $\supp u\subset B_2 \backslash B_1$ be a solution to \begin{align*} (\D + k^2 q + V ) u & = f+\sum_{j=1}^n\p_j F^j \; \mbox{ in } \; \Omega, \end{align*} where $f,\, F^j\in L^2(\Omega)$ with $\supp f,\, \supp F^j\subset B_2\backslash B_1$. Let $M, \eta>0$ be such that \begin{align*} \|u\|_{H^1(\Omega)} &\leq M,\\ \|f\|_{L^2(\Omega)}+\sum_{j=1}^n \|F^j\|_{L^2(\Omega)} &\leq \eta. \end{align*} Assume further that $0<k \eta \leq M$. 
Then there exist $\mu \in (0,1)$ and $C>1$ (depending on $n$, $\|V\|_{L^\infty(\Omega)}$ and $\kappa$) such that \begin{align}\label{eq:QUCPboundary_impr} \|u\|_{L^2(\Omega)} \leq C \left|\log\left(\frac{k\eta}{M}\right)\right|^{-\mu}M. \end{align} In addition, if $G=B_2 \backslash \overline{B_{1+\delta}}$ for some $\delta \in (0,1)$, then there exist a parameter $\nu\in(0,1)$ and a constant $C>1$ (depending on $n, \delta$, $\|V\|_{L^\infty(\Omega)}$ and $\kappa$) such that \begin{align}\label{eq:QUCPinterior_impr} \|u\|_{L^2(G)}\leq C (k\eta)^\nu M^{1-\nu}. \end{align} \end{prop} \begin{rmk} We emphasize that unlike in Proposition ~\ref{prop:Carl_eigenval}, in Proposition ~\ref{prop:div_form_rhs}, we are not assuming that $u$ and the functions $f$ and $F^j$ vanish on the interior boundary $\p B_1$. \end{rmk} \begin{proof}[Proof of Proposition ~\ref{prop:div_form_rhs}] We use the Carleman inequality from Proposition ~\ref{prop:Carl_eigenval} in combination with the Sobolev embedding theorem and an optimization argument. We argue in two steps, first proving \eqref{eq:QUCPinterior_impr} and then using this to prove \eqref{eq:QUCPboundary_impr}.\\ \emph{Step 1: Proof of \eqref{eq:QUCPinterior_impr}}. We apply the Carleman estimate from Proposition ~\ref{prop:Carl_eigenval} to the function $w:= u \chi$, where $\chi $ is a smooth cut-off function which is equal to one on $B_2 \backslash B_{1+\sfrac{\delta}{2}}$, vanishes on $B_{1+\sfrac{\delta}{4}}$ and satisfies $|\nabla\chi|\leq C\delta^{-1}$, $|\Delta\chi|\leq C\delta^{-2}$. The function $w$ is thus compactly supported in $B_2\backslash \overline{B_1}$ and solves the equation \begin{align}\label{eq:eqw} (\D + k^2 q) w & = -Vw +g + \sum_{j=1}^n\p_j G^j \quad \mbox{ in } B_2 \backslash \overline{B_1}, \end{align} where \begin{align*} g&=u \D \chi + 2 \nabla u \cdot \nabla \chi+\chi f - \sum_{j=1}^n(\p_j \chi)F^j, \quad G^j=\chi F^j. \end{align*} We now seek to apply Proposition \ref{prop:Carl_eigenval}. To this end, we first extend $q$ to $\R^n$ in such a way that \textnormal{(\hyperref[assq2]{ii'})} remains true. For this, we notice that \eqref{eq:eqw} only depends on the values of $q$ in some domain $\Omega'=B_{2-\epsilon}\backslash B_{1+\sfrac{\delta}{4}}\Subset\Omega$, where $\epsilon\in(0,\frac 1 2)$ depends on the support of $u$. Let $\xi$ be a radial smooth function supported in $\Omega$ and such that $\xi=1$ in $\Omega'$, $\p_r\xi\geq0$ in the bounded component of $\Omega\backslash\Omega'$ and $\p_r\xi\leq0$ otherwise. Now we consider the function $\tilde q=\xi q + (1-\xi)(\kappa^{-1}\mathbb{1}_{B_{\sfrac 3 2}}+\kappa\mathbb{1}_{\R^n\backslash B_{\sfrac{3}{2}}})$, which coincides with $q$ in $\Omega'$. It is clear that $\tilde q\in C^1(\R^n)$ and $\kappa^{-1}\leq \tilde q\leq \kappa$ in $\R^n$. Finally, since $\nabla q\cdot x\geq 0$ in $\Omega$, $\p_r\xi(q-\kappa^{-1})\geq 0$ in $(\Omega\backslash\Omega')\cap B_{\sfrac{3}{2}}$ and $\p_r\xi(q-\kappa)\geq 0$ in $(\Omega\backslash\Omega')\cap (\R^n\backslash B_{\sfrac{3}{2}})$ we deduce $\nabla \tilde q\cdot x\geq 0$ in $\R^n$. Therefore, invoking Proposition ~\ref{prop:Carl_eigenval}, we obtain for $\tau>\tau_0$ \begin{align}\label{eq:proofcom} \tau \|e^{\phi} w\|_{L^2(\R^n)} \leq C \Big(\|e^\phi |x|^2 Vw\|_{L^2(\R^n)}+\|e^{\phi}|x|^2 g\|_{L^2(\R^n)} + \max\{\tau, k\}\sum_{j=1}^n \|e^{\phi} |x| G^j \|_{L^2(\R^n)} \Big). \end{align} Considering $\tau\geq C\|V\|_{L^\infty(\Omega)}$, we can absorb the first term on the right hand side of \eqref{eq:proofcom} into the left hand side.
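For concreteness, we note that this absorption only uses the support condition: since $\supp w\subset B_2$, we have $|x|\leq 2$ on the support of $w$ and hence, choosing for instance $\tau\geq 8C\|V\|_{L^\infty(\Omega)}$ with $C$ denoting the constant in \eqref{eq:proofcom},
\begin{align*}
C\|e^\phi |x|^2 Vw\|_{L^2(\R^n)}\leq 4C\|V\|_{L^\infty(\Omega)}\|e^{\phi} w\|_{L^2(\R^n)}\leq \frac{\tau}{2} \|e^{\phi} w\|_{L^2(\R^n)},
\end{align*}
which can indeed be absorbed into the term $\tau\|e^{\phi}w\|_{L^2(\R^n)}$ on the left hand side of \eqref{eq:proofcom}.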
Then, inserting the expressions for $w$, $g$, $G^j$ and $\phi$, we infer \begin{align*} \begin{split} \tau \||x|^{\tau} u\|_{L^2(B_2 \backslash \overline{B_{1+\delta}})} &\leq C\Big({\delta^{-2}}\||x|^{2+\tau} u\|_{L^2(B_{1+\sfrac{\delta}{2}}\backslash \overline{B_{1+\sfrac{\delta}{4}}})} +{\delta^{-1}} \||x|^{2+\tau} \nabla u\|_{L^2(B_{1+\sfrac{\delta}{2}}\backslash \overline{B_{1+\sfrac{\delta}{4}}})}\\ & \qquad\quad + \||x|^{2+\tau} f\|_{L^2(\Omega)} + (\delta^{-1}+\max\{\tau,k\})\sum_{j=1}^n\||x|^{1+\tau} F^j\|_{L^2(\Omega)}\Big). \end{split} \end{align*} Hence, \begin{align*} & \| u\|_{L^2(B_2 \backslash \overline{B_{1+\delta}})} \\ &\quad \leq C\delta^{-2} \left(\left(\frac{1+\sfrac{\delta}{2}}{1+\delta}\right)^{\tau+2} \| u\|_{H^1(B_{1+\sfrac{\delta}{2}}\backslash \overline{B_{1+\sfrac{\delta}{4}}})} + 4^{\tau} k\Big(\| f\|_{L^2(\Omega)} + \sum_{j=1}^n\| F^j\|_{L^2(\Omega)}\Big) \right)\\ &\quad \leq C{\delta^{-2}}\left(\left(\frac{1+\sfrac{\delta}{2}}{1+\delta}\right)^{\tau} \| u\|_{H^1(\Omega)} + 4^{\tau} k\Big(\| f\|_{L^2(\Omega)} +\sum_{j=1}^n \| F^j\|_{L^2(\Omega)}\Big) \right) \\ & \quad\leq C\delta^{-2} \left(\left(\frac{1+\sfrac{\delta}{2}}{1+\delta}\right)^{\tau} M + 4^{\tau} k\eta\right). \end{align*} Recalling that by assumption $k\eta \leq M$, we optimize the right hand side by choosing $\tau= \tau_1 + \tau_0+ C\|V\|_{L^\infty(\Omega)}$ with $\tau_1>0$ such that \begin{align} \label{eq:optimization1} \left(\frac{1+\delta/2}{1+\delta}\right)^{\tau_1} M \sim 4^{\tau_1} k\eta. \end{align} This then implies the desired result with $\nu=1-\frac{\log 4}{\log 4+\log\frac{1+\delta}{1+\sfrac{\delta}{2}}}$.\\ \emph{Step 2: Proof of \eqref{eq:QUCPboundary_impr}.} We argue by making \eqref{eq:optimization1} more explicit. If $\delta\leq 1/2$ (which we can assume without loss of generality), then \begin{align*} \left(\frac{1+\delta/2}{1+\delta}\right)^{\tau} \leq \left(1-\frac{\delta}{3}\right)^{\tau}. \end{align*} Hence, in the optimization argument we obtain \begin{align*} \tau = \frac{1}{ \log\left(\frac{4}{1- \frac{\delta}{3} } \right) }\log\left( \frac{M}{k\eta} \right)+C\|V\|_{L^\infty(\Omega)}. \end{align*} As a consequence, \begin{align*} \|u\|_{L^2(B_2 \backslash \overline{B_{1+\delta}})} \leq C {\delta^{-2}}(k\eta)^{\alpha} M^{1-\alpha}, \end{align*} with $\alpha = 1-\frac{\log(2)}{\log(2)-\log(1-\frac{\delta}{3})}\geq c \delta$ and $C$ depending on $\|V\|_{L^\infty(\Omega)}$. We combine this with an application of H\"older's inequality and Sobolev embedding close to the boundary: \begin{align*} \|u\|_{L^2(B_{1+\delta}\backslash \overline{B_{1}})} \leq C \delta^{\frac{1}{n}} \|u\|_{L^{\frac{2n}{n-2}}(B_{1+\delta}\backslash \overline{B_{1}})} \leq C \delta^{\frac{1}{n}} \|u\|_{H^1(\Omega)}\leq C\delta^{\frac 1n}M. \end{align*} The combination of the two estimates then yields \begin{align*} \|u\|_{L^2(B_2 \backslash \overline{B_1})} \leq C \left(\delta^{\frac{1}{n}} + \delta^{-2}\Big(\frac{k\eta}{M}\Big)^{c\delta}\right)M. \end{align*} We now choose $\delta = c\left(\log \frac{M}{k\eta} \right)^{-\beta}$ for some $\beta\in(0,1)$. This implies the claim with $\mu=\frac{\beta}{n}$ (and a corresponding constant $C>0$ which depends on $\beta$). \end{proof} With Proposition ~\ref{prop:div_form_rhs} at our disposal, we next address the proof of Theorem ~\ref{prop:UCP_improved_ink}. \begin{proof}[Proof of Theorem ~\ref{prop:UCP_improved_ink}] We seek to reduce the problem with Cauchy data to a problem with a divergence form $H^{-1}$ right hand side.
To this end, we argue by an extension argument. We note that by definition of the $H^{\text{\normalfont\sfrac{1}{2}}}(\partial B_2)$ norm there exists a function $v \in H^{1}(B_3 \backslash B_2)$ such that \begin{align*} \|v\|_{H^1(B_3 \backslash B_2)} \leq C \|u|_{\partial B_2}\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\partial B_2)}. \end{align*} Let now $\chi \in C^{\infty}( B_3 \backslash \overline{B_2})$ be a smooth cut-off function with $\chi|_{\p B_2}=1$ and $\chi|_{\p B_3}=0$. We then define \begin{align}\label{eq:tildeu} \tilde{u}:= \left\{ \begin{array}{ll} u \;\;\mbox{ in } B_2 \backslash {B_1},\\ \chi v \mbox{ in } B_3 \backslash \overline{B_2}. \end{array} \right. \end{align} This function then is an element of $H^1(B_3 \backslash \overline{B_1})$ with $\supp \tilde u\subset B_3\backslash B_1$. In addition, we claim that it is a weak solution to \begin{align}\label{eq:eqFjbound} (\D + k^2 q+ V) \tilde{u} &= f+\sum_{j=1}^n\p_j F^j \mbox{ in } B_3 \backslash {B_1}, \end{align} where $f, \, F^j \in L^2(B_3 \backslash {B_1})$ are functions supported in $B_3\backslash B_1$ and satisfying the bounds \begin{align}\label{eq:estimatesFfbound} \|f\|_{L^2(B_3 \backslash B_1)}+\sum_{j=1}^n\|F^j\|_{L^2(B_3 \backslash B_1)} \leq C k^2 \big(\|u|_{\partial B_2}\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\partial B_2)} + \|\p_{\nu} u\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\partial B_{2})}\big). \end{align} Indeed, by the weak formulation of \eqref{eq:eqFjbound}, for $\varphi \in C_c^{\infty}(B_3 \backslash \overline{B_1})$, \begin{align*} \int\limits_{B_3 \backslash {B_1}} \big(\nabla \tilde{u} \cdot \nabla \varphi + k^2 q \tilde{u} \varphi + V \tilde{u} \varphi\big) dx &= \int\limits_{B_2 \backslash {B_1}}\big( \nabla u \cdot \nabla \varphi + k^2 q u \varphi + V u \varphi \big)dx \\ & \quad + \int\limits_{B_3 \backslash \overline{B_2}} \big(\nabla ({\chi}v) \cdot \nabla \varphi + k^2 q {\chi}v\varphi + V {\chi}v \varphi \big)dx\\ & = - \int\limits_{\partial B_2} \p_{\nu} u \varphi dx + \int\limits_{B_3 \backslash \overline{B_2}} \big(\nabla ({\chi}v) \cdot \nabla \varphi + k^2 q {\chi}v\varphi + V {\chi}v \varphi\big) dx. \end{align*} Note that the mapping \begin{align*} \Psi: \varphi \mapsto \int\limits_{\partial B_2} \p_{\nu} u \varphi dx \end{align*} is bounded as an element in $H^{-\text{\normalfont\sfrac{1}{2}}}(\partial B_2)$ and also as an element in $H^{-1}(B_2 \backslash \overline{B_1})$. Indeed, for $\varphi\in H^1(B_2 \backslash \overline{B_1})$ we have $$|\Psi(\varphi)|\leq \|\p_{\nu} u\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\p B_2)}\|\varphi\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\p B_2)} \leq C\|\p_{\nu} u\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\p B_2)}\|\varphi\|_{H^{1}(B_2\backslash \overline{B_1})}. $$ Therefore $\Psi\in (H^1(B_2 \backslash \overline{B_1}))^*\subset H^{-1}(B_2 \backslash \overline{B_1})$. 
Then, it admits a representation $\Psi=g+\sum_{j=1}^n\p_j G^j$ with $g, G^j\in L^2(B_2 \backslash \overline{B_1}) $ and $$\|g\|_{L^2(B_2 \backslash {B_1})}+\sum_{j=1}^n\|G^j\|_{L^2(B_2 \backslash {B_1})}=\|\Psi\|_{H^{-1}(B_2 \backslash \overline{B_1})}\leq C\|\p_{\nu} u \|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\p B_2)}.$$ As a consequence, we obtain \eqref{eq:eqFjbound} with \begin{align*} f=\begin{cases} -g &\mbox{ in }B_2\backslash B_1,\\ k^2 q \chi v+V\chi v &\mbox{ in }B_3\backslash \overline{B_2}, \end{cases}, \qquad F^j=\begin{cases} -G^j &\mbox{ in }B_2\backslash B_1,\\ -\p_j(\chi v) &\mbox{ in }B_3\backslash \overline{B_2}, \end{cases} \end{align*} and \eqref{eq:estimatesFfbound} holds. The result of Proposition ~\ref{prop:div_form_rhs} (rescaled to $B_3\backslash B_1$) is therefore applicable with \begin{align}\label{eq:Metatilde} \begin{split} \tilde \eta&=Ck^2\eta\geq \|f\|_{L^2(B_3 \backslash B_1)}+\sum_{j=1}^n\|F^j\|_{L^2(B_3 \backslash B_1)},\\ \tilde M&=CM\geq M+C\eta\geq \|\tilde u\|_{H^1(B_3\backslash B_1)}. \end{split} \end{align} This yields the desired result. \end{proof} \subsection{Improved Runge approximation result} \label{sec:Runge_improved} This section contains the proof of Theorem ~\ref{thm:Rungeinterior_improv}. We start by upgrading the interior quantitative estimate from Theorem ~\ref{prop:UCP_improved_ink} similarly as in Proposition ~\ref{prop:QUCH1}. \begin{prop}\label{prop:QUCH1_improv} Let $V$ and $q$ be as in \textnormal{(\hyperref[assV]{i})}-\textnormal{(\hyperref[assq2]{ii'})} in $\Omega_2=B_2 \backslash \overline{B_{\text{\normalfont\sfrac{1}{2}}}}$ and let $\Omega_1=B_1 \backslash \overline{B_{\text{\normalfont\sfrac{1}{2}}}}$. Let $u\in H^1(\Omega_2)$ be the unique solution to \begin{align*} \begin{split} \Delta u+k^2qu+Vu&=v\mathbb{1}_{\Omega_1} \quad \mbox{ in } \Omega_2,\\ u&=0 \;\qquad\mbox{ on } \p\Omega_2, \end{split} \end{align*} with $v\in L^2(\Omega_1)$ and $k\geq 1$ satisfying \textnormal{(\hyperref[assSpec]{a1})}. Let $G=B_2\backslash \overline{B_{1+\delta}}$ for some $\delta\in (0,1)$. Then there exist parameters $\nu_0\in(0,1)$, $s_0\in[3,n+1]$ and a constant $C>1$ (depending on $n, \delta, \|V\|_{L^\infty(\Omega_2)}, \kappa$ and $\|q\|_{C^1(\Omega_2)}$) such that \begin{align}\label{eq:QUCPgradinterior_improv} \|u\|_{H^1(G)}\leq Ck^{s_0}\|\p_\nu u\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\p B_2)}^{\nu_0}\|v\|_{L^2(\Omega_1)}^{1-\nu_0}. \end{align} \end{prop} \begin{proof} We start by estimating $\|u\|_{H^1(\Omega_2)}$ in terms of $\|v\|_{L^2(\Omega_1)}$ as in Proposition ~\ref{prop:QUCH1}. By Lemma ~\ref{lem:apriori} and \textnormal{(\hyperref[assSpec]{a1})}, there is $C>1$ such that \begin{align*} \|u\|_{H^1(\Omega_2)}\leq Ck^{n+1}\|v\|_{L^2(\Omega_1)}. \end{align*} Notice that then $u$ satisfies the assumptions in Theorem ~\ref{prop:UCP_improved_ink}, so \eqref{eq:QUCPinterior_impr} holds with \begin{align*} M=Ck^{n+1}\|v\|_{L^2(\Omega_1)},\qquad \eta=\|\p_\nu u\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\p B_2)}. \end{align*} Let us now show that the bound \eqref{eq:QUCPinterior_improv} can be upgraded to an estimate for the $H^1$ norm. This is inherited from \eqref{eq:QUCPinterior_impr}. Indeed, we argue as in Step 1 of the proof of Proposition ~\ref{prop:div_form_rhs}, but now including into the left hand side of \eqref{eq:proofcom} the gradient term $\|e^\phi|x|\nabla w\|_{L^2(\R^n)}$ coming from Proposition ~\ref{prop:Carl_eigenval}. 
Therefore, if $k\tilde \eta\leq\tilde M$, \begin{align*} \|\tilde u\|_{H^1(G)}\leq C\left(\frac{k\tilde\eta}{\tilde M}\right)^{\nu}\tilde M, \end{align*} where $\nu$ and $C$ depend in particular on $\delta$. Here $\tilde u$ is given by \eqref{eq:tildeu} and $\tilde M$ and $\tilde \eta$ are connected with $M$ and $\eta$ according to \eqref{eq:Metatilde}. Following the proof of Theorem ~\ref{prop:UCP_improved_ink}, we then obtain \begin{align*} \|u\|_{H^1(G)} \leq C\left(\frac{k^3\eta}{M}\right)^{\nu_0}M, \end{align*} if $k^3\eta\leq M$. Otherwise, if $k^3\eta\geq M$, the estimate is immediate. Therefore the final bound \eqref{eq:QUCPgradinterior_improv} holds with $s_0=3\nu_0+(n+1)(1-\nu_0)$. \end{proof} With Proposition \ref{prop:QUCH1_improv} we deduce the proof of Theorem \ref{thm:Rungeinterior_improv} similarly as in the analogous non-convex settings: \begin{proof}[Proof of Theorem ~\ref{thm:Rungeinterior_improv}] The proof follows the proof of Theorem ~\ref{thm:Rungeinterior} in Section ~\ref{sec:Runge} with $\Gamma=\p B_2$ in order to construct $u_\alpha, v_\alpha$ and $w_\alpha$. The difference appears at the time of estimating $\|w_\alpha\|$. Applying the improved estimate \eqref{eq:QUCPgradinterior_improv} instead of \eqref{eq:QUCPgradinterior}, we obtain \begin{align*} \|u_\alpha-u\|_{L^2(\Omega_1)}\leq k^{s_0} \left(\frac{\|\p_\nu w_\alpha\|_{H^{-\text{\normalfont\sfrac{1}{2}}}(\p B_2)}}{\|v_\alpha\|_{L^2(\Omega_1)}}\right)^{\nu_0}\|\tilde v\|_{H^1(\tilde\Omega_1)} \leq Ck^{s_0}\alpha^{\nu_0}\|\tilde v\|_{H^1(\tilde\Omega_1)}. \end{align*} Choosing $\alpha$ such that $Ck^{s_0}\alpha^{\nu_0}=\epsilon$, we finally deduce \begin{align*} \|u_\alpha\|_{H^{\text{\normalfont\sfrac{1}{2}}}(\p B_2)}\leq \frac{1}{\alpha} \|\tilde v\|_{L^2(\Omega_1)}=Ck^{\frac {s_0} {\nu_0}}\epsilon^{-\frac 1 {\nu_0}}\|\tilde v\|_{L^2(\Omega_1)}. \end{align*} \end{proof} \section*{Acknowledgements} A.R. was supported by the Deutsche Forschungsgemeinschaft (DFG, German ResearchFoundation) under Germany’s Excellence Strategy EXC-2181/1 - 390900948 (the Heidelberg STRUCTURES Cluster of Excellence). W.Z. was supported by the European Research Council (ERC) under the Grant Agreement No 801867. \bibliographystyle{alpha}
\section{Proof of Theorem 1} \label{appsec: thm1_proof} \input{sections/transportation-proofs} \section{Full Proof of Theorem 2} \label{appsec: thm2_proof} For a closed convex ball $\mathcal{B} \subseteq \mathbb{R}^d$, define the cone $\mathcal{C}_{\mathcal{B}} \subseteq \mathbb{R}^{d+1}$, $\mathcal{C}_{\mathcal{B}} = \{(z,\alpha) : \alpha \geq 0, z \in \alpha \mathcal{B}\}$. Observe that $\mathcal{C}_{\mathcal{B}}$ is convex and for $c \geq 0$, $(z,\alpha) \in \mathcal{C}_{\mathcal{B}}$ implies $(cz,c\alpha) \in \mathcal{C}_{\mathcal{B}}$. Thus $\mathcal{C}_{\mathcal{B}}$ is indeed a cone. From this, define the norm $\|z\|_{\mathcal{B}} = \min \{\alpha : (z,\alpha) \in \mathcal{C}_{\mathcal{B}}\}$. Thus $\mathcal{C}_{\mathcal{B}} = \{(z,\alpha) : \|z\|_{\mathcal{B}} \leq \alpha\}$. For a cone $\mathcal{C} \subseteq \mathbb{R}^d$, the definition of the dual cone is $\mathcal{C}^* = \{y \in \mathbb{R}^d : y^{\top}x \geq 0 \;\forall x \in \mathcal{C}\}$. A pair $(w,\gamma) \in \mathcal{C}_{\mathcal{B}}^*$ if and only if $w^{\top}z + \alpha \gamma \geq 0$ for all $(z,\alpha) \in \mathcal{C}_{\mathcal{B}}$. It is enough to check the pairs $(z, \|z\|_{\mathcal{B}})$, which gives the condition $-w^{\top}z \leq \|z\| \gamma$. This is very close to the ordinary definition of the dual norm. However, when $\mathcal{B}$ is not symmetric, the minus sign matters. If $\mathbf{0} \in \mathcal{B}$, then $(\mathbf{0},1) \in \mathcal{C}_{\mathcal{B}}$ and the constraint $\gamma \geq 0$ applies to $\mathcal{C}_{\mathcal{B}}^*$. However, if $\mathbf{0} \not\in \mathcal{B}$, $\mathcal{C}_{\mathcal{B}}^*$ with contain points with negative $\gamma$ components. In this case, there is no interpretation as a norm. \bcomment{ The dual ball is $\mathcal{B}_* = \{y \in \mathbb{R}^d : x^{\top} y \leq 1 \;\forall x \in \mathcal{B}\}$. The dual cone $\mathcal{C}_{\mathcal{B}*}$ comes from the dual norm $\|\cdot\|_{\mathcal{B}*}$. For $(z,\alpha) \in \mathcal{C}_{\mathcal{B}}$ and $(w,\gamma) \in \mathcal{C}_{\mathcal{B} *}$, $w^{\top}z + \alpha \gamma \geq 0$. This is because $-w^{\top}z \leq \|{-w}\|_{\mathcal{B}*}\|z\|_{\mathcal{B}} = \|w\|_{\mathcal{B}*}\|z\|_{\mathcal{B}} \leq \gamma \alpha$. Note that we are using the symmetry property of the norm and the ball here. } \subsection{Proof of Lemma 1} \label{appsubsec: lm1_proof} Consider the following convex program: \begin{align*} (z,\alpha,y,\beta) \in \mathbb{R}^{d+1+d+1}\\ \min a \alpha + b \beta&\\ (z,\alpha,y,\beta) &\in \mathcal{C}_{\mathcal{B}} \times \mathcal{C}_{\Sigma}\\ z + y &= \mu \end{align*} The cone constraint is equivalent to $\|z\|_{\mathcal{B}} \leq \alpha$ and $\|y\|_{\Sigma} \leq \beta$. The equality condition is equivalent to $\mu - z - y \in \{\mathbf{0}\}$, the trivial cone. The Lagrangian is \begin{align*} L &= a \alpha + b \beta - w^{\top}(z+y-\mu)\\ &= \begin{pmatrix} \mathbf{0}^{\top} & a & \mathbf{0}^{\top} & b \end{pmatrix} \begin{pmatrix} z \\ \alpha \\ y \\ \beta \end{pmatrix} - w^{\top} \begin{pmatrix} I & \mathbf{0} & I & \mathbf{0} \end{pmatrix} \begin{pmatrix} z \\ \alpha \\ y \\ \beta \end{pmatrix} + w^{\top} \mu \end{align*} The dual is \begin{align*} w \in \mathbb{R}^{d}\\ \max \mu^Tw\\ (-w,a,-w,b) &\in \mathcal{C}_{\mathcal{B}}^* \times \mathcal{C}_{\Sigma}^*\\ \end{align*} The cone constraint on $w$ is trivial because the dual of $\{\mathbf{0}\}$ is all of $\mathbb{R}^d$. 
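As a quick numerical sanity check of this primal-dual pair, both programs can be solved for a concrete symmetric choice of the balls, in which case the sign issue discussed above disappears and the dual constraints reduce to ordinary dual-norm bounds. The following is only a minimal sketch: it assumes the \texttt{cvxpy} package, takes $\mathcal{B}$ to be the $\ell_\infty$ unit ball and $\Sigma = I$, and all variable names are illustrative rather than tied to any released code.
\begin{verbatim}
# Sanity-check sketch: solve the primal and dual cone programs numerically.
# Assumes the cvxpy package; B = l_inf unit ball, Sigma = I (both symmetric).
import cvxpy as cp
import numpy as np

d = 5
rng = np.random.default_rng(0)
mu = rng.standard_normal(d)
a, b = 1.0, 1.0

# Primal: min a*alpha + b*beta  s.t.  ||z||_inf <= alpha, ||y||_2 <= beta, z + y = mu
z, y = cp.Variable(d), cp.Variable(d)
alpha, beta = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
primal = cp.Problem(cp.Minimize(a * alpha + b * beta),
                    [cp.norm(z, "inf") <= alpha,
                     cp.norm(y, 2) <= beta,
                     z + y == mu])

# Dual: max mu^T w  s.t.  ||w||_1 <= a, ||w||_2 <= b  (dual norms of the two balls)
w = cp.Variable(d)
dual = cp.Problem(cp.Maximize(mu @ w),
                  [cp.norm(w, 1) <= a, cp.norm(w, 2) <= b])

print(primal.solve(), dual.solve())  # strong duality: the two values coincide
\end{verbatim}
For an asymmetric $\mathcal{B}$ the dual constraints involve $-w$ as derived above, which is why the general statement is phrased in terms of the cones rather than in terms of norms.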
If we change the objective of the first program to use a hard constraint on $\alpha$ instead of including it in the objective, the new primal is \begin{align*} (z,\alpha,y,\beta) \in \mathbb{R}^{d+1+d+1}\\ \min b \beta&\\ (z,\alpha,y,\beta) &\in \mathcal{C}_{\mathcal{B}} \times \mathcal{C}_{\Sigma}\\ z + y &= \mu \\ \alpha &\leq \alpha' \end{align*} the new Lagrangian is \[ L = b \beta - w^{\top}(z+y-\mu) - \eta(\alpha' - \alpha). \] The new dual is \begin{align*} (w,\eta) \in \mathbb{R}^{d+1}\\ \max \mu^Tw - \alpha' \eta\\ \eta &\geq 0\\ (-w,\eta,-w,b) &\in \mathcal{C}_{\mathcal{B}}^* \times \mathcal{C}_{\Sigma}^*. \end{align*} Rewriting without any cone notation, combining $\alpha$ with $\alpha'$, and specializing to $b=1$, we have \begin{align*} (z,y,\beta) \in \mathbb{R}^{d+d+1}\\ \min \beta&\\ \|z\|_{\mathcal{B}} &\leq \alpha\\ \|y\|_{\Sigma} &\leq \beta\\ z + y &= \mu \end{align*} and \begin{align*} (w,\eta) \in \mathbb{R}^{d+1}\\ \max \mu^Tw - \alpha \eta\\ \eta &\geq 0\\ \|{-w}\|_{\mathcal{B}}^* &\leq \eta\\ \|{-w}\|_{\Sigma}^* &\leq 1 \end{align*} From complementary slackness we have $-w^{\top}z+\eta\alpha = 0$ and $-w^{\top}y+b\beta = 0$. From the constraints, we have $\|z\|_{\mathcal{B}} \leq \alpha$, $\|y\|_{\mathcal{B}} \leq \beta$, $\|{-w}\|_{\mathcal{B}}^* \leq \eta$, and $\|{-w}\|_{\Sigma}^* \leq b$. We have $w^{\top}z \leq \|w\|_{\mathcal{B}}^* \|z\|_{\mathcal{B}}$ and $w^{\top}y \leq \|w\|_{\Sigma}^* \|y\|_{\Sigma}$. Combining these, all six inequalities are actually equalities. \bcomment{ \begin{lemma} Let $\lambda = \lambda^*(\alpha)$ and let $y$ and $z$ be solutions to $\mu = \alpha z + \lambda y$, $\|y\|_{\Sigma} \leq 1$, $\|z\|_{\mathcal{B}} \leq 1$. Then $\|y\|_{\Sigma} = 1$, $\|z\|_{\mathcal{B}} = 1$, and there is some $w \in \mathbb{R}^d$ such that $\|w\|_{\mathcal{B} *} = \|w\|_{\Sigma *} = 1$, $w^{\top}z = \alpha$ and $w^{\top}y = \lambda$. \end{lemma} \begin{proof} Consider the optimization problem \begin{align} &\max w^{\top}\mu\\ \text{s.t. }& \alpha \|w\|_{\mathcal{B} *} \leq 1\\ &\lambda \|w\|_{\Sigma *} \leq 1. \label{dual} \end{align} This is the optimization of a linear objective over a closed convex set. Any optimizing $w$ will have the properties that we seek. We have $w^{\top}\mu = \alpha w^{\top} z + \lambda w^{\top} y \leq 1$ where the inequality follows from the constraints of \eqref{dual}. \begin{align*} \max w^{\top}\mu&\\ \text{s.t. 
} \|w\|_{\mathcal{B} *} &\leq a\\ \|w\|_{\Sigma *} &\leq b\\ a &\leq 1\\ b &\leq 1 \end{align*} \begin{align*} \min &\\ \|z\|_{\mathcal{B}} &\leq c\\ \|y\|_{\Sigma} &\leq d\\ \end{align*} \end{proof} } \subsection{Simplification of transportation problem} \label{appsubsec: transport_simple} From Theorem\ref{thm: transport}, \begin{align} C_N \circ C_N^{\top}(P_{X_1},P_{X_{-1}}) &\leq \inf_{z \in \beta \mathcal{B}} C_{TV}(\tilde{P}_{X_1}, \tilde{P}_{X_{-1}}), \\ & = \inf_{z \in \beta \mathcal{B}} \sup_A \tilde{P}_{X_1}(A) - \tilde{P}_{X_{-1}}(A),\\ & = \inf_{z \in \beta \mathcal{B}} \sup_{w} \mathbb{E}_{x \sim \mathcal{N}(\mu -z, \Sigma)} \left[ \bm{1} (w^\intercal x >0) \right] - \mathbb{E}_{x \sim \mathcal{N}(-\mu +z, \Sigma)} \left[ \bm{1} (w^\intercal x >0) \right] \\ &= \inf_{z \in \beta \mathcal{B}} \sup_{w} Q \left( \frac{w^\intercal z - w ^\intercal \mu}{\sqrt{w^\intercal \Sigma w}} \right) - Q \left( \frac{w ^\intercal \mu - w^\intercal z}{\sqrt{w^\intercal \Sigma w}} \right), \\ &= \inf_{z \in \beta \mathcal{B}} \sup_{w} 2 Q \left( \frac{w^\intercal z - w ^\intercal \mu}{\sqrt{w^\intercal \Sigma w}} \right) -1. \end{align} As before, since the $Q$-function decreases monotonically, its supremum is obtained by finding $\inf_{w} \frac{w^\intercal z - w ^\intercal \mu}{\sqrt{w^\intercal \Sigma w}}$. The infimum is attained at $w^*=2 \Sigma^{-1} (z -\mu)$ and its value is $\sqrt{(z-\mu)^\intercal \Sigma^{-1} (z - \mu)}$, which implies that \begin{align} C_N \circ C_N^{\top}(P_{X_1},P_{X_{-1}}) &\leq \inf_{z \in \beta \mathcal{B}} 2Q \left(\sqrt{(z-\mu)^\intercal \Sigma^{-1} (z - \mu)} \right) -1. \end{align} \subsection{Connection to the classification problem} We consider the linear classification function $f_{w}(x)= \operatorname{sgn} \left(w^\intercal x \right)$. \paragraph{Classification accuracy:} We define the classification problem with respect to the classification accuracy $\mathbb{E}_{(x,y) \sim P} \left[\bm{1}(f_{w}(x)=y)\right] = \mathbb{P}_{(x,y) \sim P} \left[ f_{w}(x)=y \right] $, which also equals the standard $0-1$ loss subtracted from $1$. The aim of the learner is to maximize the classification accuracy, i.e. the classification problem is to find $w^*$ which is the solution of $\max_{w} \mathbb{P}_{(x,y) \sim P} \left[ f_{w}(x)=y \right]$. \paragraph{Performance with adversary:} In the presence of an adversary, the classification problem becomes \begin{align*} &\max_{w} \mathbb{P}_{(x,y) \sim P} \left[ f_{w}(x + h(x, y, w) )=y \right] \\ & = \max_{w} \frac{1}{2} \mathbb{P}_{x \sim \mathcal{N}(\mu, \Sigma) } \left[ f_{w}(x + h(x, 1, w) )=1 \right ] + \frac{1}{2} \mathbb{P}_{x \sim \mathcal{N}(-\mu, \Sigma) } \left[ f_{w}(x + h(x, -1, w) )=-1 \right ]. \end{align*} We will focus on the case with $y=1$ for ease of exposition since the analysis is identical. The correct classification event is then \begin{align*} & f_{w}(x + h(x, 1, w) )=1, \\ \Rightarrow &w^\intercal (x+h(x, 1, w))>0,\\ \Rightarrow &w^\intercal x- w^\intercal \argmax_{z \in \beta \mathcal{B}} w^\intercal z>0, \\ \Rightarrow & w^\intercal x- \max_{z \in \beta \mathcal{B}} w^\intercal z>0 \\ \Rightarrow & w^\intercal x - \beta \|w\|_* >0, \end{align*} where $\|\cdot\|_*$ is the dual norm for the norm associated with $\mathcal{B}$. This gives us the classification accuracy for the case with $y=1$ as $\max_{w} \mathbb{E}_{x \sim \mathcal{N}(\mu, \Sigma) } \left[ \bm{1}(w^\intercal x - \beta \|w\|_* >0) \right] $. 
We now perform a few changes of variables to obtain an expression in terms of the standard normal distribution. For the first, we do $x' = x - \mu$, which gives us $ \max_{w} \mathbb{E}_{x' \sim \mathcal{N}(\bm{0}, \Sigma) } \left[ \bm{1}(w^\intercal x' +w^\intercal \mu - \beta \|w\|_* >0) \right] $. The second is $x'' = w^\intercal x'$, which results in $\max_{w} \mathbb{E}_{x'' \sim \mathcal{N}(0, \sigma^2) } \left[ \bm{1}(x'' +w^\intercal \mu - \beta \|w\|_* >0) \right] $, where $\sigma = \sqrt{w^ \intercal \Sigma w}$. Finally, we set $x''' = \frac{x''}{\sigma}$, leading to $\max_{w} \mathbb{E}_{x''' \sim \mathcal{N}(0, 1) } \left[ \bm{1}(x''' +\frac{w^\intercal \mu}{\sigma} - \frac{\beta \|w\|_*}{\sigma} >0) \right] $. The classification problem is then \begin{align} &\max_{w} \frac{1}{2} \mathbb{P}_{x \sim \mathcal{N}(\mu, \Sigma) } \left[ f_{w}(x + h(x, 1, w) )=1 \right ] + \frac{1}{2} \mathbb{P}_{x \sim \mathcal{N}(-\mu, \Sigma) } \left[ f_{w}(x + h(x, -1, w) )=-1 \right ],\\ = &\max_{w} Q \left( \frac{\beta \|w\|_* - w^\intercal \mu}{\sqrt{w^\intercal \Sigma w}} \right). \end{align} Since $Q(\cdot)$ is a monotonically decreasing function, it achieves its maximum at $w^* = \min_{w} \frac{\beta \|w\|_* - w^\intercal \mu}{\sqrt{w^\intercal \Sigma w}}$. This is the dual problem to the one described in the previous section. \section{Proof of Theorem 3} \label{appsec: thm3_proof} The proof of Theorem \ref{thm:bayes} is below. The assumptions and setup are in Section \ref{sec: sample_complexity} of the main paper. \begin{proof} Let $\hat{\mu} = \mathbb{E}[\mu|((X_1,Y_1),\ldots,(X_n,Y_n)]$. A straightforward computation using Bayes rule shows that $X_{n+1}\cdot Y_{n+1}|((X_1,Y_1),\ldots,(X_n,Y_n)) \sim \mathcal{N}(\hat{\mu},I)$. Thus after observing $n$ examples, the learner is faced with a hypothesis testing problem between two Gaussian distributions with known parameters. From Theorem \ref{thm:gauss-opt-transport}, the optimal loss for this problem is $Q(\alpha^*(\beta,\hat{\mu}))$. Furthermore, $\hat{\mu} = \frac{1}{m+n}\sum_{i=1}^n X_i$ and $\hat{\mu} \sim \mathcal{N}(\mathbf{0},\frac{n}{m(m+n)}I)$. \bcomment{ \begin{align*} x|\mu &\sim \mathcal{N}(\mu,I)\\ \overline{x}|\mu &\sim \mathcal{N}(\mu,\frac{1}{n}I)\\ \overline{x} &\sim \mathcal{N}(\mathbf{0},\frac{m+n}{mn}I)\\ \hat{\mu} &\sim \mathcal{N}(\mathbf{0},\frac{n}{m(m+n)}I)\\ \mu|\hat{\mu} &\sim \mathcal{N}(\hat{\mu},\frac{1}{m+n}I)\\ x|\hat{\mu} &\sim \mathcal{N}(\hat{\mu},I)\\ \end{align*} } Averaging over the training examples, we see that the expected loss is \[ \mathbb{E}[Q(\alpha^*(\beta,\hat{\mu}))] = \Pr[T \geq \alpha^*(\beta,\hat{\mu})] = \Pr[(\hat{\mu},T) \in S(1,\beta)] = \Pr[Y \in S(\rho,\rho\beta)] \] where $T \in \mathbb{R}$, $T \sim \mathbb{N}(0,1)$ and $V \in \mathbb{R}^{d+1}$, $V \sim \mathbb{N}(0,I)$. \end{proof} \section{Results for an $\ell_{\infty}$ adversary} \label{appsec: linf_adv} \begin{figure}[t] \centering \subfloat[MNIST]{\resizebox{0.3\textwidth}{!}{\input{plots/3_7_mnist_linf_500_.tex}}\label{subfig: mnist}} \hspace{0mm} \subfloat[Fashion MNIST]{\resizebox{0.3\textwidth}{!}{\input{plots/3_7_fmnist_linf_500_.tex}}\label{subfig: fmnist}} \hspace{0mm} \subfloat[CIFAR-10]{\resizebox{0.3\textwidth}{!}{\input{plots/3_7_2000_cifar_linf_.tex}}\label{subfig: cifar-10}} \caption{Variation in minimum $0-1$ loss (adversarial robustness) as $\beta$ is varied for `3 vs. 7'. 
For MNIST and Fashion-MNIST, the loss of a robustly classifier (trained with iterative adversarial training) is also shown for a PGD adversary with an $\ell_{\infty}$ constraint.} \label{fig: linf} \end{figure} In Figures \ref{subfig: mnist} and \ref{subfig: fmnist}, we see that the lower bound in the case of $\ell_{\infty}$ adversaries is not very informative for checking if a robust classifier has good adversarial robustness since the bound is almost always 0, except at $\beta=0.5$, in which any two samples can be reached from one another with zero adversarial cost, reducing the maximum possible classification accuracy to 0.5. This implies that in the $\ell_{\infty}$ distance, these image datasets are very well separated even with an adversary and there exist good hypotheses $h$. For MNIST (till $\beta=0.4$) and Fashion MNIST ($\beta=0.3$), we find that iterative adversarial training is effective. For the CIFAR-10 dataset \ref{subfig: cifar-10}, non-zero adversarial robustness occurs after $\beta=0.2$. However, current defense methods have only shown robust classification with $\beta$ up to 0.1, where the lower bound is 0. In future work, we will explore the limits of $\beta$ till which robust classification is possible with neural networks. \section{Introduction} \label{sec: intro} \input{sections/intro} \section{Preliminaries and Notation}\label{sec: prelim} \input{sections/adversarial} \section{Adversarial Robustness from Optimal transport}\label{sec: optimal_transport} \input{sections/transportation} \section{Gaussian data: Optimal loss}\label{sec: gauss_data} \input{sections/gauss_data} \section{Gaussian data: Sample complexity lower bound}\label{sec: sample_complexity} \input{sections/bayes} \section{Experimental Results}\label{sec: experiments} \input{sections/results} \section{Related work and Concluding Remarks}\label{sec: rel_work} \input{sections/related_work} \subsection*{Acknowledgements} We would like to thank Chawin Sitawarin for providing part of the code used in our experiments. This research was sponsored by the National Science Foundation under grants CNS-1553437, CNS1704105, CIF-1617286 and EARS-1642962, by Intel through the Intel Faculty Research Award, by the Office of Naval Research through the Young Investigator Program (YIP) Award, by the Army Research Office through the Young Investigator Program (YIP) Award and a Schmidt DataX Award. ANB would like to thank Siemens for supporting him through the FutureMakers Fellowship. {\small \bibliographystyle{plain} \subsection{Basic definitions from optimal transport}\label{subsec: basic_ot} In this section, we use capital letters for random variables and lowercase letters for points in spaces. \paragraph{Couplings} A coupling between probability distributions $P_X$ on $\mathcal{X}$ and $P_Y$ on $\mathcal{Y}$ is a joint distribution on $\mathcal{X} \times \mathcal{Y}$ with marginals $P_X$ and $P_Y$. Let $\Pi(P_{X},P_{Y})$ be the set of such couplings. \begin{definition}[Optimal transport cost] For a cost function $c: \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R} \cup \{+\infty\}$ and marginal distributions $P_X$ and $P_Y$, the optimal transport cost is \begin{equation} C(P_X,P_Y) = \inf_{P_{XY} \in \Pi(P_{X},P_{Y})} \mathbb{E}_{(X,Y) \sim P_{XY}}[c(X,Y)]. \label{primal-Kantorovich} \end{equation} \bcomment{ where $X$ has distribution $P_X$ and $Y$ has distribution $P_Y$, i.e. the distribution of $(X,Y)$ is a coupling of $(P,Q)$. 
Alternative notations: \begin{align} C(P_X,P_Y) = \inf_{X,Y} \mathbb{E}[c(X,Y)]\\ C(P_X,P_Y) &= \inf_{P_{XY} \in \Pi(P_{X},P_{Y})} \mathbb{E}_{(X,Y) \sim P_{XY}}[c(X,Y)]\\ C(P_X,P_Y) &= \inf_{P_{XY} \in \Pi(P_{X},P_{Y})} \int_{\mathcal{X} \times \mathcal{Y}} c(x,y)\, dP_{XY}(x,y). \end{align} } \end{definition} \paragraph{Potential functions and Kantorovich duality}There is a dual characterization of optimal transport cost in terms of potential functions which we use to make the connection between the transport and classification problems. \begin{definition}[Potential functions] Functions $f:\mathcal{X} \to \mathbb{R}$ and $g:\mathcal{Y} \to \mathbb{R}$ are potential functions for the cost $c$ if $g(y) - f(x) \leq c(x,y)$ for all $(x,y) \in \mathcal{X} \times \mathcal{Y}$. \end{definition} A pair of potential functions provide a one-dimensional representation of the spaces $\mathcal{X}$ and $\mathcal{Y}$. This representation must be be faithful to the cost structure on the original spaces: if a pair of points $(x,y)$ are close in transportation cost, then $f(x)$ must be close to $g(y)$. In the dual optimization problem for optimal transport cost, we search for a representation that separates $P_X$ from $P_Y$ as much as possible: \begin{equation} C(P_X,P_Y) = \sup_{f,g} \mathbb{E}_{Y \sim P_Y}[g(Y)] - \mathbb{E}_{X \sim P_X}[f(X)]. \label{dual-Kantorovich} \end{equation} For any choices of $f$, $g$, and $P_{XY}$, it is clear that $\mathbb{E}[g(Y)] - \mathbb{E}[f(X)] \leq \mathbb{E}[c(X,Y)]$. Kantorovich duality states that there are in fact choices for $f$ and $g$ that attain equality. Define the dual of $f$ relative to $c$ to be $f^c(y) = \inf_x c(x,y) + f(x)$. This is the largest function that forms a potential for $c$ when paired with with $f$. In \eqref{dual-Kantorovich}, it is sufficient to optimize over pairs $(f,f^c)$. \paragraph{Compositions} The composition of cost functions $c:\mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ and $c': \mathcal{Y} \times \mathcal{Z} \to \mathbb{R}$ is \begin{equation*} (c \circ c'):\mathcal{X} \times \mathcal{Z} \to \mathbb{R}\quad\quad\quad (c \circ c')(x,z) = \inf_{y \in \mathcal{Y}} c(x,y) + c'(y,z). \end{equation*} The composition of optimal transport costs can be defined in two equivalent ways: \[ (C \circ C')(P_X,P_Z) = \inf_{P_Y} C(P_X,P_Y) + C'(P_Y,P_Z) = \inf_{P_{XZ}} \mathbb{E}[(c \circ c')(X,Z)] \] \bcomment{ The composition of couplings $P_{XY}$ on $\mathcal{X} \times \mathcal{Y}$ and $P_{YZ}$ on $\mathcal{Y} \times \mathcal{Z}$ is defined as follows. There is a unique distribution $P_{XYZ}$ on $\mathcal{X} \times \mathcal{Y} \times \mathcal{Z}$ with $P_{XY}$ and $P_{YZ}$ as marginals. The distribution $P_{XZ}$ obtained by marginalizing over $\mathcal{Y}$ is the composition of $P_{XY}$ and $P_{YZ}$. The composition of potentials $(f,g) : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ with potentials $(f',g') : \mathcal{Y} \times \mathcal{Z} \to \mathbb{R}$ is $(f + \inf_y (f'(y)-g(y)),g')$. \TODO{Verify if this should be inf or sup} } \paragraph{Total variation distance} The total variation distance between distributions $P$ and $Q$ is \begin{equation} C_{\text{TV}}(P,Q) = \sup_A P(A) - Q(A). \label{TV} \end{equation} We use this notation because it is the optimal transport cost for the cost function $c_{\text{TV}}: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, $ c_{\text{TV}}(x,x') = \mathbf{1}[x \neq x']$. \bcomment{ \[ c_{\text{TV}}(x,x') = \begin{cases} 0 & x = x'\\ 1 & x \neq x' \end{cases}. 
\]} \TODO{Comment about c dual for this cost} Observe that \eqref{TV} is equivalent to \eqref{dual-Kantorovich} with the additional restrictions that $f(x) \in \{0,1\}$ for all $x$, i.e. $f$ is an indicator function for some set $A$ and $g = f^{c_{\text{TV}}}$. For binary classification with a symmetric prior on the classes, a set $A$ that achieves the optimum in Eq. \eqref{TV} corresponds to an optimal test for distinguishing $P$ from $Q$. \bcomment{ The classification accuracy of the best test distinguishing $P$ from $Q$ is \[ \sup_A \frac{1}{2}P(A) + \frac{1}{2}Q(\overline{A}) = \sup_A \frac{1}{2}(1 + P(A) - Q(A)) = \frac{1}{2}(1 + C_{\text{TV}}(P,Q)). \] } \subsection{Adversarial cost functions and couplings}\label{subsec: adv_cost} We now construct specialized version of costs and couplings that translate between robust classification and optimal transport. \paragraph{Cost functions for adversarial classification} The adversarial constraint information $N$ can be encoded into the following cost function $c_N: \mathcal{X} \times \tilde{\mathcal{X}} \to \mathbb{R}$: $c_N(x,\tilde{x}) = \mathbf{1}[\tilde{x} \not\in N(x)]$. \bcomment{ \[ c_N(x,\tilde{x}) = \begin{cases} 0 &: \tilde{x} \in N(x)\\ 1 &: \tilde{x} \not\in N(x). \end{cases} \] } The composition of $c_N$ and $c_N^{\top}$ (i.e. $c_N$ with the arguments flipped) has simple combinatorial interpretation: $(c_N \circ c_N^{\top}) (x,x') = \mathbf{1}[N(x) \cap N(x') = \varnothing]$. \bcomment{ \begin{equation} (c_N \circ c_N^{\top}) (x,x') = \begin{cases} 0 &: N(x) \cap N(x') \neq \varnothing\\ 1 &: N(x) \cap N(x') = \varnothing. \end{cases} \label{interpretation} \end{equation} } Perhaps the most well-known example of optimal transport is the earth-mover's or $1$-Wasserstein distance, where the cost function is a metric on the underlying space. In general, the transportation cost $c_N \circ c_N^{\top}$ is not a metric on $\mathcal{X}$ because $(c_N \circ c_N^{\top})(x,x') = 0$ does not necessarily imply $x = x'$. However, when $(c_N \circ c_N^{\top})(x,x') = 0$, we say that the points are \emph{adversarially indistinguishible}. \paragraph{Couplings from adversarial strategies} Let $a: \mathcal{X} \to \tilde{\mathcal{X}}$ be a function such that $a(x) \in N(x)$ for all $x \in \mathcal{X}$. Then $a$ is an admissible adversarial perturbation strategy. The adversarial expected risk can be expressed as a maximization over adversarial strategies: \( L(N,P,h) = \sup_{a_1,a_{-1}} \mathbb{E}_{(x,c) \sim P} [\ell(h(a_c(x)),c)] \). Let $\tilde{X}_1 = a_1(X_1)$, so $a_1$ gives a coupling $P_{X_1 \tilde{X}_1}$ between $P_{X_1}$ and $P_{\tilde{X}_1}$. By construction, $C_N(P_{X_1},P_{\tilde{X}_1}) = 0$. A general coupling between $P_{X_1}$ and $P_{\tilde{X}_1}$ with $C_N(P_{X_1},P_{\tilde{X}_1}) = 0$ corresponds to a randomized adversarial strategy. We define $P_{\tilde{X}_{-1}}$ and $P_{X_{-1}\tilde{X}_{-1}}$ analogously. By composing the adversarial strategy coupling $P_{X_1\tilde{X}_1}$, the total variation coupling of $P_{\tilde{X}_1}$ and $P_{\tilde{X}_{-1}}$, and $P_{\tilde{X}_{-1}X_{-1}}$, we obtain a coupling $P_{X_1 X_{-1}}$. 
\paragraph{Potential functions from classifiers} \begin{wrapfigure}{r}{0.5\textwidth} \begin{tikzpicture}[] \definecolor{darkgreen}{rgb}{0, 0.5, 0}; \begin{scope}[shift = {(0,2)}] \draw[domain=-3:3,smooth,variable=\x] plot (\x,{exp(-(\x+0.6)*(\x+0.6))}); \draw[domain=-3:3,smooth,variable=\x] plot (\x,{exp(-(\x-0.6)*(\x-0.6))}); \draw[thick,<->] (-3,0) -- (3,0); \draw (-3.5,0) node{$\tilde{\mathcal{X}}$}; \draw (-2.2,0.5) node{$P_{\tilde{X}_{-1}}$}; \draw (2.2,0.5) node{$P_{\tilde{X}_1}$}; \draw[thick,darkgreen,fill] (-3,-0.2) -- (-0.05,-0.2) -- (-0.05,-0.3) -- (-3,-0.3) --cycle; \draw[thick,blue,fill] (0.05,-0.2) -- (3,-0.2) -- (3,-0.3) -- (0.05,-0.3) --cycle; \draw (-1.5,-0.6) node{$h(x) = -1$}; \draw ( 1.5,-0.6) node{$h(x) = 1$}; \end{scope} \begin{scope}[shift = {(0,0)}] \draw[domain=-3:3,smooth,variable=\x] plot (\x,{exp(-(\x+1.6)*(\x+1.6))}); \draw[domain=-3:3,smooth,variable=\x] plot (\x,{exp(-(\x-1.6)*(\x-1.6))}); \draw[thick,<->] (-3,0) -- (3,0); \draw (-3.5,0) node{$\mathcal{X}$}; \draw (-1.6,0.5) node{$P_{X_{-1}}$}; \draw (1.6,0.5) node{$P_{X_1}$}; \draw[thick,darkgreen,fill] (-3,-0.2) -- (-1.05,-0.2) -- (-1.05,-0.3) -- (-3,-0.3) --cycle; \draw[thick,blue,fill] (1.05,-0.2) -- (3,-0.2) -- (3,-0.3) -- (1.05,-0.3) --cycle; \draw[thick,red,fill] (-0.95,-0.2) -- (0.95,-0.2) -- (0.95,-0.3) -- (-0.95,-0.3) --cycle; \draw (-2,-0.6) node{$\tilde{h}(x) = -1$}; \draw ( 0,-0.6) node{$\tilde{h}(x) = \bot$}; \draw ( 2,-0.6) node{$\tilde{h}(x) = 1$}; \end{scope} \begin{scope}[shift = {(0,-2)}] \draw (-3.5,0) node{$0$}; \draw (-3.5,1) node{$1$}; \draw ( 0,0.8) node{$f$}; \draw ( 0,0.2) node{$g$}; \draw[thick,blue] (-3,1.1) -- (1,1.1) -- (1,0.1) -- (3,0.1); \draw[thick,darkgreen] (-3,1) -- (-1,1) -- (-1,0) -- (3,0); \end{scope} \end{tikzpicture} \caption{The relationships between a classifier $h : \mathcal{X} \to \{1,-1\}$, a degraded classifier $\tilde{h} : \tilde{\mathcal{X}} \to \{1,-1,\bot\}$, and potential functions $f,g : \mathcal{X} \to \mathbb{R}$.} \label{fig:potentials} \vspace{-8mm} \end{wrapfigure} Now we can explore the relationship between transport and classification. Consider a given hypothesis $h : \tilde{\mathcal{X}} \to \{-1,1\}$. A labeled adversarial example $(\tilde{x},y)$ is classified correctly if $\tilde{x} \in h^{-1}(y)$. A labeled example $(x,y)$ is classified correctly if $N(x) \subseteq h^{-1}(y)$. Following Cullina et al. \cite{cullina2018pac}, we define degraded hypotheses $\tilde{h} : \mathcal{X} \to \{-1,1,\bot\}$, \[ \tilde{h}(x) = \begin{cases} y &: N(x) \subseteq h^{-1}(y)\\ \bot &: \text{otherwise}. \end{cases} \] This allows us to express the adversarial classification accuracy of $h$, $1 - L(N,h,P)$, as \[ \frac{1}{2} (\mathbb{E}[\mathbf{1}[\tilde{h}(X_1)=1]] + \mathbb{E}[\mathbf{1}[\tilde{h}(X_{-1})=-1]]). \] Observe that $\mathbf{1}[\tilde{h}(x) = 1] + \mathbf{1}[\tilde{h}(x') = -1] \leq (c_N \circ c_N^{\top})(x,x') + 1$. Thus the functions $f(x) = 1 - \mathbf{1}[\tilde{h}(x) = 1]$ and $g(x) = \mathbf{1}[\tilde{h}(x) = -1]$ are admissible potentials for $c_N \circ c_N^{\top}$. This is illustrated in Figure \ref{fig:potentials}. Our first theorem characterizes optimal adversarial robustness when $h$ is allowed to be any classifier. \begin{theorem} Let $\mathcal{X}$ and $\tilde{\mathcal{X}}$ be Polish spaces and let $N : \mathcal{X} \to 2^{\tilde{\mathcal{X}}}$ be an upper-hemicontinuous neighborhood function such that $N(x)$ is nonempty and closed for all $x$. 
For any pair of distributions $P_{X_1}$,$P_{X_{-1}}$ on $\mathcal{X}$, \label{thm: transport} \[ (C_N \circ C_N^{\top})(P_{X_1},P_{X_{-1}}) = 1 - 2 \inf_h L(N,h,P) \] where $h: \tilde{\mathcal{X}} \to \{1,-1\}$ can be any measurable function. Furthermore there is some $h$ that achieves the infimum. \end{theorem} In the case of finite spaces, this theorem is essentially equivalent to the K\"{o}nig-Egerv\'{a}ry theorem on size of a maximum matching in a bipartite graph. The full proof is in Section \ref{appsec: thm1_proof} of the Appendix. If instead of all measurable functions, we consider $h \in \mathcal{H}$, a smaller hypothesis class, Theorem \ref{thm: transport} provides a lower bound on $\inf_{h \in \mathcal{H}} L(N,h,P)$. \subsection{Experimental Setup} We consider the adversarial classification problem on three widely used image datasets, namely MNIST \cite{lecun1998mnist}, Fashion-MNIST \cite{xiao2017/online} and CIFAR-10 \cite{krizhevsky2009learning}, and obtain lower bounds on the adversarial robustness for any classifier for these datasets. For each dataset, we use data from classes 3 ($P_{X_1}$) and 7 ($P_{X_{-1}}$) to obtain a binary classification problem. This choice is arbitrary and similar results are obtained with other choices, which we omit for brevity. We use 2000 images from the \emph{training set} of each class to compute the lower bound on adversarial robustness when the adversary is constrained using the $\ell_2$ norm. For the $\ell_{\infty}$ norm, these pairs of classes are very well separated, making the lower bounds less interesting (results in Section \ref{appsec: linf_adv} of the Appendix). For the MNIST and Fashion MNIST dataset, we compare the lower bound with the performance of a 3-layer Convolutional Neural Network (CNN) that is robustly trained using iterative adversarial training \cite{madry_towards_2017} with the Adam optimizer \cite{kingma2014adam} for 12 epochs. This network achieves 99.9\% accuracy on the `3 vs. 7' binary classification task on both MNIST and Fashion-MNIST. For the CIFAR-10 dataset, we use a ResNet-18 \cite{he2016deep} trained for 200 epochs, which achieves 97\% accuracy on the binary classification task. To generate adversarial examples both during the training process and to test robustness, we use Projected Gradient Descent (PGD) with an $\ell_2$ constraint, random initialization and a minimum of 10 iterations. Since more powerful heuristic attacks may be possible against these robustly trained classifiers, the `robust classifier loss' reported here is a lower bound. \subsection{Lower bounds on adversarial robustness for empirical distributions} Now, we describe the steps we follow to obtain a lower bound on adversarial robustness for empirical distributions through a direct application of Theorem \ref{thm: transport}. We first create a $k \times k$ matrix $D$ whose entries are $\|x_i-x_j\|_p$, where $k$ is the number of samples from each class and $p$ defines the norm. Now, we threshold these entries to obtain $D_{\text{thresh}}$, the matrix of adversarial costs $(c_N \circ c_N^{\top})(x_i,x_j)$ (recall Section \ref{subsec: adv_cost}), whose $(i,j)^{\text{th}}$ entry is $1$ if $D_{ij}>2\beta$ and $0$ otherwise, where $\beta$ is the constraint on the adversary. Finally, optimal coupling cost $(C_N \circ C_N^{\top})(P_{X_1},P_{X_{-1}})$ is computed by performing minimum weight matching over the bipartite graph defined by the cost matrix $D_{\text{thresh}}$ using the Linear Sum Assignment module from Scipy \cite{scipyref}. 
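For concreteness, the computation just described can be condensed into a few lines. The sketch below is illustrative only and is not the exact script used for the experiments; it assumes NumPy and SciPy, with \texttt{scipy.spatial.distance.cdist} for the distance matrix and \texttt{scipy.optimize.linear\_sum\_assignment} (the Linear Sum Assignment module mentioned above) for the minimum-weight matching, and it converts the resulting transport cost into the lower bound on the adversarial $0-1$ loss via Theorem \ref{thm: transport}.
\begin{verbatim}
# Illustrative sketch (not the exact experiment script): empirical lower bound
# on the adversarial 0-1 loss from minimum-weight bipartite matching.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def adv_loss_lower_bound(X1, Xm1, beta, p=2):
    """X1, Xm1: (k, d) arrays of samples from the two classes; beta: l_p budget."""
    D = cdist(X1, Xm1, metric="minkowski", p=p)   # D_ij = ||x_i - x_j||_p
    D_thresh = (D > 2.0 * beta).astype(float)     # (c_N o c_N^T)(x_i, x_j)
    row, col = linear_sum_assignment(D_thresh)    # minimum-weight perfect matching
    C = D_thresh[row, col].mean()                 # (C_N o C_N^T) for the empirical measures
    return 0.5 * (1.0 - C)                        # Theorem 1: inf_h L = (1 - C)/2
\end{verbatim}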
\begin{figure}[t] \centering \subfloat[ MNIST]{\resizebox{0.31\textwidth}{!}{\input{plots/3_7_mnist_l2_2000_mA_large_.tex}}\label{subfig: mnist}} \hspace{0mm} \subfloat[ Fashion MNIST]{\resizebox{0.31\textwidth}{!}{\input{plots/3_7_fmnist_l2_2000_mA_large_.tex}}\label{subfig: fmnist}} \hspace{0mm} \subfloat[ CIFAR-10]{\resizebox{0.31\textwidth}{!}{\input{plots/3_7_cifar_l2_r18_.tex}}\label{subfig: cifar-10}} \caption{Variation in minimum $0-1$ loss (adversarial robustness) as $\beta$ is varied for `3 vs. 7'. For all datasets, the loss of a robustly trained classifier (trained with iterative adversarial training \cite{madry_towards_2017}) is also shown for a PGD adversary with an $\ell_2$ constraint.} \label{fig: compare_plot} \vspace{-10pt} \end{figure} In Figure \ref{fig: compare_plot}, we show the variation in the minimum possible $0-1$ loss (adversarial robustness) in the presence of an $\ell_2$ constrained adversary as the attack budget $\beta$ is increased. We compare this loss value to that of a robustly trained classifier \cite{madry_towards_2017} when the PGD attack is used (on the same data). Until a certain $\beta$ value, robust training converges and the model attains a non-trivial adversarial robustness value. Nevertheless, there is a gap between the empirically obtained and theoretically predicted minimum loss values. Further, after $\beta=3.8$ (MNIST), $\beta=4.8$ (Fashion MNIST) and $\beta=1.5$ (CIFAR-10), we observe that robust training is unable to converge. We believe this occurs because a large fraction of the data at that value of $\beta$ is close to the boundary when adversarially perturbed, making the classification problem very challenging. We note that in order to reduce the classification accuracy to random for CIFAR-10, a much larger $\ell_2$ budget is needed compared to either MNIST or Fashion-MNIST, implying that the classes are better separated. \subsection{Special cases} \label{subsec: gauss_special} \noindent \textbf{Matching norms for data and adversary:} When $\mathcal{B}$ is the unit ball derived from $\Sigma$, the optimization problem \eqref{convex-optimization} has a very simple solution: $\alpha^*(\beta,\mu) = \|\mu\|_{\Sigma} - \beta$, $y = \alpha \mu$, $z = \beta \mu$, and $w = \frac{1}{\|\mu\|_{\Sigma}} \Sigma^{-1}\mu$. Thus, the same classifier is optimal for all adversarial budgets. In general, $\alpha^*(0,\mu) = \|\mu\|_{\Sigma}$ and $\alpha^*(\|\mu\|_{\mathcal{B}},\mu) = 0$, but $\alpha^*(\beta,\mu)$ can be nontrivially convex for $0 \leq \beta \leq \|\mu\|_{\mathcal{B}}$. When there is a difference between the two seminorms, the optimal modification is not proportional to $\mu$, which can be exploited by the adversary. The optimal classifier varies with the adversarial budget, so there is a trade-off between accuracy and robust accuracy. \noindent \textbf{$\ell_{\infty}$ adversaries:} In Figure \ref{fig: alpha_beta}, we illustrate this phenomenon for an $\ell_{\infty}$ adversary. We plot $\alpha(\beta,\mu)$ for $\Sigma = I$ (so $\|\cdot\|_{\Sigma} = \|\cdot\|_2$) and taking $\mathcal{B}$ to be the $\ell_{\infty}$ unit ball (so $\|\cdot\|_{\mathcal{B}} = \|\cdot\|_{\infty}$). In this case \eqref{convex-optimization} has an explicit solution. For each coordinate $z_i$, set $z_i = \min(\mu_i, \beta)$, which gives $y_i= \mu_i - \min(\mu_i, \beta)$ and makes the constraints tight. Thus, as $\beta$ increases, more components of $z$ equal those of $\mu$, reducing the marginal effect of an additional increase in $\beta$.
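To make the $\ell_{\infty}$ case concrete, the explicit solution can be evaluated directly. The snippet below is a small illustrative sketch (not the code used to produce Figure \ref{fig: alpha_beta}); it assumes non-negative $\mu$ as in the discussion above (the \texttt{clip} call then reduces to $z_i=\min(\mu_i,\beta)$), takes $\Sigma=I$, and uses \texttt{scipy.stats.norm.sf} for the $Q$-function, so that $Q(\alpha^*)$ is the corresponding optimal $0-1$ loss.
\begin{verbatim}
# Illustrative sketch of the explicit l_inf solution (Sigma = I).
import numpy as np
from scipy.stats import norm

def alpha_star_linf(mu, beta):
    """alpha*(beta, mu): z_i = min(mu_i, beta) per coordinate, y = mu - z, alpha* = ||y||_2."""
    z = np.clip(mu, -beta, beta)   # equals min(mu_i, beta) for non-negative mu
    y = mu - z
    return np.linalg.norm(y)

mu = np.array([2.0, 1.0, 0.3, 0.1])
for beta in (0.0, 0.2, 0.5, 1.0):
    a_star = alpha_star_linf(mu, beta)
    print(f"beta={beta:.1f}  alpha*={a_star:.3f}  Q(alpha*)={norm.sf(a_star):.3f}")
\end{verbatim}
Coordinates of $\mu$ that are smaller than $\beta$ contribute nothing to $\alpha^*$, i.e. they are completely erased by the adversary.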
\arjun{Due to the mismatch between the seminorms governing the data and adversary, the value of $\beta$ determines which features are useful for classification, since features less than $\beta$ can be completely erased. Without an adversary, all of these features would be potentially useful for classification, implying that human-imposed adversarial constraints, with their mismatch from the underlying geometry of the data distribution, lead to the presence of non-robust features that are nevertheless useful for classification. A similar observation was made in concurrent work by Ilyas et al. \cite{ilyas2019adversarial}.} \subsection{Concluding remarks} Our framework provides lower bounds on adversarial robustness through the use of optimal transport for binary classification problems, which we apply to empirical datasets of interest to analyze the performance of current defenses. In future work, we will extend our framework to the multi-class classification setting. As a special case, we also characterize the learning problem exactly in the case of Gaussian data and study the relationship between noise in the learning problem and adversarial perturbations. Recent work \cite{fawzi2016robustness,ford2019adversarial} has established an empirical connection between these two noise regimes and an interesting direction would be to precisely characterize which type of noise dominates the learning process for a given adversarial budget. Another natural next step would be to consider distributions beyond the Gaussian to derive expressions for optimal adversarial robustness as well as the sample complexity of attaining it.
\section{Introduction} There are several puzzles in neutrino physics: \begin{itemize} \item the appearance of $\bar{\nu_{e}}$'s in the LSND experiment, not confirmed by the very similar experiment Karmen \cite{LSND_Karmen}. \item the disappearance of atmospheric $\nu_{\mu}$'s at SuperKamiokande over distances of the order of the earth's diameter \cite{SuperK}. \end{itemize} These two findings have been interpreted as evidence of neutrino oscillations. Together with the solar deficit also interpreted as a sign of oscillations, it is difficult to build a coherent scenario with the only three neutrinos which are known to exist. Another possibility is considered here, namely the radiative decays of $\nu_{\mu}$'s in the context of mass-degenerate neutrinos, for example neutrinos having masses of a few eV for cosmological purposes and related by a $\delta m^2$ fixed by the solar deficit. This could explain both the LSND and SuperKamiokande signals. Decays of neutrinos have been advocated \cite{Nudecay} and rejected \cite{Nudec_rej} as a solution for the atmospheric deficit. We consider here the radiative mode which is hugely amplified by matter effects \cite{matter_effect1,matter_effect2,matter_effect3}. This process differs from the simple case of decays in vacuum considered up to now in two aspects: antineutrinos may not be affected (the refraction index is different for neutrinos and antineutrinos), and the decay probability varies rapidly with the density of the traversed medium. \section{Interpretation of the LSND signal} The radiative decay of neutrinos consists of the process:\\ $$\nu_2 \rightarrow \nu_1 + \gamma$$ where $\nu_2$ and $\nu_1$ are mass eigenstates, $\nu_2$ being the heaviest one. In a simple scheme, $\nu_2$ is predominantly $\nu_{\mu}$ and $\nu_1$ predominantly $\nu_e$. As a consequence of the helicity flip in the transition, the final neutrino is right-handed. If neutrinos are Dirac particles, the emerging neutrino is sterile. If, on the other hand, neutrinos are Majorana particles, the right-handed final neutrino is active and the process can be written: $$\nu_{\mu} \rightarrow \bar{\nu_{e}} + \gamma$$ This is the decay mode which will be assumed for the present argument. Similar considerations of stimulated conversion between mass-degenerate neutrinos have been discussed \cite{stim_conv}. Radiative decays of $\nu_{\mu}$'s have been searched for experimentally \cite{rad_dec_exp}. The result is $\tau$/m $\geq$ 15.4 s/eV, where m is the mass of the decaying neutrino. This result seems to exclude the considerations which are developed below. However this limit only applies to neutrinos with very different masses, when the emitted photon takes half of the incident neutrino energy. With mass-degenerate neutrinos, the limit does not apply, and the $\bar{\nu_{e}}$ takes up most of the incident energy. This process could therefore be at the origin of the LSND signal. The LSND beam is composed of $\nu_{\mu}$, $\bar\nu_{\mu}$ and $\nu_{e}$ at equal level, but contains almost no $\bar\nu_{e}$. A signal of $\bar\nu_{e}$ is claimed and the favoured interpretation is the oscillation of $\bar\nu_{\mu}$ into $\bar\nu_{e}$ . The decay discussed above would be equally satisfactory. In fact, it would explain why the Karmen experiment does not see a signal. With Karmen, the beam is better time-defined, the $\nu_{\mu}$'s and $\bar\nu_{\mu}$'s are well separated, and the oscillation is specifically searched from the $\bar\nu_{\mu}$ component. 
If this is the correct interpretation of the LSND signal, it gives a decay probability of $3 \ 10^{-3}$ for 30 MeV neutrinos, over a decay path of about 30 m (distance between the beam stop and the centre of the detector). With these parameters the lifetime is: $\tau$/m $\simeq 10^{-12}$ s/eV. Such a short lifetime is not a priori excluded by laboratory limits, which only apply to non-degenerated neutrino masses. \section{Consequences for atmospheric neutrinos} Let us now consider a 1 GeV $\nu_{\mu}$ travelling along a flight path of 13000 km (diameter of the earth). This is the typical situation encountered with atmospheric neutrinos. The lifetime inferred from LSND gives a $\gamma$c$\tau$ of $3 \ 10^5$ m. This is more than an order of magnitude too small to give a decay probability corresponding to the level of disappearance seen by the SuperKamiokande experiment for upward going neutrinos. However, the case to be considered is more complex, as the neutrinos are travelling through matter. It has been shown that radiative decays of neutrinos are hugely amplified in dense media. The lifetime $\tau_m$ in matter is related to the lifetime in vacuum $\tau_0$ by the expression: $$\Large \frac{\tau_0}{\tau_m} \normalsize = 8.6 \ 10^{23}F(v) \Large (\frac{N_e}{10^{24}cm^{-3}})^2 (\frac{1eV}{m})^4$$ where $N_e$ is the electron density of the medium. This formula applies for neutrinos with a mass hierarchy. For mass-degenerate neutrinos, it becomes: $$\Large \frac{\tau_0}{\tau_m} \normalsize = 8.6 \ 10^{23}F(v) \Large (\frac{N_e}{10^{24}cm^{-3}})^2 (\frac{1eV}{m})^4(\frac{m^2}{\delta m^2})^2$$ The value of F(v) has not been completely elucidated. For relativistic neutrinos the term F(v) tends to 4 m/E according to some authors \cite{matter_effect2} whilst it is about 1 according to others \cite{matter_effect3}. The issue needs further calculations, and we have adopted the naive approach, with an amplification proportional to the square of $N_e$, and inversely proportional to the neutrino energy. Taking into account these factors, let us reconsider the cases of LSND and SuperKamiokande. In the LSND beam, the neutrinos cross about 10 m of copper and steel. This corresponds to a path weighed by the square electron density of the traversed matter of approximately 130 m ($gcm^{-3})^2$. In a simplified description, the earth is composed of a central core of radius 3500 km and density 11.5 $gcm^{-3}$, surrounded by a mantel of 3000 km thickness and density 4.5 $gcm^{-3}$. This gives, for a neutrino crossing the whole diameter of the earth, a weighed path of about 160000 $km(gcm^{-3})^2$. We keep the simple formula for the decay probability: $$P = exp(-lm/Ec\tau_m)$$ where $l$ is the actual length, and $\tau_m$ includes the matter effect. Note that, in principle, the mass of a neutrino is affected by matter effects and thus can vary depending on the medium. We take here a well defined mass m which may or may not be the vacuum value. Scaling from the LSND result, the probability for a 1 GeV $\nu_{\mu}$ to decay through the earth is 0.80. The disappearance seen by SuperKamiokande is about 0.50. The model seems to give an excessive deficit, but the enhancement in matter comes from a coherent interaction on atomic electrons, and is different for neutrinos and antineutrinos. Atmospheric neutrinos at low energy have equal populations of $\nu_{\mu}$ and $\bar{\nu_{\mu}}$. Because of the reduced cross-section of $\bar{\nu_{\mu}}$ , 1/4 of the events coming from this source are unaffected. 
Furthermore, the weighed path decreases very rapidly with the zenith angle, as the dense matter is concentrated in the core. For a cos$\theta$ = -0.8 (the last bin in the SuperKamiokande notation) the probability of decay goes down to 0.30. With the angular resolution of SuperKamiokande, and considering the unaffected contribution of antineutrinos, the deficit obtained for contained events (sub-GeV as well as multi-GeV) is satisfactory. The decay results in $\bar\nu_{e}$ and gives an excess of e-like events. However because of the reduced cross-section of antineutrinos, this excess is small, and can be seen in the data. The difficulty may arise with up-going muons. Here the direction is well reconstructed and the model predicts a deficit of 0.07 for 5 GeV $\nu_{\mu}$ and 0.02 for 10 GeV $\nu_{\mu}$ between horizontal and vertical directions. This seems low compared with the observations. \section{Conclusion} The conjecture of a common origin for the LSND and SuperKamiokande findings is suggested. It is surprising that both experiments can be interpreted by the radiative decay: $$\nu_{\mu} \rightarrow \bar{\nu_{e}} + \gamma$$ with degenerated neutrino masses. Within this hypothesis, LSND sees the appearance of $\bar\nu_{e}$, while SuperKamiokande sees the disappearance of $\nu_{\mu}$ . Taking into account the amplification in matter, one finds that the lifetime inferred from LSND reproduces adequately the size of the effect seen in atmospheric neutrinos, at least for the contained event sample. This lifetime is not in contradiction with other experimental results. A careful $\chi^2$ analysis would probably prefer the oscillation interpretation, but the present observation has the advantage of explaining both LSND and SuperKamiokande with the same phenomenon. If the energy term in the amplification factor is the one found in Ref.[6], the effect would be very small in the MiniBoone and I216 experiments proposed to check the LSND signal, and also in the high energy long base-line projects. On the other hand, other experimental tests are possible and are being studied. \section*{References}
\section{Introduction} \label{sec:intro} For many problems in science and in statistics, the large deviation properties play an important role \cite{denHollander2000,dembo2010}. Only in a few cases can analytical results be obtained. Thus, most problems have to be studied by numerical simulations \cite{practical_guide2009}, in particular by Monte Carlo (MC) techniques \cite{newman1999,landau2000}. Classically, MC simulations have been applied to random systems in the following way: for a finite set of independently drawn quenched random instances, regular or large-deviation properties of these instances are calculated using importance-sampling MC simulations. Only recently has it been noticed that, by introducing an artificial sampling temperature, also the large-deviation properties with respect to the quenched random ensemble can be obtained \cite{align2002}. This corresponds in some sense to an annealed average, but the results are re-weighted in such a way that the results for the original quenched ensemble are obtained. In this way, the large-deviation properties of the distribution of alignment scores for protein comparison were studied \cite{align2002,align_long2007,newberg2008}, which is of importance for calculating the significance of results of protein-database queries \cite{durbin2006}. Motivated by these results, similar approaches have been applied to other problems like the distribution of the number of components of Erd\H{o}s-R\'enyi (ER) random graphs \cite{rare-graphs2004}, the partition function of Potts models \cite{partition2005}, the distribution of ground-state energies of spin glasses \cite{pe_sk2006} and of directed polymers in random media \cite{monthus2006}, the distribution of Lee--Yang zeros for spin glasses \cite{matsuda2008}, the distribution of success probabilities of error-correcting codes \cite{iba2008}, the distribution of free energies of RNA secondary structures \cite{rnaFreeDistr2010}, and some large-deviation properties of random matrices \cite{driscoll2007,saito2010}. Interestingly, so far no comparison between numerical and mathematically exact results for the full support of a distribution involving a large-deviation tail has been performed, to the knowledge of the author. In a few cases, numerical results have been compared to rigorous results \cite{monthus2006}, but only near the peak of the distribution, where finite-size effects are often very small. In another case \cite{rare-graphs2004}, the numerical and analytical distributions were compared on the full support, but the result was obtained using a non-rigorous statistical mechanics approach. In this work, the numerical large-deviation approach is applied to obtain the complete distribution of the size $S$ of the largest component of ER random graphs. In this case, mathematically exact results for the leading-order large-deviation rate function are also available for arbitrary values of the finite connectivity $c$. This allows for a comprehensive comparison and an estimation of the strength of finite-size effects. Furthermore, in this paper results are obtained for the two-dimensional (2d) percolation problem, where no analytic results are available for the distribution of the largest-component size $S$. For both models, the dependence of $S$ on the artificial sampling temperature $T$ is also studied. First-order phase transitions are found for both graph ensembles in the percolating regime. The two graph ensembles are defined as follows.
In both cases, each graph $G=(V,E)$ consists of $N$ nodes $i \in V$ and undirected edges $\{i,j\}\in E \subset V^{(2)}$. For ER random graphs \cite{erdoes1960}, each possible edge $\{i,j\}$ is present with probability $c/N$. Hence, the average degree (connectivity) is $c$. For 2d percolation, the graph is embedded in a two-dimensional square lattice of size $N=L\times L$ with periodic boundary conditions in both directions, i.e., in a torus. Each node can be connected by edges to its four nearest neighbours, and each edge is present with probability $p$. Hence, the average degree is $4p$. Two nodes $i,j$ are called \emph{connected} if there exists a \emph{path} of disjoint edges $\{i_0,i_1\},\{i_1,i_2\},\ldots, \{i_{l-1},i_l\}$ such that $i=i_0$ and $j=i_l$. The maximum-size subsets $C\subset V$ of nodes such that all pairs $i,j\in C$ are connected are called the (connected) \emph{components} of a graph. The size of the largest component of a graph is denoted here by $S$. Via the random-graph ensembles, a probability distribution $P(S)$ for the size of the largest component and the corresponding probability $P(s)$ for relative sizes $s=S/N$ are defined. The probabilities $P(s)$ for values of $s$ different from the typical size are exponentially small in $N$. Hence, one uses the concept of the large-deviation \emph{rate function} \cite{denHollander2000} by writing \begin{equation} P(s) = e^{-N\Phi(s)+o(N)}\quad (N\to\infty)\,. \end{equation} The leading-order behavior of the large-deviation rate function $\Phi_{\rm ER}(s,c)$ for ER random graphs with connectivity $c$ is known exactly \cite{biskup2007} and given by the following set of equations \begin{eqnarray} \tilde S(s) & = & s \log s + (1-s)\log(1-s)\nonumber\\ \pi_1(\alpha) & = & 1- e^{-\alpha} \nonumber \\ \Psi(\alpha) & = & \left( \log \alpha - 0.5[\alpha-1/\alpha]\right) \wedge 0\nonumber \\ \Phi_{\rm ER}(s,c) & = & \tilde S(s)-s\log \pi_1(cs) - (1-s)\log \left(1-\pi_1(cs)\right) \nonumber \\ & & -(1-s)\Psi\left(c(1-s)\right)\;, \label{eq:rate_fct_analytic} \end{eqnarray} where the expression $g(\alpha)\wedge 0$ results in 0 if $g(\alpha)>0$ and in $g(\alpha)$ otherwise. For percolation in finite dimensions similar results are not available, to the knowledge of the author. There are only analytical results for the distribution of finite (non-percolating) components \cite{alexander1990}. The paper is organised as follows. In the second section, the numerical simulation technique and the corresponding re-weighting approach are explained. In the third section, the results are displayed, first for the ER random graph ensemble, next for the 2d percolation problem. Finally, a summary and an outlook are given. A concise summary of this paper is available at the \emph{papercore} web page \cite{papercore}. \section{Simulation and reweighting method} \label{sec:method} To determine the distribution $P(S)$ of any measurable quantity $S$, here the size of the largest component for an ensemble of graphs, \emph{simple sampling} is straightforward: one generates a certain number $K$ of graph samples and obtains $S(G)$ for each sample $G$. This means each graph $G$ will appear with its natural ensemble probability $Q(G)$. The probability to measure a value of $S$ is given by \begin{equation} P(S) = \sum_{G} Q(G)\delta_{S(G),S} \label{eq:PS} \end{equation} Therefore, by calculating a histogram of the values for $S$, a good estimate of $P(S)$ is obtained. Nevertheless, $P(S)$ can only be measured in a regime where $P(S)$ is relatively large, about $P(S)>1/K$.
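Both ingredients introduced so far, the exact rate function of Eq.\ (\ref{eq:rate_fct_analytic}) and the simple-sampling estimate of $P(S)$, are straightforward to set up numerically. The following sketch is for illustration only; it assumes NumPy and the \texttt{networkx} package, and the function names are purely illustrative rather than those of the implementation used for the results below.
\begin{verbatim}
# Illustration: exact leading-order rate function and simple sampling of P(S).
import numpy as np
import networkx as nx

def phi_er(s, c):
    """Phi_ER(s, c) of Eq. (rate_fct_analytic), for 0 < s < 1."""
    s = np.asarray(s, dtype=float)
    S_tilde = s * np.log(s) + (1.0 - s) * np.log(1.0 - s)
    pi1 = 1.0 - np.exp(-c * s)                                   # pi_1(c s)
    arg = c * (1.0 - s)
    psi = np.minimum(np.log(arg) - 0.5 * (arg - 1.0 / arg), 0.0)  # Psi(c(1-s))
    return S_tilde - s * np.log(pi1) - (1.0 - s) * np.log(1.0 - pi1) - (1.0 - s) * psi

def sample_S(N, c, K, seed=0):
    """Simple sampling: K independent ER graphs G(N, c/N), largest-component sizes."""
    rng = np.random.default_rng(seed)
    S = np.empty(K, dtype=int)
    for k in range(K):
        G = nx.gnp_random_graph(N, c / N, seed=int(rng.integers(2**31)))
        S[k] = max(len(cc) for cc in nx.connected_components(G))
    return S

N, c = 500, 0.5
S = sample_S(N, c, K=10**4)
P_hist = np.bincount(S, minlength=N + 1)[1:] / len(S)       # estimate of P(S), S = 1..N
s_vals = np.arange(1, N) / N
P_leading = np.exp(-N * phi_er(s_vals, c))                   # unnormalised, leading order only
\end{verbatim}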
Unfortunately, the distribution decreases exponentially fast in the system size $N$ when moving away from its typical (peak) value. This means that, even for moderate system sizes $N$, the distribution will be unknown on almost its complete support. To estimate $P(S)$ over a much larger range, possibly even on the full support of $P(S)$, where probabilities smaller than $10^{-100}$ may appear, a different approach is used \cite{align2002}. To keep the presentation self-contained, the method is outlined subsequently. The basic idea is to use an additional Boltzmann factor $\exp(-S(G)/T)$, $T$ being a ``temperature'' parameter, in the following manner: A standard Markov-chain MC simulation \cite{newman1999,landau2000} is performed, where in each step $t$ a candidate graph $G^*$ is created from the current graph $G(t)$: A node $i$ of the current graph is selected randomly, with uniform weight $1/N$, and all adjacent edges are deleted. Each feasible edge $\{i,j\}$ is then added with the probability corresponding to the natural weight $Q(G)$, i.e., with probability $c/N$ (ER random graphs) or with probability $p$ (percolation), respectively. For the candidate graph, the size $S(G^*)$ of the largest component is calculated. Finally, the candidate graph is \emph{accepted} ($G(t+1)=G^*$) with the Metropolis probability \begin{equation} p_{\rm Met} = \min\left\{1,e^{-[S(G^*)-S(G(t))]/T}\right\}\,. \end{equation} Otherwise the current graph is kept ($G(t+1)=G(t)$). By construction, the algorithm fulfills detailed balance. Clearly, the algorithm is also ergodic, since within $N$ steps each possible graph may be constructed. Thus, in the limit of infinitely long Markov chains, the distribution of graphs will follow the probability \begin{equation} q_T(G) = \frac{1}{Z(T)} Q(G)e^{-S(G)/T}\,, \label{eq:qT} \end{equation} where $Z(T)$ is the a priori unknown normalisation factor. The distribution of $S$ at temperature $T$ is given by \begin{eqnarray} P_T(S) & = &\sum_{G} q_T(G) \delta_{S(G),S} \nonumber\\ & \stackrel{(\ref{eq:qT})}{=} & \frac{1}{Z(T)}\sum_{G} Q(G)e^{-S(G)/T} \delta_{S(G),S} \nonumber \\ & = & \frac{e^{-S/T}}{Z(T)} \sum_{G} Q(G) \delta_{S(G),S} \nonumber \\ & \stackrel{(\ref{eq:PS})}{=} & \frac{e^{-S/T}}{Z(T)} P(S) \nonumber\\ \Rightarrow \quad P(S) & = & e^{S/T} Z(T) P_T(S) \label{eq:rescaling} \end{eqnarray} Hence, the target distribution $P(S)$ can be estimated, up to the normalisation constant $Z(T)$, from sampling at finite temperature $T$. For each temperature, a specific range of the distribution $P(S)$ will be sampled: using a positive temperature allows one to sample the region of the distribution left of its peak (values smaller than the typical value), while negative temperatures are used to access the right tail. Temperatures of large absolute value will cause a sampling of the distribution close to its typical value, while temperatures of small absolute value are used to access the tails of the distribution. Hence, by choosing a suitable set of temperatures, $P(S)$ can be measured over a large range, possibly on its full support. The normalisation constants $Z(T)$ can easily be obtained by including a histogram obtained from simple sampling, which corresponds to temperature $T=\pm\infty$ and hence to $Z\approx 1$ (within numerical accuracy). Using suitably chosen temperatures $T_{+1}$, $T_{-1}$, one measures histograms which overlap with the simple-sampling histogram on its left and right border, respectively.
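One step of the resulting Markov chain for the ER ensemble may be sketched as follows (illustrative Python only, reusing the helper \texttt{largest\_component} of the previous sketch; this is not claimed to be the implementation actually used for the data in this paper):
\begin{verbatim}
import math

def metropolis_step(adj, S_cur, N, c, T, rng):
    """One MC step: redraw all edges of a random node, Metropolis-accept in S."""
    i = rng.randrange(N)                    # node chosen with uniform weight 1/N
    cand = [list(nb) for nb in adj]         # candidate graph G*
    for j in cand[i]:                       # delete all edges adjacent to i ...
        cand[j].remove(i)
    cand[i] = []
    p = c / N                               # ... and redraw them with the
    for j in range(N):                      #     natural ensemble weight Q(G)
        if j != i and rng.random() < p:
            cand[i].append(j)
            cand[j].append(i)
    S_new = largest_component(cand)
    dE = (S_new - S_cur) / T                # Metropolis rule min{1, e^{-dE}}
    if dE <= 0 or rng.random() < math.exp(-dE):
        return cand, S_new                  # accept: G(t+1) = G*
    return adj, S_cur                       # reject: keep G(t)
\end{verbatim}
A histogram of the values of $S$ collected along such a chain yields $P_T(S)$, which is then rescaled according to (\ref{eq:rescaling}).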
The corresponding normalisation constants $Z(T_{\pm 1})$ can then be obtained from the requirement that, after rescaling the histograms according to (\ref{eq:rescaling}), they must agree in the overlapping regions with the simple-sampling histogram within error bars. In this way, the histograms are ``glued'' together. In the same manner, the range of covered $S$ values can be extended iteratively to the left and to the right by choosing additional suitable temperatures $T_{\pm 2}, T_{\pm 3}, \ldots$ and gluing the resulting histograms one to the other. A pedagogical explanation and examples of this procedure can be found in Ref.\ \cite{align_book}. In order to obtain correct results, the MC simulations must be equilibrated. For the case of the distribution of the size of the largest component, this is very easy to verify: the equilibration of the simulation can be monitored by starting with two different initial graphs: \begin{itemize} \item Either an unbiased random graph is taken, which means that the largest component is of typical size. In the inset of Fig.\ \ref{fig:equil_distr} the evolution of $S$ as a function of the number $t_{\rm MCS}=t/N$ of Monte Carlo sweeps is shown for Erd\H{o}s-R\'enyi random graphs with $N=500$ nodes and connectivity $c=0.5$ at temperature $T=-2$. As one can see, $S(t_{\rm MCS})$ moves quickly away from the typical size, which is around $S=30$, towards values around $S=200$. This shows that different parts of the distribution can easily be accessed. The result of a second run at the same negative temperature is shown in the same inset. In this case an initial graph was used which consists of a single line of nodes, i.e., in particular the graph is fully connected, leading to $S=N$. \item Alternatively, if the temperature is positive, one can start with an empty graph ($S=1$). In any case, for the two different initial conditions, the evolution of $S(t_{\rm MCS})$ will approach the equilibrium range from two different extremes, which allows for a simple equilibration test: equilibration is achieved if the measured values of $S$ agree within the range of fluctuations. Only data where equilibration was achieved within 200 Monte Carlo sweeps was used in this work. \end{itemize} The resulting distribution for ER random graphs ($c=0.5$, $N=500$) is shown in the main plot of Fig.\ \ref{fig:equil_distr}. As one can see, the distribution can be measured over its full support, such that probabilities as small as $10^{-180}$ are accessible. \begin{figure}[t!] \centering \includegraphics[clip,width=0.45\textwidth]{equil_distr_er0.5.eps} \caption{ \label{fig:equil_distr} Distribution of the size $S$ of the largest component for Erd\H{o}s-R\'enyi random graphs of size $N=500$ at connectivity $c=0.5$. In this and all other plots, error bars are of symbol size or smaller if not explicitly shown. The inset shows the size of the largest component as a function of the number $t_{\rm MCS}$ of Monte Carlo sweeps for the same type of graphs at temperature $T=-2$. Two different starting conditions are displayed: either a random graph (leading to a typical component size around $30$) or a graph consisting of a line $(S=500)$ was used. } \end{figure} Note that in principle one can also use a Wang-Landau approach \cite{wang2001} or similar approaches to obtain the distribution $P(S)$ without the need to perform independent simulations at different temperatures.
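The gluing step itself can be summarised in a few lines as well; the following illustrative sketch (Python; for brevity it assumes a non-empty overlap and ignores the propagation of statistical error bars) determines $\log Z(T)$ from the overlap region and extends an already normalised $\log P(S)$ according to (\ref{eq:rescaling}):
\begin{verbatim}
import math

def glue(logP_ref, hist_T, T):
    """Extend a normalised log P(S) by a histogram measured at temperature T.

    logP_ref : dict S -> log P(S), already normalised (e.g. simple sampling)
    hist_T   : dict S -> relative frequency P_T(S) measured at temperature T
    """
    overlap = [S for S in hist_T if S in logP_ref and hist_T[S] > 0]
    # log Z(T) from log P(S) = S/T + log Z(T) + log P_T(S), averaged over overlap
    logZ = sum(logP_ref[S] - S / T - math.log(hist_T[S])
               for S in overlap) / len(overlap)
    out = dict(logP_ref)
    for S, pT in hist_T.items():
        if pT > 0 and S not in out:
            out[S] = S / T + logZ + math.log(pT)
    return out
\end{verbatim}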
Nevertheless, the author has performed tests for ER random graphs and experienced problems when using the Wang-Landau approach, because the sampled distributions tend to stay within a limited fraction of the range of interest. Using the finite-temperature approach, it is much easier to guide the simulations to the regions of interest, e.g., to regions where data is still missing given the data obtained so far, and to monitor the equilibration process. Furthermore, the behaviour of $S$ as a function of $T$ appears to be of interest in its own right; see the next section. \section{Results} \label{sec:results} ER random graphs of size $N=500$ and 2d percolation problems with lateral size $L=32$ ($N=1024$ sites) were studied. In a few cases, additional system sizes were considered to estimate the strength of finite-size effects, see below. Each model was studied right at the percolation transition, at one point in the non-percolating regime, and at one point in the percolating regime. The temperature ranges used for the different cases are shown in Tab.\ \ref{tab:parameters}. Note that, depending on the position of the peak of the size distribution, sometimes positive, sometimes negative and sometimes both types of temperatures had to be used. For the systems listed in the table, equilibration was always achieved within the first 200 MCS. In general, studying significantly larger sizes or going deeper into the percolating regime makes the equilibration much more difficult. After equilibration, data was collected for 9800 MCS; in some cases, to improve statistics, for about $10^6$ MCS. In the two subsequent subsections, the results for ER random graphs and for 2d percolation are presented, respectively. \begin{table} \begin{center} \begin{tabular}{l|lll} system & $T_1$ & $T_2$ & $N_T$ \\\hline ER $c=0.5$ & -5 & -0.4 & 14 \\ ER $c=1.0$ & -7.0 & -0.6 & 9\\ ER $c=2.0$ & -2.0 & 10.0 & 6\\ perc $p=0.3$ & -10.0 & -0.6 & 13\\ perc $p=0.5$ & -5.00 & 50.0 & 13\\ perc $p=0.6$ & 0.60 & 30.0 & 14\\ \end{tabular} \caption{Parameters used to determine the distributions $P(S)$ for the different models. $T_1$ is the minimum and $T_2$ the maxi\-mum temperature used. $N_T$ denotes the number of different temperature values. Note that for determining the average value $\overline{S}(T)$, see Figs.\ \ref{fig:STER} and \ref{fig:STdistrPerc}, usually a higher number of temperatures was used. \label{tab:parameters}} \end{center} \end{table} \subsection{ER graphs} The case of ER random graphs is treated first, because the analytical result (\ref{eq:rate_fct_analytic}) can be used for comparison. This allows one to assess the quality of the method and to get an impression of the influence of the non-leading finite-size corrections. In Fig.\ \ref{fig:rateER0.5_1.0} the empirical rate function \begin{equation} \Phi(s)\equiv-\frac{1}{N} \log P(s) \end{equation} for $c=0.5$ is displayed, corresponding to the distribution shown in Fig.\ \ref{fig:equil_distr}. Note that the analytical asymptotic rate function $\Phi_{\rm ER}(s,c)$ alone does not fix the normalisation of the corresponding distribution $P(s)$. Hence, for comparison, $\Phi(s)$ is shifted for all values of the connectivity $c$ such that it is zero at its minimum value, like $\Phi_{\rm ER}(s, c)$. The numerical data agrees very well with the analytic result. Only in the region of intermediate cluster sizes is a small systematic deviation visible, which is likely to be a finite-size effect.
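The analytical curves used for comparison in the figures are obtained by a direct evaluation of Eq.~(\ref{eq:rate_fct_analytic}); an illustrative evaluation (Python, valid for $0<s<1$, not taken from the original analysis code) reads:
\begin{verbatim}
import math

def phi_er(s, c):
    """Leading-order rate function Phi_ER(s,c), valid for 0 < s < 1."""
    S_tilde = s * math.log(s) + (1 - s) * math.log(1 - s)
    pi1 = lambda a: 1.0 - math.exp(-a)
    # g(alpha) ^ 0 : equals 0 if g(alpha) > 0 and g(alpha) otherwise
    psi = lambda a: min(math.log(a) - 0.5 * (a - 1.0 / a), 0.0)
    return (S_tilde
            - s * math.log(pi1(c * s))
            - (1 - s) * math.log(1.0 - pi1(c * s))
            - (1 - s) * psi(c * (1 - s)))
\end{verbatim}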
Given that only graphs with $N=500$ nodes were treated in the numerical simulations, the agreement with the $N\to\infty$ leading-order analytical result is remarkable. The resulting rate function right at the percolation transition $c=c_{\rm c}=1$ is shown in the inset of Fig.\ \ref{fig:rateER0.5_1.0}. Qualitatively, the result is very similar to the non-percolating case $c=0.5$, except that the distribution is much broader, corresponding to smaller values of the rate function. Again, the agreement with the analytical result is very good, except for the data close to the origin $s=0$: the numerical results exhibit a minimum near $s=0.05$, while the analytical result naturally exhibits its minimum at $s=0$. This is clearly due to the finite size of the numerically treated graphs. \begin{figure}[ht] \centering \includegraphics[clip,width=0.45\textwidth]{er_0.5_1.0_rate.eps} \caption{ \label{fig:rateER0.5_1.0} Large-deviation rate function $\Phi(s)$ of ER random graphs with average connectivity $c=0.5<c_{\rm c}$, $N=500$ (symbols). The line displays the analytical result from Eq.\ (\ref{eq:rate_fct_analytic}). The inset shows the same for the case $c=1.0=c_{\rm c}$. } \end{figure} The case of the percolating regime (connectivity $c=2$) is displayed in Fig.\ \ref{fig:rateER2.0}. The rate function exhibits a minimum at a finite value of $s$, corresponding to the finite average fraction of nodes contained in the largest component. The behaviour of the rate function is more interesting than for $c\le c_{\rm c}$, because $\Phi_{\rm ER}(s)$ grows strongly near its minimum, but for $s\to 0$ it levels off horizontally. For most of the support of the distribution, the numerical data for $N=500$ again agrees very well with the analytic result. Nevertheless, for $s\to 0$, strong deviations become visible, because the numerical rate function $\Phi(s)$ grows strongly as $s\to 0$. By comparing with the result for a smaller system, $N=100$, where this deviation is even larger, it becomes clear that this is a finite-size effect, corresponding to the non-leading corrections, which will disappear for $N\to\infty$. \begin{figure}[ht] \centering \includegraphics[clip,width=0.45\textwidth]{er_2.0_rate.eps} \caption{ \label{fig:rateER2.0} Large-deviation rate function $\Phi(s)$ of ER random graphs with average connectivity $c=2>c_{\rm c}$, $N=100$ and $N=500$ (symbols). The line displays the analytical result from Eq.\ (\ref{eq:rate_fct_analytic}). } \end{figure} Although the parameter $T$ is mainly used as a way to address different parts of the distribution, the temperature dependence of $S$, which is shown in Fig.\ \ref{fig:STER}, exhibits an interesting behaviour of its own in the percolating regime. When studying the average relative size $\overline{s}$ as a function of temperature, a strong increase around temperature $T=9.5$ becomes visible, which may correspond to a kind of phase transition; see below. \begin{figure}[ht] \centering \includegraphics[clip,width=0.45\textwidth]{S_T_er.eps} \caption{ \label{fig:STER} Average relative size $\overline{s}$ of the largest component as a function of the artificial temperature $T$ for ER random graphs in the percolating regime ($c=2$) and (inset) in the non-percolating regime ($c=0.5$). } \end{figure} On the other hand, in the non-percolating regime $c<1$, the average size of the largest component varies rather smoothly, as shown in the inset of Fig.\ \ref{fig:STER}.
Note that here negative artificial temperatures are used, because the peak of the distribution $P(S)$ is close to $S=0$, in contrast to the percolating regime $c>1$, where the peak of $P(S)$ is located at a finite fraction of $N$. In Fig.\ \ref{fig:PtERT} the size $S$ of the largest component is shown as a function of the number of MC sweeps for a temperature $T=9.5$, which is located in the regime of the assumed transition. One can see that $S(t)$ fluctuates quickly between two sets of typical sizes, which also shows that the data is well equilibrated. In particular, values around $S=350$ and around $S=100$ are more frequent than intermediate values. \begin{figure}[ht] \centering \includegraphics[clip,width=0.45\textwidth]{St_er_T.eps} \caption{ \label{fig:PtERT} Time series of the size $S$ of the largest cluster as a function of the number $t_{\rm MCS}$ of MC sweeps for ER random graphs ($c=2$, $N=500$) at artificial temperature $T=9.5$. } \end{figure} This result is made more quantitative by studying the resulting distribution of the sizes of the largest component, see Fig.\ \ref{fig:distrERPsT}. A two-peak structure can be observed, which indicates that indeed a transition of first-order type between small and large components is present in the percolating regime $c>1$. Similar first-order transitions have been observed for biased simulations of the one-dimensional Ising model \cite{jack2010,jack2010b}. The first-order nature of this transition is one reason why obtaining $\Phi(s)$ becomes harder for large system sizes, because the time for tunnelling between the two sets of values grows quickly. Even worse, the number of observed configurations having a value of $S$ located between the peaks decreases strongly, such that already for $N=1000$ no data can be collected at all over large intervals. Hence, $P(S)$ cannot be sampled on its complete support for such system sizes, because the different parts of the distribution cannot be ``glued'' together. \begin{figure}[ht] \centering \includegraphics[clip,width=0.45\textwidth]{distr_er_Ps_T.eps} \caption{ \label{fig:distrERPsT} Distribution (normalised such that the integral is one) of the size of the largest component for ER random graphs ($c=2$, $N=500$) at artificial temperature $T=9.5$. } \end{figure} A detailed study of this phase transition is beyond the scope of this work, in particular because its physical relevance is not yet fully clear. \subsection{Two-dimensional percolation} Again, the discussion of the rate function for the distribution of the relative size $s=S/N$ of the largest component starts with the non-percolating regime. The rate function for $p=0.3<p_{\rm c}$ ($L=32$) is shown in Fig.\ \ref{fig:rate2d0.3}. In principle it looks similar to the ER case displayed in Fig.\ \ref{fig:rateER0.5_1.0}. A striking difference is that it exhibits a large region where it behaves linearly, basically for half of the support, while it grows more strongly for $s>0.5$. For $L=16$ (not shown), the same result was found. This means that the finite-size effects are small, as in the ER case. This also indicates that the shape of the rate function should basically remain the same for $L\to\infty$. In particular, it appears likely that for $L\to\infty$, $\Phi(s)$ will consist of a linear part for small values of $s$ and will grow more strongly for $s\to 1$. \begin{figure}[t!]
\centering \includegraphics[clip,width=0.45\textwidth]{perc_0.3_rate.eps} \caption{ \label{fig:rate2d0.3} Large-deviation rate function $\Phi(s)$ of two-dimensional (2d) bond percolation with occupation probability $p=0.3<p_{\rm c}$, $L=32$ (symbols). For small relative cluster sizes $s$, the rate function behaves linearly (the line displays a linear function with slope $0.1125$, obtained from fitting a linear function to the data in the range $s\in[0,0.5]$). } \end{figure} In contrast, the large-deviation rate function right at the percolation transition $p=p_{\rm c}=0.5$ ($L=32$), see Fig.\ \ref{fig:rate2d0.5_0.6}, looks very different from the ER case: it exhibits a minimum at a relative cluster size of about $s=S/N=0.78$, see the main plot of Fig.\ \ref{fig:rate2d0.5_0.6}. This is very large compared to the ER case, but it is only a finite-size effect: for example, for $L=512$ (additional simple-sampling simulations, not shown), the minimum of the rate function has already moved to a smaller value of $s\approx 0.5$, and for $p=0.49$, just below the percolation transition, the most likely relative size of a cluster is $s=0.1$. Hence, close to $p_{\rm c}$ the finite-size effects, i.e., the corrections to the large-deviation rate function, are much stronger in the finite-dimensional case than for ER random graphs. The case of the percolating regime is displayed in the inset of Fig.\ \ref{fig:rate2d0.5_0.6}. Here again, the result looks very similar to the ER case, except that the magnitude of the rate function is somewhat smaller. Hence, it appears likely that the shape of the rate function for the 2d percolation problem in the limit $L\to\infty$ is very similar to the ER case, i.e., it may level off horizontally at a finite value for $s\to 0$, and the strong increase found for $L=32$ is again a finite-size effect. This is confirmed by the result for $L=16$, which exhibits a much stronger increase for $s\to 0$ than the $L=32$ result, as in the case of ER random graphs. \begin{figure}[ht] \centering \includegraphics[clip,width=0.45\textwidth]{perc_0.5_0.6_rate.eps} \caption{ \label{fig:rate2d0.5_0.6} Large-deviation rate function $\Phi(s)$ of two-dimensional (2d) bond percolation with occupation probability $p=0.5=p_{\rm c}$, $L=32$ and (inset) $p=0.6$, $L=16$/$L=32$. } \end{figure} In Fig.\ \ref{fig:STdistrPerc} the behaviour of $S$ as a function of the artificial temperature is shown. In the non-percolating regime ($p=0.3$), see right inset, the average value $\overline s$ behaves very regularly as a function of the temperature; no sign of a transition is visible. On the other hand, inside the percolating regime ($p=0.6$), see main plot, $\overline{s}(T)$ exhibits a strong increase around $T=27$. The distribution of $S$ at $T=27$ exhibits a strong bimodal signature, which indicates that indeed a phase transition of first-order type takes place. In summary, the temperature dependence of the size of the largest component is very similar to the results obtained for the ER random graphs. \begin{figure}[ht] \centering \includegraphics[clip,width=0.45\textwidth]{S_T_distr_perc.eps} \caption{ \label{fig:STdistrPerc} Average relative size $\overline{s}$ of the largest component as a function of the artificial temperature $T$ for 2d percolation ($L=32$) in the percolating regime ($p=0.6$) and (right inset) in the non-percolating regime ($p=0.3$).
The left inset shows the distribution (normalised such that the integral is one) of the size of the largest component ($p=0.6$) at artificial temperature $T=27$ (the line is a guide to the eyes only). } \end{figure} \section{Summary and outlook} By using an artificial Boltzmann ensemble characterised by an artificial temperature $T$, the distributions of the size of the largest component for ER random graphs with finite connectivity $c$ and for 2d percolation have been studied in this work. For not too large system sizes, the distributions can be calculated numerically over the full support, giving access to very small probabilities such as $10^{-180}$. For the ER case, the numerical results for the large-deviation rate function $\Phi(s)$, obtained for rather small graphs of size $N=500$, agree very well with analytical results obtained previously for the leading behaviour in the limit $N\to\infty$. This demonstrates the usefulness of the numerical approach, which had previously been applied to models where no complete comparison between numerical data and exact analytic results had been performed. The main findings are that below and at the percolation transition, $\Phi(s)$ exhibits a minimum at $s=0$ and rises monotonically for $s\to 1$. Inside the percolating regime, $\Phi(s)$ exhibits a minimum at a finite value of $s$, grows quickly around this minimum and levels off horizontally for $s\to 0$. The finite-size corrections are usually small, except for the percolating regime in an extended region near $s=0$. Furthermore, when studying the average value $\overline s$ as a function of temperature, a transition of first-order type is found between a phase where $S$ is untypically small and a phase where $S$ is large. For the 2d percolation problem, where no analytic results are available, basically the same results are found: the shapes of the large-deviation rate functions below, at, and above the percolation threshold are qualitatively the same, except that the finite-size corrections at the percolation threshold appear to be larger compared to the ER results. Also the behaviour of the largest-component size as a function of the temperature appears to be similar, in particular exhibiting a first-order-type transition in the percolating regime. Since the comparison with the exact results for the ER random graphs demonstrates the usefulness of this approach for studying large-deviation properties of random graphs, it appears promising to consider many other properties of different ensembles of random graphs in the same way. For example, it would be interesting to obtain the distribution of the diameter of ER random graphs, for which an analytic result is available only for $c<1$. Corresponding simulations are currently being performed by the author of this work. \section*{Acknowledgements} The author thanks Oliver Melchert for critically reading the manuscript. The simulations were partially performed at the GOLEM I cluster for scientific computing at the University of Oldenburg (Germany). \bibliographystyle{epj}
\section{Introduction} \label{sec intro} Symbolic control deals with the use of discrete synthesis techniques for controlling complex continuous or hybrid systems~\cite{belta2007symbolic,tabuada2009symbolic}. In such approaches, one relies on symbolic abstractions of the original system, i.e. dynamical systems with finitely many state and input values, each of which symbolizes sets of states and inputs of the concrete system~\cite{alur2000discrete}. This enables the use of discrete controller synthesis techniques, such as supervisory control~\cite{cassandras2009introduction} or algorithmic game theory~\cite{bloem2012synthesis}, which allows us to address high-level specifications such as safety, reachability or more general properties specified by automata or temporal logic formulas~\cite{baier2008principles}. When the behaviors of the concrete system and of its abstraction are related by some formal inclusion relationship (such as alternating simulation~\cite{tabuada2009symbolic} or feedback refinement relations~\cite{reissig2016}), the discrete controller of the abstraction can be refined to control the concrete system, with guarantees of correctness. Several approaches exist for computing symbolic abstractions for a wide range of dynamical systems (see e.g.~\cite{tabuada2006linear,pola2008approximately,zamani2012symbolic,zamani2014symbolic,coogan2015mixed,reissig2016}), based on partitions or discretizations of the state and input spaces. The numbers of symbolic states and inputs are then typically exponential in the dimensions of the concrete state and input spaces, respectively. This limits the application of these approaches to low-dimensional systems. Several works have aimed at improving the scalability of symbolic control. In~\cite{le2013mode,zamani2015symbolic}, an approach, which does not require state space discretization, has been presented for computing symbolic abstractions of incrementally stable systems. In~\cite{pola2012integrated,girard2016safety}, algorithms combining discrete controller synthesis with on-the-fly computation of symbolic abstractions have been developed. Compositional approaches have also been explored in several papers~\cite{tazaki2008bisimilar,reissig2010abstraction,meyer2015adhs,boskos2015decentralized,kim2015compositional,dallal2015compositional,pola2016symbolic,pola2016decentralized}. In such approaches, a system with a control specification is decomposed into subsystems with local control specifications. Then, for each subsystem, a symbolic abstraction can be computed and a local controller is synthesized while assuming that the other subsystems meet their local specifications. This approach, called \emph{assume-guarantee} reasoning~\cite{henzinger1998you}, enables the use of symbolic control techniques for higher dimensional systems. In this paper, we develop a novel compositional approach to symbolic control synthesis for a general class of discrete-time nonlinear systems. Our approach clearly differs from the previously mentioned works (and particularly from our previous work~\cite{meyer2015adhs}) in that it allows subsystems to share common state variables: each subsystem may include locally modeled but uncontrolled variables, which are accessible to the local controller. Hence, this makes it possible for local controllers to share information on some of the states of the system.
In this setting, we develop compositional approaches for computing symbolic abstractions and synthesizing controllers that maintain the state of the system in some specified safe set. The paper is organized as follows. Section~\ref{sec preliminaries} introduces the class of systems, safety controllers and the abstraction framework considered in the paper. Section~\ref{sec compositional} presents a compositional approach for computing abstractions from symbolic subsystems with overlapping sets of states. Compositional controller synthesis is addressed in Section~\ref{sec synthesis}. Section~\ref{sec comparison} provides results to compare abstractions and controllers obtained from different system decompositions, and a discussion on the computational complexity of the approach. Numerical experiments are then reported in Section~\ref{sec simulation}. \section{Preliminaries} \label{sec preliminaries} \subsection{System description} \label{sub preli cooperative} We consider a class of discrete time nonlinear control systems modeled by the difference inclusion: \begin{equation} x(t+1)\in F(x(t),u(t)),\; t\in \mathbb{N} \label{eq system} \end{equation} where $\mathbb{N}=\{0,1,2,\dots\}$, $x(t)\in\mathbb{R}^n$ and $u(t)\in \mathcal U\subseteq\mathbb{R}^p$ denote the state and the control input, respectively, and $F:\mathbb{R}^n\times \mathcal U \rightarrow 2^{\mathbb{R}^n}$ is a set-valued map. System (\ref{eq system}) is discrete time; however, it encompasses sampled versions of continuous time systems, possibly subject to disturbances (see e.g.~\cite{meyer2015adhs,reissig2016}). Throughout the paper, we assume, for simplicity, that $F(x,u)\ne \emptyset$ for all $x\in \mathbb{R}^n$ and $u\in \mathcal U$. For a subset of states $\mathcal X'\subseteq \mathbb{R}^n$ and of inputs $\mathcal U' \subseteq \mathcal U$, we denote $$ F(\mathcal X',\mathcal U') = \bigcup_{x\in\mathcal{X'},u\in\mathcal{U'}}F(x,u). $$ Exact computation of $F(\mathcal X',\mathcal U')$ may not always be possible, especially when (\ref{eq system}) corresponds to the sampled dynamics of a continuous time system. Therefore, we will assume throughout the paper that we are able to compute, for all sets of states $\mathcal X'\subseteq \mathbb{R}^n$ and of inputs $\mathcal U' \subseteq \mathcal U$, a set $\overline{F}(\mathcal X',\mathcal U')$ verifying \begin{equation} F(\mathcal X',\mathcal U') \subseteq \overline{F}(\mathcal X',\mathcal U'). \label{eq over reachable set centralized} \end{equation} Several methods exist for computing such over-approximations for linear~\cite{girard2005reachability,kurzhanskiy2007ellipsoidal,le2010reachability} and nonlinear~\cite{sassi2012reachability,althoff2014reachability,coogan2015mixed,reissig2016} systems. \subsection{Transition systems and safety controllers} \label{sub preli alternating} A {\it transition system} is defined as a triple $S=(X,U,\delta)$ consisting of: \begin{itemize} \item a set of states $X$; \item a set of inputs $U$; \item a transition map $\delta : X\times U \rightarrow 2^X$. \end{itemize} A transition $x'\in \delta(x,u)$ means that $S$ can evolve from state $x$ to state $x'$ under input $u$. $U(x)$ denotes the set of enabled inputs at state $x$: i.e. $u\in U(x)$ if and only if $\delta(x,u)\neq\emptyset$.
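For readers who wish to experiment with the constructions of this paper, a finite transition system can be represented, for instance, as a map from state-input pairs to sets of successors; the following Python sketch (an illustrative choice, not prescribed by the approach) also recovers the enabled inputs $U(x)$:
\begin{verbatim}
from collections import defaultdict

class TransitionSystem:
    """Finite transition system S = (X, U, delta) with delta : X x U -> 2^X."""
    def __init__(self, states, inputs):
        self.X = set(states)
        self.U = set(inputs)
        self._delta = defaultdict(set)     # (x, u) -> set of successors

    def add_transition(self, x, u, x_next):
        self._delta[(x, u)].add(x_next)

    def post(self, x, u):
        """delta(x, u): possibly empty set of successors."""
        return self._delta.get((x, u), set())

    def enabled(self, x):
        """U(x) = {u in U : delta(x, u) is non-empty}."""
        return {u for u in self.U if self.post(x, u)}
\end{verbatim}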
A trajectory of $S$ is a finite or infinite sequence of transitions $(x^0,u^0,x^1,u^1,\dots)$ such that $x^{t+1}\in \delta(x^t,u^t)$, for $t\in\mathbb{N}$. In the following, we consider a safety synthesis problem for transition system $S$: let $\mathcal X\subseteq X$ be a subset of safe states; a {\it safety controller} for system $S$ and safe set $\mathcal X$ is a map $C:X \rightarrow 2^U$ such that: \begin{itemize} \item for all $x\in X$, $C(x)\subseteq U(x)$; \item its domain $dom(C)=\{x\in X |\; C(x)\ne \emptyset\}\subseteq \mathcal X$; \item for all $x\in dom(C)$ and $u\in C(x)$, $\delta(x,u) \subseteq dom(C)$. \end{itemize} Essentially, a safety controller makes it possible to generate infinite trajectories of $S$, $(x^0,u^0,x^1,u^1,\dots)$ such that $x^t \in \mathcal X$, for all $t\in \mathbb{N}$, as follows: $x^0\in dom(C)$, $u^t\in C(x^t)$ and $x^{t+1}\in \delta(x^t,u^t)$, for all $t\in \mathbb{N}$. It is known (see e.g.~\cite{tabuada2009symbolic}) that there exists a {\it maximal safety controller} $C^*$ for system $S$ and safe set $\mathcal X$ such that for all safety controllers $C$, for all $x\in X$, it holds that $C(x)\subseteq C^*(x)$. \subsection{Feedback refinement relations} Complex transition systems motivate the use of abstractions, since finding a control strategy for an abstraction is generally simpler than for the original system. However, to derive a controller for the original system from that of the abstraction, the two systems must satisfy a formal behavioral relationship such as alternating simulation~\cite{tabuada2009symbolic}. In this paper, we will rely on the notion of feedback refinement relations~\cite{reissig2016}, which form a special case of alternating simulation relations: \begin{defn}[Feedback refinement] \label{def simulation} Given two transition systems $S_a=(X_a,U_a,\delta_a)$ and $S_b=(X_b,U_b,\delta_b)$, with $U_b\subseteq U_a$, a map $H:X_a\rightarrow X_b$ defines a feedback refinement relation from $S_a$ to $S_b$ if for all $(x_{a},x_{b})\in X_{a}\times X_{b}~\text{with}~x_{b}=H(x_{a})$: \begin{itemize} \item $U_b(x_b) \subseteq U_a(x_a)$; \item for all $u\in U_b(x_b)$, $H(\delta_a(x_a,u))\subseteq \delta_b(x_b,u).$ \end{itemize} We denote $S_a\preceq_{\mathcal{FR}}S_b$. \end{defn} In the previous definition, $S_a$ represents a complex concrete system while $S_b$ is a simpler abstraction. From Definition~\ref{def simulation}, it follows that all abstract inputs $u$ of $S_b$ can also be used in $S_a$, in such a way that all concrete transitions in $S_a$ are matched by an abstract transition in $S_b$. As a result, controllers synthesized using the abstraction $S_b$ can be interfaced with the map $H$ to obtain a controller for the concrete system $S_a$ (see~\cite{reissig2016}). In particular, if $C_b:X_b\rightarrow 2^{U_b}$ is a safety controller for transition system $S_b$ and safe set $\mathcal X_b\subseteq X_b$, then $C_a:X_a\rightarrow 2^{U_a}$, given by $C_a(x_a)=C_b(H(x_a))$ for all $x_a\in X_a$, is a safety controller for transition system $S_a$ and safe set $\mathcal X_a=H^{-1}(\mathcal X_b) \subseteq X_a$. \section{Compositional abstraction} \label{sec compositional} System (\ref{eq system}) can be described as a transition system $S=(X,U,\delta)$ where $X=\mathbb{R}^n$, $U=\mathcal U$ and $\delta=F$; let $\mathcal X \subseteq \mathbb{R}^n$ be a subset of states of interest. In this section, we present a compositional approach for computing symbolic abstractions of transition system $S$.
In order to allow for system decomposition, we make the following assumption on the structure of the state and input sets $\mathcal X$ and $\mathcal U$: \begin{assum} \label{assum set} The following equalities hold: \begin{IEEEeqnarray*}{ll} \mathcal X= \mathcal X_1 \times \dots \times \mathcal X_{\bar n}, & \text{ with } \mathcal X_i \subseteq \mathbb{R}^{n_i},\; i\in I= \{1,\dots,\bar n\};\\ \mathcal U= \mathcal U_1 \times \dots \times \mathcal U_{\bar p}, & \text{ with } \mathcal U_j \subseteq \mathbb{R}^{p_j},\; j\in J = \{1,\dots,\bar p\}. \end{IEEEeqnarray*} \end{assum} States $x\in \mathbb{R}^n$ and inputs $u\in \mathbb{R}^p$ can thus be seen as vectors of elementary components: $x=(x_1,\dots,x_{\bar n})$ with $x_i \in \mathbb{R}^{n_i}$ for $i\in I$, and $u=(u_1,\dots,u_{\bar p})$ with $u_j \in \mathbb{R}^{p_j}$ for $j\in J$. For $i\in I$, let $\mathcal P_i$ be a finite partition of the set $\mathcal X_i$; then, let $\mathcal P$ be the finite partition of the safe set $\mathcal X$ obtained from the partitions $\mathcal P_i$ as follows: \begin{equation*} \mathcal P = \left\{ s_1 \times \dots \times s_{\bar n} |\; s_i \in \mathcal P_i, \; i\in I \right\}. \end{equation*} Similarly, for $j\in J$, let $\mathcal V_j$ be a finite subset of $\mathcal U_j$; then, let $\mathcal V$ be the finite subset of $\mathcal U$ given by the Cartesian product of the sets $\mathcal V_j$: \begin{equation*} \mathcal V = \mathcal V_1 \times \dots \times \mathcal V_{\bar p}. \end{equation*} \subsection{System decomposition} \label{sub compo decomposition} Let $m\in \mathbb{N}$, with $1\le m \le \min(\bar n, \bar p)$, and let $\Sigma= \{1,\dots,m\}$. The symbolic abstraction of $S$ is obtained by composition of $m$ symbolic subsystems $S_\sigma$, $\sigma \in \Sigma$. In the following, we use two types of indices: \begin{itemize} \item Latin letters $i \in I$, $j\in J$ refer to $x_i$ and $u_j$, the components of the state $x$ and input $u$ of system $S$. \item Greek letters $\sigma \in \Sigma$ refer to $S_\sigma$, the $\sigma$-th symbolic subsystem; $s_\sigma$ and $u_\sigma$ denote the state and input of system $S_\sigma$, respectively. \end{itemize} We will use $\pi_i:\mathbb{R}^n \rightarrow \mathbb{R}^{n_i}$ and $\pi_j:\mathbb{R}^p \rightarrow \mathbb{R}^{p_j}$ to denote the projections over components $x_i$ and $u_j$, with $i\in I$, $j\in J$, respectively. For $\mathcal X' \subseteq \mathbb{R}^n$ and $\mathcal U' \subseteq \mathbb{R}^p$, we denote $\mathcal X'_i = \pi_i(\mathcal X')$ and $\mathcal U'_j = \pi_j(\mathcal U')$. Similarly, for subsets of indices $I'\subseteq I$, $J'\subseteq J$, $\pi_{I'}: \mathbb{R}^n \rightarrow \prod_{i\in I'} \mathbb{R}^{n_i}$ and $\pi_{J'}:\mathbb{R}^p \rightarrow \prod_{j\in J'} \mathbb{R}^{p_j}$ denote the projections over the sets of components $\{x_i |\; i\in I'\}$ and $\{u_j |\; j\in J'\}$, respectively; we use the notation $x_{I'} = \pi_{I'}(x)$, $\mathcal X'_{I'} = \pi_{I'}(\mathcal X')$, $u_{J'}= \pi_{J'}(u)$ and $\mathcal U'_{J'} = \pi_{J'}(\mathcal U')$.
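In an implementation, the cells of the product partition $\mathcal P$ can conveniently be indexed by tuples of per-component cell indices, so that the projections $\pi_{I'}$ reduce to a restriction of these tuples; the following short sketch illustrates this encoding (the dimensions and index values are arbitrary examples, not taken from the paper):
\begin{verbatim}
# A cell s of P is identified by one cell index per component i in I,
# e.g. s = (k_1, ..., k_nbar) with k_i indexing a cell of P_i.
def project(cell, I, I_prime):
    """pi_{I'}: keep only the entries of the index tuple belonging to I'."""
    return tuple(k for i, k in zip(I, cell) if i in I_prime)

I = (1, 2, 3)                    # nbar = 3 state components
s = (4, 0, 7)                    # cell 4 of P_1, cell 0 of P_2, cell 7 of P_3
assert project(s, I, {1, 3}) == (4, 7)
\end{verbatim}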
For $\sigma\in \Sigma$, subsystem $S_\sigma$ can be described using the following sets of indices: \begin{itemize} \item $I_\sigma^c \subseteq I$, with $I_\sigma^c\ne \emptyset$, denotes the state components to be controlled in $S_\sigma$; $(I_1^c,\dots,I_m^c)$ is a partition of the state indices $I$; \item $I_\sigma \subseteq I$, with $I_\sigma^c\subseteq I_\sigma$, denotes the state components modeled in $S_\sigma$; \item $I_\sigma^o \subseteq I$, with $I_\sigma^o=I_\sigma \backslash I_\sigma^c$, denotes the state components that are modeled but not controlled in $S_\sigma$; \item $I_\sigma^u \subseteq I$, with $I_\sigma^u= I \backslash I_\sigma$, denotes the remaining state components that are unmodeled in $S_\sigma$; \item $J_\sigma \subseteq J$, with $J_\sigma \ne \emptyset$, denotes the control input components modeled in $S_\sigma$; $(J_1,\dots,J_m)$ is a partition of the control input indices $J$; \item $J_\sigma^u \subseteq J$, with $J_\sigma^u=J \backslash J_\sigma$, denotes the remaining control input components that are unmodeled in $S_\sigma$. \end{itemize} It is important to note that the subsystems may share common modeled state components (i.e. the sets of indices $I_\sigma$ may overlap), though the sets of controlled state components $I_\sigma^c$ and of modeled control input components $J_\sigma$ are necessarily disjoint. Intuitively, $S_\sigma$ will be used to control state components $I_\sigma^c$ using input components $J_\sigma$; the other state components $I_\sigma^o\cup I_\sigma^u$ will be controlled in other subsystems using input components $J_\sigma^u$. Though the state components $I_\sigma^o$ will be controlled in other subsystems, they are modeled in $S_\sigma$ and thus information on their dynamics is available for the control of $S_\sigma$. Let us remark that the sets of indices $I_\sigma^o$ and $I_\sigma^u$ may be empty, namely if $I_\sigma=I_\sigma^c$ and $I_\sigma=I$, respectively. If $m=1$, there is only one subsystem and we recover the usual centralized abstraction approach~(see e.g.~\cite{tabuada2009symbolic,zamani2012symbolic,coogan2015mixed,reissig2016}). \begin{remark} In theory, the choice of the sets of indices can be made arbitrarily. However, if the considered system has some structure, i.e. if it consists of interconnected components, a natural decomposition is to associate to each component $\mathcal C$ one subsystem $S_\sigma$ where the controlled states $I_\sigma^c$ and the modeled control inputs $J_\sigma$ are the states and control inputs of $\mathcal C$, and the modeled but uncontrolled states $I_\sigma^o$ are the states of the other components that have the strongest interactions with $\mathcal C$. \end{remark} \subsection{Symbolic subsystems} \label{sub compo abstraction} For $\sigma \in \Sigma$, the symbolic subsystem $S_\sigma$ is an abstraction of $S$ which models only the state and input components $x_{I_\sigma}$ and $u_{J_\sigma}$, respectively.
Formally, subsystem $S_\sigma$ is defined as a transition system $S_\sigma=(X_\sigma,U_\sigma,\delta_{\sigma})$ where: \begin{itemize} \item the set of states $X_\sigma$ is a finite partition of $\pi_{I_\sigma}(\mathbb{R}^n)$, given by $X_\sigma=X_\sigma^0\cup\{Out_\sigma\}$ where $Out_\sigma=\pi_{I_\sigma}(\mathbb{R}^n)\setminus \mathcal X_{I_\sigma}$ and $$ X_\sigma^0= \left\{ \prod_{i\in I_\sigma} s_i \Big|\; s_i \in \mathcal P_i, \; i\in I_\sigma \right\} $$ is a finite partition of $\mathcal X_{I_\sigma}$; \item the set of inputs $U_\sigma$ is a finite subset of $\mathcal U_{J_\sigma}$ given by $$U_\sigma=\prod_{j\in J_\sigma} \mathcal V_j.$$ \end{itemize} To define the transition relation of $S_\sigma$, let us first define the following map: given $s_\sigma \in X_\sigma^0$ and $u_\sigma\in U_\sigma$, we define the set ${\Phi}_\sigma(s_\sigma,u_\sigma)\subseteq \mathbb{R}^n$ as follows: \begin{equation} {\Phi}_\sigma(s_\sigma,u_\sigma)= \overline{F}(\mathcal X\cap \pi^{-1}_{I_\sigma}(s_\sigma),\mathcal U\cap \pi^{-1}_{J_\sigma}(\{u_\sigma\})) . \label{eq reachable set Si} \end{equation} The set ${\Phi}_\sigma(s_\sigma,u_\sigma)$ is therefore an over-approximation of successors of states $x\in \mathcal X$ with $\pi_{I_\sigma}(x)\in s_\sigma$, for control inputs $u\in \mathcal U$ with $\pi_{J_\sigma}(u)=u_\sigma$. Then, we define the transition relation of $S_\sigma$ as follows: \begin{itemize} \item for all $s_\sigma \in X_\sigma^0,~u_\sigma \in U_\sigma,~s_\sigma'\in X_\sigma^0$, \begin{equation} \label{eq trans1a} s_{\sigma}' \in \delta_\sigma(s_{\sigma},u_{\sigma}) \iff s_\sigma'\cap\pi_{I_\sigma}({\Phi}_\sigma(s_\sigma,u_\sigma))\neq\emptyset; \end{equation} \item for all $s_\sigma \in X_\sigma^0,~u_\sigma\in U_\sigma$, \begin{equation} \label{eq trans1b} \hspace{-0.3cm} Out_\sigma\in \delta_\sigma(s_{\sigma},u_{\sigma}) \iff \left\{ \begin{array}{l} \pi_{I_\sigma}({\Phi}_\sigma(s_\sigma,u_\sigma))\cap \mathcal X_{I_\sigma}=\emptyset\\ \text{or } \pi_{I_\sigma^c}({\Phi}_\sigma(s_\sigma,u_\sigma)) \nsubseteq \mathcal X_{I_\sigma^c}. \end{array} \right. \end{equation} \end{itemize} \begin{remark} \label{remark:subsystem1} The first condition in (\ref{eq trans1b}) holds if and only if there does not exist any transition defined by (\ref{eq trans1a}), because $X_\sigma^0$ is a partition of $\mathcal X_{I_\sigma}$. As a consequence, it follows that for all $s_\sigma \in X_\sigma^0,~u_\sigma \in U_\sigma$, $\delta_\sigma(s_\sigma,u_\sigma)\ne \emptyset$ and thus $U_\sigma(s_\sigma)=U_\sigma$. \end{remark} \ifdouble \begin{figure}[b] \centering \includegraphics[width=0.9\columnwidth]{AG2_3graphs2} \caption{Illustration of (\ref{eq trans1b}): a transition towards $Out_\sigma$ is created in cases a and b, but not in case c.} \label{fig ag2} \end{figure} \else \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{AG2_3graphs2} \caption{Illustration of (\ref{eq trans1b}): a transition towards $Out_\sigma$ is created in cases a and b, but not in case c.} \label{fig ag2} \end{figure} \fi \begin{remark} \label{remark:subsystem2} According to (\ref{eq trans1b}), a transition to $Out_\sigma$ exists if $\pi_{I_\sigma}({\Phi}_\sigma(s_\sigma,u_\sigma))$ is entirely outside $\mathcal X_{I_\sigma}$ (first condition and Figure~\ref{fig ag2}.a); or if $\pi_{I_\sigma^c}({\Phi}_\sigma(s_\sigma,u_\sigma))$ is not contained in $\mathcal X_{I_\sigma^c}$ (second condition and Figure~\ref{fig ag2}.b). 
It should be noted that in the case where the reachable set $\pi_{I_\sigma^c}({\Phi}_\sigma(s_\sigma,u_\sigma))$ is contained in $\mathcal X_{I_\sigma^c}$ but $\pi_{I_\sigma^o}({\Phi}_\sigma(s_\sigma,u_\sigma))$ is not contained in $\mathcal X_{I_\sigma^o}$, as in Figure~\ref{fig ag2}.c, no transition is created towards $Out_\sigma$. Finally, if $I_\sigma=I_\sigma^c$, (\ref{eq trans1b}) becomes equivalent to $$ Out_\sigma\in \delta_\sigma(s_{\sigma},u_{\sigma}) \iff \pi_{I_\sigma}({\Phi}_\sigma(s_\sigma,u_\sigma)) \nsubseteq \mathcal X_{I_\sigma}, $$ which is the condition used in~\cite{meyer2015adhs} for compositional abstractions where the sets of modeled state components $I_\sigma$ do not overlap (i.e. $I_\sigma=I^c_\sigma$, for all $\sigma\in \Sigma$). \end{remark} \subsection{Composition} \label{sub compo alternating} In this section, we show how the previous subsystems $S_\sigma$, with $\sigma \in \Sigma$, can be composed in order to define a symbolic abstraction $S_c$ of the original system $S$. The main result of the section is Theorem~\ref{th simulation Sc}, which shows that there exists a feedback refinement relation from $S$ to $S_c$. The composition of the subsystems $S_\sigma$, $\sigma \in \Sigma$, is given by the transition system $S_c=(X_c,U_c,\delta_c)$ where: \begin{itemize} \item the set of states $X_c$ is a finite partition of $\mathbb{R}^n$, given by $X_c=X_c^0\cup\{Out\}$ where $Out=\mathbb{R}^n\setminus \mathcal X$ and $X_c^0=\mathcal{P}$ is a finite partition of $\mathcal X$; \item the set of inputs $U_c=\mathcal V$ is a finite subset of $\mathcal U$. \end{itemize} Let us remark that, by definition of $X_c^0$ and $X_\sigma^0$, for all $s\in X_c^0$ its projection satisfies $s_{I_\sigma} \in X_\sigma^0$. Similarly, for all $u \in U_c$, its projection satisfies $u_{J_\sigma} \in U_\sigma$. The transition relation of $S_c$ can therefore be defined as follows: \begin{itemize} \item for all $s\in X_c^0,~u\in U_c,~s'\in X_c^0$, \begin{equation} \label{eq trans2a} s'\in \delta_c(s,u) \Longleftrightarrow\forall \sigma \in \Sigma,~s'_{I_\sigma} \in\delta_\sigma(s_{I_\sigma},u_{J_\sigma}); \end{equation} \item for all $s\in X_c^0,~u\in U_c$, \begin{equation} \label{eq trans2b} Out \in \delta_c(s,u) \Longleftrightarrow \exists \sigma \in \Sigma,\; Out_\sigma \in\delta_\sigma(s_{I_\sigma},u_{J_\sigma}). \end{equation} \end{itemize} \begin{remark} Because the sets of modeled state components $I_\sigma$ are allowed to overlap, the transition relation of $S_c$ cannot simply be obtained as the Cartesian product of the transition relations of the subsystems $S_\sigma$, as in~\cite{meyer2015adhs}. Indeed, for $s\in X_c^0,~u\in U_c$, it is possible that for all $\sigma \in \Sigma$ there exists $s'_\sigma \in X_\sigma^0$ such that $s_{\sigma}' \in \delta_\sigma(s_{I_\sigma},u_{J_\sigma})$. However, a transition towards a state of $X_c^0$ will exist in $S_c$ if and only if there exists $s'\in X_c^0$ such that $s_{I_\sigma}'=s'_\sigma$, for all $\sigma \in \Sigma$. \end{remark} In view of the previous remark, it is legitimate to ask whether the composition of the subsystems can lead to pairs of states and inputs $(s,u)\in X_c^0 \times U_c$ without any successor. The following proposition shows that this is not the case: \begin{prop} \label{prop input composed} Under Assumption~\ref{assum set}, for all $s\in X_c^0$ we have $U_c(s)=U_c$, i.e.\ $\delta_c(s,u)\neq\emptyset$, for all $u\in U_c$. \end{prop} \begin{proof} Let $s\in X_c^0$ and $u\in U_c$.
Then, for all $\sigma \in \Sigma$, $s_{I_\sigma}\in X_\sigma^0$, $u_{J_\sigma}\in U_\sigma$ and, by construction, $\delta_\sigma(s_{I_\sigma},u_{J_\sigma})\neq\emptyset$ (see Remark~\ref{remark:subsystem1}). If there exists a subsystem $S_\sigma$ such that $Out_\sigma \in \delta_\sigma(s_{I_\sigma},u_{J_\sigma})$, then by definition of $S_c$ we have $Out\in \delta_c(s,u)$. Otherwise, we have that $Out_\sigma \notin \delta_\sigma(s_{I_\sigma},u_{J_\sigma})$ for all $\sigma \in \Sigma$, which from the second condition of (\ref{eq trans1b}) implies that \begin{equation} \label{eq p1a} \forall \sigma\in \Sigma,\; \pi_{I^c_\sigma}({\Phi}_\sigma(s_{I_\sigma},u_{J_\sigma}))\subseteq \mathcal X_{I_\sigma^c}. \end{equation} Noting that $s\subseteq \mathcal X\cap \pi^{-1}_{I_\sigma}(s_{I_\sigma})$ and $\{u\}\subseteq \mathcal U\cap \pi^{-1}_{J_\sigma}(\{u_{J_\sigma}\})$, the following inclusion follows from (\ref{eq over reachable set centralized}) and (\ref{eq reachable set Si}): \ifdouble \begin{eqnarray} \nonumber F(s,\{u\}) &\subseteq & F(\mathcal X\cap \pi^{-1}_{I_\sigma}(s_{I_\sigma}),\mathcal U\cap \pi^{-1}_{J_\sigma}(\{u_{J_\sigma}\}))\\ \label{eq p1b} &\subseteq & {\Phi}_\sigma(s_{I_\sigma},u_{J_\sigma}). \end{eqnarray} \else \begin{equation} \label{eq p1b} F(s,\{u\}) \subseteq F(\mathcal X\cap \pi^{-1}_{I_\sigma}(s_{I_\sigma}),\mathcal U\cap \pi^{-1}_{J_\sigma}(\{u_{J_\sigma}\})) \subseteq {\Phi}_\sigma(s_{I_\sigma},u_{J_\sigma}). \end{equation} \fi Therefore, from (\ref{eq p1a}) and (\ref{eq p1b}) it follows that $$ \forall \sigma\in \Sigma,\; \pi_{I^c_\sigma}(F(s,\{u\}) )\subseteq \mathcal X_{I_\sigma^c}. $$ This, together with Assumption~\ref{assum set} and the fact that $(I_1^c,\dots,I_m^c)$ is a partition of $I$, implies that $F(s,\{u\}) \subseteq \mathcal X$. Since $X_c^0$ is a partition of $\mathcal X$, there exists $s'\in X_c^0$ such that $s'\cap F(s,\{u\})\neq\emptyset$. Then, (\ref{eq p1b}) gives $s'\cap {\Phi}_\sigma(s_{I_\sigma},u_{J_\sigma}) \neq\emptyset$, for all $\sigma\in \Sigma$. Thus, for all $\sigma\in \Sigma$, $s'_{I_\sigma} \cap \pi_{I_\sigma}({\Phi}_\sigma(s_{I_\sigma},u_{J_\sigma})) \neq \emptyset$. It follows from (\ref{eq trans1a}) that $s_{I_\sigma}'\in \delta_\sigma(s_{I_\sigma},u_{J_\sigma})$ for all $\sigma\in \Sigma$, which gives, by (\ref{eq trans2a}), $s'\in \delta_c(s,u)$. \end{proof} We can now state the main result of the section: \begin{thm} \label{th simulation Sc} Let the map $H:X \rightarrow X_c$ be given by $H(x)=s$ if and only if $x\in s$. Then, under Assumption~\ref{assum set}, $H$ defines a feedback refinement relation from $S$ to $S_c$: $S\preceq_{\mathcal{FR}}S_c$. \end{thm} \begin{proof} Let $s\in X_c^0$, $x\in s$, $u\in U_c(s)=U_c\subseteq U=U(x)$, $x'\in \delta(x,u)=F(x,u)$ and $s'=H(x')$. Since $x\in s$, we have $x'\in F(s,\{u\})$. Then, let us consider the two possible cases: \begin{itemize} \item $x'\in \mathcal X$ -- By (\ref{eq p1b}), we have $x'\in {\Phi}_\sigma(s_{I_\sigma},u_{J_\sigma})$, for all $\sigma\in \Sigma$. Since $x'\in \mathcal X$, we have $s'\in X_c^0$, and it follows from $x'\in s'$ that $s'\cap {\Phi}_\sigma(s_{I_\sigma},u_{J_\sigma})\neq \emptyset$, for all $\sigma\in \Sigma$. Then, for all $\sigma \in \Sigma$, $s'_{I_\sigma}\in X_\sigma^0$ and $s_{I_\sigma}'\cap \pi_{I_\sigma} ({\Phi}_\sigma(s_{I_\sigma},u_{J_\sigma}))\neq \emptyset$. From (\ref{eq trans1a}), $s_{I_\sigma}'\in \delta_\sigma(s_{I_\sigma},u_{J_\sigma})$, for all $\sigma\in \Sigma$, and by (\ref{eq trans2a}) we have $s'\in \delta_c(s,u)$.
\item $x'\notin \mathcal X$ -- Then, $F(s,\{u\}) \not\subseteq \mathcal X$. Then, from Assumption~\ref{assum set} and the fact that $(I_1^c,\dots,I_m^c)$ is a partition of $I$, it follows that there exists $\sigma \in \Sigma$ such that $\pi_{I_\sigma^c}(F(s,\{u\})) \not\subseteq \mathcal X_{I_\sigma^c}$. From (\ref{eq p1b}), we have $\pi_{I_\sigma^c}( {\Phi}_\sigma(s_{I_\sigma},u_{J_\sigma})) \not\subseteq \mathcal X_{I_\sigma^c}$. Then, from (\ref{eq trans1b}), $Out_\sigma \in \delta_\sigma(s_{I_\sigma},u_{J_\sigma})$, and from (\ref{eq trans2b}), $Out \in \delta_c(s,u)$. Since $x'\notin \mathcal X$, $s'=Out$. \end{itemize} The case $s=Out$ trivially satisfies Definition~\ref{def simulation} since $U_c(Out)=\emptyset$ by definition of $S_c$. \end{proof} Note that the composed abstraction $S_c$ is only created in this section to prove the feedback refinement relationship but one should avoid computing it in practice since it would defeat the purpose of the compositional approach. We end the section by stating an instrumental result, which will be used in Section~\ref{sec comparison} when comparing abstractions obtained from different system decompositions. \begin{lemma} \label{prop ag2} Under Assumption~\ref{assum set}, for all $s\in X_c^0$ and $u\in U_c$, $Out \in \delta_c(s,u)$ if and only if there exists $\sigma \in \Sigma$ such that $\pi_{I_\sigma^c}(\Phi_\sigma(s_{I_\sigma},u_{J_\sigma}))\not\subseteq \mathcal X_{I_\sigma^c}$. \end{lemma} \begin{proof} Sufficiency is straightforward from (\ref{eq trans1b}) and (\ref{eq trans2b}). As for necessity, if $Out\in \delta_c(s,u)$, then there exists a subsystem $\sigma$ such that $Out_\sigma\in \delta_\sigma(s_{I_\sigma},u_{J_\sigma})$. From (\ref{eq trans1b}), either $\pi_{I_\sigma^c}(\Phi_\sigma(s_{I_\sigma},u_{J_\sigma}))\not\subseteq \mathcal X_{I_\sigma^c}$ (in which case the property holds), or $\pi_{I_\sigma^c}(\Phi_\sigma(s_{I_\sigma},u_{J_\sigma})) \subseteq \mathcal X_{I_\sigma^c}$ and $\pi_{I_\sigma}(\Phi_\sigma(s_{I_\sigma},u_{J_\sigma})) \cap \mathcal X_{I_\sigma} =\emptyset$. Then, by Assumption~\ref{assum set} and since $I^o_\sigma=I_\sigma \setminus I_\sigma^c$, it follows that $\pi_{I^o_\sigma}(\Phi_\sigma(s_{I_\sigma},u_{J_\sigma})) \cap \mathcal X_{I^o_\sigma} =\emptyset$. Then, by (\ref{eq p1b}), $\pi_{I^o_\sigma}(F(s,\{u\})) \cap \mathcal X_{I^o_\sigma} =\emptyset$. Thus, it follows that $\pi_{I^o_\sigma}(F(s,\{u\})) \nsubseteq \mathcal X_{I^o_\sigma}$. From Assumption~\ref{assum set}, there exists $i\in I_\sigma^o$, such that $\pi_{i}(F(s,\{u\})) \nsubseteq \mathcal X_{i}$. Then, let $\sigma' \in \Sigma$ such that $i\in I_{\sigma'}^c$, then $\pi_{I_{\sigma'}^c}(F(s,\{u\})) \nsubseteq \mathcal X_{I_{\sigma'}^c}$. By (\ref{eq p1b}), it follows that $\pi_{I_{\sigma'}^c}(\Phi_\sigma(s_{I_{\sigma'}},u_{J_{\sigma'}})) \nsubseteq \mathcal X_{I_{\sigma'}^c}$ and the property holds. \end{proof} \section{Compositional safety synthesis} \label{sec synthesis} In this section, we consider the problem of synthesizing a safety controller for transition system $S$ and safe set $\mathcal X$. Because of the feedback refinement relation from $S$ to $S_c$, this can be done by solving the safety synthesis problem for transition system $S_c$ and safe set $X_c^0$. We propose a compositional approach, which works on the symbolic subsystems $S_\sigma$ and does not require computing the composed abstraction $S_c$. 
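The basic building block of this approach is the standard safety fixed-point computation on a single finite transition system (see~\cite{tabuada2009symbolic}); an illustrative sketch, reusing the \texttt{TransitionSystem} representation introduced in Section~\ref{sub preli alternating} (and not the implementation used for the numerical experiments), is:
\begin{verbatim}
def maximal_safety_controller(ts, safe):
    """Safety fixed point: returns the controller as a dict x -> set of inputs.

    ts   : TransitionSystem instance
    safe : set of safe states (subset of ts.X)
    """
    dom = set(safe)
    while True:
        # keep, at each state, only the inputs whose successors stay in dom
        ctrl = {x: {u for u in ts.enabled(x) if ts.post(x, u) <= dom}
                for x in dom}
        new_dom = {x for x, us in ctrl.items() if us}
        if new_dom == dom:
            return ctrl        # fixed point: domain of the maximal controller
        dom = new_dom
\end{verbatim}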
For $\sigma \in \Sigma$, let $C_\sigma^*:X_\sigma \rightarrow 2^{U_\sigma}$ be the maximal safety controller for transition system $S_\sigma$ and safe set $X_\sigma^0$. Since $S_\sigma$ has only finitely many states and inputs, $C_\sigma^*$ can be computed in finite time using a fixed-point algorithm~\cite{tabuada2009symbolic}. Now, let the controller $C_c:X_c \rightarrow 2^{U_c}$ be defined by $C_c(Out)=\emptyset$ and \begin{equation} \label{eq Cc} \forall s\in X_c^0,\; C_c(s)=\{u \in U_c |\; u_{J_\sigma} \in C^*_\sigma(s_{I_\sigma}), \forall \sigma \in \Sigma \}. \end{equation} \begin{thm} \label{th safety composition} Under Assumption~\ref{assum set}, $C_c$ is a safety controller for transition system $S_c$ and safe set $X_c^0$. \end{thm} \begin{proof} From Proposition~\ref{prop input composed} and since $C_c(Out)=\emptyset$, it is clear that for all $s\in X_c$, we have $C_c(s) \subseteq U_c(s)$. $C_c(Out)=\emptyset$ also gives $dom(C_c)\subseteq X_c^0$. Then, let $s\in dom(C_c)\subseteq X_c^0$, $u\in C_c(s)$ and $s'\in \delta_c(s,u)$. If $s'\notin X_c^0$, then $s'=Out$ and, from (\ref{eq trans2b}), there exists $\sigma \in \Sigma$ such that $Out_\sigma \in \delta_\sigma(s_{I_\sigma},u_{J_\sigma})$, which contradicts the fact that $u_{J_\sigma} \in C^*_\sigma(s_{I_\sigma})$ with $C^*_\sigma$ being a safety controller for transition system $S_\sigma$ and safe set $X_\sigma^0$. Hence, we necessarily have $s'\in X_c^0$, and from (\ref{eq trans2a}) it follows that $s'_{I_\sigma} \in \delta_\sigma(s_{I_\sigma},u_{J_\sigma})$, for all $\sigma\in \Sigma$. Moreover, $u_{J_\sigma} \in C^*_\sigma(s_{I_\sigma})$ gives that $s'_{I_\sigma} \in dom(C^*_\sigma)$. Then, for all $\sigma \in \Sigma$, let $u'_{\sigma} \in C^*_\sigma(s'_{I_\sigma})$. Since $(J_1,\dots,J_m)$ is a partition of $J$, there exists $u' \in U_c$ such that $u'_{J_\sigma}=u'_{\sigma}$ for all $\sigma \in \Sigma$. Then, by (\ref{eq Cc}), $u'\in C_c(s')$ and thus $s'\in dom(C_c)$. It follows that $C_c$ is a safety controller for transition system $S_c$ and safe set $X_c^0$. \end{proof} \begin{remark} Since the sets of modeled state components $I_\sigma$ may overlap, it is in principle possible that $dom(C_c)=\emptyset$ while $dom(C_\sigma^*)\ne \emptyset$ for all $\sigma \in \Sigma$. The reason is that an element of $dom(C_c)$ is obtained from states in the domains $dom(C_\sigma^*)$ which coincide on their common modeled states, as shown in (\ref{eq Cc}). \end{remark} \subsection{Particular case: non-overlapping state sets} \label{sec particular cases} Though $C_\sigma^*$ is a maximal safety controller for all $\sigma\in \Sigma$, the safety controller $C_c$ is generally not maximal. Maximality can be obtained when the sets of modeled states $I_\sigma$, $\sigma\in \Sigma$, do not overlap (or equivalently, when $I_\sigma^c=I_\sigma$ for all $\sigma \in \Sigma$). In that case, the following result holds: \begin{prop} \label{pro safety max} Under Assumption~\ref{assum set}, let $I_\sigma^c=I_\sigma$, for all $\sigma\in \Sigma$. Then, $C_c$ is the maximal safety controller for transition system $S_c$ and safe set $X_c^0$. \end{prop} \begin{proof} Let $C_c' : X_c \rightarrow 2^{U_c}$ be a safety controller for transition system $S_c$ and safe set $X_c^0$.
For $\sigma\in \Sigma$, let the controllers $C_\sigma': X_\sigma \rightarrow 2^{U_\sigma}$ be defined by $C_\sigma'(Out_\sigma)=\emptyset$ and for all $s_\sigma \in X_\sigma^0$, \begin{equation} \label{eq proj safe} C_\sigma'(s_\sigma)=\{u_\sigma \in \pi_{J_\sigma}(C_c'(s)) |\; s\in X_c^0, \; s_{I_\sigma}=s_\sigma\}. \end{equation} Let us show that $C_\sigma'$ is a safety controller for system $S_\sigma$ and safe set $X_\sigma^0$. Following Remark~\ref{remark:subsystem1}, and since $C_\sigma'(Out_\sigma)=\emptyset$, it is clear that for all $s_\sigma \in X_\sigma$, we have $C_\sigma'(s_\sigma) \subseteq U_\sigma(s_\sigma)$. $C_\sigma'(Out_\sigma)=\emptyset$ also gives $dom(C_\sigma')\subseteq X_\sigma^0$. Then, let $s_\sigma\in dom(C_\sigma')$, $u_\sigma \in C_\sigma'(s_\sigma)$ and $s_\sigma' \in \delta_\sigma(s_\sigma,u_\sigma)$, let us prove that $s'_\sigma \in dom (C_\sigma')$. By (\ref{eq proj safe}), there exists $s\in dom(C_c')$ and $u\in C_c'(s)$ such that $s_{I_\sigma}=s_\sigma$ and $u_{J_\sigma}=u_\sigma$. Since $C_c'$ is a safety controller, $\delta_c(s,u)\subseteq dom(C_c') \subseteq X_c^0$. Moreover, since the sets $I_\sigma$ are not overlapping, it follows from (\ref{eq trans2a}) that there exists $s'\in \delta_c(s,u)$ such that $s'_{I_\sigma}=s'_\sigma$. Then, $s'\in dom(C_c')$ and (\ref{eq proj safe}) give that $s'_\sigma \in dom (C_\sigma')$. Hence $C_\sigma'$ is a safety controller for system $S_\sigma$ and safe set $X_\sigma^0$. Then, by maximality of $C_\sigma^*$, it follows that for all $s_\sigma\in X_\sigma^0$, $C_\sigma'(s_\sigma) \subseteq C_\sigma^*(s_\sigma)$. Finally, let $s\in dom(C_c')$ and $u\in C_c'(s)$, then by (\ref{eq proj safe}), $u_{J_\sigma} \in C_\sigma'(s_{I_\sigma})\subseteq C_\sigma^*(s_{I_\sigma})$ for all $\sigma \in \Sigma$. By (\ref{eq Cc}), $u\in C_c(s)$, which shows the maximality of $C_c$. \end{proof} \section{Comparisons} \label{sec comparison} In this section, we provide theoretical comparisons between abstractions and controllers given by the previous approach using two different system decompositions. In addition to the set of state and input indices defined in Section~\ref{sub compo decomposition}, let us consider partitions $(\hat I_1^c,\dots,\hat I_{\hat m}^c)$ and $(\hat J_1,\dots,\hat J_{\hat m})$ of the state and input indices and subsets of state indices $(\hat I_1,\dots, \hat I_{\hat m})$ with $\hat I_{\hat \sigma}^c \subseteq \hat I_{\hat \sigma}$, for all ${\hat \sigma} \in \hat \Sigma=\{1,\dots,{\hat m}\}$. We define the same objects as before (i.e. subsystems, abstraction, controllers, etc.) for this system decomposition and denote them with hatted notations. We make the following assumption on the two system decompositions under consideration. \begin{assum} \label{assum set inclusion} There exists a surjective map $\gamma: \hat \Sigma \rightarrow \Sigma$ such that, for all $\hat \sigma \in \hat \Sigma$ and $\sigma=\gamma(\hat\sigma) \in \Sigma$, $$ \hat I^c_{\hat \sigma} \subseteq I^c_\sigma, \; \hat I_{\hat \sigma} \subseteq I_\sigma, \; \hat J_{\hat \sigma} \subseteq J_\sigma. $$ \end{assum} From the previous assumption, and since $(\hat I_1^c,\dots,\hat I_{\hat m}^c)$ and $(\hat J_1,\dots,\hat J_{\hat m})$ are partitions of the state and input indices, we have that \begin{equation} \label{eq set inclusion} \forall \sigma \in \Sigma, \bigcup_{\hat \sigma \in \gamma^{-1} (\sigma)} \hat I^c_{\hat \sigma} = I^c_\sigma\; \text{ and } \bigcup_{\hat \sigma \in \gamma^{-1} (\sigma)} \hat J_{\hat \sigma} = J_\sigma. 
\end{equation} In addition, we will make the following mild assumption on the over-approximations of the reachable sets: \begin{assum} \label{assum reachable set comparison} For all $\mathcal X'' \subseteq \mathcal X' \subseteq \mathbb{R}^n$, $\mathcal U'' \subseteq \mathcal U' \subseteq \mathcal U$, the following inclusion holds $$ \overline{F}(\mathcal X'',\mathcal U'') \subseteq \overline{F}(\mathcal X',\mathcal U'). $$ \end{assum} This assumption can be shown to be satisfied by most existing techniques for over-approximating the reachable set, and in particular by those mentioned in Section~\ref{sub preli cooperative}. In addition, under Assumptions~\ref{assum set inclusion} and~\ref{assum reachable set comparison}, it follows from (\ref{eq reachable set Si}), that for all $\hat \sigma \in \hat \Sigma$ and $\sigma=\gamma(\hat\sigma) \in \Sigma$, \begin{equation} \label{eq reachable set comparison} \forall s\in \mathcal P,\; u\in \mathcal V,\; \Phi_\sigma(s_{I_\sigma},u_{J_\sigma}) \subseteq \hat \Phi_{\hat \sigma}(s_{\hat I_{\hat \sigma}},u_{\hat J_{\hat \sigma}}) . \end{equation} \subsection{Abstractions} \label{sub compare alternating} We start by comparing the compositional abstractions $S_c$ and $\hat S_c$ resulting from the two different decompositions: \begin{thm} \label{prop alternating sufficient} Under Assumptions~\ref{assum set}, \ref{assum set inclusion} and~\ref{assum reachable set comparison}, the identity map is a feedback refinement relation from $S_c$ to $\hat S_c$: $ S_c \preceq_{\mathcal{FR}} \hat S_c$. \end{thm} \begin{proof} Let us first remark that $X_c^0= \hat X_c^0$, $X_c = \hat X_c$ and $U_c=\hat U_c$. Then, from Proposition~\ref{prop input composed}, for all $s\in X_c^0 = \hat X_c^0$, $U_c(s)=U_c=\hat U_c=\hat U_c(s)$. Since $\hat U_c(Out)=\emptyset$, Definition \ref{def simulation} holds if $\delta_c(s,u)\subseteq \hat{\delta}_c(s,u)$, for all $s \in X_c^0$, $u\in U_c$. Hence, let $s \in X_c^0$, $u\in U_c$ and $s'\in \delta_c(s,u)$, then let us consider the two possible cases: \begin{itemize} \item $s'\in X_c^0$ -- We have by (\ref{eq trans2a}) and (\ref{eq trans1a}) that $$ \forall \sigma \in \Sigma,\; s_{I_\sigma}'\cap\pi_{I_\sigma}(\Phi_\sigma(s_{I_\sigma},u_{J_\sigma}))\neq\emptyset. $$ Then, from (\ref{eq reachable set comparison}), follows that $$ \forall \hat \sigma \in \hat \Sigma \text{ and } \sigma=\gamma(\hat \sigma),\; s_{I_\sigma}'\cap\pi_{I_\sigma}(\hat \Phi_{\hat \sigma}(s_{\hat I_{\hat \sigma}},u_{\hat J_{\hat \sigma}}))\neq\emptyset. $$ By Assumption~\ref{assum set inclusion}, $\hat I_{\hat \sigma} \subseteq I_\sigma$, for all $\hat \sigma \in \hat \Sigma$ and $\sigma=\gamma(\hat \sigma)$. Thus it follows that $$ \forall \hat \sigma \in \hat \Sigma,\; s_{\hat I_{\hat \sigma}}'\cap\pi_{\hat I_{\hat \sigma}}(\hat \Phi_{\hat \sigma}(s_{\hat I_{\hat \sigma}},u_{\hat J_{\hat \sigma}}))\neq\emptyset. $$ Then, from (\ref{eq trans1a}) and (\ref{eq trans2a}), we have $s'\in \hat{\delta}_c(s,u)$. \item $s'=Out$ -- From Lemma~\ref{prop ag2}, we know that there exists $\sigma \in \Sigma$, such that $\pi_{I_\sigma^c}(\Phi_\sigma(s_{I_\sigma},u_{J_\sigma}))\nsubseteq\mathcal X_{I_\sigma^c}$. Then, from Assumption~\ref{assum set}, there exists $i\in I_\sigma^c$ such that $\pi_{i}(\Phi_\sigma(s_{I_\sigma},u_{J_\sigma}))\nsubseteq\mathcal X_{i}$. From (\ref{eq set inclusion}), there exists $\hat \sigma \in \hat \Sigma$, such that $\sigma=\gamma(\hat \sigma)$ and $i\in \hat I_{\hat \sigma}^c$. 
From (\ref{eq reachable set comparison}), it follows that $\pi_{i}(\hat \Phi_{\hat \sigma}(s_{\hat I_{\hat \sigma}},u_{\hat J_{\hat \sigma}}))\nsubseteq\mathcal X_{i}$ and $\pi_{\hat I_{\hat \sigma}^c}(\hat \Phi_{\hat \sigma}(s_{\hat I_{\hat \sigma}},u_{\hat J_{\hat \sigma}}))\nsubseteq\mathcal X_{\hat I_{\hat \sigma}^c}$. Then, from (\ref{eq trans1b}) and (\ref{eq trans2b}), we have $Out \in \hat{\delta}_c(s,u)$. \end{itemize} \end{proof} Note that the conditions in the previous Theorem are only sufficient conditions, since depending on the dynamics of the system, a feedback refinement relation could also exist between two unrelated decompositions (in terms of index set inclusion). \begin{remark} Theorem~\ref{prop alternating sufficient} gives an indication on how one should modify the sets of indices to reduce the conservatism of the compositional symbolic abstraction. Firstly, one can keep the same number of subsystems and the same controlled states $I_\sigma^c$ and modeled control input $J_\sigma$, while considering additional modeled but uncontrolled states in $I_\sigma^o$. Secondly, one can merge two or more subsystems by merging their controlled states, modeled control inputs and modeled but uncontrolled states. \end{remark} \subsection{Controllers} \label{sub compare safety} We now compare the controllers obtained by the approach described in Section~\ref{sec synthesis}. The comparison of controllers is more delicate than the comparison of abstractions and we shall need the additional assumption that the sets of indices $I_\sigma$ do not overlap (note that the sets $\hat I_{\hat \sigma}$ may still overlap). \begin{corollary} \label{pro safety comparison} Under Assumptions~\ref{assum set}, \ref{assum set inclusion} and~\ref{assum reachable set comparison}, let $I_\sigma^c=I_\sigma$, for all $\sigma\in \Sigma$. Then, for all $s\in X_c$, $\hat C_c(s)\subseteq C_c(s)$. \end{corollary} \begin{proof} From Theorem~\ref{th safety composition}, $\hat C_c$ is a safety controller for system $\hat S_c$ and safe set $\hat X_c$. From Theorem~\ref{prop alternating sufficient}, it follows that $\hat C_c$ is also a safety controller for system $S_c$ and safe set $X_c$. Then, by Proposition~\ref{pro safety max}, the maximality of $C_c$ gives us for all $s\in X_c$, $\hat C_c(s)\subseteq C_c(s)$. \end{proof} Let us remark that the assumption that the sets of indices $I_\sigma$ do not overlap is instrumental in the proof since it uses Proposition~\ref{pro safety max}. The question whether similar results hold in the absence of such assumption is an open question, which is left as future research. \subsection{Complexity} \label{sub compare complexity} In this section, we discuss the computational complexity of the approach and show the advantage of using a compositional approach rather than a centralized one. Let $|.|$ denote the cardinality of a set. The computation of symbolic subsystem $S_\sigma$ requires a number of reachable set approximations equal to $\prod_{i\in I_\sigma} |\mathcal P_i| \times \prod_{j\in J_\sigma} |\mathcal V_j|$, each creating up to $(1+\prod_{i\in I_\sigma} |\mathcal P_i|)$ successors. This results in an overall time and space complexity $\mathcal C_1$ of computing all symbolic subsystems $S_\sigma$, $\sigma \in \Sigma$: $$ \mathcal C_1 =\mathcal O \Big(\sum_{\sigma\in \Sigma} \big(\prod_{i\in I_\sigma} |\mathcal P_i|^2 \times \prod_{j\in J_\sigma} |\mathcal V_j |\big)\Big). 
$$ The computation of the safety controller $C_\sigma$ by a fixed point algorithm requires a number of iterations which is bounded by the number of states in the safe set $X_\sigma^0$: $\prod_{i\in I_\sigma} |\mathcal P_i|$. The complexity order of computing an iteration can be bounded by the number of transitions in $S_\sigma$. This results in an overall time and space complexity $\mathcal C_2$ of computing all safety controllers $C_\sigma$, $\sigma \in \Sigma$: $$ \mathcal C_2 =\mathcal O \Big(\sum_{\sigma\in \Sigma} \big(\prod_{i\in I_\sigma} |\mathcal P_i|^3 \times \prod_{j\in J_\sigma} |\mathcal V_j |\big)\Big). $$ To illustrate the advantage of using a compositional approach, let us consider two extremal cases in the particular case where the state and input index sets coincide, $I=J$. The centralized case corresponds to $\Sigma=\{1\}$, with $I_1=J_1=I$. In that case the complexity of the overall approach is of order $\mathcal O \Big( \prod_{i\in I} |\mathcal P_i|^3 \times| \mathcal V_i | \Big).$ The fully decentralized case corresponds to $\Sigma=I=J$, with $I_\sigma=J_\sigma=\{\sigma\}$ for all $\sigma \in \Sigma$. In that case the complexity of the overall approach is of order $\mathcal O \Big( \sum_{i\in I} |\mathcal P_i|^3 \times| \mathcal V_i | \Big).$ Hence, one can see that while the complexity of the centralized approach is exponential in the number of state and input components $|I|$, it becomes linear with the fully decentralized approach. Intermediate decompositions make it possible to balance the computational complexity and the conservativeness of the approach, in view of the discussions in Sections~\ref{sub compare alternating} and~\ref{sub compare safety}. \section{Numerical illustration} \label{sec simulation} In this section, we illustrate the results of this paper on the temperature regulation in a circular building of $n\geq 3$ rooms, each equipped with a heater. For each room $i\in\{1,\dots,n\}$, the variations of the temperature $T_i$ are described by the discrete-time model adapted from~\cite{pola2016decentralized}: \begin{equation*} \label{eq simulation} T_i^+=T_i+\alpha(T_{i+1}+T_{i-1}-2T_i)+\beta(T_e-T_i)+\gamma(T_h-T_i)u_i, \end{equation*} where $T_{i+1}$ and $T_{i-1}$ are the temperatures of the neighboring rooms (with $T_0=T_n$ and $T_{n+1}=T_1$), $T_e=-1\,^\circ C$ is the outside temperature, $T_h=50\,^\circ C$ is the heater temperature, $u_i\in[0,0.6]$ is the control input for room $i$ and the conduction factors are given by $\alpha=0.45$, $\beta=0.045$ and $\gamma=0.09$. This model can be proved to be monotone as defined in~\cite{angeli_monotone}, which allows us to use efficient algorithms for over-approximating the reachable sets~\cite{meyer2015adhs,coogan2015mixed}. Moreover, the over-approximation scheme satisfies Assumption~\ref{assum reachable set comparison}. The safe set $\mathcal X$ is given by an $n$-dimensional interval (specified later) which is uniformly partitioned into $\lambda_T$ intervals per component (for a total of $\lambda_T^n$ symbols in $\mathcal P$) and the control set $\mathcal U=[0,0.6]^n$ is uniformly discretized into $\lambda_u$ values per component (for a total of $\lambda_u^n$ values in $\mathcal V$). We consider $3$ possible system decompositions, which provide us with $3$ different abstractions: \begin{itemize} \item $S_c^1$, the centralized case (i.e.
$m=1$), with a single subsystem containing all states and controls, with $I_1^1=I_1^{c1}=J_1^1=\{1,\dots,n\}$; \item $S_c^2$, a general case from Section~\ref{sec compositional} with $m=n$ subsystems, $I_\sigma^{c2}= J_\sigma^2=\{\sigma\}$ and $I_\sigma^2=\{\sigma-1,\sigma,\sigma+1\}$ for all $\sigma\in\Sigma=\{1,\dots,m\}$; \item $S_c^3$, a case with $m=n$ subsystems and non-overlapping state sets as in Section~\ref{sec particular cases}, with $I_\sigma^3=I_\sigma^{c3}=J_\sigma^3=\{\sigma\}$ for all $\sigma\in\Sigma$. \end{itemize} Both $S_c^2$ and $S_c^3$ have one subsystem per room, but subsystems of $S_c^3$ only focus on the state and control of the considered room, while subsystems of $S_c^2$ also model (but do not control) the temperatures of both neighbor rooms. Since Assumption~\ref{assum set inclusion} holds for both pairs $(S_c^1,S_c^2)$ and $(S_c^2,S_c^3)$, Theorem~\ref{prop alternating sufficient} immediately gives the feedback refinements $S_c^1 \preceq_{\mathcal{FR}}S_c^2 \preceq_{\mathcal{FR}}S_c^3$. Corollary~\ref{pro safety comparison} also holds for the pair $(S_c^1,S_c^2)$ since $I_1^{c1}=I_1^1$, but it is not guaranteed to hold for the pair $(S_c^2,S_c^3)$ since $I_\sigma^{c2}\neq I_\sigma^2$. In the following, we report numerical results in two different conditions. The numerical implementation has been done using Matlab on a laptop with a $2.6$ GHz CPU and $8$ GB of RAM. \medskip {\bf Case 1:} {$n=4$, $\mathcal X=[17,22]\times[19,22]\times[20,23]\times[20,22]$.} The abstractions and syntheses are generated in the $6$ cases corresponding to state partitions with $\lambda_T\in\{5,10,20\}$ and input discretizations with $\lambda_u\in\{3,4\}$. Table~\ref{tab 1} reports the cardinalities $|\mathcal{P}|=\lambda_T^4$ of the state partition $\mathcal{P}$, and $|dom(C_c)|$ of the domain of the safety controllers for each abstraction $S_c^1$, $S_c^2$ and $S_c^3$. Table~\ref{tab 3} reports the computation times (in seconds) required to create the abstractions and synthesize safety controllers on all subsystems of $S_c^1$, $S_c^2$ and $S_c^3$. \begin{table}[!t] \begin{center} \begin{tabular}{|c|c||c||c|c|c|} \hline $\lambda_T$ & $\lambda_u$ & $|\mathcal{P}|=\lambda_T^4$ & $|dom(C_c^1)|$ & $|dom(C_c^2)|$ & $|dom(C_c^3)|$\\ \hline $5$ & $3$ & $625$ & $525$ & $0$ & $0$\\ $5$ & $4$ & $625$ & $525$ & $0$ & $0$\\ $10$ & $3$ & $10000$ & $8900$ & $8710$ & $0$\\ $10$ & $4$ & $10000$ & $8900$ & $8710$ & $0$\\ $20$ & $3$ & $160000$ & $145180$ & $143480$ & $0$\\ $20$ & $4$ & $160000$ & $145180$ & $143480$ & $0$\\ \hline \end{tabular} \medskip \caption{\label{tab 1} Number of elements in the domains of the safety controllers for the safe set $\mathcal X=[17,22]\times[19,22]\times[20,23]\times[20,22]$.} \end{center} \end{table} \begin{table}[!t] \begin{center} \begin{tabular}{|c|c||c|c|c|} \hline $\lambda_T$ & $\lambda_u$ & $S_c^1$ & $S_c^2$ & $S_c^3$\\ \hline $5$ & $3$ & $1.80$ & $0.17$ & $0.07$\\ $5$ & $4$ & $5.49$ & $0.20$ & $0.07$\\ $10$ & $3$ & $64$ & $0.46$ & $0.06$\\ $10$ & $4$ & $210$ & $0.56$ & $0.06$\\ $20$ & $3$ & $6044$ & $2.87$ & $0.09$\\ $20$ & $4$ & $18339$ & $3.84$ & $0.44$\\ \hline \end{tabular} \medskip \caption{\label{tab 3} Computation times (in seconds) for the safe set $\mathcal X=[17,22]\times[19,22]\times[20,23]\times[20,22]$.} \end{center} \end{table} We check numerically that Theorem~\ref{prop alternating sufficient} and Corollary~\ref{pro safety comparison} hold. 
In particular, in these conditions, the safety inclusion $C_c^3(s)\subseteq C_c^2(s)$ for all $s\in\mathcal{P}$ trivially holds due to $dom(C_c^3)=\emptyset$, although Corollary~\ref{pro safety comparison} could not provide theoretical guarantees in this case. Two main conclusions on the proposed compositional approach can be obtained from Tables~\ref{tab 1} and~\ref{tab 3}. Firstly, while the compositional case without state overlap (as in $S_c^3$, Section~\ref{sec particular cases} and~\cite{meyer2015adhs}) fails to synthesize safety controllers, the general case allowing state overlaps (as in $S_c^2$ and Section~\ref{sec compositional}) provides significantly better safety results for a relatively small addition to the computation time. Secondly, the compositional approach with state overlaps $S_c^2$ requires a negligible computation time compared to the large computational cost of the centralized approach $S_c^1$ (e.g.\ in the last row of Table~\ref{tab 3}, we need less than $4$ \emph{seconds} for $S_c^2$ and more than $5$ \emph{hours} for $S_c^1$), while still obtaining similar safety results as long as the state partition $\mathcal{P}$ is not too coarse. In addition to having more information in each subsystem of $S_c^2$ compared to those in the non-overlapping case $S_c^3$, the better safety results in $S_c^2$ can also be explained by the shapes that can be taken by the domain of the safety controllers with each approach. On the one hand, the safety domain $dom(C_c^3)$ in the non-overlapping case $S_c^3$ can only take the form of a hyper-rectangle in $\mathbb{R}^4$ since it is obtained by the Cartesian product of the one-dimensional safety domains $dom(C_\sigma^3)$ of its subsystems. On the other hand, the general case with state overlaps $S_c^2$ is more permissive since its subsystems $S_\sigma^2$ have a three-dimensional state space, thus allowing more complicated shapes of their safety domains $dom(C_\sigma^2)$ as displayed in Figure~\ref{fig simu} for subsystem $\sigma=4$. $S_c^2$ thus has more chances finding a safety domain compatible with the considered system dynamics and control objective. \medskip \ifdouble \begin{figure}[tbh] \centering \includegraphics[width=1\columnwidth]{Partial_3D_2} \caption{Visualization of the domain $dom(C_\sigma^2)$ of the safety controller for subsystem $\sigma=4$ of $S_c^2$. Each axis is associated with one component of the RGB color model to facilitate the visualization of depth.} \label{fig simu} \end{figure} \else \begin{figure}[tbh] \centering \includegraphics[width=0.6\textwidth]{Partial_3D_2} \caption{Visualization of the domain $dom(C_\sigma^2)$ of the safety controller for subsystem $\sigma=4$ of $S_c^2$. Each axis is associated with one component of the RGB color model to facilitate the visualization of depth.} \label{fig simu} \end{figure} \fi {\bf Case 2:} {$n=20$, $\mathcal X=[19,21]^{20}$, $\lambda_T=10$, $\lambda_u=5$.} A second example is proposed to demonstrate the scalability of the compositional approach in a $20$-room building. Note that the safe set $\mathcal X=[19,21]^{20}$ is only chosen homogeneous in all rooms for convenience of notation, and the proposed approach is still applicable for other safe sets. Since this case is clearly out of reach from the centralized approach of $S_c^1$, we focus on the compositional abstractions $S_c^2$ and $S_c^3$ with and without state overlaps, respectively. 
For the non-overlapping case of $S_c^3$, the total computation time is $0.12$ seconds and the resulting safety controller is empty ($dom(C_c^3)=\emptyset$). For the case with state overlaps of $S_c^2$, the total computation time is $3.04$ seconds and the resulting safety controller covers the whole safe set $\mathcal{X}$ ($dom(C_c^2)=\mathcal{P}$). Therefore, in addition to the scalability of both these compositional approaches, this simulation also confirms the conclusions of the previous example that the method with state overlaps provides significantly better safety results at a reduced computational cost. We also obtain similarly low computation times while not having to rely on the homogeneity of the specifications as is the case in~\cite{pola2016decentralized}. \section{Conclusion} \label{sec conclusion} In this paper, we presented a new compositional approach for symbolic controller synthesis. The dynamics are decomposed into subsystems that give a partial description of the global model. A notable feature of the approach is that the state sets of the subsystems are allowed to overlap. Symbolic abstractions can be computed for each subsystem, and a local safety controller can be synthesized such that the composition of the obtained controllers is proved to realize the global safety specification. Numerical experiments demonstrate the significant complexity reduction compared to centralized approaches and the advantages obtained from the introduction of state overlaps in the subsystems. Future work will focus on extending the approach to other types of specifications such as reachability or more general properties specified by automata or temporal logic formulas. \bibliographystyle{abbrv}
\section{Introduction} \label{intro} Understanding a symmetry that a physical system possesses, as well as this symmetry's breaking pattern, allows us to explain uniquely a wide variety of phenomena in many areas of physics, including elementary particle physics and condensed matter physics \cite{Michel}. The mathematical formulation of symmetries related to a Lie group action consists in detecting the stratification of the representation space of the corresponding symmetry group. For closed quantum systems the symmetries are realized by unitary group actions and the quantum state space plays the role of the representation space of the symmetry group. With these observations in mind, below we outline examples of the stratifications occurring for a quantum system composed of a pair of 2-level systems, i.e., two qubits. We will analyze symmetries associated with two subgroups of the special unitary group $SU(4)$. More precisely, we will consider the 7-dimensional subspace $\mathfrak{P}_X$ of a generic 2-qubit state space, the family of $X$-states (for the definition see \cite{YuEberly2007}, \cite{AAJMS2017} and references therein), and reveal two types of its partition into sets of points having the same symmetry type. The primary stratification originates from the action of the invariance group of $X$-states, named the global unitary group $G_{{}_X} \subset SU(4)$, whereas the secondary one is due to the action of the so-called local group $LG_{{}_X}\subset G_X$ of the $X$-states. \section{X-states and their symmetries} \label{sec-1} The mixed 2-qubit $X\--$states can be defined by a purely algebraic consideration. The idea is to fix the subalgebra $ \mathfrak{g}_X:=\mathfrak{su}(2)\oplus\mathfrak{su}(2) \oplus\mathfrak{u}(1) \subset \mathfrak{su}(4) $ of the algebra $\mathfrak{su}(4)$ and define the density matrix of $X-$states as \begin{equation}\label{eq:XmatrixAlg} \varrho_X= \frac{1}{4}\left(I + i\mathfrak{g}_X \right)\,. \end{equation} In order to coordinatize the $X\--$state space we use the tensorial basis for the $\mathfrak{su}(4)$ algebra, $\sigma_{\mu\nu} =\sigma_\mu\otimes\sigma_\nu, \ \mu,\nu =0,1,2,3$. It consists of all possible tensor products of two copies of Pauli matrices and a unit $2\times 2$ matrix, $\sigma_\mu=(I, \sigma_1,\sigma_2,\sigma_3)\,,$ which we order as follows (see the details in \cite{AAJMS2017}): \begin{eqnarray} &&\lambda_1, \dots, \lambda_{15} = \frac{i}{2}\left(\sigma_{x0}, \sigma_{y0}, \sigma_{z0}, \sigma_{0x}, \sigma_{0y}, \sigma_{0z},\sigma_{xx}, \sigma_{xy}, \sigma_{xz}, \sigma_{yx}, \sigma_{yy}, \sigma_{yz}, \sigma_{zx}, \sigma_{zy}, \sigma_{zz} \right)\,. \end{eqnarray} In this basis the 7-dimensional subalgebra $\mathfrak{g}_X $ is generated by the subset $\alpha_X=\left( \lambda_{3}, \lambda_{6}, \lambda_{7}, \lambda_{8}, \lambda_{10}, -\lambda_{11}, \lambda_{15} \right),$ and thus the unit norm $X\--$state density matrix is given by the decomposition: \begin{equation}\label{eq:XmatrixExp} \varrho_X= \frac{1}{4}\left(I +2 i\sum_{\lambda_k \in \alpha_X} h_k\lambda_k\right)\,. \end{equation} The real coefficients $h_k$ are subject to the polynomial inequalities ensuring the semi-positivity of the density matrix, $\varrho_X \geq 0:$ \begin{equation}\label{eq:positivity} \mathfrak{P}_X= \{h_i \in \mathbb{R}^7 \ | \ \left(h_3\pm h_6\right){}^2+\left(h_8\pm h_{10}\right){}^2+\left(h_7\pm h_{11}\right){}^2\leq (1\pm h_{15})^2 \}\,.
\end{equation} Using the definition (\ref{eq:XmatrixAlg}) one can conclude that the $X-$state space $\mathfrak{P}_X$ is invariant under the 7-parameter group, $ G_X :=\exp (\mathfrak{g}_X)\in SU(4):$ \begin{equation} g\varrho_X g^\dag \in \mathfrak{P}_X\,, \qquad \forall g \in G_X\,. \end{equation} Group $G_X$ plays the same role for the $X\--$states as the special unitary group $SU(4)$ plays for a generic 4-level quantum system, and thus is termed the \textit{global unitary group} of $X\--$states. According to \cite{AAJMS2017}, group $G_X$ admits the representation: \begin{equation}\label{eq:G_XRepr} G_X=P_{\pi}\left( \begin{array}{c|c} {e^{-i {\omega_{15}}}SU(2)}& 0 \\ \hline 0 &{e^{i {\omega_{15}}}SU(2)^\prime}\\ \end{array} \right)P_{\pi}\,,\quad \mbox{with}\quad P_{\pi}=\left(\begin{matrix} 1&0&0& 0\\ 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0 \end{matrix} \right)\,. \end{equation} Correspondingly, the \textit{local unitary group} of the $X\--$states is \begin{equation}\label{eq:LG_XRepr} LG_X=P_{\pi}\exp(\imath \frac{\phi_1}{2}) \otimes \exp(\imath \frac{\phi_2}{2})P_{\pi}\ \subset G_X\,. \end{equation} \section{Global orbits and state space decomposition} \label{sec-2} Now we give a classification of the global $G_X$-orbits according to their dimensionality and isotropy group. Every density matrix $\varrho_X $ can be diagonalized by some element of the global $G_X$ group. In other words, all global $G_X$-orbits can be generated from the density matrices, whose eigenvalues form the partially ordered simplex $\underline{\Delta}_3$, depicted on Figure \ref{Fig:PartOrderedSimplex}. \begin{figure} \centering \sidecaption \includegraphics[width=5cm,clip]{PartOrderedSimplex} \caption{The tetrahedron $ABCD$ describes the partially ordered simplex $\underline{\Delta}_3:= \{ \ \sum_{i=1}^4 r_i =1\,, \ \{1 \geq r_1 \geq r_2 \geq 0 \}\cup\{ 1 \geq r_3 \geq r_4 \geq 0\} \}$ of the density matrix eigenvalues, while the tetrahedron $ABC'D'$ inside it corresponds to a 3D simplex with the following complete order: $ \{ \ \sum_{i=1}^4 r_i =1 \,, \; 1 \geq r_1 \geq r_2 \geq r_3 \geq r_4 \geq 0\ \} \,$.}{ \label{Fig:PartOrderedSimplex}} \label{fig-3} \end{figure} The tangent space to the $G_X$-orbits is spanned by the subset of linearly independent vectors, built from the vectors: \( t_k = [\lambda_k,\varrho_X], \ \lambda_k \in \alpha_X\,. \) The number of independent vectors $t_k$ determines the dimensionality of the $G_{{}_X}$-orbits and is given by the rank of the $7 \times 7$ Gram matrix: \begin{equation}\label{eq:Gram} \mathcal{G}_{kl} = \frac{1}{2} \mbox{Tr}(t_k t_l)\,. \end{equation} The Gram matrix (\ref{eq:Gram}) has three zero eigenvalues and two double multiplicity eigenvalues: \begin{eqnarray} \mu_\pm&=& -\frac{1}{8}\left(\left(h_3\pm h_6\right){}^2+\left(h_8\pm h_{10}\right){}^2+\left(h_7\pm h_{11}\right){}^2\right)\,. \end{eqnarray} Correspondingly, the $G_X$-orbits have dimensionality of either 4, 2 or 0. The orbits of maximal dimensionality, {$\mbox{dim}\left( {\mathcal{O}}\right)_{\mbox{\small Gen}}=4$}, are characterized by non-vanishing $\mu_\pm\neq 0$ and consist of the set of density matrices with a generic spectrum, $\sigma(\varrho_x)= (r_1,r_2,r_3,r_4)\,.$ If the density matrices obey the equations \begin{equation} h_6=\pm h_3\,, \ h_{10}=\pm h_8\,, \ \ h_{11}=\pm h_7\,, \end{equation} they belong to the so-called degenerate orbits, $\mbox{dim}\left({\mathcal{O}}\right)_{\pm}=2\,. 
$ The latter are generated from the matrices which have the double degenerate spectrum of the form, $\sigma(\varrho_x)= (p,p,r_3,r_4)$ and $\sigma(\varrho_x)= (r_1,r_2,q,q)$ respectively. Finally, there is a single orbit $\mbox{dim}\left({\mathcal{O}}\right)_{0}= 0$, corresponding to the maximally mixed state $\varrho_X = \frac{1}{4}I$. Considering the diagonal representative of the generic $G_{{}_X}$-orbit one can be convinced that its \textit{isotropy group} is \begin{equation} H=P_{\pi}\left( \begin{array}{c|c} e^{i\omega}\exp{i{\frac{\gamma_1}{2}\sigma_3}}& {}^{\mbox{\Large 0 }} \\\hline {}_{\mbox{\Large 0 }}&e^{-i\omega}\exp{ i{\frac{\gamma_2}{2}\sigma_3}}\\ \end{array} \right) P_{\pi}\,, \end{equation} while for a diagonal representative with a double degenerate spectrum the isotropy group is given by one of two groups: \begin{equation*} H_{+}=P_{\pi}\left( \begin{array}{c|c} e^{i\omega}SU(2)& {}^{\mbox{\Large 0 }} \\\hline {}_{\mbox{\Large 0 }}&e^{-i\omega}\exp{i{\frac{\gamma_2}{2}\sigma_3}}\\ \end{array} \right) P_{\pi}\,, \quad H_{-}=P_{\pi}\left( \begin{array}{c|c} e^{i\omega}\exp{i{\frac{\gamma_1}{2}\sigma_3}}& {}^{\mbox{\Large 0 }} \\\hline {}_{\mbox{\Large 0 }}&e^{-i\omega}SU(2)^\prime\\ \end{array} \right) P_{\pi}\,. \end{equation*} For the single, zero dimensional orbit the isotropy group $H_0$ coincides with the whole invariance group, $H_0=G_X$. Therefore, the isotropy group of any element of $G_{{}_X}$-orbits belongs to one of these conjugacy classes: $[H], [H_\pm]$ or $[H_{0}]$. Moreover, a straightforward analysis shows that $[H_{+}]=[H_{-}].$ Hence, any point $\varrho \in \mathfrak{P}_X$ belongs to one of three above-mentioned types of $G_{{}_X}$-orbits\footnote{ The \textit{orbit type} $[\varrho]$ of a point $\varrho \in \mathfrak{P}_X$ is given by the conjugacy class of the isotropy group of point $\varrho,$ i.e., $[\varrho]= [G_{\varrho_X}] $\,. }, denoted afterwards as $[H_{t}]\,, t =1,2,3.$ For a given $H_t$, the associated \textit{stratum} $\mathfrak{P}_{[H_t]}\,,$ defined as the set of all points whose stabilizer is conjugate to $H_t$: $$\mathfrak{P}_{[H_t]}: =\{ y \in \ \mathfrak{P}_X|\ \mbox{isotropy~group~of~} y \mbox{~is~conjugate~to}\ H_t\} $$ determines the sought-for decomposition of the state space $\mathfrak{P}_{X}$ into strata according to the orbit types: \begin{equation} {\mathfrak{P}}_X =\bigcup_{\mbox{orbit types}}{\mathfrak{P}}_{[H_i]}\,. \end{equation} The strata ${\mathfrak{P}}_{(H_i)}$ are determined by this set of equations and inequalities: \begin{eqnarray} &(1)& {\mathfrak{P}}_{[H]}: = \{h_i \in \mathfrak{P}_X \ |\ \mu_+ > 0,\, \mu_- > 0\,\}\,,\\ &(2)& {\mathfrak{P}}_{[H_+]}\cup {\mathfrak{P}}_{[H_-]} := \{h_i \in \mathfrak{P}_X \ |\ \mu_+ = 0,\, \mu_- > 0\,\}\cup \{h_i \in \mathfrak{P}_X \ | \ \mu_+ > 0,\, \mu_- = 0\,\}\,,\\ &(3)& {\mathfrak{P}}_{[H_0]}: = \{h_i \in \mathfrak{P}_X \ |\ \mu_+ = 0,\, \mu_- = 0\,\}\,. \end{eqnarray} \section{Local orbits and state space decomposition} Analogously, one can build up the $X\--$ state space decomposition associated with the local group $LG_{{}_X}$ action. For this action the dimensionality of $LG_{{}_X}$-orbits is given by the rank of the corresponding $2 \times 2$ Gram matrix constructed out of vectors $t_3$ and $t_6$. 
Since its eigenvalues read: \begin{eqnarray} \mu_1= -\frac{1}{8}\left(\left(h_8+h_{10}\right){}^2+\left(h_7+h_{11}\right){}^2\right)\,,\qquad \mu_2=-\frac{1}{8}\left(\left(h_8-h_{10}\right){}^2+\left(h_7-h_{11}\right){}^2\right)\,, \end{eqnarray} the $LG_X$-orbits are either generic ones with the dimensionality of {$\mbox{dim}\left( {\mathcal{O}_L}\right)_{\mbox{\small Gen}}=2$}, or degenerate {$\mbox{dim}\left({\mathcal{O}_L}\right)_{\pm}=1$}, or exceptional ones, {$\mbox{dim}\left({\mathcal{O}_L}\right)_{0}=0$}. The $LG_X$-orbits can be collected into the strata according to their orbit type. There are three types of strata associated with the ``local'' isotropy subgroups of $LH \in LG_X \,$. Correspondingly, one can define the following ``local'' strata of state space: \begin{itemize} \item the generic stratum, ${\mathfrak{P}}^L_{[I]}$, which has a trivial isotropy type, $[I],$ and is represented by the inequalities: ${\mathfrak{P}}^L_{[I]}: = \{h_i \in \mathfrak{P}_X \ |\ \mu_1 > 0,\, \mu_2 > 0\,\}\,, $ \item the degenerate stratum, ${\mathfrak{P}}^L_{[H_L^\pm]}$, collection of the orbits whose type is $[H_L^\pm]\,,$ with the subgroup either $H_L^+= I\times\exp{(iu\sigma_3)},$ or $H_L^-= \exp{(iv\sigma_3)}\times I$. The stratum defining equations read respectively: \begin{equation} h_{10} = \pm h_8\,, \ \quad \ h_{11} = \pm h_7\,, \end{equation} \item the exceptional stratum, ${\mathfrak{P}}_{[LG_X]}$ of the type $[LG_X]\,$, determined by the equations: $h_{11}=h_{10}=h_8=h_7=0\,$. \end{itemize} Therefore, the local group action prescribes the following stratification of 2-qubit $X\--$state space: \begin{equation} {\mathfrak{P}}_X ={\mathfrak{P}}_{[I]}\cup{\mathfrak{P}}_{[H^+_L]}\cup{\mathfrak{P}}_{[H^-_L]}\cup{\mathfrak{P}}_{[LG_X]}\,. \end{equation} \section{Concluding remarks} In the present article we describe the stratification of 2-qubit $X\--$state space associated with the adjoint action of the global and local unitary groups. The global unitary symmetry is related to the properties of a system as a whole, while the local symmetries comprise information on the entanglement, cf. \cite{Chen2010}. In an upcoming publication, based on the introduced stratification of state space, we plan to analyze an interplay between these two symmetries and particularly determine the entanglement/separability characteristics of every stratum.
\section{Introduction} Over the past fifty years NN$\rightarrow$NN$\pi$ reactions have received considerable interest. Of those, $\rm pp \rightarrow d\pi^+$ is probably the one which has been most extensively studied. This is because it is experimentally much easier to identify a two-particle final state. Most older measurements of this reaction are concentrated at higher energies where production via the $\Delta$ resonance dominates. With the advent of storage ring technology and internal targets the energy regime closer to threshold has become accessible. The first NN$\rightarrow$NN$\pi$ measurements close to threshold were restricted to cross section and analyzing power measurements [Ref. 1-8], since polarized internal targets were not yet available. Measurements of spin correlation coefficients close to threshold became feasible only recently with the availability of windowless and pure polarized targets internal to storage rings [Ref. 9-11]. At energies well above threshold a number of measurements of spin correlation coefficients in $\rm pp \rightarrow d\pi^+$ exist. Of these, the ones closest to threshold are measurements of $\rm A_{zz}$ at 401 and 425~MeV [Ref. 12] which have been performed using an external beam and a polarized target. A parametrization in terms of partial wave amplitudes of the $\rm pp \rightarrow d\pi^+$ data from threshold to 580~MeV was carried out by Bugg fifteen years ago [Ref. 13]. A more recent, updated partial-wave analysis is maintained by the Virginia group [Ref. 14]. With only the cross section and the analyzing power as input, the number of free parameters is usually lowered by theoretical input such as a constraint on the phases of the amplitudes which is provided by Watson's theorem [Ref. 15]. In this respect, a measurement of spin correlation parameters represents crucial new information because one can relate these observables to certain combinations of amplitudes without any model assumptions. Close to threshold, s- and p-wave amplitudes in the pion channel, with a possible small admixture of d waves, are sufficient to parametrize the data. In the following, we will demonstrate that combinations of the spin correlation observables presented here are directly sensitive to the strength of these d waves. \section{Experimental Arrangement} In this paper we report measurements of spin correlation coefficients in $\rm \vec p \vec p \rightarrow d\pi^+$ at 350.5, 375.0 and 400.0~MeV at center-of-mass angles between 25$^{\circ}$ and 65$^{\circ}$. The experiment was carried out at the Indiana Cooler with the PINTEX\footnote{{\bf P}olarized {\bf IN}ternal {\bf T}arget {\bf EX}periment} setup. PINTEX is located in the A-region of the Indiana Cooler. In this location the dispersion almost vanishes and the horizontal and vertical betatron functions are small, allowing the use of a narrow target cell. The target setup consists of an atomic beam source [Ref. 16] which injects polarized hydrogen atoms into the storage cell. Vertically polarized protons from the cyclotron were stack-injected into the ring at 197~MeV, reaching an orbiting current of several 100~$\mu$A within a few minutes. The beam was then accelerated. After typically 10 minutes of data taking, the remaining beam was discarded, and the cycle was repeated. The target and detector used for this experiment are the same as described in [Ref. 9], and a detailed account of the apparatus can be found in [Ref. 17].
The internal polarized target consisted of an open-ended 25~cm long storage cell of 12~mm diameter and 25~$\mu$m wall thickness. The cell is coated with teflon to avoid depolarization of atoms colliding with the wall. During data taking, the target polarization $\vec{Q}$ is changed every 2~s, pointing in sequence up or down ($\pm$ y), left or right ($\pm$ x), and along or opposite to the beam direction ($\pm$ z). The magnitude of the polarization was typically $\vert \vec{Q} \vert \sim $ 0.75 and is the same within $\pm$ 0.005 for all orientations [Ref. 18,19]. The detector arrangement consists of a stack of scintillators and wire chambers, covering a forward cone between polar angles of 5 and 30$^{\circ}$. From the time of flight and the relative energy deposited in the layers of the detector, the outgoing charged particles are identified as pions, protons or deuterons. The detector system was optimized for an experiment to study the spin dependence in $\rm \vec p \vec p \rightarrow pp\pi^0$ and $\rm \vec p \vec p \rightarrow pn\pi^+$ near threshold [Ref. 9,10]. The $\rm \vec p \vec p \rightarrow d\pi^+$ data presented here are a by-product of that experiment. Data were taken with vertical as well as longitudinal beam polarization. To achieve non-vertical beam polarization the proton spin was precessed by two spin-rotating solenoids located in non-adjacent sections of the six-sided Cooler. The vertical and longitudinal components of the beam polarization $\vec{P}$ at the target are about equal, with a small sideways component, while its magnitude was typically $\vert \vec{P} \vert \sim $ 0.6. Since the solenoid fields are fixed in strength, the exact polarization direction depends on beam energy after acceleration. In alternating measurement cycles, the sign of the beam polarization is reversed. More details on the preparation of non-vertical beam polarization in a storage ring can be found in Ref. 19. \section{Data acquisition and processing} For each beam polarization direction, data are acquired for all 12 possible polarization combinations of beam ($+,-$) and target ($\pm$ x, $\pm$ y, $\pm$ z). The event trigger is such that two charged particles are detected in coincidence. Then, a minimum $\chi^2$ fit to the hits in each of the four wire chamber planes is performed in order to determine how well the event conforms to a physical two-prong event that originates in the target. Events with $\chi^2 \leq$ 5 were included in the final data sample. The information from the $\chi^2$ fit allows us to determine the polar and azimuthal angles of both charged particles. In the case of a two-particle final state the two particles are coplanar and the expected difference between the two azimuthal angles is $\Delta \phi=180^{\circ}$. Events between 150$^{\circ} \leq \Delta \phi \leq$ 210$^{\circ}$ are accepted for the final analysis. The polar angles of the pion and the deuteron are correlated such that the deuteron, because of its mass, exits at angles close to the beam whereas pion laboratory angles as large as 180$^{\circ}$ are kinematically allowed. At energies below 350.5~MeV the limited acceptance of the detector setup is caused by deuterons travelling through the central hole of the detector stack, whereas at higher energies the angular coverage is more and more restricted because the $\pi^+$ misses the acceptance of 30$^{\circ}$. Events within 1$^{\circ}$ of the predicted correlation of the $\pi^+$-d polar scattering angles were included in the final analysis.
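As an illustration of this kinematic correlation, the following short sketch (our own Python example, not part of the analysis software; it only assumes standard relativistic two-body kinematics and nominal particle masses) computes the deuteron laboratory angle as a function of the pion center-of-mass angle and scans for the kinematic maximum of the deuteron angle at 350.5~MeV.
\begin{verbatim}
# Illustrative two-body kinematics for pp -> d pi+ (not the analysis code).
import math

M_P, M_D, M_PI = 938.272, 1875.613, 139.570   # masses in MeV

def deuteron_lab_angle(theta_cm_pi_deg, T_beam):
    E_beam = T_beam + M_P
    p_beam = math.sqrt(E_beam**2 - M_P**2)
    s = 2.0 * M_P**2 + 2.0 * M_P * E_beam        # invariant mass squared
    sqrt_s = math.sqrt(s)
    beta = p_beam / (E_beam + M_P)               # c.m. velocity in the lab
    gamma = (E_beam + M_P) / sqrt_s
    # common c.m. momentum of the d / pi+ pair
    p_star = math.sqrt((s - (M_D + M_PI)**2) * (s - (M_D - M_PI)**2)) / (2.0 * sqrt_s)
    e_star_d = math.sqrt(p_star**2 + M_D**2)
    # the deuteron is emitted back-to-back with the pion in the c.m. frame
    theta_star_d = math.pi - math.radians(theta_cm_pi_deg)
    p_t = p_star * math.sin(theta_star_d)
    p_l = gamma * (p_star * math.cos(theta_star_d) + beta * e_star_d)
    return math.degrees(math.atan2(p_t, p_l))

angles = [deuteron_lab_angle(th, 350.5) for th in range(0, 181)]
print(max(angles))   # maximum deuteron lab angle: a few degrees at 350.5 MeV
\end{verbatim}
A scan of this kind reproduces the narrow forward cone of the deuterons and the angle-angle correlation exploited by the cuts described here.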
Since there is a kinematically allowed maximum deuteron angle which depends on the beam energy, only those two-prong events were included in the final data sample for which the deuteron reaction angle was $\leq$ 7$^{\circ}$, 8$^{\circ}$ and 9$^{\circ}$ at 350.5, 375.0 and 400.0~MeV, respectively. The correlation between the polar angles of the pion and the deuteron and the coplanarity condition uniquely determine the $pp \rightarrow d\pi^+$ reaction channel. Therefore, the inclusion of particle identification gates in the analysis did not change the spin correlation coefficients by more than a fraction of an error bar. Consequently, no particle identification gates were used in the final analysis. Since an open-ended storage cell is used, the target is distributed along the beam (z-) axis. The resulting target density is roughly triangular, extending from $\rm z=-12$~cm to $\rm z=+12$~cm with z=0 being the location where the polarized atoms are injected. The angular acceptance of the detector depends on the origin of the event along the z-axis. Specifically, the smallest detectable angle increases towards the upstream end of the cell. It was only possible to detect deuterons which originated predominantly from the upstream part of the target. Since the kinematically allowed maximum deuteron angle increases with energy, vertex coordinates between z=$-$12~cm and 4, 6, 8~cm at 350.5, 375.0 and 400.0~MeV, respectively, were accepted into the data sample. This way, background events originating from the downstream cell walls were suppressed. The known spin correlation coefficients of proton-proton elastic scattering [Ref. 20] are used to monitor beam and target polarization, concurrently with the acquisition of $\rm pp \rightarrow d\pi^+$ events. To this end, coincidences between two protons exiting near $\Theta_{lab} = 45^{\circ}$ are detected by two pairs of scintillators placed behind the first wire chamber at azimuthal angles $\pm 45^{\circ}$ and $\pm 135^{\circ}$. From this measurement, the products $\rm P_yQ_y$ and $\rm P_zQ_z$ of beam and target polarization are deduced. Values for the products of beam and target polarization can be found in Table 1 of Ref. 11. Background arises from reactions in the walls of the target cell and from outgassing of the teflon coating [Ref. 17]. The contribution arising from a $\sim 1 \%$ impurity in the target gas is negligible. Background that is not rejected by the software cuts described above manifests itself as a broad distribution underlying the peak at $\Delta \phi$=180$^{\circ}$. A measurement with a nitrogen target matches the shape of the $\Delta \phi$ distribution seen with the H target, except for the peak at 180$^{\circ}$. This shape is used to subtract the background under the $\Delta \phi$ peak. Within statistics, the background shows no spin dependence and is independent of the polar angle. The subtracted background ranges from 2$\%$ to 5$\%$. \section{Determination of A$_y$, A$_{xx}$, A$_{yy}$ and A$_{xz}$} The analyzing power and spin correlation coefficients were determined using the method of diagonal scaling. Previously, we used diagonal scaling to analyze a series of experiments to measure spin correlation coefficients in pp elastic scattering and a detailed description of the formalism can be found in Ref. 21. Since we only measure the {\it product} of beam and target polarization $\rm P \cdot Q$, we cannot normalize beam and target analyzing power independently.
Because of the identity of the colliding particles, beam and target analyzing power as a function of center-of-mass angle are related such that $\rm A_y^b(\theta) = -A_y^t(180^{\circ}-\theta)$, in particular $\rm A_y^b(90^{\circ}) = -A_y^t(90^{\circ})$. Consequently, for comparison with theory we use the quantity $\rm \sqrt{-(A^b_y(\theta) \cdot A^t_y(180^{\circ}-\theta))}$ for which the absolute normalization depends on P$\cdot$Q and therefore is known. The final data with their statistical errors are shown in Fig.~1. \section{Partial waves} When one restricts the angular momentum of the pion to l$_{\pi} \leq $2, there are seven partial waves possible, each corresponding to a given initial and final state with all angular momentum quantum numbers given. Using the usual nomenclature (see, e.g., Ref. 13), these amplitudes are \begin{equation} \begin{array}{lll} a_0\ :\ ^1S_0 & \rightarrow\ ^3S_1,\ l_{\pi}\ & =\ 1 \\[1.2ex] a_1\ :\ ^3P_1 & \rightarrow\ ^3S_1,\ l_{\pi}\ & =\ 0 \\[1.2ex] a_2\ :\ ^1D_2 & \rightarrow\ ^3S_1,\ l_{\pi}\ & =\ 1 \\[1.2ex] a_3\ :\ ^3P_1 & \rightarrow\ ^3S_1,\ l_{\pi}\ & =\ 2 \\[1.2ex] a_4\ :\ ^3P_2 & \rightarrow\ ^3S_1,\ l_{\pi}\ & =\ 2 \\[1.2ex] a_5\ :\ ^3F_2 & \rightarrow\ ^3S_1,\ l_{\pi}\ & =\ 2 \\[1.2ex] a_6\ :\ ^3F_3 & \rightarrow\ ^3S_1,\ l_{\pi}\ & =\ 2 \ . \end{array} \end{equation} Very close to threshold, only s-wave pions are produced, and a single amplitude (a$_1$) is sufficient to describe the reaction. Slightly above threshold, the p-wave a$_2$ becomes significant, while the other p-wave (a$_0$) remains small and is often neglected entirely. Among the d-wave amplitudes, at, for instance, 400~MeV, a$_6$ is the largest, followed by a$_4$, a$_3$, and a$_5$ with a factor of ~20 between a$_5$ and a$_6$. This is known indirectly from partial-wave analyses (Ref. 13,14) of angular distributions of cross section and analyzing power. The spin correlation data presented here offer a more direct study of the d-wave strength in $\rm pp \rightarrow d\pi^+$. In order to demonstrate this, we need to express the observables in terms of the seven partial-wave amplitudes (eq.1). This has previously been done for the cross section and the analyzing power (Ref. 22). For the spin correlation coefficients, we find analogously \begin{equation} \begin{array}{ll} \sigma & =\ {1 \over {16 \pi}} [a_0^2+a_1^2+a_2^2+C_1+(a_2^2-2 \sqrt{2}Re(a_0a_2^*) \\[1.2ex] & +B_1+C_2)P_2^0(cos\theta)+ C_3P_4^0(cos \theta)] \\[1.2ex] \end{array} \end{equation} \begin{equation} \begin{array}{ll} \sigma & (A_{xx}+A_{yy}) =\ {1 \over {16 \pi}} [-2a_0^2-2a_2^2+C_4+(-2a_2^2 \\[1.2ex] & +4 \sqrt{2}Re(a_0a_2^*)+C_5)P_2^0(cos \theta)+C_6P_4^0(cos \theta)] \\[1.2ex] \end{array} \end{equation} \begin{equation} \begin{array}{ll} \sigma & (A_{zz}) =\ {1 \over {16 \pi}} [-a_0^2+a_1^2-a_2^2+C_1-C_4+(-a_2^2 \\[1.2ex] & +2 \sqrt{2}Re(a_0a_2^*)+B_1+C_2-C_5)P_2^0(cos \theta) \\[1.2ex] & +(C_3-C_6)P_4^0(cos \theta)] \\[1.2ex] \end{array} \end{equation} \begin{equation} \begin{array}{ll} \sigma & (A_{xx}-A_{yy}) =\ {1 \over {16 \pi}}[(B_2+C_1)P_2^2(cos \theta) \\[1.2ex] & +C_8P_4^2(cos \theta)], \end{array} \end{equation} with $\sigma$ being the unpolarized differential cross section. The observables in Eqs.~2-5 are functions of the center-of-mass reaction angle, $\theta$, and the functions $P_l^m(cos \theta)$ are the usual Legendre polynomials. 
In the above equations, the terms B$_n$ are caused by interference between s- and d-waves and are given by a sum of terms Re(a$_1$ a$_i^*$) (i=3,4,5,6), weighted by numerical factors which follow from angular momentum coupling. The terms C$_n$ contain d-waves only, i.e., a sum of a$_i^2$ and Re(a$_i$ a$_k^*$) (i,k=3,4,5,6), again weighted by the appropriate numerical factors. A derivation of eqs.2-5 can be found in Ref. 23. From eq.~5 one sees that the combination $\rm A_{xx}-A_{yy}$ vanishes at all angles when there are no d-waves. A departure from this behaviour would be caused by an interference between s- and d-waves. It is also easy to see from eqs.~2-4 that for all reaction angles the following holds \begin{equation} A_{zz}-A_{xx}-A_{yy}=1-\delta \end{equation} where $\delta$ contains only C$_n$ terms, i.e., only d-wave amplitudes. Thus, both quantities, $\rm A_{xx}-A_{yy}$ and $\rm A_{zz}-A_{xx}-A_{yy}-1$, are a direct measure of d-wave contributions. Clearly, $\rm A_{xx}-A_{yy}$ provides a more sensitive test because here d-waves interfere with the dominant a$_1$ amplitude. \section{Final data and comparison with theory} We compare our data to the two newest published phase shift solutions, namely \begin{description} \item[SP96] the published partial-wave analysis of the VPI group [Ref. 24]; range of validity 0-550 MeV, where the quoted energy is the laboratory energy of the pion in $\rm \pi^+ d \rightarrow p p$. For this solution a combined analysis of $\rm pp \rightarrow pp$, $\rm \pi d \rightarrow \pi d$ and $\rm \pi^+ d \rightarrow p p$ was performed. In particular, the overall phase in $\rm \pi^+ d \rightarrow p p$ was determined for this solution. \item[BU93] the published partial-wave analysis of the VPI group and D.V. Bugg [Ref. 25]; range of validity 9-256 MeV. \end{description} Numerical values for both phase shift analyses have been obtained from the SAID interactive program [Ref. 14]. As can be seen from Fig.~1, our data are in good agreement with both phase shift solutions, although the agreement with SP96 (overall $\chi^2$=0.90) is better than with BU93 (overall $\chi^2$=1.29). As a function of energy we have $\chi^2$=0.90, 0.87, 0.99 (SP96) and $\chi^2$=1.64, 0.88, 1.50 (BU93) at 350.5, 375.0 and 400.0~MeV, respectively. This indicates that the energy dependence of SP96 near threshold is slightly closer to the data. Also shown in Fig.~1 are the $\rm A_{zz}$ data of Ref. 12 which agree with our data. The quantity $\rm A_{xx}-A_{yy}$ provides a sensitive and direct test of d-wave contributions in $\rm \vec p \vec p \rightarrow \rm d \pi^+ $. Our data are consistent with negligible d-wave contributions even at 400~MeV (see Fig.~1). \section{Summary} In summary, we have measured the analyzing power and spin correlation coefficients in $\vec p \vec p \rightarrow d\pi^+$. The spin correlation coefficients allow a direct determination of d-wave contributions. In particular, $\rm A_{xx}-A_{yy}$ provides a sensitive test because here d-waves interfere with the dominant $a_1$ amplitude. Our data are consistent with negligible d-wave contributions even at 400~MeV. In addition, our data are well described by a partial wave solution (SP96) of the Virginia group. \begin{acknowledgements} We are grateful for the untiring efforts of the accelerator operations group at IUCF, in particular G. East and T. Sloan. This work was supported in part by the National Science Foundation and the Department of Energy.
This work has been supported by the US National Science Foundation under Grants PHY-9602872, PHY-9901529, PHY-9722556, and by the US Department of Energy under Grant DE-FG02-88ER40438. \end{acknowledgements}
\section{Introduction and Main Results} The question of obtaining Poincar\'{e}-type inequalities (or more generally entropy inequalities) for pure jump L\'{e}vy processes has been studied over the last decades, e.g.\ see \cite{BL,Wu,Cha}. In particular, it was proved by \cite[Corollary 4.2]{Wu} and \cite[Theorem 23]{Cha} that \begin{equation}\label{phiineq}\textrm{Ent}_\mu^\Phi(f)\le \iint D_\Phi(f(x),f(x+z))\,\nu_\mu(dz)\,\mu(dx),\quad f\in C_b^\infty(\mathds R^d), f>0\end{equation} with $$\textrm{Ent}_\mu^\Phi(f)=\int \Phi (f)\, d\mu-\Phi\Big(\int f d\mu\Big)$$ and $D_\Phi$ the so-called Bregman distance associated with $\Phi$: $$D_\Phi(a,b)=\Phi(a)-\Phi(b)-\Phi'(b)(a-b),$$ where $\mu$ is a rather general probability measure and $\nu_\mu$ is the (singular) L\'{e}vy measure associated to $\mu$. By setting $\Phi(x)=x^2$ and $\Phi(x)=x\log x$, $\textrm{Ent}_\mu^\Phi(f)$ becomes the classical variance $\textup{Var}_\mu(f)$ and entropy $ \textrm{Ent}_\mu(f)$ respectively, and so \eqref{phiineq} yields the Poincar\'{e} inequality and the entropy inequality for the choice of measure $(\mu,\nu_\mu)$. Note that either one of the measures $\mu$ and $\nu_\mu$ in \eqref{phiineq} uniquely specifies the other, and so this is a strong constraint when studying functional inequalities for general non-local Dirichlet forms. The first breakthrough in this direction was established in \cite{MRS} by using methods from harmonic analysis, and was then extended in \cite{Gre} to $L^p$ weighted Poincar\'{e} inequalities and generalized logarithmic Sobolev inequalities in an abstract situation. Let $V$ be a locally bounded measurable function on $\mathds R^d$ such that $\int e^{-V(x)}\,dx=1$; that is, $\mu_V(dx):=e^{-V(x)}\,dx$ is a probability measure on $\mathds R^d$. The main result in \cite{MRS} (see \cite[Theorem 1.2]{MRS}) states that, if $V\in C^2(\mathds R^d)$ is such that for some constant $\varepsilon>0$, \begin{equation*}\frac{(1-\varepsilon)|\nabla V(x)|^2}{2} -\Delta V(x)\to\infty,\qquad |x|\to \infty,\end{equation*} then there exist two positive constants $\delta$ and $C_0$ such that for all $f\in C_b^\infty(\mathds R^d)$, $$\aligned\int (f-\mu_V(f))^2\big(1+|\nabla V|^{\alpha}\big) &\,\mu_V(dx) \leqslant C_0D_{\alpha,V,\delta}(f,f),\endaligned$$ where \begin{equation}\label{MRS}D_{\alpha,V,\delta}(f,f)=\iint \frac{(f(y)-f(x))^2}{|y-x|^{d+\alpha}}e^{-\delta|y-x|}\,dy \,\mu_V(dx).\end{equation} According to the paragraph below \cite[Remark 1.3]{MRS}, \eqref{MRS} is natural in the sense that we should regard the measure ${|y-x|^{-(d+\alpha)}}e^{-\delta|y-x|}\,dy$ as the L\'{e}vy measure, and $\mu_V(dx)$ as the ambient measure. Namely, $D_{\alpha,V,\delta}$ does get rid of the constraint of $(\mu,\nu_\mu)$ in \eqref{phiineq}, and it should be a typical example in the study of functional inequalities for non-local Dirichlet forms. This leads us to consider the Dirichlet form $( D_{\rho, V},\mathscr{ D}(D_{\rho, V}))$ as follows. Let $\rho$ be a strictly positive measurable function on $(0,\infty)$ such that $\int_{(0,\infty)}\rho(r)(1\wedge r^2)r^{d-1}\,dr<\infty.$ Let $L^2(\mu_V)$ be the space of Borel measurable functions $f$ on $\mathds R^d$ such that $\mu_V(f^2):=\int f^2(x)\,\mu_V(dx)<\infty$.
Set $$\aligned D_{\rho, V}(f,f):=&\iint_{x\neq y}\big(f(x)-f(y)\big)^2\rho(|x-y|)\,dy\,\mu_V(dx)\\ \mathscr{D}(D_{\rho,V}):=&\bigg\{ f\in L^2(\mu_V): D_{\rho,V}(f,f)<\infty\bigg\}.\endaligned$$ According to \cite[Example 2.2]{CW}, we know that $( D_{\rho, V},\mathscr{ D}(D_{\rho, V}))$ is a symmetric Dirichlet form such that $C_b^\infty(\mathds R^d)\subset \mathscr{D}(D_{\rho, V}) $, where $C_b^\infty(\mathds R^d)$ denotes the set of smooth functions on $\mathds R^d$ with bounded derivatives for all orders. The purpose of this note is to present sufficient conditions for Poincar\'{e} type inequalities (i.e.\ Poincar\'{e} inequalities, weak Poincar\'{e} inequalities and super Poincar\'{e} inequalities), entropy inequalities and Beckner-type inequalities for $( D_{\rho, V},\mathscr{ D}(D_{\rho, V}))$. We first state the main result for Poincar\'{e} type inequalities of $( D_{\rho, V},\mathscr{ D}(D_{\rho, V}))$. \begin{theorem}\label{th1} $(1)$ If there exists a constant $c>0$ such that for any $x$, $y\in \mathds R^d$ with $x\neq y$, \begin{equation}\label{th1.1}\big(e^{V(x)}+e^{V(y)}\big)\rho(|x-y|)\ge c,\end{equation} then the following Poincar\'{e} inequality \begin{equation}\label{th1.1.1}\mu_V(f-\mu_V(f))^2 \le c^{-1} D_{\rho,V}(f,f), \quad f\in \mathscr{ D}(D_{\rho, V})\end{equation} holds. $(2)$ For any probability measure $\mu_V$, the following weak Poincar\'{e} inequality \begin{equation}\label{th1.2.1}\mu_V(f-\mu_V(f))^2\le \alpha(r)D_{\rho,V}(f,f)+r\|f\|_\infty^2,\quad r>0, f\in \mathscr{ D}(D_{\rho, V})\end{equation} holds with $$\alpha(r)=\inf\Bigg\{\frac{1}{\inf\limits_{0<|x-y|\le s}\big[(e^{V(x)}+e^{V(y)})\rho(|x-y|)\big]}: \iint_{|x-y|> s}\mu_V(dy)\,\mu_V(dx)\le \frac{r}{2}\Bigg\}.$$ $(3)$ Suppose that there exists a nonnegative locally bounded measurable function $w$ on $\mathds R^d$ such that $$\lim\limits_{|x|\to\infty}w(x)=\infty,$$ and for any $x$, $y\in \mathds R^d$ with $x\neq y$, \begin{equation}\label{th1.2} e^{V(x)}+e^{V(y)}\ge\frac{ w(x)+w(y)}{\rho(|x-y|)}.\end{equation} Then the following super Poincar\'{e} inequality \begin{equation}\label{th1.2.2}\mu_V(f^2)\le r D_{\rho,V}(f,f)+ \beta(r)\mu_V(|f|)^2,\quad r>0, f\in \mathscr{ D}(D_{\rho, V}) \end{equation} holds with $$\beta(r)=\inf\bigg\{\frac{2\mu_V(\omega)}{\inf\limits_{|x|\ge t}\omega(x)}+\beta_t(t\wedge s): \frac{2}{\inf_{|x|\ge t}w(x)}+s\le r\textrm{ and } t,s>0\bigg\},$$ where for any $t>0$, $$\beta_t(s)=\inf\left\{ \frac{c_0\Big(\sup\limits_{|z|\le 2t}e^{V(z)}\Big)^2}{u^d\Big(\inf\limits_{|z|\le t} e^{V(z)}\Big)}: \frac{c_0\Big(\sup\limits_{0<\varepsilon\le u}\rho(\varepsilon)^{-1}\Big)\Big(\sup\limits_{|z|\le 2t}e^{V(z)}\Big)}{u^d\Big(\inf\limits_{|z|\le t} e^{V(z)}\Big)}\le s \textrm{ and } u>0 \right\}.$$ \end{theorem} \bigskip To illustrate the power of Theorem \ref{th1}, we will consider the following examples. \begin{example}\label{ex1} Let $\mu_V(dx)=e^{-V(x)}\,dx:=C_{d,\varepsilon} (1+|x|)^{-(d+\varepsilon)}\,dx$ with $\varepsilon>0$, and $\rho(r)=r^{-(d+\alpha)}$ with $\alpha\in (0,2)$. \begin{itemize} \item[(1)] If $\varepsilon\ge \alpha$, then the Poincar\'{e} inequality \eqref{th1.1.1} holds with $c=\frac{2^{1-(d+\alpha)}}{C_{d,\varepsilon}}$. \item[(2)] If $0<\varepsilon<\alpha$, then the weak Poincar\'{e} inequality \eqref{th1.2.1} holds with $$\alpha(r)=c_1\big(1+ r^{-(\alpha-\varepsilon)/\varepsilon}\big)$$ for some constant $c_1>0$. 
\item[(3)] If $\varepsilon>\alpha$, then the super Poincar\'{e} inequality \eqref{th1.2.2} holds with $$\beta(r)=c_2\bigg(1+ r^{-\frac{d}{\alpha}-\frac{(d+\varepsilon)(d+2\alpha)}{\alpha(\varepsilon-\alpha)}}\bigg)$$ for some constant $c_2>0$. \end{itemize} According to \cite[Corollary 1.2]{WW}, we know that all the conclusions above are optimal. \end{example} \begin{example} Let $\mu_V(dx)=e^{-V(x)}\,dx:=C_{d,\alpha,\varepsilon}(1+|x|)^{-(d+\alpha)}\log^\varepsilon(e+|x|)\,dx$ with $\varepsilon\in\mathds R$, and $\rho(r)=r^{-(d+\alpha)}$ with $\alpha\in (0,2)$. \begin{itemize} \item[(1)] If $\varepsilon\le0$, then the Poincar\'{e} inequality \eqref{th1.1.1} holds with $c=\frac{2^{1-(d+\alpha)}}{C_{d,\alpha,\varepsilon}}$. \item[(2)] If $\varepsilon>0$, then the weak Poincar\'{e} inequality \eqref{th1.2.1} holds with $$\alpha(r)=c_3\big(1+ \log^{\varepsilon}(1+r^{-1})\big)$$ for some constant $c_3>0$. \item[(3)] If $\varepsilon<0$, then the super Poincar\'{e} inequality \eqref{th1.2.2} holds with $$\beta(r)=\exp\Big(c_4(1+r^{1/\varepsilon})\Big)$$ for some constant $c_4>0$. \end{itemize} By \cite[Corollary 1.3]{WW}, all the conclusions above are also sharp. \end{example} \begin{example} Let $\mu_V(dx)=e^{-V(x)}\,dx:=C_\lambda e^{-\lambda |x|}\,dx$ with $\lambda>0$, and $\rho(r)=e^{-\delta r}r^{-(d+\alpha)}$ with $\delta\ge0$ and $\alpha\in (0,2)$. Then, if $\lambda>2\delta$, the super Poincar\'{e} inequality \eqref{th1.2.2} holds with $\beta(r)=c_5\Big(1+ r^{-\frac{d}{\alpha}-\frac{2\lambda(d+2\varepsilon)}{\alpha(\lambda-2\delta)}}\Big)$ for some constant $c_5>0$. In particular, the Poincar\'{e} inequality \eqref{th1.1.1} holds. Note that this conclusion cannot be deduced from \cite[Theorem 1.1]{MRS}; see also the statement before \eqref{MRS}. \end{example} Next, we turn to the study of entropy inequalities and Beckner-type inequalities for $( D_{\rho, V},\mathscr{ D}(D_{\rho, V}))$. Recall that for any $f\in \mathscr{ D}(D_{\rho, V})$ with $f>0$, $$\textrm{Ent}_{\mu_V}(f):=\mu_V(f\log f)-\mu_V(f)\log\mu_V(f).$$ \begin{theorem}\label{th2} Suppose that \eqref{th1.1} is satisfied. Then the following entropy inequality \begin{equation}\label{th2.2}\textrm{Ent}_{\mu_V}(f) \le c^{-1} D_{\rho,V}(f,\log f)\end{equation} holds for all $f\in \mathscr{ D}(D_{\rho, V})$ with $f>0$; and moreover, the following Beckner-type inequality also holds: for any $p\in(1,2]$ and $f\in \mathscr{ D}(D_{\rho, V})$ with $f\ge0$, \begin{equation}\label{th2.3}\mu_V(f^p)-\mu_V(f)^p\le c^{-1} D_{\rho,V}(f,f^{p-1}).\end{equation} \end{theorem} The entropy inequality \eqref{th2.2} and the Beckner-type inequality \eqref{th2.3} are stronger than the Poincar\'{e} inequality \eqref{th1.1.1} (to see this, one can apply these inequalities to the function $1+\varepsilon f$ and then take the limit as $\varepsilon\to0$). Clearly, the Beckner-type inequality \eqref{th2.3} reduces to the Poincar\'{e} inequality \eqref{th1.1.1} if $p=2$, whereas dividing both sides by $p-1$ and taking the limit as $p\to1$ yields the entropy inequality \eqref{th2.2}; a formal verification of this limit is given after this paragraph. As mentioned in the remarks below \eqref{MRS}, when comparing Theorem \ref{th2} with \eqref{phiineq}, the improvement is due to the fact that we do not impose any link between the measure $\mu_V(dx)$ on $x$ and the singular measure $\rho(|z|)\,dz$ on $z=y-x.$ This is to our knowledge the first result that gets rid of the strong constraint for entropy inequalities and Beckner-type inequalities of non-local Dirichlet forms.
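To make the limiting procedure explicit, we record the formal computation behind the passage from \eqref{th2.3} to \eqref{th2.2}; the routine justification for exchanging the limit with the integrals is omitted. For $f>0$, $$\lim_{p\to1}\frac{\mu_V(f^p)-\mu_V(f)^p}{p-1}=\mu_V(f\log f)-\mu_V(f)\log\mu_V(f)=\textrm{Ent}_{\mu_V}(f),$$ and, since $\lim_{p\to1}\frac{f^{p-1}(x)-f^{p-1}(y)}{p-1}=\log f(x)-\log f(y)$, $$\lim_{p\to1}\frac{1}{p-1}D_{\rho,V}(f,f^{p-1})= D_{\rho,V}(f,\log f).$$ Hence dividing \eqref{th2.3} by $p-1$ and letting $p\to1$ indeed gives \eqref{th2.2}.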
\begin{example}\label{ex2}[{\bf Continuation of Example \ref{ex1}}] Let $$\mu_V(dx)=e^{-V(x)}\,dx:=C_{d,\varepsilon} (1+|x|)^{-(d+\varepsilon)}\,dx$$ with $\varepsilon>0$, and $\rho(r)=r^{-(d+\alpha)}$ with $\alpha\in (0,2)$. \begin{itemize} \item[(1)] If $\varepsilon\ge \alpha$, then, according to Theorem \ref{th2}, the entropy inequality \eqref{th2.2} and Beckner-type inequality \eqref{th2.3} hold with $c=\frac{2^{1-(d+\alpha)}}{C_{d,\varepsilon}}$. \item[(2)] If $0<\varepsilon<\alpha$, then, according to \cite[Corollary 1.2]{WW}, the Poincar\'{e} inequality \eqref{th1.1.1} does not hold. Hence, by the remark below Theorem \ref{th2}, both the entropy inequality \eqref{th2.2} and Beckner-type inequality \eqref{th2.3} do not hold. \end{itemize} According to Examples \ref{ex1}, \ref{ex2} and \cite[Corollary 1.2]{WW}, we know that for the probability measure $\mu_V(dx)=e^{-V(x)}\,dx:=C_{d,\alpha} (1+|x|)^{-(d+\alpha)}\,dx$, it fulfills the entropy inequality \eqref{th2.2} and the Beckner-type inequality \eqref{th2.3}, but not the super Poincar\'{e} inequality \eqref{th1.2.2}. \end{example} \section{Proofs of Theorems and Example \ref{ex1}} \begin{proof}[Proof of Theorem $\ref{th1}$] (1) For any $f\in \mathscr{D}(D_{\rho, V}),$ \begin{equation}\label{proof1}\aligned &\frac{1}{2} \iint \big(f(x)-f(y)\big)^2\,\mu_V(dy)\,\mu_V(dx)\\ &=\frac{1}{2} \iint \big(f^2(x)+f^2(y)-2f(x)f(y)\big)\,\mu_V(dy)\,\mu_V(dx)\\ &=\mu_V(f^2)-\mu_V(f)^2=\mu_V(f-\mu_V(f))^2.\endaligned\end{equation} On the other hand, by \eqref{th1.1}, we find that $$\aligned &\frac{1}{2} \iint \big(f(x)-f(y)\big)^2\,\mu_V(dy)\,\mu_V(dx)\\ &= \frac{1}{2}\iint_{x\neq y} \big(f(x)-f(y)\big)^2\rho(|x-y|)\rho(|x-y|)^{-1}\,\mu_V(dy)\,\mu_V(dx)\\ &\le c^{-1}\iint_{x\neq y} \big(f(x)-f(y)\big)^2\rho(|x-y|)\,\frac{e^{-V(x)}+e^{-V(y)}}{2}\,dy\,dx\\ &= c^{-1} D_{\rho,V}(f,f), \endaligned$$ which, along with \eqref{proof1}, yields the required assertion. (2) According to \eqref{proof1}, for any $s>0$ and $f\in \mathscr{D}(D_{\rho, V}),$ \begin{align*} \mu_V(f-\mu_V(f))^2&=\frac{1}{2} \iint (f(x)-f(y))^2\,\mu_V(dy)\,\mu_V(dx)\\ &=\frac{1}{2}\iint_{0<|x-y|\le s} (f(x)-f(y))^2\,\mu_V(dy)\,\mu_V(dx) \\ &\quad +\frac{1}{2}\iint_{|x-y|> s} (f(x)-f(y))^2\,\mu_V(dy)\,\mu_V(dx)\\ &\le \iint_{0<|x-y|\le s} (f(x)-f(y))^2\rho(|x-y|)\\ &\qquad\qquad\times \bigg[\frac{e^{-V(x)-V(y)}}{(e^{-V(x)}+e^{-V(y)})\rho(|x-y|)}\bigg]\frac{e^{-V(x)}+e^{-V(y)}}{2}\,dy\,dx\\ &\quad+ 2\|f\|_\infty^2 \iint_{|x-y|> s} \,\mu_V(dy)\,\mu_V(dx)\\ &\le \bigg(\sup_{0<|x-y|\le s}\frac{1}{(e^{V(x)}+e^{V(y)})\rho(|x-y|)}\bigg) D_{\rho,V}(f,f)\\ &\quad+\bigg(2 \iint_{|x-y|>s} \,\mu_V(dy)\,\mu_V(dx)\bigg)\|f\|_\infty^2. \end{align*} The desired assertion follows from the definition of $\alpha$. (3) For any $f\in \mathscr{D}(D_{\rho, V})$, by Jensen's inequality, $$\aligned \mu_V((f-\mu_V(f))^2w)&=\int \Big(f(x)-\int f(y)\,\mu_V(dy)\Big)^2w(x)\,\mu_V(dx)\\ &=\int\Big(\int(f(x)-f(y))\,\mu_V(dy)\Big)^2w(x)\,\mu_V(dx)\\ &\le \iint (f(x)-f(y))^2w(x)\,\mu_V(dy)\,\mu_V(dx). \endaligned$$ This implies that $$\aligned \mu_V((f-\mu_V(f))^2w)&\le \iint (f(x)-f(y))^2\frac{w(x)+w(y)}{2}\,\mu_V(dy)\,\mu_V(dx). \endaligned$$ Thus, by \eqref{th1.2}, we arrive at \begin{equation}\label{prof1}\aligned\mu_V((f-\mu_V(f))^2w)&\le\iint (f(x)-f(y))^2\\ &\qquad\qquad\times \rho(|x-y|)\frac{e^{V(x)}+e^{V(y)}}{2}\,\mu_V(dy)\,\mu_V(dx)\\ &\le D_{\rho,V}(f,f).\endaligned \end{equation} Next, we will follow the proof of \cite[Proposition 1.6]{CW} to obtain the super Poincar\'{e} inequality from \eqref{prof1}. 
We first claim that $\mu_V(\omega)<\infty$. In fact, let $C_c^\infty(\mathds R^d)$ be the set of smooth functions on $\mathds R^d$ with compact support. Choose a function $g\in C_c^{\infty}(\mathds R^d)$ such that $g(x)=0$ for every $|x|\ge 1$ and $\mu_V(g)=1$. Then, applying this test function $g$ into (\ref{prof1}) and noting the fact that $C_c^\infty(\mathds R^d)\subset C_b^\infty(\mathds R^d)\subset \mathscr{D}(D_{\rho, V})$, we have \begin{equation*} \begin{split} \int_{\{|x|\ge 1\}}\omega(x)\,\mu_V(dx)&\le \int \big(g(x)-\mu_V(g)\big)^2\omega(x)\,\mu_V(dx)\leqslant D_{\rho,V}(g,g)<\infty. \end{split} \end{equation*} Since the function $\omega$ is bounded on $\{x \in \mathds R^d: |x|\le 1\}$, $\int_{\{|x|\le 1\}}\omega(x)\,\mu_V(dx)<\infty$. Combining both estimates above, we prove the desired claim. For any $t>1$ large enough and $f \in \mathscr{D}(D_{\rho, V})$, by (\ref{prof1}), we have \begin{equation*} \begin{split} \int_{\{|x|\ge t\}}f^2(x)\,\mu_V(dx)&\le \frac{1}{\inf\limits_{|x|\ge t}\omega(x)}\int f^2(x)\omega(x)\, \mu_V(dx)\\ &\le \frac{2}{\inf\limits_{|x|\ge t}\omega(x)}\int \big(f(x)-\mu_V(f)\big)^2\omega(x)\, \mu_V(dx)\\ &\quad+ \frac{2}{\inf\limits_{|x|\ge t}\,\omega(x)}\int \mu_V(f)^2\omega(x)\, \mu_V(dx)\\ &\le \frac{2}{\inf\limits_{|x|\ge t}\omega(x)}\bigg(D_{\rho,V}(f,f)+ {\mu_V(\omega)}\,\mu_V(|f|)^2\bigg), \end{split} \end{equation*} where the second inequality follows from the inequality that for any $a$, $b\in\mathds R$, $a^2\le 2(a-b)^2+2b^2.$ On the other hand, Lemma \ref{lemma} below shows that the local super Poincar\'{e} inequality \begin{equation}\label{local} \begin{split} & \int_{\{|x|\le t\}}f^2(x)\,\mu_V(dx)\le sD_{\rho,V}(f,f)+ \beta_t(t\wedge s)\mu_V(|f|)^2,\quad s>0 \end{split} \end{equation} holds for any $t > 1$ and $f\in \mathscr{D}(D_{\rho, V})$. Combining both estimates above, we get that for $t>1$ large enough and any $f\in \mathscr{D}(D_{\rho, V})$, \begin{equation*} \begin{split} \mu_V(f^2)\le& \Big(\frac{2}{\inf\limits_{|x|\ge t}\omega(x)}+s\Big)D_{\rho,V}(f,f)+\bigg(\frac{2\mu_V(\omega)}{\inf\limits_{|x|\ge t}\omega(x)}+\beta_t(t\wedge s)\bigg)\mu_V(|f|)^2,\quad s>0. \end{split} \end{equation*} This, along with $\lim\limits_{|x|\rightarrow \infty}\omega(x)=\infty$ and the definition of $\beta$, yields the required super Poincar\'{e} inequality. \end{proof} For the local super Poincar\'{e} inequality \eqref{local} in part (3) of the proof above, we can see from the following \begin{lemma}\label{lemma} For any $f\in \mathscr{D}(D_{\rho, V})$ and $r>0$, we have $$ \int_{B(0,r)}f^2(x)\,\mu_V(dx)\le s D_{\rho, V}(f,f)+\beta_r(r\wedge s) \mu_V(|f|)^2\quad s>0,$$ where $$\beta_r(s)=\inf\left\{ \frac{2\Big(\sup\limits_{|z|\le 2r}e^{V(z)}\Big)^2}{|B(0,t)|\Big(\inf\limits_{|z|\le r} e^{V(z)}\Big)}: \frac{2\Big(\sup\limits_{0<\varepsilon\le t}\rho(\varepsilon)^{-1}\Big) \Big(\sup\limits_{|z|\le 2r}e^{V(z)}\Big)}{|B(0,t)|\Big(\inf\limits_{|z|\le r} e^{V(z)}\Big)}\le s\textrm{ and }t>0\right\},$$ and $|B(0,t)|$ denotes the volume of the ball with radius $t$. \end{lemma} \begin{proof} (1) For any $0<s\le r$ and $f\in \mathscr{D}(D_{\rho, V})$, define $$f_s(x):=\frac{1}{|B(0,s)|}\int_{B(x,s)}f(z)\,dz,\quad x\in B(0,r).$$ We have $$\sup_{x\in B(0,r)}|f_s(x)|\le \frac{1}{|B(0,s)|} \int_{B(0,2r)}|f(z)|\,dz,$$ and $$\aligned \int_{B(0,r)}|f_s(x)|\,dx&\le \int_{B(0,r)}\frac{1}{|B(0,s)|}\int_{B(x,s)}|f(z)|\,dz\,dx\\ &\le \int_{B(0,2r)}\bigg(\frac{1}{|B(0,s)|}\int_{B(z,s)}\,dx\bigg)|f(z)|\,dz\le \int_{B(0,2r)}|f(z)|\,dz. 
\endaligned$$ Thus, $$\aligned\int_{B(0,r)}f_s^2(x)\,dx\le & \Big(\sup_{x\in B(0,r)}|f_s(x)|\Big) \int_{B(0,r)}|f_s(x)|\,dx\\ \le &\frac{1}{|B(0,s)|} \bigg(\int_{B(0,2r)}|f(z)|\,dz\bigg)^2.\endaligned$$ Therefore, for any $f\in \mathscr{D}(D_{\rho, V})$ and $0<s\le r,$ by Jensen's inequality, $$\aligned\int_{B(0,r)}&f^2(x)\,dx\\ \le & 2\int_{B(0,r)}\big(f(x)-f_s(x)\big)^2\,dx+ 2\int_{B(0,r)}f^2_s(x)\,dx\\ \le &2\int_{B(0,r)}\frac{1}{|B(0,s)|}\int_{B(x,s)}(f(x)-f(y))^2\,dy\,dx+ \frac{2}{|B(0,s)|} \bigg(\int_{B(0,2r)}|f(z)|\,dz\bigg)^2\\ \le & \bigg(\frac{2\sup_{0<\varepsilon\le s}\rho(\varepsilon)^{-1}}{|B(0,s)|}\bigg)\int_{B(0,r)}\int_{B(x,s)}(f(x)-f(y))^2\rho(|x-y|)\,dy\,dx\\ &+ \frac{2}{|B(0,s)|} \bigg(\int_{B(0,2r)}|f(z)|\,dz\bigg)^2\\ \le & \bigg(\frac{2\sup_{0<\varepsilon\le s}\rho(\varepsilon)^{-1}}{|B(0,s)|}\bigg)\int_{B(0,2r)}\int_{B(0,2r)}(f(x)-f(y))^2\rho(|x-y|)\,dy\,dx\\ &+ \frac{2}{|B(0,s)|} \bigg(\int_{B(0,2r)}|f(z)|\,dz\bigg)^2.\endaligned$$ (2) According to the inequality above, for any $f\in \mathscr{D}(D_{\rho, V})$ and $0<s\le r,$ \begin{align*}\int_{B(0,r)}&f^2(x)\,\mu_V(dx)\\ \le & \frac{1}{\inf_{|z|\le r} e^{V(z)}}\int_{B(0,r)}f^2(x)\,dx\\ \le & \bigg(\frac{2\big(\sup_{0<\varepsilon\le s}\rho(\varepsilon)^{-1}\big)}{|B(0,s)|\big(\inf_{|z|\le r} e^{V(z)}\big)}\bigg)\int_{B(0,2r)}\int_{B(0,2r)}(f(x)-f(y))^2\rho(|x-y|)\,dy\,dx\\ &+ \frac{2}{|B(0,s)|\big(\inf_{|z|\le r} e^{V(z)}\big)} \bigg(\int_{B(0,2r)}|f(z)|\,dz\bigg)^2\\ \le & \bigg(\frac{2\big(\sup_{0<\varepsilon\le s}\rho(\varepsilon)^{-1}\big)\big(\sup_{|z|\le 2r}e^{V(z)}\big)}{|B(0,s)|\big(\inf_{|z|\le r} e^{V(z)}\big)}\bigg)\\ &\qquad\qquad \times \int_{B(0,2r)}\int_{B(0,2r)}(f(x)-f(y))^2\rho(|x-y|)\,dy\,\mu_V(dx)\\ &+ \frac{2\big(\sup_{|z|\le 2r}e^{V(z)}\big)^2}{|B(0,s)|\big(\inf_{|z|\le r} e^{V(z)}\big)} \bigg(\int_{B(0,2r)}|f(x)|\,\mu_V(dx)\bigg)^2\\ \le & \bigg(\frac{2\big(\sup_{0<\varepsilon\le s}\rho(\varepsilon)^{-1}\big)\big(\sup_{|z|\le 2r}e^{V(z)}\big)}{|B(0,s)|\big(\inf_{|z|\le r} e^{V(z)}\big)}\bigg)D_{\rho, V}(f,f)\\ &+ \frac{2\big(\sup_{|z|\le 2r}e^{V(z)}\big)^2}{|B(0,s)|\big(\inf_{|z|\le r} e^{V(z)}\big)} \mu_V(|f|)^2.\end{align*} The desired assertion for the case $0<s\le r$ follows from the conclusion above and the definition of $\beta_r$. (3) When $s>r$, by (2), $$ \int_{B(0,r)}f^2(x)\,\mu_V(dx)\le r D_{\rho, V}(f,f)+\beta_r(r) \mu_V(|f|)^2\le s D_{\rho, V}(f,f)+\beta_r(r\wedge s) \mu_V(|f|)^2.$$ The proof is completed. \end{proof} We present the following two remarks for the proof of Theorem \ref{th1.1}. \begin{itemize} \item[(1)] The proof above is efficient for the following more general non-local Dirichlet form $$\aligned \widetilde{D}_{j,V}(f,f):&=\iint_{x\neq y} \big(f(x)-f(y)\big)^2j(x,y)\,\mu_V(dy)\,\mu_V(dx),\\ \mathscr{D}(\widetilde{D}_{j,V}):&=\left\{f\in L^2(\mu_V):\widetilde{D}_{j,V}(f,f)<\infty\right\},\endaligned$$ where $j$ is a Borel measurable function on $\mathds R^{2d} \setminus\{(x, y) \in \mathds R^{2d}: x = y\}$ such that $j(x, y) >0$ and $j(x, y) = j(y, x)$. See \cite[Section 2]{CW} for details. \item[(2)] The argument above also works for $L^p$ $(p>1)$ setting. For instance, it can yield the statement as follows. 
If \eqref{th1.1} holds, then the following $L^p$-Poincar\'{e} inequality $$\mu_V(|f-\mu_V(f)|^p)\le 2c^{-1} \iint_{x\neq y} \frac{\big|f(x)-f(y)\big|^p}{|x-y|^{d+\alpha}}\,dy\,\mu_V(dx)=:2c^{-1}D_{\rho,V,p}(f,f)$$ holds, where $$ f\in \mathscr{D}(D_{\rho,V,p}):=\bigg\{f\in L^p(\mu_V): D_{\rho,V,p}(f,f)<\infty\bigg\},$$ and $L^p(\mu_V)$ denotes the set of Borel measurable functions $f$ on $\mathds R^d$ such that $\int |f|^p(x)\,\mu_V(dx)<\infty.$ The proof is based on the argument of Theorem \ref{th1} (1) and the fact that for any $f\in L^p(\mu_V)$, $$\mu_V(|f-\mu_V(f)|^p)\le \iint |f(x)-f(y)|^p\,\mu_V(dy)\,\mu_V(dx),$$ due to the H\"{o}lder inequality. The readers can refer to \cite{HZ} for related discussion about $L^p$-Poincar\'{e} inequalities of local Dirichlet forms. \end{itemize} Now, we are in a position to give the \begin{proof}[Proof of Theorem $\ref{th2}$] (a) For any $f\in \mathscr{D}(D_{\rho, V})$ with $f>0$, by the Jensen inequality, \begin{equation}\label{ent} \begin{split} \textrm{Ent}_{\mu_V}(f)&=\mu_V(f\log f)-\mu_V(f)\log \mu_V(f)\\ &\le \mu_V(f\log f)-\mu_V(f)\mu_V(\log f)\\ &= \frac{1}{2}\iint \bigg[f(x)\log f(x)+f(y)\log f(y)\\ &\qquad\qquad-f(x)\log f(y)- f(y)\log f(x)\bigg]\,\mu_V(dy)\,\mu_V(dx)\\ &= \frac{1}{2}\iint (f(x)-f(y))(\log f(x)-\log f(y))\,\mu_V(dy)\,\mu_V(dx).\end{split}\end{equation} Next, following the argument of Theorem 1.1 (1), we can obtain that under \eqref{th1.1}, for any $f\in \mathscr{D}(D_{\rho, V})$ with $f>0$, $$\aligned &\frac{1}{2}\iint (f(x)-f(y))(\log f(x)-\log f(y))\,\mu_V(dy)\,\mu_V(dx)\le c^{-1} D_{\rho,V}(f,\log f), \endaligned$$ which, along with \eqref{ent}, completes the proof of the inequality \eqref{th2.2}. (b) For any $p\in (1,2]$, $f\in \mathscr{D}(D_{\rho, V})$ with $f\ge0$, by the H\"{o}lder inequality, \begin{equation} \label{beckner} \begin{split}\mu_V(f^p)&-\mu_V(f)^p\\ \le&\mu_V(f^p)-\mu_V(f)\mu_V(f^{p-1})\\ =&\frac{1}{2}\iint \bigg[f^p(x)+f^p(y)-f(x)f^{p-1}(y)- f(y)f^{p-1}(x)\bigg]\,\mu_V(dy)\,\mu_V(dx)\\ =&\frac{1}{2}\iint (f(x)-f(y))(f^{p-1}(x)-f^{p-1}(y))\,\mu_V(dy)\,\mu_V(dx).\end{split}\end{equation} Therefore, the desired Beckner-type inequality \eqref{th2.3} follows from \eqref{beckner} and the following fact $$\aligned &\frac{1}{2}\iint (f(x)-f(y))(f^{p-1}(x)-f^{p-1}(y))\,\mu_V(dy)\,\mu_V(dx)\le c^{-1} D_{\rho,V}(f, f^{p-1}), \endaligned$$ where we have used \eqref{th1.1} again. \end{proof} To close this section, we present \begin{proof}[Sketch of the Proof of Example $\ref{ex1}$] In this setting, $e^{-V(x)}=C_{d,\varepsilon}(1+|x|)^{-(d+\varepsilon)}$ and $\rho(r)=r^{-d-\alpha}.$ By the $C_r$-inequality, for any $x$, $y\in\mathds R^d$ and $\varepsilon>0$, $$|x-y|^{d+\varepsilon}\le 2^{d+\varepsilon-1}(|x|^{d+\varepsilon}+|y|^{d+\varepsilon})\le 2^{d+\varepsilon-1}\big((1+|x|)^{d+\varepsilon}+(1+|y|)^{d+\varepsilon}\big).$$ (a) For any $\varepsilon\ge \alpha$, $$({e^{V(x)}+e^{V(y)}})\rho({|x-y|})\ge \frac{C_{d,\varepsilon}^{-1}\big((1+|x|)^{d+\varepsilon}+(1+|y|)^{d+\varepsilon}\big)}{2^{d+\alpha-1}\big((1+|x|)^{d+\alpha}+(1+|y|)^{d+\alpha}\big)}\ge \frac{2^{1-(d+\alpha)}}{C_{d,\varepsilon}}.$$ Combining it with Theorem \ref{th1} (1), we get the first assertion. (b) For any $\varepsilon<\alpha$, $$ \inf\limits_{0<|x-y|\le s}\big[(e^{V(x)}+e^{V(y)})\rho(|x-y|)\big]\ge C_{d,\varepsilon}^{-1}2^{1-(d+\varepsilon)}s^{\varepsilon-\alpha}.$$ Then, choosing $s= c r^{-1/\varepsilon}$ in the definition of $\alpha$, we arrive at the second assertion. 
(c) For any $\varepsilon>\alpha$, \eqref{th1.2} holds with $\omega(x)=c_1(1+|x|)^{\varepsilon-\alpha}$, and $$\beta_t(s)\le c_2(1+s^{-d/\alpha}t^{(d+\varepsilon)(2+d/\alpha)}).$$ Then, the third assertion follows from the definition of $\beta$ by taking $s=c_3r$ and $t=c_4r^{-1/(\varepsilon-\alpha)}$. \end{proof} \section{Applications: Porous media equations} Functional inequalities for non-local Dirichlet forms appear throughout the probability literature, and are also of interest in analysis, e.g.\ see the references in \cite{MRS,Gre}. This section is mainly motivated by \cite{DGGW, Wang}, where $L^p$ functional inequalities are used to describe the convergence rate of porous media equations. Let $(L_{\rho,V},\mathscr{D}(L_{\rho,V}))$ be the generator corresponding to the Dirichlet form $(D_{\rho,V}, \mathscr{D}(D_{\rho,V}))$. Consider the following equation \begin{equation}\label{appl1}\partial_tu(t, \cdot)=L_{\rho,V}\{u(t, \cdot)^m\},\quad u(0,\cdot)=f,\end{equation} where $m>1$, $f$ is a bounded measurable function on $\mathds R^d$ and $u^m:=\textrm{sgn}(u)|u|^m.$ We call $T_tf:=u(t,\cdot)$ a solution to the equation \eqref{appl1}, if $u(t,\cdot)^m\in \mathscr{D}(L_{\rho,V})$ for all $t>0$ and $u^m\in L^1_{\textrm{loc}}([0,\infty)\to \mathscr{D}(D_{\rho,V}); dt)$ such that, for any $g\in\mathscr{D}(D_{\rho,V}),$ $$\mu_V(u(t,\cdot)g)=\mu_V(fg)-\int_0^tD_{\rho,V}(u(s,\cdot)^m,g)\,ds,\quad t>0.$$ \begin{theorem} Assume that for any bounded measurable function $f\in\mathscr{D}(L_{\rho,V})$ the equation \eqref{appl1} has a unique solution $T_tf$. If \eqref{th1.1} holds, then for any such $f$ with $\mu_V(f)=0$, $$\mu_V((T_tf)^2)\le \bigg[\mu_V(f^2)^{-(m-1)/2}+{c^{-1}(m-1)t}\bigg]^{-2/(m-1)},\quad t\ge0.$$ \end{theorem} \begin{proof} The argument of Theorem \ref{th2} gives us that, under \eqref{th1.1}, for any $m>1$ and $f\in \mathscr{D}(L_{\rho,V})$ with $\mu_V(f)=0$, \begin{equation}\label{proof1th4} \mu_V(f^{m+1})\le c^{-1} D_{\rho,V}(f,f^{m}). \end{equation} Now, let $f$ be a function such that $\mu_V(f)=0$. Then, by the definition of the solution to the equation \eqref{appl1}, $\mu_V(T_tf)=0$ for all $t\ge0$. According to \eqref{proof1th4}, we obtain that $$\aligned \frac{d \mu_V((T_tf)^2)}{dt}&=2\mu_V\Big(T_tf \partial_t T_tf\Big)=2\mu_V(T_tf L_{\rho,V}\{(T_tf)^m\})\\ &=-2D_{\rho,V}(T_tf, (T_tf)^m)\le -2c^{-1}\mu_V((T_tf)^{m+1})\\ &\le -2c^{-1}\Big[ \mu_V((T_tf)^2) \Big]^{\frac{m+1}{2}}, \endaligned$$ where in the last inequality we have used the H\"{o}lder inequality. The required assertion follows from the inequality above: setting $\phi(t)=\mu_V((T_tf)^2)$, the differential inequality yields $\frac{d}{dt}\,\phi(t)^{-(m-1)/2}\ge c^{-1}(m-1)$, and integrating over $[0,t]$ gives the stated bound. \end{proof} \ \noindent{\bf Acknowledgements.} The author would like to thank two referees, Professor Feng-Yu Wang and Dr.\ Xin Chen, for helpful comments. Financial support through National Natural Science Foundation of China (No.\ 11201073) and the Program for New Century Excellent Talents in Universities of Fujian (No.\ JA11051 and JA12053) is also gratefully acknowledged.
\section{Introduction}\label{intro} A {\it strong Maltsev condition} is a positive primitive sentence in the language of clones. That is, it is a sentence expressing the existence of some clone elements satisfying some equalities. The name derives from the original example in \cite{maltsev}: A.~I.~Maltsev proved that the class of varieties whose members have permuting congruences is exactly the class of varieties whose clones satisfy the p.p.\ clone sentence \begin{equation} \label{sigma} \tag*{$\sigma:$} (\exists p)((p(x,x,y)\approx y)\;\&\; (p(x,y,y)\approx x)). \end{equation} A variety is said to satisfy a strong Maltsev condition if its clone does. In this article I will say that a class of varieties is {\it definable by a strong Maltsev condition} if it is exactly the class of all varieties that satisfy the strong Maltsev condition. A {\it Maltsev condition} (without the word \emph{strong}) is a sequence $\Sigma = (\sigma_n)_{n\in\omega}$ of successively weaker strong Maltsev conditions ($\sigma_n\vdash \sigma_{n+1}$ for all $n$). A variety $\mathcal V$ satisfies $\Sigma$ if its clone satisfies $\sigma_n$ for some $n$. A class of varieties is {\it definable by a Maltsev condition}, or is {\it Maltsev definable}, if it is exactly the class of all varieties that satisfy some Maltsev condition $\Sigma$. A class of varieties definable by a strong Maltsev condition $\sigma$ is also definable by the ordinary Maltsev condition $\Sigma = (\sigma, \sigma, \ldots)$ that is a constant sequence. In this article, I investigate classes of varieties that are not Maltsev definable, but which become Maltsev definable relative to some weak `ground' Maltsev condition. That is, suppose that $\mathscr{P}$ is a property of varieties. Let $\Gamma$ be a Maltsev condition. I will investigate some instances where the class of varieties satisfying $\mathscr{P}$ is not Maltsev definable, but the class of varieties satisfying both $\mathscr{P}$ and $\Gamma$ is Maltsev definable. In symbols, I might write ${\mathscr P}+\Gamma = \Sigma$ to mean that, restricted to varieties satisfying the ground condition $\Gamma$, the class of varieties satisfying the condition ${\mathscr P}$ is definable by the Maltsev condition $\Sigma$. I then say that the class of varieties satisfying $\mathscr{P}$ is {\it Maltsev definable relative to $\Gamma$}. In this article, the `ground' Maltsev condition will always be `the existence of a Taylor term'. It is known, through Corollaries~5.2 and 5.3 of \cite{taylor}, that an idempotent variety has a Taylor term if and only if it contains no algebra with at least $2$ elements in which every operation interprets as a projection operation. It is easy to see that this means exactly that `the existence of a Taylor term' is the weakest nontrivial idempotent Maltsev condition. It is known that `the existence of a Taylor term' is expressible as a strong Maltsev condition, see \cite{olsak}. We investigate relative Maltsev definability for the ten commutator properties $\mathscr{P}$ from the following list. To understand these statements completely, it is necessary to know the definitions of $\C C(x,y;z)$ (Definition~\ref{centralizer_def}), of $[x,y]$ (Definition~\ref{commutator_def}), and of (relative) right or left annihilators (Definition~\ref{annihilator_def}). 
For intuition about these statements, it may help to remember that for the variety of groups ``the commutator operation'' coincides with the usual commutator operation of group theory ($[M,N]=[M,N]_{\textrm{group}}$) while for the variety of commutative rings ``the commutator operation'' coincides with ideal product ($[I,J]=I\cdot J$). ``The centralizer relation'', $\C C(x,y;z)$, coincides with the relation $[x,y]\leq z$ in both cases. For commutative rings, ``the relative (right or left) annihilator of $J$ modulo $I$'' is the colon ideal $(I:J)=\{r\in R\;|\;rJ\subseteq I\}$, while ``the annihilator of $J$'' is special case $(0:J)$. \begin{itemize} \setlength{\itemindent}{-10pt} \item $[x,y]=[y,x]$ (Commutativity of the commutator.) \item $[x+y,z]=[x,z]+[y,z]$ (Left distributivity of the commutator.) \item $[x,y+z]=[x,y]+[x,z]$ (Right distributivity of the commutator.) \item $[x,y]=[x,y']\Longrightarrow [x,y]=[x,y+y']$ (Right semidistributivity of the commutator.) \item Given $x$, there exists a largest $y$ such that $[x,y]=0$ (Right annihilators exist.) \item Given $x, z$, there exists a largest $y$ such that $\C C(x,y;z)$ (Relative right annihilators exist.) We write $(z:x)_R$ for the relative right annihilator of $x$ modulo $z$, when it exists. \item $\C C(x,y;z)\Longleftrightarrow \C C(y,x;z)$ (Symmetry of the centralizer relation in its first two places.) \item $\C C(x,y;z)\Longleftrightarrow [x,y]\leq z$ (The centralizer relation is determined by the commutator.) \item $\C C(x,y;z)\;\&\; (z\leq z')\Longrightarrow \C C(x,y;z')$ (Stability of the centralizer relation under lifting in its third place.) \item $\C C(x,y;z)\;\&\; (z\leq z'\leq x\cap y)\Longrightarrow \C C(x,y;z')$ (Weak stability of the centralizer relation under lifting in its third place.) \end{itemize} The main results of this article may be summarized as follows. First, I explain why no one of the ten commutator properties listed above is Maltsev definable [Section~\ref{examples}]. Then I explain why the following are equivalent for varieties $\mathcal V$ with a Taylor term: \begin{itemize} \setlength{\itemindent}{-10pt} \item $\mathcal V$ is congruence modular. \item The commutator is left distributive throughout $\mathcal V$. \item The commutator is right distributive throughout $\mathcal V$. \item The centralizer relation is symmetric in its first two places throughout $\mathcal V$. \item Relative right annihilators exist throughout $\mathcal V$. \item The centralizer relation is determined by the commutator. \item The centralizer relation is stable under lifting in its third place. \end{itemize} See Theorems~\ref{main2} and \ref{main3}. Also, for varieties $\mathcal V$ with a Taylor term, the following are equivalent: \begin{itemize} \setlength{\itemindent}{-10pt} \item $\mathcal V$ has a difference term. \item The commutator is commutative throughout $\mathcal V$. \item Right annihilators exist throughout $\mathcal V$. \item The commutator is right semidistributive throughout $\mathcal V$. \item The centralizer relation is weakly stable under lifting in its third place. \end{itemize} See Theorems~\ref{main1}, \ref{main1.5}, \ref{main4}. A specific Maltsev condition defining the class of congruence modular varieties may be found in \cite[Section~2]{day}. A specific Maltsev condition defining the class of varieties with a difference term may be found in \cite[Section~4]{kissterm}. 
Thus, Theorems~\ref{main1}, \ref{main2}, \ref{main1.5}, \ref{main3}, and \ref{main4} establish the Maltsev definability of all ten commutator properties relative to the existence of a Taylor term. The proofs of relative Maltsev definability for the ten commutator properties identified will be called the ``primary'' results of this article, and will be identified as such when we prove them. All other results are considered ``secondary'', although some secondary results are as interesting as the primary results. For example, some nontrivial commutator-theoretic facts are proved in Section~\ref{facts} whose proofs do not require the existence of a Taylor term. In addition to this, a commutator-theoretic characterization of the class of varieties that have a weak difference term is established in Theorem~\ref{characterization_of_weak}. For background, I direct the reader to Section~2.4 of \cite{shape} for a discussion of Maltsev conditions and Section~2.5 of \cite{shape} for a discussion of the properties of the centralizer relation $\C C(x,y;z)$. The most important elements from this source will be reproduced below when needed. In particular, it will be necessary to know the definitions of $\C C(x,y;z)$ (Definition~\ref{centralizer_def}), of $[x,y]$ (Definition~\ref{commutator_def}), of (relative) right or left annihilators (Definition~\ref{annihilator_def}), of a difference term (Definition~\ref{left_right}), and of a Taylor term (see the opening paragraph of Section~\ref{main}). \bigskip \section{The ten commutator properties are not Maltsev definable}\label{examples} The variety $\mathcal V$ of sets has the properties that $\C C(\alpha,\beta;\delta)$ and $[\alpha,\beta]=0$ hold for any $\alpha,\beta,\delta\in \Con(\m a)$, $\m a\in {\mathcal V}$. This implies that each of the following are true in the variety of sets: \begin{itemize} \item $[x,y]=[y,x]$ \item $[x+y,z]=[x,z]+[y,z]$ \item $[x,y+z]=[x,y]+[x,z]$ \item $[x,y]=[x,y']\Longrightarrow [x,y]=[x,y+y']$ \item Given $x$, there exists a largest $y$ such that $[x,y]=0$ \item Given $x, z$, there exists a largest $y$ such that $\C C(x,y;z)$ \item $\C C(x,y;z)\Longleftrightarrow \C C(y,x;z)$ \item $\C C(x,y;z)\Longleftrightarrow [x,y]\leq z$ \item $\C C(x,y;z)\;\&\; (z\leq z')\Longrightarrow \C C(x,y;z')$ \item $\C C(x,y;z)\;\&\; (z\leq z'\leq x\cap y)\Longrightarrow \C C(x,y;z')$ \end{itemize} \noindent If one of these properties $\mathscr P$ were Maltsev definable, then, since the variety of sets is interpretable in any variety, every variety would satisfy $\mathscr P$. To prove that no one of these properties is Maltsev definable it suffices to exhibit varieties where the properties fail. All the properties fail in the variety of semigroups $\mathcal V = {\sf H}{\sf S}{\sf P}(\mathbb Z_2 \times\mathbb S_2)$ where $\mathbb Z_2$ is the $2$-element group considered as a semigroup and $\mathbb S_2$ is the $2$-element semilattice. This is a variety of commutative semigroups satisfying $x^3\approx x$. In this variety, the term $T(x,y,z) = xyz$ is a Taylor term for $\mathcal V$ (see \cite[Definition~2.15]{shape} or the opening paragraph of Section~\ref{main} below). One can conclude this by noting that $T$ is idempotent in $\mathcal V$ (since $T(x,x,x)\approx x^3\approx x$) and satisfies $i$-th place Taylor identities in $\mathcal V$ for every $i$ (since $T(x,y,z)\approx xyz\approx zxy\approx T(z,x,y)$). 
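For completeness, the identities invoked above can be verified coordinatewise in $\mathbb Z_2\times \mathbb S_2$: $$\mathbb Z_2\models x^2\approx 1\;\;(\textrm{hence } x^3\approx x),\qquad \mathbb S_2\models x^2\approx x\;\;(\textrm{hence } x^3\approx x),$$ and both factors are commutative. Since identities are preserved by the operators ${\sf H}$, ${\sf S}$, and ${\sf P}$, every member of $\mathcal V$ satisfies $x^3\approx x$ and $xy\approx yx$.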
From the main results of this article, the fact that $\mathcal V$ has a Taylor term implies that, if $\mathcal V$ had one of the commutator properties listed above, then $\mathcal V$ would have a difference term. Then, from Theorem~\ref{diff_char} below, any pentagon in a congruence lattice of a member of $\mathcal V$ would have a `neutral' critical interval. This is not the case, since $\Con(\mathbb Z_2\times \mathbb S_2)$ is a pentagon and its critical interval is abelian. The lattice $\Con(\mathbb Z_2\times \mathbb S_2)$ is indicated in Figure~\ref{fig1} with some congruences identified using the notation ``partition : congruence'' or ``congruence : partition''. \begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,33) \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-52.5,13){$(0,0), (0,1)\mid (1,0), (1,1) :\beta$} \put(22,9){$\delta : (0,0), (1,0)\mid (0,1) \mid (1,1)$} \put(22,19){$\theta: (0,0), (1,0)\mid (0,1), (1,1)$} \put(9,32){$1$} \put(9,-5){$0$} \end{picture} \medskip \caption{\sc $\Con(\mathbb Z_2\times \mathbb S_2)\cong \m n_5$.}\label{fig1} \end{center} \end{figure} \noindent By hand computations, or by UACalc \cite{uacalc}, it can be shown that $\C C(\theta,\theta;\delta)$. This means that the critical interval of this copy of $\m n_5$ is abelian, hence is \emph{not} neutral. \section{Commutator theoretic results true for every variety}\label{facts} In this section we prove some new facts about the commutator which we will need later in the paper. They have been extracted from their rightful places in the next section and recorded here solely because the proofs require no ground Maltsev condition among their hypotheses. Our notation follows that of \cite{freese-mckenzie}, and we direct the reader to that source for fuller explanations. For example, $\Con(\m a)$ is the congruence lattice of $\m a$. The meet (= intersection) and join of congruences $\alpha, \beta\in\Con(\m a)$ will be denoted $\alpha\cap \beta$ and $\alpha+\beta$. We might write $u\stackrel{\alpha}{\equiv}v$ as an alternative to $(u,v)\in\alpha$. When $\delta\leq \theta$, $I[\delta,\theta]$ denotes the interval in $\Con(\m a)$ consisting of all congruences between $\delta$ and $\theta$, namely $I[\delta,\theta]=\{x\in\Con(\m a)\;|\;\delta\leq x\leq \theta\}$. A five-element sublattice of $\Con(\m a)$ is called a pentagon if it is isomorphic to the lattice depicted in Figure~\ref{fig1}. The critical interval of a pentagon is the interval that corresponds to $I[\delta,\theta]$ in Figure~\ref{fig1}. If $\alpha\in\Con(\m a)$, then $\m a(\alpha)$ denotes the subalgebra of $\m a\times \m a$ whose universe is $\alpha$ (reference for notation: page 37 of \cite{freese-mckenzie}). We use product notation for congruences of $\m a(\alpha)$, so for $\beta,\gamma\in\Con(\m a)$ we let $\beta_1 = \{((x,y),(z,w))\in A(\alpha)^2\;|\;(x,z)\in\beta\}$ and $\gamma_2 = \{((x,y),(z,w))\in A(\alpha)^2\;|\;(y,w)\in\gamma\}$ (reference: page 85 of \cite{freese-mckenzie}).\footnote{Observe that the definitions of the relations $\beta_1$ and $\gamma_2$ depend on the choice of $\alpha$.} Following \cite{freese-mckenzie}, we deviate from this convention by using $\eta_1$ and $\eta_2$ in place of $0_1$ and $0_2$. E.g., $\eta_1 = \{((x,y),(z,w))\in A(\alpha)^2\;|\;x=z\}$. 
We typically write $\beta_1\times \gamma_2$ for $\beta_1\cap \gamma_2$. Given $\beta\in\Con(\m a)$, we let $\Delta_{\alpha,\beta}$ be the congruence on $\m a(\alpha)$ generated by the $\beta$-diagonal relation $ \{((x,x),(z,z))\in A(\alpha)^2\;|\;(x,z)\in\beta\} $ (reference: page 37 of \cite[Definition~4.7]{freese-mckenzie}). A fact we use when necessary is that \begin{equation} \label{delta_gen} \Delta_{\alpha,\beta}\leq \beta_1\times \beta_2 \end{equation} always holds, since the generators of the congruence $\Delta_{\alpha,\beta}$ lie in the congruence $\beta_1\times \beta_2$. Next we define $S,T$-matrices and the centralizer relation. The definitions are made for \emph{tolerances} of an algebra $\m a$. A tolerance on $\m a$ is a reflexive, symmetric, compatible binary relation. (A \emph{congruence} is a transitive tolerance.) \begin{df}\label{matrices_def} If $S$ and $T$ are tolerances on an algebra $\m a$, then an {\bf $S,T$-matrix} is a $2\times 2$ matrix of elements of $\m a$ of the form \[ \left[\begin{array}{cc} p&q\\ r&s \end{array}\right]= \left[\begin{array}{cc} t(\wec{a},\wec{u})&t(\wec{a},\wec{v})\\ t(\wec{b},\wec{u})&t(\wec{b},\wec{v}) \end{array}\right]\] where $t(\wec{x},\wec{y})$ is an $(m+n)$-ary term operation of $\m a$, $\wec{a} \wrel{S} \wec{b}$, and $\wec{u}\wrel{T}\wec{v}$. The set of all $S,T$-matrices of $\m a$ is denoted $M(S,T)$. \end{df} The symmetry of tolerances guarantees that the set $M(S,T)$ is invariant under the operations of interchanging rows or columns. \begin{df}\label{centralizer_def} Let $S$ and $T$ be tolerances of an algebra $\m a$ and let $\delta$ be a congruence on $\m a$. If $p\stackrel{\delta}{\equiv} q$ implies that $r\stackrel{\delta}{\equiv} s$ whenever $$ \left[\begin{array}{cc} p&q\\ r&s \end{array}\right]\in M(S,T), $$ then we say that {\bf $\C C(S,T;\delta)$ holds}, or {\bf $S$ centralizes $T$ modulo $\delta$}. \end{df} Many of the basic properties of the centralizer relation are proved in Theorem~2.19 of \cite{shape}. I copy the statement of that theorem here because its many items will be referenced repeatedly throughout this article. \begin{thm}\label{basic_centrality} Let $\m a$ be an algebra with tolerances $S, S', T, T'$ and congruences $\alpha, \alpha_i,\beta,\delta,\delta',\delta_j$. The following are true. \begin{enumerate} \item[(1)] {\rm (Monotonicity in the first two variables)} If $\C C(S,T;\delta)$ holds and $S'\subseteq S$, $T'\subseteq T$, then $\C C(S',T';\delta)$ holds. \item [(2)] $\C C(S,T;\delta)$ holds if and only if $\C C(\Cg a(S),T;\delta)$ holds. \item [(3)] $\C C(S, T; \delta)$ holds if and only if $\C C(S, \delta\circ T\circ \delta; \delta)$ holds. \item [(4)] If $T\cap \delta = T\cap\delta'$, then $\C C(S,T;\delta)$ holds if and only if $\C C(S,T;\delta')$ holds. \item [(5)] {\rm (Semidistributivity in the first variable)} If $\C C(\alpha_i,T;\delta)$ holds for all $i\in I$, then $\C C(\bigvee_{i\in I}\alpha_i,T;\delta)$ holds. \item[(6)] If $\C C(S,T;\delta_j)$ holds for all $j\in J$, then $\C C(S,T;\bigwedge_{j\in J}\delta_j)$ holds. \item [(7)] If $T\cap(S\circ (T\cap\delta)\circ S) \subseteq\delta$, then $\C C(S,T;\delta)$ holds. \item [(8)] If $\beta\cap(\alpha+(\beta\cap\delta))\leq\delta$, then $\C C(\alpha,\beta;\delta)$ holds. \item [(9)] Let $\m b$ be a subalgebra of $\m a$. If $\C C(S, T; \delta)$ holds in $\m a$, then $\C C(S|_{\m b}, T|_{\m b}; \delta|_{\m b})$ holds in $\m b$. 
\item [(10)] If $\delta'\leq \delta$, then the relation $\C C(S, T; \delta)$ holds in $\m a$ if and only if $\C C(S/\delta', T/\delta'; \delta/\delta')$ holds in $\m a/\delta'$. \end{enumerate} \end{thm} The commutator operation is defined in terms of the centralizer relation. \begin{df}\label{commutator_def} Let $S,T$ be tolerances on an algebra $\m a$. The commutator $[S,T]$ equals the least congruence $\delta$ on $\m a$ for which $\C C(S,T;\delta)$ holds. \end{df} According to Definition~\ref{commutator_def}, $[S,T]=0$ holds if and only if $\C C(S,T;0)$ holds. By Theorem~\ref{basic_centrality}~(5), if $T$ is a tolerance on some algebra, then the join $\alpha$ of all congruences $\alpha_i$ satisfying $\C C(\alpha_i,T;0)$ satisfies $\C C(\alpha,T;0)$. Using Theorem~\ref{basic_centrality}~(2) we see that this join $\alpha$ is a congruence and it is the largest congruence $x$ such that $\C C(x,T;0)$, or equivalently the largest $x$ such that $[x,T]=0$. We denote this largest $x$ by $(0:T)$ and call it the annihilator of $T$. If we want to emphasize that the annihilator $x=(0:T)$ appears in the left variable of the commutator in the equation $[x,T]=0$, we will add a subscript $L$ to write $x=(0:T)_L$ and say that $(0:T)_L$ is the left annihilator of $T$. For the same reasons, given a tolerance $T$ and a congruence $\delta\in\Con(\m a)$, there exists a largest tolerance $\alpha$ such that $\C C(\alpha,T;\delta)$, which we denote $(\delta:T)$ or $(\delta:T)_L$. We call $(\delta:T)_L$ the \underline{relative} left annihilator of $T$ \underline{modulo $\delta$}. We record the notation we have just introduced in Definition~\ref{annihilator_def}. Although the definitions from \cite{shape} of the centralizer relation and the commutator operation involve tolerance relations (reflexive, symmetric, compatible binary relations) rather than congruence relations (transitive tolerances), in this paper we henceforth concentrate on the centralizer, commutator, and annihilators of congruences only. \begin{df}\label{annihilator_def} Let $\m a$ be an algebra and let $\delta, \beta\in\Con(\m a)$ be congruences on $\m a$. \begin{enumerate} \item The largest congruence $\alpha\in\Con(\m a)$ satisfying $\C C(\alpha,\beta;0)$ is called the \emph{left annihilator of $\beta$} and it is denoted $(0:\beta)_L$. If there is a largest congruence $\alpha\in\Con(\m a)$ satisfying $\C C(\beta,\alpha;0)$, it is called the \emph{right annihilator of $\beta$} and it is denoted $(0:\beta)_R$. \item The \emph{relative left annihilator of $\beta$ modulo $\delta$}, denoted $(\delta:\beta)_L$, is the largest congruence $\alpha\in\Con(\m a)$ satisfying $\C C(\alpha,\beta;\delta)$. If there is a largest congruence $\alpha\in\Con(\m a)$ satisfying $\C C(\beta,\alpha;\delta)$, it is called the \emph{relative right annihilator of $\beta$ modulo $\delta$}, and it is denoted $(\delta:\beta)_R$. \end{enumerate} \end{df} \bigskip The first new result in this section shows that if a variety contains an algebra whose congruence lattice contains a certain kind of pentagon with an abelian critical interval, then the variety contains an algebra with a pentagon satisfying other (usually stronger) abelianness conditions.
\begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,33) \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-4.5,13){$\beta$} \put(22,9){$\delta$} \put(22,19){$\theta$} \put(9,32){$\alpha$} \put(9,-5){$0$} \end{picture} \bigskip \caption{\sc $\textrm{Con}(\m A)$ or $\textrm{Con}(\m B)$.}\label{fig2} \end{center} \end{figure} \begin{thm} \label{better_pentagons} {\rm (Better pentagons)} Let $\mathcal V$ be an arbitrary variety and assume that $\mathcal V$ contains an algebra $\m a$ with congruences $\beta, \theta,\delta$ generating a pentagon, as shown in Figure~\ref{fig2}. Assume that $\C C(\theta,\theta;\delta)$ holds and $\C C(\beta,\theta;\delta)$ fails.\footnote{The assumption that ``$\C C(\beta,\theta;\delta)$ fails'' will always hold if $\mathcal V$ has a Taylor term -- see Theorem~\ref{memoir_pentagon}.} There exists an algebra $\m b\in \mathcal V$ with congruences ordered as in Figure~\ref{fig2} and satisfying $\C C(\alpha,\alpha;\beta)$ and $\C C(\theta,\theta;0)$. \end{thm} \begin{proof} Let $\m a$ have congruences $\beta, \theta, \delta$ with the properties described. We will find a pentagon of the desired type in the congruence lattice of the algebra $\m a(\beta)$. The desired pentagon will be the one depicted in Figure~\ref{fig3}. \begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,38) \put(-20,0){% \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-4.5,13){$\delta_1$} \put(22,9){$(\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta})=:\Psi$} \put(22,19){$\Delta_{\beta,\gamma}+(\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta})=:\Omega$} \put(5,32){$\gamma_1:=\delta_1+(\delta_2\cap\Delta_{\beta,\theta})$} \put(6,-5){$\delta_1\cap\Delta_{\beta,\theta}$} } \end{picture} \bigskip \caption{\sc A sublattice of $\textrm{Con}(\m A(\beta))$.}\label{fig3} \end{center} \end{figure} The congruence $\gamma$ that appears in this figure will be defined in the course of the proof. To establish that the congruences indicated form a pentagon with the required properties we will use the fact that $\C C(\beta,\theta;\delta)$ fails. According to Definition~\ref{centralizer_def}, this assumption yields a $\beta,\theta$-matrix \[ \begin{bmatrix}p&q\\r&s \end{bmatrix}= \begin{bmatrix} t(\wec{a},\wec{c})&t(\wec{a},\wec{d})\\ t(\wec{b},\wec{c})&t(\wec{b},\wec{d}) \end{bmatrix} \] where $(a_i,b_i)\in\beta$, $(c_j,d_j)\in\theta$, $(p,q)\in\delta$, and $(r,s)\in\theta-\delta$. This implies that $((r,p), (s,q))\in \Delta_{\beta,\theta}\cap \delta_2$, $((r,p), (s,q))\in \Delta_{\beta,\theta}\cap \theta_1$, but $((r,p), (s,q))\notin \delta_1$. The comparabilities indicated in the chain on the left side in Figure~\ref{fig3}, namely \begin{equation} \label{chain1} \delta_1\cap\Delta_{\beta,\theta}\;\leq\; \delta_1\;\leq\; \delta_1+(\delta_2\cap\Delta_{\beta,\theta}), \end{equation} are obvious: the middle congruence $\delta_1$ is a meetand on the left of \eqref{chain1} and a joinand on the right of \eqref{chain1}. 
To show that the middlemost and the rightmost elements of chain \eqref{chain1} are distinct, observe that $((r,p), (s,q))$ lies in the rightmost congruence of \eqref{chain1} but not in the middlemost. To show that the middlemost and leftmost elements of chain \eqref{chain1} are distinct, choose $(u,v)\in \beta-\theta$. This is possible since $\beta\not\leq\theta$ (see Figure~\ref{fig2}). Both pairs $(u,u), (u,v)$ belong to $\m a(\beta)$ and $((u,u), (u,v))\in\delta_1$. We have $\Delta_{\beta,\theta}\leq \theta_1\times \theta_2$ by \eqref{delta_gen}, so $\delta_1\cap \Delta_{\beta,\theta}\leq \delta_1\times \theta_2$. However the pair $((u,u), (u,v))$ does not belong to $\delta_1\times \theta_2$, since $(u,v)\notin\theta$. This shows that $((u,u), (u,v))$ is contained in the middlemost element of the chain in \eqref{chain1}, but not the leftmost. Let us use ``$\gamma_1$'' to denote the top congruence $\delta_1+\delta_2\cap\Delta_{\beta,\theta}$ in Figure~\ref{fig3}. Here, note that since $\delta_1+\delta_2\cap\Delta_{\beta,\theta}$ is strictly above $\delta_1$, it is indeed of the form $\gamma_1$ for some $\gamma\in\Con(\m a)$ satisfying $\gamma > \delta$. Moreover, since $\gamma_1=\delta_1+(\delta_2\cap\Delta_{\beta,\theta}) \stackrel{\eqref{delta_gen}}{\leq} \delta_1+(\delta_2\cap (\theta_1\times \theta_2))= \delta_1+(\theta_1\times \delta_2)=\theta_1$ we have that \begin{equation} \label{gamma_location} \delta < \gamma \leq \theta \end{equation} in $\Con(\m a)$. Note also that \begin{equation} \label{gamma_1_location} \Delta_{\beta,\gamma}\leq \gamma_1\times\gamma_2\leq \gamma_1 \end{equation} in $\Con(\m a(\beta))$. We have a chain on the right side of Figure~\ref{fig3}, namely $$ \begin{array}{rll} \delta_1\cap\Delta_{\beta,\theta}&\leq (\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta})& (\delta_2\cap\Delta_{\beta,\theta})\\ &\leq \Delta_{\beta,\gamma} +(\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta})\hphantom{AAA}&\Delta_{\beta,\gamma}\\ &\leq \delta_1+\delta_2\cap\Delta_{\beta,\theta}=\gamma_1& \delta_1. \end{array} $$ One sees that, on each of these lines, the congruence immediately to the right of the ``$\leq$'' is obtained from the preceding congruence in the chain by joining the additional congruence indicated on the same line at the far right. The claim just made involves three assertions, the first two of which are formally true. For the third claim, which is the claim that we obtain $\gamma_1$ when we join $\delta_1$ to $\Omega=\Delta_{\beta,\gamma} +(\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta})$, we use (\ref{gamma_1_location}) in the last step of the following computation. $$ \begin{array}{rl} \delta_1+\Omega&= \delta_1+(\Delta_{\beta,\gamma} +(\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta}))\\ &= \Delta_{\beta,\gamma} +(\delta_1+(\delta_1\cap \Delta_{\beta,\theta}))+(\delta_2\cap\Delta_{\beta,\theta}))\\ &= \Delta_{\beta,\gamma} +(\delta_1+\delta_2\cap\Delta_{\beta,\theta})\\ &= \Delta_{\beta,\gamma} +\gamma_1\\ &=\gamma_1.\\ \end{array} $$ Next, we argue that the comparability \begin{equation} \label{chain2} \Psi:=(\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta}) \leq \Delta_{\beta,\gamma}+(\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta}) =:\Omega, \end{equation} is strict. \begin{clm}\label{strict} $\Psi<\Omega$. 
\end{clm} \noindent {\it Proof of Claim~\ref{strict}.} If $(u,v)\in\gamma-\delta$, then the pair $((u,u),(v,v))\in\Delta_{\beta,\gamma}$ will belong to $\Omega$, since $\Delta_{\beta,\gamma}$ is a summand of $\Omega$. We shall argue that $((u,u),(v,v))\notin\Psi$. Our path will be to show that the $\Psi$-class of $(u,u)$ is contained in the $\delta_1\times \delta_2$-class of $(u,u)$. This will suffice, since we have $(u,v)\notin\delta$ and therefore $((u,u),(v,v))\notin\delta_1\times \delta_2$, so we will be able to derive that $((u,u),(v,v))\notin\Psi$. Since Figure~\ref{fig2} is a pentagon in which $\beta\cap (\theta+(\beta\cap\delta))\leq \delta$ holds, Theorem~\ref{basic_centrality}~(8) guarantees that the relation $\C C(\theta,\beta;\delta)$ holds. This property can be restated in this way: if $D$ is the set of pairs $(x,y)\in \m a(\beta)$ that satisfy $(x,y)\in\delta$, then the subset $D \subseteq \m a(\beta)$ is a union of $\Delta_{\beta,\theta}$-classes. In other words, $(e,f)\in \delta$ and $(e,f)\stackrel{\Delta_{\beta,\theta}}{\equiv} (g,h)$ jointly imply $(g,h)\in \delta$. We can apply this information to the intersection congruence $\delta_1\cap \Delta_{\beta,\theta}$ to derive that if \begin{itemize} \item $(e,f)\stackrel{\Delta_{\beta,\theta}}{\equiv} (g,h)$, \item $(e,f)\in\delta$, and we also have \item $(e,g)\in\delta$, \end{itemize} then $(g,h), (f,h)\in\delta$. (Here $(f,h)\in\delta$ follows from $f\stackrel{\delta}{\equiv}e\stackrel{\delta}{\equiv}g\stackrel{\delta}{\equiv}h$.) In conclusion, if $(e,f)\in\delta$ and $((e,f),(g,h)) \in\delta_1\cap \Delta_{\beta,\theta}$, then necessarily $(g,h)\in\delta$ and $((e,f),(g,h)) \in\delta_2\cap \Delta_{\beta,\theta}$. This establishes that, when $(e,f)\in\delta$, the $\delta_1\cap \Delta_{\beta,\theta}$-class of $(e,f)$ agrees with the $\delta_2\cap \Delta_{\beta,\theta}$-class of $(e,f)$, and hence agrees with the $\Psi$-class of $(e,f)$ ($\Psi$ is the join of $\delta_1\cap \Delta_{\beta,\theta}$ and $\delta_2\cap \Delta_{\beta,\theta}$). Therefore, when $(e,f)\in\delta$, the $\Psi$-class of $(e,f)$ is contained in the $\delta_1\times\delta_2$-class of $(e,f)$. By applying this reasoning to $(e,f)=(u,u)$ we see that $((u,u),(v,v))\notin\Psi$, since the $\delta_1\times\delta_2$-class of $(u,u)$ does not contain $(v,v)$. \hfill\rule{1.3mm}{3mm} \bigskip What remains to do to establish that our congruences form a pentagon is to show that (i) $\delta_1+\Psi = \gamma_1$ and (ii) $\delta_1\cap \Omega=\delta_1\cap\Delta_{\beta,\theta}$. For the first of these we calculate that \[ \begin{array}{rl} \delta_1+\Psi &= (\delta_1+(\delta_1\cap \Delta_{\beta,\theta}))+(\delta_2\cap\Delta_{\beta,\theta})\\ &= \delta_1+(\delta_2\cap\Delta_{\beta,\theta})\\ &=\gamma_1. \end{array} \] For~(ii), we use the fact $\gamma\leq \theta$ from \eqref{gamma_location} to derive that $\Delta_{\beta,\gamma}\leq \Delta_{\beta,\theta}$. Therefore all summands in \[ \Omega = \Delta_{\beta,\gamma}+(\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta}) \] lie below $\Delta_{\beta,\theta}$. Since one of the summands is $\delta_1\cap \Delta_{\beta,\theta}$, we obtain \[ \delta_1\cap \Delta_{\beta,\theta}\leq \Omega\leq \Delta_{\beta,\theta}. \] If we meet this chain throughout with $\delta_1$ we obtain \[ \delta_1\cap \Delta_{\beta,\theta}\leq \delta_1\cap \Omega\leq \delta_1\cap \Delta_{\beta,\theta}, \] or $\delta_1\cap \Omega= \delta_1\cap \Delta_{\beta,\theta}$, which is what (ii) asserts. 
We have established the pentagon shape, so what is left is to establish that the asserted centralities hold. In \eqref{gamma_location} above we showed that $\delta < \gamma\leq \theta$ in $\Con(\m a)$. Since $\C C(\theta,\theta;\delta)$ holds in $\m a$, we get $\C C(\theta_1,\theta_1;\delta_1)$ in $\m a(\beta)$ by Theorem~\ref{basic_centrality}~(10) and the Correspondence Theorem. We then get $\C C(\gamma_1,\gamma_1;\delta_1)$ by monotonicity (Theorem~\ref{basic_centrality}~(1)). This shows that the interval $I[\delta_1,\gamma_1]$ between $\delta_1$ and the top of the pentagon, $\gamma_1$, is abelian. Using this, we can derive that the interval $I[\delta_1\cap\Delta_{\beta,\theta},\Omega]$ between $\Omega = \Delta_{\beta,\gamma}+(\delta_1\cap \Delta_{\beta,\theta})+(\delta_2\cap\Delta_{\beta,\theta})$ and the bottom of the pentagon is also abelian, as follows: from $\C C(\gamma_1,\gamma_1;\delta_1)$ and $\Omega\leq \gamma_1$, monotonicity gives $\C C(\Omega,\Omega;\delta_1)$. We always have $\C C(\Omega,\Omega;\Omega)$ according to Theorem~\ref{basic_centrality}~(8). Then, by Theorem~\ref{basic_centrality}~(6), we get $\C C(\Omega,\Omega;\delta_1\cap\Omega)$, which is the claim that the interval between $\Omega$ and the bottom of the pentagon is abelian. This completes the proof of the theorem up to relabeling the congruences in Figure~\ref{fig3}. (In particular, since the bottom element of Figure~\ref{fig2} is labeled $0$, we should factor our algebra and take $\m b = \m a(\beta)/(\delta_1\cap \Delta_{\beta,\theta})$.) \end{proof} \begin{lm} \label{asymmetry} Assume that $\m a$ is an algebra whose commutator operation is not commutative. Some quotient $\m b$ of $\m a$ will have congruences $\alpha,\beta\in\Con(\m b)$ such that $[\beta,\alpha]=0<[\alpha,\beta]$. \end{lm} \begin{proof} For this proof (and later proofs) we will adopt ``relative commutator'' notation first introduced above \cite[Theorem~4.22]{order-theoretic}. This notation is useful for discussing the relationship between the commutator operation in $\m a$ and the commutator operations on quotients of $\m a$. Define \[ [\alpha,\beta]_{\varepsilon}:= \bigcap \{\gamma\;|\; (\gamma\geq \varepsilon)\;\textrm{and}\;\C C(\alpha,\beta;\gamma)\}. \] It is easy to see from Theorem~\ref{basic_centrality}~(10) that this notation has the property that if $\varepsilon\leq \alpha, \beta$, then $[\alpha/\varepsilon,\beta/\varepsilon]= [\alpha,\beta]_{\varepsilon}/\varepsilon$, so the ordinary (= unsubscripted) commutator operation $[-,-]$ on $\Con(\m a/\varepsilon)$ is reflected by the operation $[-,-]_{\varepsilon}$ on the interval $I[\varepsilon,1]$ of $\Con(\m a)$. If $\m a\in \mathcal V$ has a noncommutative commutator, then it has congruences $\gamma,\delta\in \Con(\m a)$ such that $[\gamma,\delta]\not\leq [\delta,\gamma]$. Set $\varepsilon=[\delta,\gamma]$. This is a congruence which lies below both $\gamma$ and $\delta$. The fact that $[\gamma,\delta]\not\leq [\delta,\gamma]=\varepsilon$ implies that $\C C(\gamma,\delta;\varepsilon)$ fails, hence $[\gamma,\delta]_{\varepsilon}\neq \varepsilon=[\delta,\gamma]$. But $[\gamma,\delta]_{\varepsilon}\geq \varepsilon$ from the definition of the relative commutator notation. Hence we have $[\delta,\gamma]_{\varepsilon} = [\delta,\gamma] < [\gamma,\delta]_{\varepsilon}$. This means that the algebra $\m a/\varepsilon$ has congruences $\delta/\varepsilon, \gamma/\varepsilon$ satisfying $[\delta/\varepsilon,\gamma/\varepsilon] = 0 < [\gamma/\varepsilon,\delta/\varepsilon]$.
By changing notation to work modulo $\varepsilon$ we have a quotient of $\m a$ with congruences $\alpha = \gamma/\varepsilon, \beta = \delta/\varepsilon$ satisfying $[\beta,\alpha]=0<[\alpha,\beta]$. \end{proof} We will use Lemma~\ref{asymmetry} in the next result where we connect left and right distributivity of the commutator with commutativity of the commutator. \begin{thm} \label{distributive_thm} Let $\mathcal V$ be an arbitrary variety. \begin{enumerate} \item If the commutator is left distributive throughout $\mathcal V$, \[(\forall x, y, z)\;\;[x+y,z]=[x,z]+[y,z],\] then it is also commutative throughout $\mathcal V$ \[(\forall x, y)\;\;[x,y]=[y,x].\] \item If the commutator is right distributive throughout $\mathcal V$, \[(\forall x, y, z)\;\;[x,y+z]=[x,y]+[x,z],\] then the commutator satisfies the following ``partial commutativity'' on comparable pairs of congruences. \[(\forall x, y)\;\;(y\leq x) \Rightarrow [x,y]\leq [y,x].\] \end{enumerate} \end{thm} \begin{proof} We start by proving the contrapositive form of Item~(1), so assume that the commutator fails to be commutative throughout $\mathcal V$. There must be some $\m a\in \mathcal V$ that has congruences $\alpha,\beta\in \Con(\m a)$ such that $[\alpha,\beta]\not\leq [\beta,\alpha]$. By Lemma~\ref{asymmetry} we may assume that $[\beta,\alpha]=0<[\alpha,\beta]$. Recall that $\eta_1=0_1=\{((w,x),(y,z))\in A(\alpha)^2\mid w=y\}$ and $\eta_2=0_2 =\{((w,x),(y,z))\in A(\alpha)^2\mid x=z\}$ are the restrictions of the coordinate projection kernels of $\m a^2$ to the subalgebra $\m a(\alpha)$. Let $\delta\colon \m a\to \m a(\alpha)\colon x\mapsto (x,x)$ be the diagonal embedding. We will write $D$ for the set-theoretic image $\delta(A)$ and $\m d$ for the algebra-theoretic image $\delta(\m a)$ (the diagonal subuniverse of $\m a(\alpha)$). For a congruence $\theta\in\Con(\m a)$, we write $\delta(\theta)$ for $\{((x,x),(y,y))\in A(\alpha)^2\;|\;(x,y)\in\theta\}$, which is a congruence on $\m d$. \begin{clm} \label{left_add1} The diagonal subuniverse $D\leq \m a(\alpha)$ is a union of singleton $([\eta_1,\Delta_{\alpha,\beta}]+[\eta_2,\Delta_{\alpha,\beta}])$-classes. \end{clm} \noindent {\it Proof of Claim~\ref{left_add1}.} Since $[\beta,\alpha]=0$, the set $D$ is a union of $\Delta_{\alpha,\beta}$-classes. No two elements of $D$ are related by $\eta_1$, so every element of $D$ is a singleton $(\eta_1\cap \Delta_{\alpha,\beta})$-class. Since $[\eta_1,\Delta_{\alpha,\beta}]$ is contained in $\eta_1\cap \Delta_{\alpha,\beta}$, every element of $D$ is a singleton $[\eta_1, \Delta_{\alpha,\beta}]$-class. Similarly every element of $D$ is a singleton $[\eta_2, \Delta_{\alpha,\beta}]$-class. Therefore every element of $D$ is a singleton $([\eta_1, \Delta_{\alpha,\beta}]+[\eta_2, \Delta_{\alpha,\beta}])$-class. \hfill\rule{1.3mm}{3mm} \bigskip \begin{clm} \label{left_add2} The diagonal subuniverse $D\leq \m a(\alpha)$ is not a union of singleton $([\eta_1+\eta_2,\Delta_{\alpha,\beta}])$-classes. \end{clm} \noindent {\it Proof of Claim~\ref{left_add2}.} We will show that the restriction of the congruence $[\eta_1+\eta_2,\Delta_{\alpha,\beta}]$ to $D$ is not the equality relation, and this will prove that $D$ is not a union of singleton $([\eta_1+\eta_2,\Delta_{\alpha,\beta}])$-classes. For this, notice that \[ [\eta_1+\eta_2,\Delta_{\alpha,\beta}]|_{\m d}\geq [(\eta_1+\eta_2)|_{\m d},\Delta_{\alpha,\beta}|_{\m d}] = [\delta(\alpha),\delta(\beta)]=\delta([\alpha,\beta]) > 0. 
\] The leftmost inequality is derived from Theorem~\ref{basic_centrality}~(9). \hfill\rule{1.3mm}{3mm} \bigskip Claims \ref{left_add1} and \ref{left_add2} show that $[\eta_1+\eta_2,\Delta_{\alpha,\beta}]\neq [\eta_1,\Delta_{\alpha,\beta}]+[\eta_2,\Delta_{\alpha,\beta}]$, so the commutator is not left distributive on $\m a(\alpha)$. \bigskip Next we argue the contrapositive form of Item~(2) of the theorem. Assume that there is some $\m a\in \mathcal V$ that has congruences $\alpha,\beta\in \Con(\m a)$ such that (i) $\beta\leq \alpha$ but (ii) $[\alpha,\beta]\not\leq [\beta,\alpha]$. Consulting the proof of Lemma~\ref{asymmetry}, we see that we may refine these assumptions to (i) $\beta\leq \alpha$ and (ii) $[\beta,\alpha]=0<[\alpha,\beta]$. \begin{clm} \label{right_add1} The diagonal subuniverse $D\leq \m a(\alpha)$ is a union of singleton $([\eta_1,\eta_2]+[\eta_1,\Delta_{\alpha,\beta}])$-classes. \end{clm} \noindent {\it Proof of Claim~\ref{right_add1}.} We have $[\eta_1,\eta_2]\leq \eta_1\cap \eta_2 = 0$, so $[\eta_1,\eta_2]$ is the equality relation on $\m a(\alpha)$ and all $[\eta_1,\eta_2]$-classes are singletons. Hence the subuniverse $D$ consists of singleton $[\eta_1,\eta_2]$-classes. We may copy the proof of Claim~\ref{left_add1} to establish that $D$ is a union of singleton $[\eta_1,\Delta_{\alpha,\beta}]$-classes. (The situation here is the same as the one there, except here we have the extra property that $\beta\leq\alpha$.) It follows that $D$ is a union of singleton $([\eta_1,\eta_2]+[\eta_1,\Delta_{\alpha,\beta}])$-classes. \hfill\rule{1.3mm}{3mm} \bigskip \begin{clm} \label{right_add2} $D$ is not a union of singleton $[\eta_1,\eta_2+\Delta_{\alpha,\beta}]$-classes. \end{clm} \noindent {\it Proof of Claim~\ref{right_add2}.} For this proof, note that $\eta_2+\Delta_{\alpha,\beta}=\beta_2$. We started the proof by arranging that $[\alpha,\beta]>0$. This means that there is an $\alpha,\beta$-matrix \[ \begin{bmatrix} t(\wec{a},\wec{u}) & t(\wec{a},\wec{v}) \\ t(\wec{b},\wec{u}) & t(\wec{b},\wec{v}) \end{bmatrix}= \begin{bmatrix} p & q \\ r & s \end{bmatrix}, \;\;\; \wec{a}\wrel{\alpha}\wec{b},\;\; \wec{u}\wrel{\beta}\wec{v}, \] with $p=q$ but $r\neq s$. Consider the $\eta_1, \beta_2$-matrix of $\m a(\alpha)$ \begin{equation} \tag{M}\label{MM} \begin{bmatrix} t\left((\wec{b},\wec{a}), (\wec{u}, \wec{u})\right)& t\left((\wec{b},\wec{a}), (\wec{u}, \wec{v})\right)\\ t\left((\wec{b},\wec{b}), (\wec{u}, \wec{u})\right)& t\left((\wec{b},\wec{b}), (\wec{u}, \wec{v})\right) \end{bmatrix}= \begin{bmatrix} (r, p) & (r, q) \\ (r, r) & (r, s). \end{bmatrix} \end{equation} The fact that this truly is an $\eta_1,\beta_2$-matrix of $\m a(\alpha)$ follows from our assumption that $\beta\leq \alpha$ (and this is the only place in the argument where this assumption is needed). Namely, to know that \eqref{MM} is an $\eta_1,\beta_2$-matrix of $\m a(\alpha)$ we need to know that $\m a(\alpha)$ contains all pairs of the form $(b_i,a_i)$, $(u_i,u_i)$, $(b_i,b_i)$, and $(u_i,v_i)$. It is easy to see that $\m a(\alpha)$ contains all pairs of these types except possibly those of the type $(u_i,v_i)$. Such pairs lie in $\beta$, so if $\beta\leq \alpha$ they will also lie in $A(\alpha)=\alpha$. We have $(r, p)=(r, q)$, so $\left((r, r),(r, s)\right)$ belongs to $[\eta_1,\beta_2]$. Since $(r,r) \in D$ and $(r,s)\notin D$, the $[\eta_1,\beta_2]$-class of $(r,r)\in D$ is not a singleton, hence $D$ is not a union of singleton $[\eta_1,\beta_2]$-classes. 
\hfill\rule{1.3mm}{3mm} \bigskip Claims \ref{right_add1} and \ref{right_add2} show that $[\eta_1,\eta_2+\Delta_{\alpha,\beta}]\neq [\eta_1,\eta_2]+[\eta_1,\Delta_{\alpha,\beta}]$, so the commutator is not right distributive on $\m a(\alpha)$. \end{proof} \section{Main results}\label{main} Now we prove results which seem to require a ground Maltsev condition. The most important theorems of this section will be proved under the assumption of ``existence of a Taylor term'', \cite[Definition~2.15]{shape}. A Taylor term for a variety $\mathcal V$ is a term $T(x_1,\ldots,x_{n})$ such that $\mathcal V$ satisfies the identity $T(x,\ldots,x)\approx x$ and, for each $i$ between $1$ and $n$, $\mathcal V$ satisfies some identity of the form $T(\wec{w})\approx T(\wec{z})$ with $w_i \neq z_i$. Any identity of the form $T(\wec{w})\approx T(\wec{z})$ with $w_i \neq z_i$ is called an ``$i$-th place Taylor identity'' of $T$. Some of the results of this section will be proved under the stronger assumptions ``existence of a difference term'' or ``existence of a weak difference term'', Definition~\ref{left_right}. The class of varieties with a difference term is definable by a Maltsev condition. The same is true for the class of varieties with a weak difference term. The Mal\-tsev conditions were identified, in principle, in Theorem~4.8 of \cite{kearnes-szendrei} and in the paragraph following the proof of that theorem. We define weak and ordinary difference terms next. \begin{df} \label{left_right} Let $\mathcal V$ be a variety. A ternary $\mathcal V$-term $t(x,y,z)$ shall be called \begin{enumerate} \item a \emph{right Maltsev term} for $\mathcal V$ if ${\mathcal V}\models t(x,x,y)\approx y$. \item a \emph{left Maltsev term} for $\mathcal V$ if ${\mathcal V}\models t(x,y,y)\approx x$. \item a \emph{Maltsev term} for $\mathcal V$ if it is both a right and left Maltsev term. \item a \emph{right difference term} for $\mathcal V$ if, for any $\m b\in {\mathcal V}$, $t^{\m b}(a,a,b) = b$ holds whenever the pair $(a,b)$ is contained in an abelian congruence. \item a \emph{left difference term} for $\mathcal V$ if, for any $\m b\in {\mathcal V}$, $t^{\m b}(a,b,b) = a$ holds whenever the pair $(a,b)$ is contained in an abelian congruence. \item a \emph{weak difference term} for $\mathcal V$ if it is both a right and left difference term. \item a \emph{difference term} for $\mathcal V$ if it is a right Maltsev term and a left difference term. \end{enumerate} \end{df} This left/right terminology is not standard, but it is introduced here because the new concept ``right difference term'' will play a role in the proof of Theorem~\ref{characterization_of_weak} (e.g., in Claim~\ref{delta_char}). As we noted in the Introduction, there is a weakest nontrivial idempotent Maltsev condition. The class of varieties defined by this condition is the class of varieties with a Taylor term. As noted before Definition~\ref{left_right}, the classes of varieties with (i) a weak difference term or (ii) an ordinary difference term are also definable by idempotent Maltsev conditions. Ordinary difference terms are formally stronger than weak difference terms, which are stronger than Taylor terms. These differences in strength are strict. The algebra $\m i$ of Example~\ref{simple_ex} generates a variety that has a Taylor term, but does not have a weak difference term. (The justification for this claim is given in that example.)
The semigroup $\mathbb Z_2\times \mathbb S_2$ described in Section~\ref{examples} generates a variety with a weak difference term, but with no difference term. (The justification for this claim is given in that section.) Throughout this section, the assumption that the variety under consideration has a Taylor term will be used to invoke the following theorem, which expresses limits on the behavior of the centralizer relation. \begin{thm} \label{memoir_pentagon} \textrm{\rm (\cite[Theorem 4.16(2)(i)]{shape})} Let $\mathcal V$ be a variety that has a Taylor term and let $\m a$ be a member of $\mathcal V$. There is no pentagon sublattice of $\Con(\m a)$, labeled as in Figure~\ref{fig4}, such that $\C C(\beta,\theta;\delta)$ holds. \begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,33) \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-4.5,13){$\beta$} \put(22,9){$\delta$} \put(22,19){$\theta$} \put(9,32){$\alpha$} \end{picture} \bigskip \caption{\sc (Theorem~\ref{memoir_pentagon}) A pentagon in $\textrm{Con}(\m A)$ will not satisfy $\C C(\beta,\theta;\delta)$.}\label{fig4} \end{center} \end{figure} \end{thm} We restate this theorem in a positive and formally stronger way. \begin{thm} \label{memoir_pentagon_2} Let $\mathcal V$ be a variety that has a Taylor term and let $\m a$ be a member of $\mathcal V$. Given any pentagon sublattice of $\Con(\m a)$, labeled as in Figure~\ref{fig5}, $[\alpha,x]_{\delta} = x$ holds for every $x\in I[\delta,\theta]$. (Equivalently, if $\delta \leq y < x\leq \theta$, then $\C C(\alpha,x;y)$ fails.) \hfill $\Box$ \begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,33) \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,15){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-4.5,13){$\beta$} \put(22,9){$\delta$} \put(22,14){$x$} \put(22,19){$\theta$} \put(9,32){$\alpha$} \end{picture} \bigskip \caption{\sc If $\delta\leq x\leq \theta$, then $[\alpha,x]_{\delta} = x$.}\label{fig5} \end{center} \end{figure} \end{thm} There are two observations to make to see that Theorem~\ref{memoir_pentagon} and Theorem~\ref{memoir_pentagon_2} have the same content. The first observation is: (i) $\C C(\beta,\theta;\delta)$ holds in the pentagon of Figure~\ref{fig4} if and only if (ii) $\C C(\alpha,\theta;\delta)$ holds in that pentagon. One obtains (ii)$\Rightarrow$(i) by monotonicity of the centralizer in its first place (Theorem~\ref{basic_centrality}~(1)). One obtains (i)$\Rightarrow$(ii) by assuming $\C C(\beta,\theta;\delta)$, deriving $\C C(\delta,\theta;\delta)$ from Theorem~\ref{basic_centrality}~(8), then deriving $\C C(\beta+\delta,\theta;\delta)$ from Theorem~\ref{basic_centrality}~(5). Since $\alpha = \beta+\delta$, we get that (ii) holds. The second observation is that if $\delta\leq y <x\leq \theta$, then $\{\alpha, \beta, x, y, \beta\cap y\}$ is another pentagon in $\Con(\m a)$. Applying Theorem~\ref{memoir_pentagon} to this new pentagon we get that $\C C(\beta,x;y)$ fails. 
Using the idea of the preceding paragraph, this is equivalent to the assertion that $\C C(\alpha,x;y)$ fails whenever $\delta\leq y <x\leq \theta$. In particular, since $\delta\leq [\alpha,x]_{\delta} \leq x\leq \theta$ and $\C C(\alpha,x;[\alpha,x]_{\delta})$ holds, we cannot have $[\alpha,x]_{\delta} < x$. The alternative is that $[\alpha,x]_{\delta} = x$ whenever $\delta\leq x\leq \theta$. On the other hand, if we have $[\alpha,x]_{\delta} = x$ for $x=\theta$, then we recover the conclusion of Theorem~\ref{memoir_pentagon}. The first main theorem of this section gives a commutator-theoretic characterization of varieties with a weak difference term (Theorem~\ref{characterization_of_weak}). Using this characterization, we shall deduce that any variety with a Taylor term and commutative commutator must have a weak difference term (Theorem~\ref{commutative_implies_weak}). The proofs of these two results were developed from an analysis of a simple example, which we describe first. The inclusion of this example is meant to help guide the reader through the lengthy proof of Theorem~\ref{characterization_of_weak}. \begin{example} \label{simple_ex} Let $\mathbb R$ be the real line considered as a $1$-dimensional real vector space. Let $\mathbb R^{\circ}$ be the reduct of $\mathbb R$ to the idempotent linear operations of the form $f_r(x,y)=rx+(1-r)y$, $0<r<1$. Let $\m i$ be the subalgebra of $\mathbb R^{\circ}$ whose universe is the unit interval $I=[0,1]$. Thus, $\m i = \langle [0,1]; \{f_r(x,y)\;|\;0<r<1\}\rangle$ is a subalgebra of a reduct of an abelian algebra $\mathbb R$, which makes $\m i$ an abelian algebra. From Definition~\ref{left_right}, the concepts of ``Maltsev term'', ``difference term'', and ``weak difference term'' all coincide for abelian algebras. The fact that $I$ is closed under all of the $f_r$-operations and is not closed under the unique Maltsev operation $x-y+z$ of $\mathbb R$ shows that neither $\mathbb R^{\circ}$ nor $\m i$ have Maltsev operations, and therefore neither $\mathbb R^{\circ}$ nor $\m i$ has a weak difference term. ${\mathcal V}(\m i)$ does have a Taylor term, namely $T(x,y) = f_{\frac{1}{2}}(x,y)=\frac{1}{2}x+\frac{1}{2}y$. This is a Taylor operation for ${\mathcal V}(\m i)$, since $T(x,x)\approx x$ and $T(x,y)\approx T(y,x)$ hold in $\m i$, and the latter is both a first-place and a second-place Taylor identity for $T(x,y)$ in ${\mathcal V}(\m i)$. The algebra $\m i$ is free in ${\mathcal V}(\m i)$ over the $2$-element generating set $\{0,1\}$. We identify some congruences in $\Con(\m i\times \m i)$ and indicate the sublattice they generate in Figure~\ref{fig6}. 
\begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,35) \put(0,15){\circle*{1.2}} \put(15,0){\circle*{1.2}} \put(15,30){\circle*{1.2}} \put(30,15){\circle*{1.2}} \put(22.5,7.5){\circle*{1.2}} \put(15,0){\line(-1,1){15}} \put(15,0){\line(1,1){15}} \put(15,30){\line(-1,-1){15}} \put(15,30){\line(1,-1){15}} \put(14,-5){$0$} \put(14,32){$1$} \put(-4.5,14){$\eta_1$} \put(32,14){$\eta_2$} \put(24,6){$\delta=\mathrm{Cg}(((0,0),(1,0)))$} \end{picture} \bigskip \caption{\sc A sublattice of $\Con(\m i\times \m i)$.}\label{fig6} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,32) \thinlines \put(0,0){\line(1,0){30}} \put(0,0){\line(0,1){30}} \put(30,30){\line(-1,0){30}} \put(30,30){\line(0,-1){30}} \thicklines \linethickness{1mm} \put(0,27){\line(1,0){30}} \put(0,24){\line(1,0){30}} \put(0,21){\line(1,0){30}} \put(0,18){\line(1,0){30}} \put(0,15){\line(1,0){30}} \put(0,12){\line(1,0){30}} \put(0,9){\line(1,0){30}} \put(0,6){\line(1,0){30}} \put(0,3){\line(1,0){30}} \put(0,0){\line(1,0){30}} \put(0,30){\circle*{1.6}} \put(3,30){\circle*{1.6}} \put(6,30){\circle*{1.6}} \put(9,30){\circle*{1.6}} \put(12,30){\circle*{1.6}} \put(15,30){\circle*{1.6}} \put(18,30){\circle*{1.6}} \put(21,30){\circle*{1.6}} \put(24,30){\circle*{1.6}} \put(27,30){\circle*{1.6}} \put(30,30){\circle*{1.6}} \put(0,0){\circle*{1.6}} \put(30,00){\circle*{1.6}} \put(-10,-1){(0,0)} \put(-10,29){(0,1)} \put(31,-1){(1,0)} \put(31,29){(1,1)} \end{picture} \bigskip \caption{\sc The partition of $\m i\times \m i$ induced by $\delta=\mathrm{Cg}(((0,0),(1,0)))$.}\label{fig7} \end{center} \end{figure} The projection kernels are the congruences $\eta_1=\mathrm{Cg}(((0,0),(0,1)),((1,0),(1,1)))$ and $\eta_2=\mathrm{Cg}(((0,0),(1,0)),((0,1),(1,1)))$. The congruence $\eta_1$ partitions the ``square'' $\m i\times \m i$ into congruence classes that are ``vertical lines'' and $\eta_2$ partitions $\m i\times \m i$ into congruence classes that are ``horizontal lines''. The interesting congruence is $\delta=\mathrm{Cg}(((0,0),(1,0)))$. The partition of $\m i\times \m i$ it yields is depicted in Figure~\ref{fig7}. In the partition depicted in Figure~\ref{fig7}, all congruence classes of $\delta$ agree with those of $\eta_2$ except the class that is the ``top line'', $X:=\m i\times \{1\}$. The top line $X$ is a single $\eta_2$-class, and hence a union of $\delta$-classes, but $\delta$ restricts to be the equality relation on the top line $X$ while $\eta_2$ restricts to be the total relation. This unusual structure for $\delta$ can be exploited the following way. The operation $T(x,y) = f_{\frac{1}{2}}(x,y)=\frac{1}{2}x+\frac{1}{2}y$ may be used to create a $1,\eta_2$-matrix \medskip $$ \begin{bmatrix} T\left((1,0), (0,1)\right)& T\left((1,0), (1,1)\right)\\ T\left((1,1), (0,1)\right)& T\left((1,1), (1,1)\right)\\ \end{bmatrix}= \begin{bmatrix} (.5,.5) & (1,.5)\\ (.5, 1) & (1,1) \end{bmatrix} $$ \medskip \noindent where the elements on the top row are $\delta$-related while the elements on the bottom row are not. This matrix witnesses that $\neg \C C(1,\eta_2;\delta)$. In the quotient $(\m i\times \m i)/\delta$ we must have $[\overline{1},\overline{\eta}_2]>0$ where $\overline{\eta}_2:=\eta_2/\delta$ and $\overline{1}:=1/\delta$. It is possible to argue that $[\overline{\eta}_2,\overline{1}]=0$, and therefore that $[\overline{\eta}_2,\overline{1}]\neq [\overline{1},\overline{\eta}_2]$. 
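For the reader who wishes to verify the displayed $1,\eta_2$-matrix, the entries are computed coordinatewise; this is only a restatement of the arithmetic above, with $T=f_{\frac{1}{2}}$ applied in each coordinate:
\[
T\big((1,0),(0,1)\big)=\big(\tfrac{1}{2}\cdot 1+\tfrac{1}{2}\cdot 0,\;\tfrac{1}{2}\cdot 0+\tfrac{1}{2}\cdot 1\big)=(.5,.5),
\qquad
T\big((1,0),(1,1)\big)=\big(\tfrac{1}{2}\cdot 1+\tfrac{1}{2}\cdot 1,\;\tfrac{1}{2}\cdot 0+\tfrac{1}{2}\cdot 1\big)=(1,.5),
\]
\[
T\big((1,1),(0,1)\big)=(.5,1),
\qquad
T\big((1,1),(1,1)\big)=(1,1).
\]
The top-row entries lie on the horizontal line at height $.5$, which is a single $\delta$-class, while the bottom-row entries lie on the top line $X$, where $\delta$ restricts to the equality relation; this is why the top row is $\delta$-related while the bottom row is not.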
Although this example is special, the location of noncommutativity in $\mathcal V$ is general as we shall see in the proofs of the next two results. \end{example} \begin{thm} \label{characterization_of_weak} The following are equivalent for a variety $\mathcal V$. \begin{enumerate} \item $\mathcal V$ has a weak difference term. \item Whenever $\m a\in \mathcal V$ and $\alpha\in\Con(\m a)$ is abelian, the interval $I[0,\alpha]$ consists of permuting equivalence relations. \item Whenever $\m a\in \mathcal V$ and $\alpha\in\Con(\m a)$ is abelian, the interval $I[0,\alpha]$ is modular. \item Whenever $\m a\in \mathcal V$ and $\alpha\in\Con(\m a)$, there is no pentagon labeled as in Figure~\ref{fig8} with $[\alpha,\alpha]=0$. (No ``spanning pentagon'' in $I[0,\alpha]$ if $\alpha$ is abelian.) \item Whenever $\m a\in \mathcal V$ and $\alpha\in\Con(\m a)$, there is no pentagon labeled as in Figure~\ref{fig8} where $[\alpha,\alpha]=0$ and $\C C(\theta,\alpha;\delta)$. \begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,33) \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-4.5,13){$\beta$} \put(22,9){$\delta$} \put(22,19){$\theta$} \put(9,32){$\alpha$} \put(9,-5){$0$} \end{picture} \bigskip \caption{\sc Forbidden sublattice if both $[\alpha,\alpha]=0$ and $\C C(\theta,\alpha;\delta)$ hold.}\label{fig8} \end{center} \end{figure} \end{enumerate} \end{thm} \begin{proof} [$(1)\Rightarrow(2)$] Assume that $t(x,y,z)$ is a weak difference term for $\mathcal V$, $\m a\in \mathcal V$, and $\alpha\in\Con(\m a)$ is abelian. If $\sigma,\tau\in I[0,\alpha]$ and $a\stackrel{\sigma}{\equiv} b \stackrel{\tau}{\equiv} c$, then $a=t^{\m a}(a,b,b)\stackrel{\tau}{\equiv} t^{\m a}(a,b,c) \stackrel{\sigma}{\equiv} t^{\m a}(b,b,c)=c$. This is all that is needed to verify that $\sigma\circ\tau\subseteq \tau\circ\sigma\;\;(\subseteq \sigma\circ\tau)$. [$(2)\Rightarrow(3)$] Every lattice of permuting equivalence relations is modular. [$(3)\Rightarrow(4)$] A lattice is modular if and only if it has no pentagon sublattice. Notice that Item~(3) asserts that when $\alpha$ is abelian, then $I[0,\alpha]$ contains no pentagon sublattice. Item~(4) asserts something slightly more: when $\alpha$ is abelian, then $I[0,\alpha]$ contains no ``spanning pentagon'' sublattice, by which we mean a pentagon whose bottom is $0$ and whose top is $\alpha$. [$(4)\Rightarrow(5)$] Item~(5) is identical to Item~(4) except the pentagons in Item~(5) are more restricted. In Item~(5) we assert no abelian interval contains a spanning pentagon \emph{which also satisfies $\C C(\theta,\alpha;\delta)$}. [$(5)\Rightarrow(1)$] This implication is the only nontrivial claim of the theorem. We prove it in the contrapositive form: if $\mathcal V$ does not have a weak difference term, then it will contain an algebra with a pentagon like the one described in Item~(5). Let \begin{itemize} \item $\m f=\m f_{\mathcal V}(x,y)$ be the free $\mathcal V$-algebra over the set $\{x,y\}$. \item $\theta =\Cg {\m f}(x,y)$. \item $\overline{\m f} = \m f/[\theta,\theta]$. \item $\overline{x} = x/[\theta,\theta], \overline{y} = y/[\theta,\theta]$. \item $\overline{\theta} = \theta/[\theta,\theta]=\Cg {\overline{\m f}}(\overline{x},\overline{y})$. 
\end{itemize} It follows from properties of the commutator (Theorem~\ref{basic_centrality}~(10)) that $[\overline{\theta},\overline{\theta}]=0$. \begin{clm} \label{free_abelian} {\rm ($\overline{\theta}$ is the ``free principal abelian congruence'' in $\mathcal V$)} Suppose that $\m b\in {\mathcal V}$ and $\beta\in\Con(\m b)$ satisfies $[\beta,\beta]=0$. For any $(a,b)\in \beta$ there is a unique homomorphism $\overline{\varphi}\colon \overline{\m f}\to \m b$ satisfying $\overline{x}\mapsto a, \overline{y}\mapsto b$. \end{clm} \emph{Proof of Claim~\ref{free_abelian}.} Given $(a, b)\in\beta$, let $\varphi\colon \m f=\m f_{\mathcal V}(x,y)\to \m b$ be the homomorphism determined by $x\mapsto a, y\mapsto b$. Since $\C C(\beta,\beta;0)$ holds in $\m b$, $\C C(\varphi^{-1}(\beta),\varphi^{-1}(\beta);\ker(\varphi))$ holds in $\m f$. Since $(x,y)\in \varphi^{-1}(\beta)$, we have $\C C(\theta,\theta;\ker(\varphi))$ for $\theta = \mathrm{Cg}(x,y)$ by monotonicity. Therefore $[\theta,\theta]\leq \ker(\varphi)$ holds. We may factor $\varphi$ modulo $[\theta,\theta]$ as \[\varphi\colon \m f\to \m f/[\theta,\theta]= \overline{\m f}\stackrel{\overline{\varphi}}{\to} \m b\] where the first map is the natural map. This yields the desired map $\overline{\varphi}\colon \overline{\m f}\to \m b\colon \overline{x}\mapsto a, \overline{y}\mapsto b$. The uniqueness is a consequence of the fact that $\overline{\m f}$ is generated by $\{\overline{x},\overline{y}\}$, so any homomorphism with domain $\overline{\m f}$ is uniquely determined by its values on this set. \hfill\rule{1.3mm}{3mm} \bigskip Let $\m a$ be the subalgebra of $\overline{\m f}\times \overline{\m f}$ that is generated by the set \[ G=\{(\overline{x},\overline{x}), (\overline{y},\overline{x}), (\overline{x},\overline{y}), (\overline{y},\overline{y})\}. \] Since $\overline{\m f}$ is generated by $\{\overline{x}, \overline{y}\}$, the universe of $\m a$ is the reflexive, symmetric, compatible, binary relation (or ``tolerance'') generated by the pair $(\overline{x},\overline{y})$ on the algebra $\overline{\m f}$. Since $G\subseteq \overline{\theta}$ we have $A\subseteq \overline{\theta}$, so in fact $\m a\leq \overline{\m f}(\overline{\theta})\leq \overline{\m f}\times \overline{\m f}$. This is enough to draw some conclusions about $\m a$. The most tautological conclusion that follows from $A\subseteq \overline{\theta}$ is that if $(\overline{u},\overline{v})\in A$, then $(\overline{u},\overline{v})\in \overline{\theta}$, so any pair in $\m a$ generates an abelian congruence in $\overline{\m f}$. This fact will be used without further mention. A less obvious conclusion is that, since $\overline{\theta}$ is an abelian congruence that relates $\overline{x}$ to $\overline{y}$ in $\overline{\m f}$, the congruence $\overline{\theta}_1\times \overline{\theta}_2$ is an abelian congruence on $\overline{\m f}\times\overline{\m f}$ whose restriction to $\m a$ is an abelian congruence that relates any two elements of $G$. Let $\eta_1$ and $\eta_2$ be the coordinate projection kernels of $\overline{\m f}\times \overline{\m f}$ restricted to $\m a$. Following Example~\ref{simple_ex}, let $\delta = \Cg {\m a}(((\overline{x},\overline{x}),(\overline{y},\overline{x})))$. Let $X = (\overline{y},\overline{y})/\eta_2$ be the $\eta_2$-class of $(\overline{y},\overline{y})$. The set $X$ plays the role of the ``top edge'' in Example~\ref{simple_ex}.
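To keep the parallel with Example~\ref{simple_ex} in view, the correspondence between that example and the present construction is, roughly, the following (this is only a reading aid summarizing identifications already made in the text, not a new claim):
\[
\overline{\m f}\;\longleftrightarrow\;\m i,\qquad
\overline{x}\;\longleftrightarrow\;0,\qquad
\overline{y}\;\longleftrightarrow\;1,\qquad
\delta\;\longleftrightarrow\;\mathrm{Cg}\big(((0,0),(1,0))\big),\qquad
X\;\longleftrightarrow\;\m i\times\{1\}.
\]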
Elements of $X$ have the form $(\overline{P},\overline{y})$ for certain elements $\overline{P}\in \overline{\m f}$ satisfying $\overline{P}\stackrel{\overline{\theta}}{\equiv} \overline{y}\stackrel{\overline{\theta}}{\equiv}\overline{x}$ in $\overline{\m f}$. \begin{clm} \label{delta_char} {\rm (Characterization of $\delta|_X$)} Two pairs $(\overline{P},\overline{y})$ and $(\overline{Q},\overline{y})$ lying in $X$ are $\delta$-related if and only if there is a ternary $\mathcal V$-term $t_{\overline{P},\overline{Q}}(x,y,z)$ such that \begin{enumerate} \item[($\dagger$)] $t_{\overline{P},\overline{Q}}$ is a right difference term for $\mathcal V$ (cf.\ Definition~\ref{left_right}), and \item[($\ddagger$)] $t_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{P})=\overline{Q}$. \end{enumerate} \end{clm} \emph{Proof of Claim~\ref{delta_char}.} Recall that $\delta$ is the principal congruence on $\m a$ that is generated by the pair $((\overline{x},\overline{x}),(\overline{y},\overline{x}))$. Since $X$ is an $\eta_2$-class and $\delta\leq \eta_2$, if $(\overline{P},\overline{y})$ and $(\overline{Q},\overline{y})$ lying in $X$ are $\delta$-related, then they are connected by a Maltsev chain that lies entirely inside $X$. A link of such a Maltsev chain has the form $(f((\overline{x},\overline{x})),f((\overline{y},\overline{x})))$ or $(f((\overline{y},\overline{x})),f((\overline{x},\overline{x})))$ for some polynomial $f\in \textrm{Pol}_1(\m a)$. Since $\m a$ is generated by the set $G=\{(\overline{x},\overline{x}), (\overline{y},\overline{x}), (\overline{x},\overline{y}), (\overline{y},\overline{y})\}$, we may assume that the parameters of the polynomial $f$ lie in $G$, and hence we may write \[ f((z,w))=s^{\m a}((z,w),(\overline{x},\overline{x}), (\overline{y},\overline{x}), (\overline{x},\overline{y}), (\overline{y},\overline{y})) \] for some $5$-ary $\mathcal V$-term $s$. If $(f((\overline{x},\overline{x})),f((\overline{y},\overline{x})))= ((\overline{R},\overline{y}),(\overline{S},\overline{y}))$ is a link in a Maltsev chain in $X$, then there must exist such a $5$-ary term $s$ such that \[ \begin{array}{rl} s^{\m a}((\overline{x},\overline{x}),(\overline{x},\overline{x}), (\overline{y},\overline{x}), (\overline{x},\overline{y}), (\overline{y},\overline{y})) &= (\overline{R},\overline{y})\\ s^{\m a}((\overline{y},\overline{x}),(\overline{x},\overline{x}), (\overline{y},\overline{x}), (\overline{x},\overline{y}), (\overline{y},\overline{y})) &= (\overline{S},\overline{y}), \end{array} \] which simplifies to the coordinate equations \begin{equation}\label{coordinatewise} \begin{array}{rl} s^{\overline{\m f}}(\overline{x},\overline{x},\overline{y},\overline{x},\overline{y}) &= \overline{R}\\ s^{\overline{\m f}}(\overline{y},\overline{x},\overline{y},\overline{x},\overline{y}) &= \overline{S}\\ s^{\overline{\m f}}(\overline{x},\overline{x},\overline{x},\overline{y},\overline{y}) &= \overline{y}. \end{array} \end{equation} Let $t_{\overline{R},\overline{S}}(x,y,z) = s(x,y,y,z,z)$. \begin{subclm} \label{deltacharsubclaim1} {\rm (The $(\dagger)$-part of ``only if'' when $((\overline{R},\overline{y}),(\overline{S},\overline{y}))$ is a Maltsev link)} \;\;\; \begin{center} $t_{\overline{R},\overline{S}}$ is a right difference term for $\mathcal V$. 
\end{center} \end{subclm} \emph{Proof of Subclaim~\ref{deltacharsubclaim1}.} We derive from the definition of $t_{\overline{R},\overline{S}}$ and the third equation in \eqref{coordinatewise} that $t^{\overline{\m f}}_{\overline{R},\overline{S}}(\overline{x},\overline{x},\overline{y})= s^{\overline{\m f}}(\overline{x},\overline{x},\overline{x},\overline{y},\overline{y}) =\overline{y}$. The claim then follows from the fact that $\overline{\theta} = \Cg {\overline{\m f}}(\overline{x},\overline{y})$ is the ``free principal abelian congruence'' in $\mathcal V$. Specifically, given any $\m b\in {\mathcal V}$ and any pair $(a,b)$ from an abelian congruence of $\m b$, Claim~\ref{free_abelian} guarantees a unique homomorphism $\overline{\varphi}\colon \overline{\m f}\to \m b$ satisfying $\overline{x}\mapsto a, \overline{y}\mapsto b$. Applying this homomorphism to the equality $t^{\overline{\m f}}_{\overline{R},\overline{S}}(\overline{x},\overline{x},\overline{y})=\overline{y}$ yields $t^{\m b}_{\overline{R},\overline{S}}(a,a,b)=b$. This is all that is required to prove that $t_{\overline{R},\overline{S}}$ is a right difference term for $\mathcal V$. (Cf.\ Definition~\ref{left_right}~(4).) \hfill\rule{1.3mm}{3mm} \bigskip \begin{subclm} \label{deltacharsubclaim2} {\rm (The $(\ddagger)$-part of ``only if'' when $((\overline{R},\overline{y}),(\overline{S},\overline{y}))$ is a Maltsev link)} \;\;\; \begin{center} $t_{\overline{R},\overline{S}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{R})=\overline{S}$. \end{center} \end{subclm} \emph{Proof of Subclaim~\ref{deltacharsubclaim2}.} The following matrix is a $\overline{\theta},\overline{\theta}$-matrix in $\overline{\m f}$: \begin{equation}\label{tricky_matrix} \begin{bmatrix} s^{\overline{\m f}}(\overline{x},\overline{x},\overline{y},\overline{x},\overline{y})& s^{\overline{\m f}}(\overline{x},\overline{x},\overline{x},\overline{R},\overline{R})\\ s^{\overline{\m f}}(\overline{y},\overline{x},\overline{y},\overline{x},\overline{y})& s^{\overline{\m f}}(\overline{y},\overline{x},\overline{x},\overline{R},\overline{R}) \end{bmatrix}= \begin{bmatrix} \overline{R} & t_{\overline{R},\overline{S}}^{\overline{\m f}}(\overline{x},\overline{x},\overline{R})\\ \overline{S} & t_{\overline{R},\overline{S}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{R}) \end{bmatrix} = \begin{bmatrix} \overline{R} & \overline{R} \\ \overline{S} & t_{\overline{R},\overline{S}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{R}) \end{bmatrix}. \end{equation} The justifications for the claims that these matrices are equal and that the leftmost is a $\overline{\theta},\overline{\theta}$-matrix follow from the facts that (i) $(\overline{x},\overline{y}), (\overline{y},\overline{y}), (\overline{R},\overline{y}), (\overline{S},\overline{y})$ belong to $X\subseteq A$, hence $\overline{x},\overline{y},\overline{R},\overline{S}$ belong to the same $\overline{\theta}$-class, (ii) equations \eqref{coordinatewise} hold, (iii) $t_{\overline{R},\overline{S}}(x,y,z):=s(x,y,y,z,z)$, and (iv) $t_{\overline{R},\overline{S}}$ is a right difference term for $\mathcal V$ (Subclaim~\ref{deltacharsubclaim1}). Item (i) is enough to show that the leftmost matrix is a $\overline{\theta},\overline{\theta}$-matrix, Items (ii) and (iii) are enough to show that the leftmost matrix reduces to the middlemost, and Item (iv) is enough to show that the middlemost matrix reduces to the rightmost one.
Since $\overline{\theta}$ is abelian and the top row of the matrix in \eqref{tricky_matrix} is constant, the bottom row must also be constant. \hfill\rule{1.3mm}{3mm} \bigskip \begin{subclm} \label{only_if_delta_char} The ``only if'' part of Claim~\ref{delta_char} holds. \end{subclm} \emph{Proof of Subclaim~\ref{only_if_delta_char}.} Subclaims~\ref{deltacharsubclaim1} and \ref{deltacharsubclaim2} prove the ``only if'' part of Claim~\ref{delta_char} when $((\overline{R},\overline{y}),(\overline{S},\overline{y}))$ is a Maltsev link. Maltsev links are Maltsev chains of length $1$. Here we prove that the ``only if'' part holds for Maltsev chains of any length. For this, let $\Omega$ be the relation on $X$ consisting of all pairs $((\overline{P},\overline{y}), (\overline{Q},\overline{y}))$ which satisfy ($\dagger$) and ($\ddagger$) of Claim~\ref{delta_char}. $\Omega$ will contain all pairs $((\overline{R},\overline{y}), (\overline{S},\overline{y}))$ that are Maltsev links, so to prove this subclaim it suffices to prove that $\Omega$ is an equivalence relation on $X$. \bigskip ($\Omega$ is {\bf reflexive}) Given $((\overline{P},\overline{y}), (\overline{P},\overline{y}))$, the third projection term $t_{\overline{P},\overline{P}}(x,y,z) := z$ witnesses membership in $\Omega$. To see this, note that both ($\dagger$) and ($\ddagger$) are trivial when $\overline{P}=\overline{Q}$ and $t_{\overline{P},\overline{P}}(x,y,z)=z$: \begin{enumerate} \item[($\dagger$)] $t_{\overline{P},\overline{P}}(x,y,z):=z$ is a right difference term for $\mathcal V$. (It is even right Maltsev, which is formally stronger.) \item[($\ddagger$)] $t_{\overline{P},\overline{P}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{P})=\overline{P}$. \end{enumerate} \bigskip ($\Omega$ is {\bf symmetric}) Assume that the ternary term $s_{\overline{P},\overline{Q}}(x,y,z)$ witnesses membership in $\Omega$ for the pair $((\overline{P},\overline{y}), (\overline{Q},\overline{y}))$. We argue that the ternary term $t_{\overline{P},\overline{Q}}(x,y,z):=s_{\overline{P},\overline{Q}}(y,x,z)$ obtained by swapping the first two variables in $s_{\overline{P},\overline{Q}}(x,y,z)$ witnesses membership in $\Omega$ for $((\overline{Q},\overline{y}), (\overline{P},\overline{y}))$: \begin{enumerate} \item[($\dagger$)] $t_{\overline{P},\overline{Q}}$ is a right difference term for $\mathcal V$. \end{enumerate} \medskip Reason: Choose $(a,b)$ generating an abelian congruence in some $\m b\in {\mathcal V}$. $t_{\overline{P},\overline{Q}}^{\m b}(a,a,b)=s_{\overline{P},\overline{Q}}^{\m b}(a,a,b)=b$. \medskip \begin{enumerate} \item[($\ddagger$)] $t_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{Q})=\overline{P}$. \end{enumerate} \medskip Reason: We know from ($\dagger$) for $s_{\overline{P},\overline{Q}}$ and from the fact that $s_{\overline{P},\overline{Q}}(x,y,z)$ witnesses membership in $\Omega$ for $((\overline{P},\overline{y}), (\overline{Q},\overline{y}))$ that \[s_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{P}) =\overline{Q}= s_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{x},\overline{x},\overline{Q}). \] The following is a $\overline{\theta},\overline{\theta}$-matrix in $\overline{\m f}$. 
\begin{equation}\label{tricky_matrix2} \begin{bmatrix} s_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{P})& s_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{x},\overline{x},\overline{Q})\\ s_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{y},\overline{y},\overline{P})& s_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{x},\overline{y},\overline{Q}) \end{bmatrix}= \begin{bmatrix} \overline{Q}& \overline{Q}\\ \overline{P}& s_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{x},\overline{y},\overline{Q}) \end{bmatrix}= \begin{bmatrix} \overline{Q}& \overline{Q}\\ \overline{P}& t_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{Q}) \end{bmatrix}. \end{equation} Since $\overline{\theta}$ is abelian and this matrix is constant on the first row we must have $t_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{Q})=\overline{P}$, which is the statement to be proved. \bigskip ($\Omega$ is {\bf transitive}) Assume that the ternary term $r_{\overline{P},\overline{Q}}(x,y,z)$ witnesses membership in $\Omega$ for the pair $((\overline{P},\overline{y}), (\overline{Q},\overline{y}))$ and that $s_{\overline{Q},\overline{W}}(x,y,z)$ witnesses membership in $\Omega$ for the pair $((\overline{Q},\overline{y}), (\overline{W},\overline{y}))$. We shall argue that the ternary term $t_{\overline{P},\overline{W}}(x,y,z):=s_{\overline{Q},\overline{W}}(x,y,r_{\overline{P},\overline{Q}}(x,y,z))$ witnesses that $((\overline{P},\overline{y}), (\overline{W},\overline{y}))$ belongs to $\Omega$. \begin{enumerate} \item[($\dagger$)] $t_{\overline{P},\overline{W}}$ is a right difference term for $\mathcal V$. \end{enumerate} \medskip Reason: Choose $(a,b)$ generating an abelian congruence in some $\m b\in {\mathcal V}$. We have $t_{\overline{P},\overline{W}}^{\m b}(a,a,b)= s_{\overline{Q},\overline{W}}^{\m b}(a,a, r_{\overline{P},\overline{Q}}^{\m b}(a,a,b))= s_{\overline{Q},\overline{W}}^{\m b}(a,a,b)=b$. \medskip \begin{enumerate} \item[($\ddagger$)] $t^{\overline{\m f}}(\overline{y},\overline{x},\overline{P})=\overline{W}$. \end{enumerate} \medskip Reason: $t_{\overline{P},\overline{W}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{P})= s_{\overline{Q},\overline{W}}^{\overline{\m f}}(\overline{y},\overline{x}, r_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{P}))= s_{\overline{Q},\overline{W}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{Q})= \overline{W}$. \hfill\rule{1.3mm}{3mm} \bigskip \begin{subclm}\label{if} {\rm (``if'' statement in Claim~\ref{delta_char})} Assume that $(\overline{P},\overline{y})$ and $(\overline{Q},\overline{y})$ lie in $X$ and there is a ternary $\mathcal V$-term $t(x,y,z)$ such that \begin{enumerate} \item[($\dagger$)] $t_{\overline{P},\overline{Q}}$ is a right difference term for $\mathcal V$, and \item[($\ddagger$)] $t_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{y},\overline{x},\overline{P})=\overline{Q}$. \end{enumerate} Then $(\overline{P},\overline{y})$ and $(\overline{Q},\overline{y})$ are $\delta$-related. 
\end{subclm} \emph{Proof of Subclaim~\ref{if}.} Since $t_{\overline{P},\overline{Q}}$ is a right difference term and $\overline{x}, \overline{y}, \overline{P}, \overline{Q}$ belong to a single class of the abelian congruence $\overline{\theta}$ we have both $t_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{x},\overline{x},\overline{P})=\overline{P}$ and $t_{\overline{P},\overline{Q}}^{\overline{\m f}}(\overline{x},\overline{x},\overline{y})=\overline{y}$. Hence, working with pairs in $\m a$, \[ \begin{array}{rl} (\overline{P},\overline{y})&=t_{\overline{P},\overline{Q}}^{\m a}(\underline{(\overline{x},\overline{x})},(\overline{x},\overline{x}),(\overline{P},\overline{y}))\\ \vphantom{|}&\\ &\stackrel{\delta}{\equiv} t_{\overline{P},\overline{Q}}^{\m a}(\underline{(\overline{y},\overline{x})},(\overline{x},\overline{x}),(\overline{P},\overline{y}))\\ \vphantom{|}&\\ &= (\overline{Q},\overline{y}). \end{array} \] In moving from the first line to the second we have underlined the only change, indicating where we replaced $(\overline{x},\overline{x})$ with the $\delta$-related pair $(\overline{y},\overline{x})$. In moving from the second line to the third we made coordinatewise use of ($\dagger$) and ($\ddagger$) for $t_{\overline{P},\overline{Q}}$. \hfill\rule{1.3mm}{3mm} \bigskip This completes the proof of Claim~\ref{delta_char}. \hfill\rule{1.3mm}{3mm} \bigskip \begin{clm} \label{delta_neq_eta_2} $((\overline{x},\overline{y}),(\overline{y},\overline{y}))\notin\delta.$ \end{clm} \emph{Proof of Claim~\ref{delta_neq_eta_2}.} Assume instead that $((\overline{x},\overline{y}),(\overline{y},\overline{y}))\in\delta.$ Then, for $\overline{P}=\overline{x}$ and $\overline{Q}=\overline{y}$, we have $((\overline{P},\overline{y}),(\overline{Q},\overline{y}))\in\delta|_X$. Claim~\ref{delta_char} guarantees the existence of a ternary term $t_{\overline{x},\overline{y}}(x,y,z)$ such that ($\dagger$) $t_{\overline{x},\overline{y}}$ is a right difference term for $\mathcal V$ and ($\ddagger$) \begin{equation}\label{1-sided-diff} t_{\overline{x},\overline{y}}(\overline{y},\overline{x},\overline{x})=\overline{y}. \end{equation} In Claim~\ref{free_abelian} we showed that $\overline{x}$ and $\overline{y}$ are generators of the ``free principal abelian congruence'' in $\mathcal V$. We used this information in Subclaim~\ref{deltacharsubclaim1} to prove that the equality $t_{\overline{R},\overline{S}}(\overline{x},\overline{x},\overline{y})=\overline{y}$ in $\overline{\m f}$ suffices to prove that $t_{\overline{R},\overline{S}}$ is a right difference term for $\mathcal V$. The same argument can be applied here to prove that \eqref{1-sided-diff} suffices to prove that $t_{\overline{x},\overline{y}}$ is a left difference term for $\mathcal V$. But now $t_{\overline{x},\overline{y}}$ is both a right and a left difference term, which contradicts our initial assumption that $\mathcal V$ has no weak difference term (= left and right difference term). \hfill\rule{1.3mm}{3mm} \bigskip At present we have no understanding of how $\delta$ behaves off of the set $X$. To deal with this, we enlarge $\delta$ to the largest congruence below $\eta_2$ that ``behaves like $\delta$ on $X$''. Since $X$ is an $\eta_2$-class, all congruences $\gamma$ satisfying $\gamma\leq \eta_2$ have the property that $X$ is a union of $\gamma$-classes. 
This implies that the set of those $\gamma$ satisfying \begin{enumerate} \item[(i)] $\gamma\leq \eta_2$ and \item[(ii)] $\gamma|_X\subseteq \delta|_X$ \end{enumerate} contains $\delta$ and is closed under complete join. Let $\varepsilon\in \Con(\m a)$ be the join of all congruences satisfying (i) and (ii). Since $\delta$ is a joinand, we get that $\delta\leq \varepsilon\leq\eta_2$ and $\varepsilon|_X = \delta|_X$. Since $((\overline{x},\overline{y}),(\overline{y},\overline{y}))\in\eta_2$ and, by Claim~\ref{delta_neq_eta_2}, $((\overline{x},\overline{y}),(\overline{y},\overline{y}))\notin\delta|_X$, we get that $\eta_2|_X\neq \delta|_X=\varepsilon|_X$, and therefore $\delta\leq \varepsilon < \eta_2$. This is enough to imply that $\{\eta_1, \eta_2, \varepsilon\}$ generates a pentagon in $\Con(\m a)$ that is labeled like the one in Figure~\ref{fig9}. In this figure, it might happen that $\delta=\varepsilon$, but no other pair of differently-labeled congruences in the figure could be equal. \begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,33) \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,15){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-4.5,14){$\eta_1$} \put(22,9){$\delta$} \put(22,14){$\varepsilon$} \put(22,19){$\eta_2$} \put(4,32){$\eta_1+\eta_2$} \put(9,-5){$0$} \end{picture} \bigskip \caption{\sc $\delta\leq \varepsilon < \eta_2,\;\; \delta|_X=\varepsilon|_X$.}\label{fig9} \end{center} \end{figure} \begin{clm} \label{varepsilon_char} {\rm (Characterization of $\varepsilon$)} \[ \varepsilon = \{(a,b)\in\eta_2\;|\; \forall f\in\textrm{\rm Pol}_1(\m a)\;\big(f(a), f(b)\in X\Rightarrow (f(a),f(b))\in\delta\big)\}. \] \end{clm} \emph{Proof of Claim~\ref{varepsilon_char}.} It is not difficult to see that the relation on the right hand side of the equality symbol is (i) an equivalence relation contained in $\eta_2$, (ii) closed under the application of unary polynomials, and (iii) such that its restriction to $X$ is contained in $\delta$. It is also clear that the relation on the right hand side of the equality symbol contains all other relations with these three properties, hence it is the largest congruence $\gamma\leq \eta_2$ satisfying $\gamma|_X\subseteq \delta|_X$. This is enough to conclude that the relation on the right hand side of the equality symbol is $\varepsilon$. \hfill\rule{1.3mm}{3mm} \bigskip \begin{clm} \label{cent1} {\rm (i)} $\eta_1+\eta_2$ is abelian and {\rm (ii)} $\C C(\eta_2,\eta_1+\eta_2;\varepsilon)$ holds. \end{clm} \emph{Proof of Claim~\ref{cent1}.} For (i), we already noted in the paragraph preceding Claim~\ref{delta_char} that $\overline{\theta}_1\times \overline{\theta}_2$ restricts to an abelian congruence of $\m a$. Since $\m a\leq \overline{\m f}(\overline{\theta})$ we have $\eta_1, \eta_2\leq \overline{\theta}_1\times \overline{\theta}_2$. From this we derive that $\eta_1+\eta_2\leq \overline{\theta}_1\times \overline{\theta}_2$ and then that $\eta_1+\eta_2$ is an abelian congruence of $\m a$.
For (ii), we must show that, given an $\eta_2,(\eta_1+\eta_2)$-matrix \begin{equation}\label{varepsilon_matrix} \begin{bmatrix} t(\wec{a},\wec{u}) & t(\wec{a},\wec{v}) \\ t(\wec{b},\wec{u}) & t(\wec{b},\wec{v}) \end{bmatrix}= \begin{bmatrix} p & q \\ r & s \end{bmatrix}, \;\;\; \end{equation} if $p\stackrel{\varepsilon}{\equiv} q$, then $r\stackrel{\varepsilon}{\equiv} s$. We shall assume that $p\stackrel{\varepsilon}{\equiv} q$ and $r\not\stackrel{\varepsilon}{\equiv} s$ and argue to a contradiction. We do have $r\stackrel{\eta_2}{\equiv} p \stackrel{\varepsilon}{\equiv} q\stackrel{\eta_2}{\equiv} s$. Since $\varepsilon\leq\eta_2$, all elements $p, q, r, s$ are $\eta_2$-related. If indeed $r\not\stackrel{\varepsilon}{\equiv} s$, then by Claim~\ref{varepsilon_char} there is a unary polynomial $f(x)$ of $\m a$ such that $f(r), f(s)\in X$ and $f(r)\not\stackrel{\delta}{\equiv} f(s)$. We may apply $f$ to the matrix in \eqref{varepsilon_matrix} to obtain another $\eta_2,(\eta_1+\eta_2)$-matrix \begin{equation}\label{varepsilon_matrix_2} \begin{bmatrix} f(p) & f(q) \\ f(r) & f(s) \end{bmatrix}. \;\;\; \end{equation} The new matrix has the same properties as the old one, except now we also have all entries in $X = (\overline{y},\overline{y})/\eta_2$. Let us write $(\overline{P},\overline{y}), (\overline{Q},\overline{y}), (\overline{R},\overline{y}), (\overline{S},\overline{y})$ for $f(p), f(q), f(r), f(s)$. By Claim~\ref{delta_char}, the fact that $(\overline{P},\overline{y}) \stackrel{\delta|_X}{\equiv}(\overline{Q},\overline{y})$ holds yields a right difference term $t_{\overline{P},\overline{Q}}$ such that $ t_{\overline{P},\overline{Q}}(\overline{y},\overline{x},\overline{P})=\overline{Q}. $ We must have $ t_{\overline{P},\overline{Q}}(\overline{y},\overline{x},\overline{R})\neq\overline{S}, $ or the same right difference term would yield $(\overline{R},\overline{y})\stackrel{\delta|_X}{\equiv}(\overline{S},\overline{y})$, which is false. Now consider the $\eta_2,(\eta_1+\eta_2)$-matrix \begin{equation}\label{varepsilon_matrix_3} t_{\overline{P},\overline{Q}}\left( \begin{bmatrix} (\overline{y},\overline{y}) & (\overline{x},\overline{y}) \\ (\overline{y},\overline{y}) & (\overline{x},\overline{y}) \\ \end{bmatrix}, \begin{bmatrix} (\overline{x},\overline{y}) & (\overline{x},\overline{y}) \\ (\overline{x},\overline{y}) & (\overline{x},\overline{y}) \\ \end{bmatrix}, \begin{bmatrix} (\overline{P},\overline{y}) & (\overline{Q},\overline{y}) \\ (\overline{R},\overline{y}) & (\overline{S},\overline{y}) \end{bmatrix} \right) = \begin{bmatrix} (\overline{Q},\overline{y}) & (\overline{Q},\overline{y}) \\ (t_{\overline{P},\overline{Q}}(\overline{y},\overline{x},\overline{R}),\overline{y}) & (\overline{S},\overline{y}) \end{bmatrix}. \end{equation} The rightmost matrix witnesses that $\C C(\eta_2,\eta_1+\eta_2;0)$ fails, since the top row is constant while the bottom row is not, since $ t_{\overline{P},\overline{Q}}(\overline{y},\overline{x},\overline{R})\neq\overline{S}. $ But the failure of $\C C(\eta_2,\eta_1+\eta_2;0)$ contradicts $\C C(\eta_1+\eta_2,\eta_1+\eta_2;0)$, which we established in part (i) of this claim. This completes the proof of (ii). \hfill\rule{1.3mm}{3mm} \bigskip We have constructed the desired pentagon, so to complete the proof of $\neg (1)\;\Rightarrow\; \neg(5)$ of Theorem~\ref{characterization_of_weak} we just have to explain how to relabel the elements of the pentagon. 
Relabel each congruence in the sequence $(\eta_1+\eta_2,\eta_1,\eta_2,\varepsilon,0)$ of the pentagon in Figure~\ref{fig9} with the corresponding label in $(\alpha,\beta,\theta,\delta,0)$ to obtain the pentagon in Figure~\ref{fig8}. From Claim~\ref{cent1} we have that $\alpha$ is abelian and $\C C(\theta,\alpha;\delta)$, as desired. \end{proof} The next result represents a half-step toward proving that a variety with a Taylor term and commutative commutator has a difference term. It turns out that we only need to assume commutativity of the commutator on comparable pairs of congruences to prove this result. \begin{thm} \label{commutative_implies_weak} If $\mathcal V$ has a Taylor term and, whenever $x \leq y$ in $\Con(\m a)$ for $\m a\in {\mathcal V}$, the equation $[x,y]=[y,x]$ is satisfied, then $\mathcal V$ must have a weak difference term. \end{thm} \begin{proof} We shall assume that $\mathcal V$ has a Taylor term and does not have a weak difference term and argue that $\mathcal V$ contains an algebra with a noncommutative commutator. Our construction produces an algebra in $\mathcal V$ which has a pair of comparable congruences $x\leq y$ such that $[x,y]\neq [y,x]$. Since we have assumed that $\mathcal V$ does not have a weak difference term, Theorem~\ref{characterization_of_weak} guarantees that there is some algebra $\m a\in {\mathcal V}$ whose congruence lattice contains a pentagon \begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,33) \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-4.5,13){$\beta$} \put(22,9){$\delta$} \put(22,19){$\theta$} \put(9,32){$\alpha$} \put(9,-5){$0$} \end{picture} \bigskip \caption{\sc Both $[\alpha,\alpha]=0$ and $\C C(\theta,\alpha;\delta)$ hold.}\label{fig10} \end{center} \end{figure} \noindent where $\alpha$ is abelian and $\C C(\theta,\alpha;\delta)$ holds. We claim that the algebra $\m a/\delta\in{\mathcal V}$ has noncommutative commutator. Specifically, we claim that \begin{equation}\label{noncom} [\,\theta/\delta, \alpha/\delta\,]\;=\;0\;\neq \; [\,\alpha/\delta,\theta/\delta\,]. \end{equation} To see this, first observe that since $\C C(\theta,\alpha;\delta)$ holds we have $[\,\theta/\delta,\alpha/\delta\,]=0$ from Theorem~\ref{basic_centrality}~(10), and this is the equality in \eqref{noncom}. It remains to prove the inequality $[\,\alpha/\delta,\theta/\delta\,]\neq 0$ from \eqref{noncom}. If instead we had equality, then again by Theorem~\ref{basic_centrality}~(10) we would have that $\C C(\alpha,\theta;\delta)$ holds. By monotonicity (Theorem~\ref{basic_centrality}~(1)), we would have that $\C C(\beta,\theta;\delta)$ holds. This contradicts Theorem~\ref{memoir_pentagon}. Thus, for $x=\theta/\delta$ and $y=\alpha/\delta$ we have $x\leq y$ and $[x,y]\neq [y,x]$. \end{proof} Next we begin a sequence of results to strengthen the conclusion of Theorem~\ref{commutative_implies_weak} from ``weak difference term'' to ``difference term''. The argument is completed in Theorem~\ref{main1} below. \begin{thm}\label{stable} Assume that $\mathcal V$ has a weak difference term. If $\m a$ in $\mathcal V$ has congruences satisfying \begin{enumerate} \item $\alpha\geq \theta\geq \delta$, and \item $[\alpha,\theta]=0$, \end{enumerate} then $\C C(\theta,\alpha;\delta)$ holds.
\end{thm} \begin{proof} Let $d(x,y,z)$ be some fixed weak difference term for $\mathcal V$. This is a proof by contradiction, so assume that the hypotheses hold and that the conclusion $\C C(\theta,\alpha;\delta)$ fails. This failure is witnessed by a $\theta,\alpha$-matrix \[ \begin{bmatrix}p&q\\r&s \end{bmatrix}= \begin{bmatrix} t(\wec{a},\wec{c})&t(\wec{a},\wec{d})\\ t(\wec{b},\wec{c})&t(\wec{b},\wec{d}) \end{bmatrix} \] where $(a_i,b_i)\in\theta$, $(c_j,d_j)\in\alpha$, $(p,q)\in\delta$, and $(r,s)\notin\delta$. Since $(r,p)\in\theta$, $(q,s)\in\theta$, and $(p,q)\in \delta\leq \theta$ we have $r\stackrel{\theta}{\equiv} p \stackrel{\delta}{\equiv} q \stackrel{\theta}{\equiv} s$, so $p, q, r$ and $s$ are all $\theta$-related. From the hypotheses (1) $\alpha\geq \theta$, and (2) $[\alpha,\theta]=0$, we derive that $[\theta,\theta]=0$ by monotonicity. This implies that $d(x,y,z)$ acts like a Maltsev operation on the $\theta$-class containing $p,q,r,s$. Let $t'(\wec{x},\wec{y}) = d(t(\wec{x},\wec{y}),t(\wec{x},\wec{c}),t(\wec{b},\wec{c}))$. We have an $\alpha,\theta$-matrix \[ \begin{bmatrix} t'(\wec{a},\wec{c})&t'(\wec{b},\wec{c})\\ t'(\wec{a},\wec{d})&t'(\wec{b},\wec{d}) \end{bmatrix} = \begin{bmatrix} d(p,p,r)&d(r,r,r)\\ d(q,p,r)&d(s,r,r) \end{bmatrix} = \begin{bmatrix} r&r\\ d(q,p,r)&s \end{bmatrix}. \] In moving from the middlemost matrix to the rightmost matrix we use the fact that $d$ acts like a Maltsev operation on the $\theta$-class containing $p,q,r,s$. Since this matrix is an $\alpha,\theta$-matrix, the top row is constant, and $[\alpha,\theta]=0$, we derive that the bottom row is constant, i.e. $d(q,p,r)=s$. This proves that $s = d(q,\underline{p},r)\stackrel{\delta}{\equiv} d(q,\underline{q},r) = r$, which contradicts our earlier assumptions that $(p,q)\in\delta$ and $(r,s)\notin\delta$. \end{proof} \begin{thm} \label{diff_char} \textrm{\rm (\cite[Theorem 3.3(a)$\Leftrightarrow$(b)]{diff})} If $\mathcal V$ is a variety, then the following conditions are equivalent. \begin{enumerate} \item[(a)] $\mathcal V$ has a difference term. \item[(b)] For each $\m a\in {\mathcal V}$ the solvability relation is a congruence on $\Con(\m a)$ which is preserved by homomorphisms. Furthermore, whenever the pentagon $\m N_5$ is a sublattice of $\Con(\m a)$, then the critical interval is neutral. \end{enumerate} \end{thm} (A congruence interval $I[\delta,\theta]$ is called \emph{neutral} if it contains no nontrivial abelian subinterval $I[x,y]$, equivalently if $[y,y]_x=y$ whenever $\delta\leq x<y\leq \theta$.) \begin{cor} \label{diff_char2} A variety has a difference term if and only if it has a weak difference term and whenever $\m N_5$ is a sublattice of $\Con(\m a)$, then the critical interval is neutral. \end{cor} \begin{proof} This result will be derived from Theorem~\ref{diff_char}. Assume first that $\mathcal V$ has a difference term. This term is also a weak difference term for $\mathcal V$. Also, whenever $\m N_5$ is a sublattice of $\Con(\m a)$, then the critical interval must be neutral by Theorem~\ref{diff_char} (a)$\Rightarrow$(b). Conversely, assume that $\mathcal V$ has a weak difference term and whenever $\m N_5$ is a sublattice of $\Con(\m a)$, then the critical interval is neutral. Our goal is to prove that $\mathcal V$ has a difference term. According to Theorem~\ref{diff_char}, what remains to show is that for each $\m a\in {\mathcal V}$ the solvability relation is a congruence on $\Con(\m a)$ which is preserved by homomorphisms. 
This property always holds for varieties with a weak difference term, as we now explain. Following the notation of Chapter 6 of \cite{shape}, write $\alpha\lhd\beta$ to mean that $\alpha\leq\beta$ and that $\C C(\beta,\beta;\alpha)$ holds ($\beta$ is abelian over $\alpha$). The solvability relation is defined so that $\gamma{}\stackrel{s}{\sim}{} \delta$ holds exactly when there is a finite chain \[ \gamma\cap\delta = \varepsilon_0\lhd \cdots \lhd \varepsilon_n = \gamma+\delta. \] A related notion, $\infty$-solvability, is defined in \cite[Definition~6.5]{shape} the same way, but with finite chains replaced by continuous well-ordered chains. That is, $\alpha\mathrel{\lhd\!\!\lhd}\beta$ means $\alpha\leq \beta$ and there is a continuous well-ordered chain $(\varepsilon_{\mu})_{\mu<\kappa+1}$ such that $\alpha=\varepsilon_0$, $\varepsilon_{\mu}\lhd \varepsilon_{\mu+1}$ for $\mu<\kappa$, $\varepsilon_{\lambda} = \bigcup_{\mu<\lambda} \varepsilon_{\mu}$ for limit $\lambda\leq\kappa$, and $\varepsilon_{\kappa}=\beta$. Then $\gamma$ is $\infty$-solvably related to $\delta$ if $\gamma\cap\delta \mathrel{\lhd\!\!\lhd} \gamma+\delta$. The claims that we need to prove here for solvability were proved for $\infty$-solvability in \cite{shape}, and the proofs given there work for our purposes. In particular, Lemma~6.10 of \cite{shape} proves that, for any $\gamma$, \begin{itemize} \item if $\alpha\lhd \beta$, then $\alpha\cap\gamma\lhd \beta\cap \gamma$ and $\alpha+\gamma\lhd \beta+\gamma$. \end{itemize} This is the technical lemma which is used in Theorem~6.11 of \cite{shape} to prove that the $\infty$-solvability relation is compatible with finite meets and arbitrary joins. The same arguments show that the ordinary solvability relation is compatible with finite meets and finite joins. The fact that the $\infty$-solvability relation is preserved by homomorphisms is proved in Theorem~6.19~(1) of \cite{shape}. The same proof works here for the ordinary solvability relation. This completes the proof (sketch) for the converse. \end{proof} The next lemma refines the statement of Theorem~\ref{memoir_pentagon_2} for varieties that do \emph{not} have a difference term. This lemma will be used in the proofs of Theorem~\ref{commutative_plus_weak}, Theorem~\ref{main1.5}, and Theorem~\ref{main4}. \begin{lm} \label{commutative_plus_Taylor_lm} If $\mathcal V$ has a Taylor term and does not have a difference term, then $\mathcal V$ contains an algebra $\m a$ with congruences labeled as in Figure~\ref{fig11} satisfying the following commutator conditions: \begin{enumerate} \item[(1)] $[\alpha,\theta]=0$, \item[(2)] $\C C(\theta,\alpha;\delta)$, and \item[(3)] $[\alpha,x]_{\delta}=x$ for all $x\in I[\delta,\theta]$. \end{enumerate} \begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,33) \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,15){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-4.5,13){$\beta$} \put(22,9){$\delta$} \put(22,14){$x$} \put(22,19){$\theta$} \put(9,32){$\alpha$} \put(9,-5){$0$} \end{picture} \bigskip \caption{\sc $\mathcal V$ has a Taylor term but not a difference term.}\label{fig11} \end{center} \end{figure} \end{lm} \begin{proof} We split the proof into two cases depending on whether $\mathcal V$ has a weak difference term.
For the first case, assume that $\mathcal V$ \emph{does not} have a weak difference term. Theorem~\ref{characterization_of_weak}~(5) guarantees that some $\m a\in \mathcal V$ has a pentagon in its congruence lattice, labeled as in Figure~\ref{fig11}, where (i) $[\alpha,\alpha]=0$ and (ii) $\C C(\theta,\alpha;\delta)$. Since (i) is stronger than Item (1) of this lemma statement (by monotonicity of the commutator), and (ii) is the same as (2), we only have to verify Item (3) of the lemma statement. That follows from Theorem~\ref{memoir_pentagon_2}, since $\mathcal V$ has a Taylor term. For the second case, assume that $\mathcal V$ \emph{does} have a weak difference term. We still assume that $\mathcal V$ \emph{does not} have a difference term. By Corollary~\ref{diff_char2}, the fact that $\mathcal V$ does not have a difference term means that some $\m a\in {\mathcal V}$ has a pentagon in its congruence lattice, labeled as in Figure~\ref{fig11}, where the critical interval $I[\delta,\theta]$ is not neutral. The nonneutrality means that $[x,x]_{\delta}<x$ for some $x\in I[\delta,\theta]$. We have \[ \delta \leq [x,x]_{\delta}<x \leq \theta, \] so we can alter the pentagon by shrinking $I[\delta,\theta]$ to $I[[x,x]_{\delta},x]$. This produces a new pentagon with abelian critical interval. Change to this pentagon and change notation. That is, assume that $\{\beta,\delta,\theta\}$ generates a pentagon in $\Con(\m a)$, labeled as in Figure~\ref{fig4}, with critical interval $I[\delta,\theta]$ where $\C C(\theta,\theta;\delta)$ holds. By Theorem~\ref{memoir_pentagon}, $\C C(\beta,\theta;\delta)$ fails, since $\mathcal V$ has a Taylor term. Now, citing Theorem~\ref{better_pentagons} (Better pentagons), we may further adjust our pentagon so that $\C C(\theta,\theta;0)$ holds. We have $\C C(\beta,\theta;0)$ by Theorem~\ref{basic_centrality}~(8), so for $\alpha = \beta+\theta$ we have $\C C(\alpha,\theta;0)$ by Theorem~\ref{basic_centrality}~(5). This may be written as $[\alpha,\theta]=0$, which is Item (1) of the lemma statement. Since $\alpha\geq \theta\geq \delta$, we may invoke Theorem~\ref{stable} to derive that $\C C(\theta,\alpha;\delta)$ holds. This is Item~(2) of the lemma statement. We derive Item (3) from Theorem~\ref{memoir_pentagon_2}, as we did in the first case of this proof. \end{proof} \begin{thm} \label{commutative_plus_weak} If $\mathcal V$ has a weak difference term and the commutator is commutative on pairs of comparable congruences, then $\mathcal V$ has a difference term. \end{thm} \begin{proof} Assume that $\mathcal V$ has a weak difference term and does not have a difference term. The hypotheses of Lemma~\ref{commutative_plus_Taylor_lm} hold, so some $\m a\in {\mathcal V}$ has a pentagon in its congruence lattice with congruences labeled as in Figure~\ref{fig11}, and which satisfies the commutator conditions (1), (2), and (3) of Lemma~\ref{commutative_plus_Taylor_lm}. In $\m a/\delta$ the congruences $x=\alpha/\delta$ and $y=\theta/\delta$ satisfy $0<y <x$, $[y,x]=0$ (by Item (2) of that lemma) and $[x,y]=y$ (by Item (3) of that lemma). Thus $x$ and $y$ are comparable congruences with $[x,y]=y>0=[y,x]$, so the commutator is not commutative on this pair, contradicting the hypothesis that it is commutative on pairs of comparable congruences. \end{proof} The next theorem is one of the primary results of this article. \begin{thm} \label{main1} The following are equivalent for a variety $\mathcal V$. \begin{enumerate} \item[(1)] $\mathcal V$ has a difference term. \item[(2)] $\mathcal V$ has a Taylor term and has commutative commutator. \item[(3)] $\mathcal V$ has a Taylor term and the commutator operation is commutative on pairs of comparable congruences. 
\end{enumerate} \end{thm} \begin{proof} The class of varieties that have a difference term is definable by a nontrivial idempotent Maltsev condition. (The reason that the class of varieties with a difference term is definable by an idempotent Maltsev condition is explained in the paragraph following the proof of Theorem~4.8 of \cite{kearnes-szendrei}. A specific Maltsev condition defining the class of varieties with a difference term is in \cite[Section~4]{kissterm}.) The weakest nontrivial idempotent Maltsev condition is the one that asserts the existence of a Taylor term. Thus a variety with a difference term has a Taylor term, and the implication (1)$\Rightarrow$(2) then follows from Lemma~2.2 of \cite{diff}, which proves that a variety with a difference term has commutative commutator. Item~(2) is formally stronger than Item~(3), so it remains to prove that Item~(3) implies Item~(1). For this, combine Theorems~\ref{commutative_implies_weak} and \ref{commutative_plus_weak}. \end{proof} \bigskip Now we turn to an examination of distributivity of the commutator. You will recall that we proved some results about the left or right distributivity of the commutator in Theorem~\ref{distributive_thm}. The results obtained there were left/right-asymmetric, but that asymmetry disappears when a Taylor term is present, as we establish with the next two results. \begin{thm} \label{refinement} If $\mathcal V$ has a Taylor term, then for any $\m a\in \mathcal V$ and any congruences $\alpha, \beta\in \Con(\m a)$ the following are equivalent: \begin{enumerate} \item[(a)] $[\beta,\alpha]=0$ and $[\alpha,\alpha\cap\beta]=0$. \item[(b)] $\Delta_{\alpha,\beta}$ is disjoint from the coordinate projection kernels of $\m a(\alpha)$. \item[(c)] $[\alpha,\beta] = 0$ and $[\beta,\alpha\cap\beta]=0$. \item[(d)] $\Delta_{\beta,\alpha}$ is disjoint from the coordinate projection kernels of $\m a(\beta)$. \end{enumerate} In particular, if $\mathcal V$ has a Taylor term, then $\mathcal V$ satisfies the commutator congruence quasi-identity \[ [\beta,\alpha]=0 \;\;\wrel{\&}\;\; [\alpha,\alpha\cap\beta]=0 \;\wrel{\Longrightarrow}\; [\alpha,\beta]=0. \] \end{thm} \begin{proof} This proof is a refinement of the proof of Lemma~4.4 of \cite{kearnes-szendrei}. To prove that (a) implies (b), it suffices to prove that (a) implies that $\eta_2\cap\Delta_{\alpha,\beta}=0$. For then, by interchanging the two coordinates of $\m a(\alpha)$, the same argument will show that $\eta_1\cap\Delta_{\alpha,\beta}=0$ also. Let $\theta = \eta_2\cap\Delta_{\alpha,\beta}\in\Con(\m a(\alpha))$. Assuming (a), that $[\beta,\alpha]=0$ holds, the diagonal of $\m a(\alpha)$ is a union of $\Delta_{\alpha,\beta}$-classes. No two distinct diagonal elements are related by $\eta_2$, and $\theta = \eta_2\cap \Delta_{\alpha,\beta}$, so it follows that each element of the diagonal of $\m a(\alpha)$ is a singleton $\theta$-class. Choose an arbitrary pair $((a,c),(b,c))\in\theta$. Let $T(x_1,\ldots,x_{n})$ be a Taylor term for $\mathcal V$. Consider a first-place Taylor identity $T(x,\wec{w})\approx T(y,\wec{z})$ where $\wec{w}, \wec{z}\in \{x,y\}^{n-1}$. Substitute $b$ for all occurrences of $x$ and $c$ for all occurrences of $y$. This yields $T(b,\bar{u}) = T(c,\bar{v})$ where all $u_i$ and $v_i$ are in $\{b,c\}$. Since $(b,c)\in\m a(\alpha)$ we have $b\stackrel{\alpha}{\equiv} c$, hence $(u_i,v_i)\in\alpha$ for all $i$. This implies that $p((x,y)) = (T(x,\bar{u}),T(y,\bar{v}))$ is a unary polynomial of $\m a(\alpha)$. 
The equation $T(x,\wec{w})\approx T(y,\wec{z})$ implies that $p((b,c))$ lies on the diagonal of $\m a(\alpha)$. The element $p((a,c))$ is $\theta$-related to $p((b,c))$, and each element of the diagonal is a singleton $\theta$-class, therefore $p((a,c)) = p((b,c))$. This has the consequence that $T(a,\bar{u}) = T(b,\bar{u})$ where each $u_i\in \{b,c\}$. Now, since $((a,c),(b,c))\in\theta \leq\Delta_{\alpha,\beta}\leq \beta_1\times\beta_2$ we get that $(a,b)\in\beta$. Since $(a,c)$ and $(b,c)$ are elements of our algebra we have $a\stackrel{\alpha}{\equiv} c\stackrel{\alpha}{\equiv}b$, so $(a,b)\in\alpha$. Together, the last two sentences show that $(a,b)\in\alpha\cap\beta$. Now, applying $[\alpha,\alpha\cap\beta]=0$ to the equality $T(a,\bar{u}) = T(b,\bar{u})$, we deduce that $T(a,\bar{y}) = T(b,\bar{y})$ for any $\bar{y}$ whose entries lie in the $\alpha$-class containing $a, b$ and $c$. The argument we just gave concerning $a$, $b$ and $T$, which showed that $T(a,\bar{y}) = T(b,\bar{y})$ whenever each $y_i$ is in the $\alpha$-class containing $a, b$, and $c$ is an argument which works in each of the $n$ variables of $T$. That is, \[T(y_1,\ldots,y_{i-1},a,y_{i+1},\ldots,y_n) = T(y_1,\ldots,y_{i-1},b,y_{i+1},\ldots,y_n)\] for each $i$ and any choice of values for $y_1,\ldots,y_n$ in the $\alpha$-class of $a, b$, and $c$. Therefore, using the fact that $T$ is idempotent, we have \[ a=T(a,a,\ldots,a)=T(b,a,\ldots,a)=\cdots=T(b,b,\ldots,b)=b. \] This proves that $(a,c)=(b,c)$. Since $((a,c),(b,c))\in\theta$ was arbitrarily chosen, $\theta =\eta_2\cap\Delta_{\alpha,\beta}= 0$, completing the proof that (a) implies (b). Now we argue by contraposition that (b) implies (c). Assume that (c) fails because $[\alpha,\beta]>0$. There is an $\alpha,\beta$-matrix \[ \begin{bmatrix} p&q\\r&s\\ \end{bmatrix} = \begin{bmatrix} t(\wec{e},\wec{u})& t(\wec{e},\wec{v})\\ t(\wec{f},\wec{u})& t(\wec{f},\wec{v})\\ \end{bmatrix} \] where $(e_i,f_i)\in \alpha$, $(u_j,v_j)\in\beta$, $p=q$, and $r\neq s$. The pair \[ ((r,p), (s,q)) =(t((\wec{f},\wec{e}),(\wec{u},\wec{u})), t((\wec{f},\wec{e}),(\wec{v},\wec{v}))) \] belongs to $\eta_2$ (since $p=q$) and $\Delta_{\alpha,\beta}$ (since $((u_i,u_i), (v_i,v_i))\in\Delta_{\alpha,\beta}$), but not to $\eta_1$ (since $r\neq s$). Therefore, $((r,p), (s,q))\in(\eta_2\cap\Delta_{\alpha,\beta})-0$, establishing that $\eta_2\cap\Delta_{\alpha,\beta}\neq 0$. This shows that if (c) fails because $[\alpha, \beta]>0$, then (b) fails because $\eta_2\cap\Delta_{\alpha,\beta}>0$. Now assume that (c) fails because $[\beta,\alpha\cap \beta]>0$. There is a $\beta,\alpha\cap\beta$-matrix \[ \begin{bmatrix} p&q\\r&s\\ \end{bmatrix} = \begin{bmatrix} t(\wec{e},\wec{u})& t(\wec{e},\wec{v})\\ t(\wec{f},\wec{u})& t(\wec{f},\wec{v})\\ \end{bmatrix} \] where $(e_i,f_i)\in \beta$, $(u_j,v_j)\in\alpha\cap \beta$, $p=q$, and $r\neq s$. The pair \[ ((p,q), (r,s)) =(t((\wec{e},\wec{e}),(\wec{u},\wec{v})), t((\wec{f},\wec{f}),(\wec{u},\wec{v}))) \] belongs to $\Delta_{\alpha,\beta}$ (since $((e_i,e_i),(f_i,f_i))\in \Delta_{\alpha,\beta}$). The pair $(r,p)$ belongs to $\beta$ (since $(p,r)=(t(\wec{e},\wec{u}),t(\wec{f},\wec{u}))$ and $(e_i,f_i)\in\beta$). Hence \[ (r,r)\;\stackrel{\Delta_{\alpha,\beta}}{\equiv}\; (p,p)=(p,q)\;\stackrel{\Delta_{\alpha,\beta}}{\equiv}\;(r,s). \] Therefore, $((r,r), (r,s)) \in \eta_1\cap \Delta_{\alpha,\beta}$, but $((r,r), (r,s))\notin \eta_2$ since $r\neq s$. 
This shows that if (c) fails because $[\beta,\alpha\cap \beta]>0$, then (b) fails because $\eta_1\cap \Delta_{\alpha,\beta} > 0$. At this point we have shown that (a) $\Rightarrow$ (b) $\Rightarrow$ (c). Interchanging the roles of $\alpha$ and $\beta$ in these arguments we deduce that (c) $\Rightarrow$ (d) $\Rightarrow$ (a). This shows that all four properties are equivalent. The commutator congruence quasi-identity of the theorem statement follows from the equivalence of (a) and (c). \end{proof} \begin{thm} \label{dist_implies_comm} If $\mathcal V$ has a Taylor term, then the commutator operation in $\mathcal V$ is left distributive if and only if it is right distributive. If either distributivity condition holds, then the commutator is commutative in $\mathcal V$. \end{thm} \begin{proof} By Theorem~\ref{distributive_thm}~(1), if $\mathcal V$ has left distributive commutator, then it has commutative commutator, hence it has right distributive commutator. This part of the theorem did not require the assumption that $\mathcal V$ has a Taylor term. Now assume that $\mathcal V$ has a Taylor term and has right distributive commutator. We shall argue that the commutator operation in $\mathcal V$ is commutative, hence left distributive. This is a proof by contradiction, so assume also that some algebra in $\mathcal V$ has noncommutative commutator. By Lemma~\ref{asymmetry}, we may assume that some $\m a\in{\mathcal V}$ has congruences $\alpha$ and $\beta$ such that $0=[\beta,\alpha]<[\alpha,\beta]$. Theorem~\ref{refinement} implies that $0<[\alpha,\alpha\cap \beta]$. This means that for $x:=\alpha$ and $y:=\alpha\cap \beta$ we have $y\leq x$, $0<[x,y]$, and $[y,x]=[\alpha\cap\beta,\alpha]\leq [\beta,\alpha]=0$, so $[x,y]\not\leq [y,x]$, in contradiction to Theorem~\ref{distributive_thm}~(2). \end{proof} We strengthen the previous theorem with the following result, which is one of the primary results of this article. \begin{thm} \label{main2} If $\mathcal V$ has a Taylor term, then the following are equivalent: \begin{enumerate} \item $\mathcal V$ is congruence modular. \item The commutator is left distributive in $\mathcal V$. \item The commutator is right distributive in $\mathcal V$. \end{enumerate} \end{thm} \begin{proof} For any congruence modular variety, the commutator is both left and right distributive. (See \cite[Proposition~4.3]{freese-mckenzie} for the fact that the modular commutator is right distributive and commutative.) This shows that Item~(1) implies both Item~(2) and Item~(3). By Theorem~\ref{dist_implies_comm}, Items~(2) and (3) are equivalent in the presence of a Taylor term, and they imply that the commutator is commutative in $\mathcal V$. From commutativity, we derive the existence of a difference term from Theorem~\ref{main1}. Thus, it remains to prove that if $\mathcal V$ has (left) distributive commutator and a difference term, then $\mathcal V$ is congruence modular. This fact follows from Theorem~3.2~(i) of \cite{lipparini}, but we give an argument for this based on the results of this paper. We are going to argue by contradiction, so assume that the commutator is left distributive throughout $\mathcal V$, but there is some algebra in $\mathcal V$ that does not have a modular congruence lattice. We can find such an algebra $\m a\in \mathcal V$ with congruences $\beta, \theta, \delta \in\Con(\m a)$ generating a pentagon satisfying $\delta < \theta$, $\beta\cap \theta =0 \leq \delta$, and $\alpha:=\beta+\delta\geq \theta$ (Figure~\ref{fig12}). 
\begin{figure}[ht] \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(20,33) \put(0,15){\circle*{1.2}} \put(10,0){\circle*{1.2}} \put(10,30){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(10,0){\line(-2,3){10}} \put(10,0){\line(1,1){10}} \put(10,30){\line(-2,-3){10}} \put(10,30){\line(1,-1){10}} \put(20,20){\line(0,-1){10}} \put(-4.5,13){$\beta$} \put(22,9){$\delta$} \put(22,19){$\theta$} \put(9,32){$\alpha$} \put(9,-5){$0$} \end{picture} \bigskip \caption{\sc $\textrm{Con}(\m A)$.}\label{fig12} \end{center} \end{figure} By the left distributivity of the commutator, \[ [\alpha,\theta]=[\beta+\delta,\theta]=[\beta,\theta]+[\delta,\theta] =0+[\delta,\theta] \leq \delta. \] In this line, the last equality uses $[\beta,\theta]\leq \beta\cap \theta=0$, while the last inequality uses $[\delta,\theta]\leq \delta\cap\theta=\delta$. Let $\sigma$ denote $[\alpha,\theta]=[\delta,\theta]$, a congruence which satisfies $0\leq \sigma\leq \delta$. Working modulo $\sigma$, and writing $\overline{x}$ for $x/\sigma$ for any congruence $x\geq \sigma$, we have \begin{enumerate} \item $\overline{\alpha}\geq \overline{\theta}\geq \overline{\delta}$, and \item $[\overline{\alpha},\overline{\theta}]=0$. \end{enumerate} Since $\mathcal V$ has a difference term, Theorem~\ref{stable} guarantees that $\C C(\overline{\theta},\overline{\alpha};\overline{\delta})$ holds in $\m a/\sigma$. Since $\overline{\alpha}\geq \overline{\theta}\geq \overline{\delta}$, Theorem~\ref{basic_centrality}~(10) guarantees that in $\m a/\delta$ we have $\C C(\overline{\overline{\theta}},\overline{\overline{\alpha}};0)$, where $\overline{\overline{x}}$ denotes $x/\delta$. Because $\C C(\overline{\overline{\theta}},\overline{\overline{\alpha}};0)$ holds and the commutator is commutative in $\mathcal V$ we derive that \[ 0 = [\overline{\overline{\theta}},\overline{\overline{\alpha}}]= [\overline{\overline{\alpha}}, \overline{\overline{\theta}}]. \] Hence $\C C(\overline{\overline{\alpha}},\overline{\overline{\theta}};0)$ holds in $\m a/\delta$. Theorem~\ref{basic_centrality}~(10) guarantees that $\C C(\alpha,\theta;\delta)$ holds in $\m a$, and so by monotonicity $\C C(\beta,\theta;\delta)$ holds in $\m a$. This instance of centrality in a pentagon contradicts Theorem~\ref{memoir_pentagon}. \end{proof} \bigskip Next we consider the existence of right annihilators and the right semidistributivity of the commutator. \begin{thm} \label{main1.5} If $\mathcal V$ has a Taylor term, then the following are equivalent: \begin{enumerate} \item $\mathcal V$ has a difference term. \item Right annihilators exist throughout $\mathcal V$. \item The commutator is right semidistributive throughout $\mathcal V$. \end{enumerate} \end{thm} \begin{proof} We shall argue that $(1)\Rightarrow(2)\Rightarrow(3)$ and $\neg(1)\Rightarrow \neg(3)$. Assume (1), that $\mathcal V$ has a difference term. According to Theorem~\ref{main1}, $\mathcal V$ has commutative commutator. Since the left annihilator of any congruence on any algebra exists, right annihilators will also exist in any variety with a commutative commutator (and $(0:\beta)_R = (0:\beta)_L$ will hold). This proves that (2) holds. Now assume that (2) holds. Assume that $\m a\in {\mathcal V}$ and that $\alpha,\beta,\beta'\in\Con(\m a)$ satisfy $[\alpha,\beta]=[\alpha,\beta']$. 
The congruence $\delta:=[\alpha,\beta]$ is below each of $\alpha, \beta, \beta'$ according to Theorem~\ref{basic_centrality}~(7), and both $\C C(\alpha,\beta;\delta)$ and $\C C(\alpha,\beta';\delta)$ hold since $[\alpha,\beta]=\delta=[\alpha,\beta']$. Let's factor by $\delta=[\alpha,\beta]$ to obtain $\overline{\m a} = \m a/\delta\in{\mathcal V}$, $\overline{\alpha}=\alpha/\delta$, $\overline{\beta}=\beta/\delta$, $\overline{\beta}'=\beta'/\delta$. Both $\C C(\overline{\alpha},\overline{\beta};0)$ and $\C C(\overline{\alpha},\overline{\beta}';0)$ hold in $\overline{\m a}$ by Theorem~\ref{basic_centrality}~(10). This implies that $\overline{\beta}, \overline{\beta}'\leq (0:\overline{\alpha})_R$ in $\Con(\overline{\m a})$. Hence $\overline{\beta} + \overline{\beta}'\leq (0:\overline{\alpha})_R$. Hence $\C C(\overline{\alpha},\overline{\beta}+\overline{\beta}';0)$ in $\overline{\m a}$ by the definition of $(0:\overline{\alpha})_R$. Hence $\C C(\alpha,\beta+\beta';\delta)$ in $\m a$ by Theorem~\ref{basic_centrality}~(10). Hence \begin{equation}\label{inequalities} [\alpha,\beta]\leq [\alpha,\beta+\beta']\leq \delta=[\alpha,\beta]. \end{equation} Here the first $\leq$ is an instance of the monotonicity of the commutator in its second variable, while the second $\leq$ follows from $\C C(\alpha,\beta+\beta';\delta)$ and the definition of the commutator. Altogether, the line (\ref{inequalities}) yields that $[\alpha,\beta] = [\alpha,\beta+\beta']$, completing the proof of the right semidistributivity of the commutator. The rest of the argument is devoted to establishing the difficult implication $\neg(1)\Rightarrow \neg(3)$. We start with the assumptions that $\mathcal V$ has a Taylor term but does not have a difference term and construct a failure of right semidistributivity in the congruence lattice of some algebra in $\mathcal V$. Since $\mathcal V$ has a Taylor term but does not have a difference term, Lemma~\ref{commutative_plus_Taylor_lm} guarantees the existence of an algebra $\m a\in \mathcal V$ with a pentagon in its congruence lattice, labeled as in Figure~\ref{fig11}, satisfying the commutator conditions (1), (2), and (3) of that lemma. We shall start our construction with the quotient $\m b=\m a/\delta$. Writing $\overline{x}$ for $x/\delta$, Lemma~\ref{commutative_plus_Taylor_lm} guarantees that $\m b$ has congruences $0<\overline{\theta}<\overline{\alpha}$ such that (i) $[\overline{\theta},\overline{\alpha}]=0$ (from Item (2) of that lemma) and (ii) $[\overline{\alpha},\overline{x}]=\overline{x}$ for any congruence $\overline{x}$ satisfying $0 \leq \overline{x}\leq\overline{\theta}$ (from Item (3) of that lemma). In particular, we shall use the part of (ii) that guarantees that $[\overline{\alpha},\overline{\theta}]=\overline{\theta}$. Let $\m d = \m b(\overline{\alpha})$. Write $\Delta$ for $\Delta_{\overline{\alpha},\overline{\theta}}\;(\in\Con(\m d))$. Write $\Gamma$ for $\overline{\theta}_1\times \eta_2\;(\in\Con(\m d))$. It is clear that $\Delta, \Gamma, \leq \overline{\theta}_1\times \overline{\theta}_2$. As usual, the coordinate projection kernels on $\m d = \m b(\overline{\alpha})$ will be denoted $\eta_1$ and $\eta_2$, but to minimize subscripts below we shall use $\eta$ as a duplicate name for $\eta_1$ (that is, $\eta:=\eta_1$). 
\begin{figure}[ht] \begin{center} \setlength{\unitlength}{.95mm} \begin{picture}(40,50) \put(20,0){\circle*{1.2}} \put(20,5){\circle*{1.2}} \put(20,10){\circle*{1.2}} \put(20,15){\circle*{1.2}} \put(20,20){\circle*{1.2}} \put(20,25){\circle*{1.2}} \put(20,40){\circle*{1.2}} \put(20,50){\circle*{1.2}} \put(40,5){\circle*{1.2}} \put(40,15){\circle*{1.2}} \put(40,25){\circle*{1.2}} \put(40,45){\circle*{1.2}} \put(0,10){\circle*{1.2}} \put(0,20){\circle*{1.2}} \put(0,30){\circle*{1.2}} \put(0,45){\circle*{1.2}} \put(0,10){\line(0,1){35}} \put(20,0){\line(0,1){50}} \put(40,5){\line(0,1){40}} \put(20,0){\line(4,1){20}} \put(20,10){\line(4,1){20}} \put(20,20){\line(4,1){20}} \put(20,5){\line(-4,1){20}} \put(20,15){\line(-4,1){20}} \put(20,25){\line(-4,1){20}} \put(20,40){\line(4,1){20}} \put(20,40){\line(-4,1){20}} \put(41.5,4){$\Gamma^0=\Gamma$} \put(-15.5,9){$\Delta=\Delta^1$} \put(41.5,14){$\Gamma^2$} \put(-6.5,19){$\Delta^3$} \put(41.5,24){$\Gamma^4$} \put(-6.5,29){$\Delta^5$} \put(41.5,44){$\Gamma^{\infty}$} \put(-6.5,44){$\Delta^{{}\!\!\infty}$} \put(21,4){$\eta^1$} \put(15,9){$\eta^2$} \put(21,14){$\eta^3$} \put(15,19){$\eta^4$} \put(21,24){$\eta^5$} \put(21,37){$\eta^{\infty}$} \put(-5,36){$\vdots$} \put(42,33){$\vdots$} \put(22,30){$\vdots$} \put(19,52){$\eta$} \put(14,-4.5){$\eta^0=0$} \end{picture} \bigskip \caption{\sc A herringbone-like portion of $\textrm{Con}(\m d)$.}\label{fig13} \end{center} \end{figure} Define sequences of congruences \begin{align} \label{eqns} \Delta^1&= \Delta,\quad\quad \Delta^{2n+1}=\Delta+\eta^{2n}\\ \Gamma^0&= \Gamma,\quad\quad \Gamma^{2n+2}=\Gamma+\eta^{2n+1} \notag\\ \eta^0&= 0,\quad\quad \eta^{2n+1}=\eta\cap \Delta^{2n+1}, \quad\quad \eta^{2n}=\eta\cap \Gamma^{2n}, \notag \\ \Delta^{\infty}&= \bigcup \Delta^{2n+1}, \quad\quad \Gamma^{\infty} = \bigcup \Gamma^{2n}, \quad\quad \eta^{\infty} = \bigcup \eta^n. \notag \end{align} See Figure~\ref{fig13} for a depiction of the ordering of these congruences in $\Con(\m d)$. This figure need not be a sublattice of $\Con(\m d)$ and the congruences in the figure need not be distinct, but the indicated (non-strict) comparabilities hold and the meet or join of any element in the central chain with any element in a side chain is depicted correctly, as we explain in the next claim. \begin{clm} \label{order} \mbox{} \begin{enumerate} \item $\eta^0\leq \eta^1\leq \eta^2\leq \cdots\leq \eta^{\infty}\leq \eta\cap(\Delta+\Gamma)$. \item $\Delta=\Delta^1 \leq \Delta^3\leq \cdots \leq \Delta^{\infty}\leq \Delta+\Gamma$ and $\Gamma=\Gamma^0 \leq \Gamma^2\leq \cdots \leq \Gamma^{\infty}\leq \Delta+\Gamma$. \item $\eta\cap \Delta^{\infty} = \eta^{\infty} = \eta\cap \Gamma^{\infty}$. \item $\Delta^{\infty}+\Gamma^{\infty}=\Delta+\Gamma\leq \overline{\theta}_1\times \overline{\theta}_2$. \item The sets $\{\eta^{2m+1}, \eta^{2m+2}, \eta^{2m+3}, \Delta^{2m+1}, \Delta^{2m+3}\}$ and $\{\eta^{2n}, \eta^{2n+1}, \eta^{2n+2}, \Gamma^{2n}, \Gamma^{2n+2}\}$ are sublattices of $\Con(\m d)$ for every $m, n\geq 0$. \item The meet or join of any element in the central chain with any element in a side chain is depicted correctly in Figure~\ref{fig13}. \end{enumerate} \end{clm} {\it Proof of Claim~\ref{order}.} For Item (1), observe that $\eta^0 = 0=\eta\cap\Gamma^0\leq \eta$. If $\eta^{2n}\leq \eta$ for some $n$, then $\eta^{2n} \leq \eta\cap(\Delta+\eta^{2n})=\eta^{2n+1}$ and $\eta^{2n+1}=\eta\cap(\Delta+\eta^{2n})\leq \eta$. 
Similarly, if $\eta^{2n+1}\leq \eta$ for some $n$, then $\eta^{2n+1} \leq \eta\cap(\Gamma+\eta^{2n+1})=\eta^{2n+2}$ and $\eta^{2n+2}=\eta\cap(\Gamma+\eta^{2n+1})\leq \eta$. Inductively we get that $\eta^0\leq \eta^1\leq \eta^2\leq\cdots$ and that all of these elements lie below $\eta$. To complete the proof of (1) it will suffice to show that $\eta^k\leq \Delta+\Gamma$ for all $k$. This is true of $k=0$ since $\eta^0=0$. If $\eta^{2n}\leq \Delta+\Gamma$, then $\Delta+\eta^{2n}\leq \Delta+\Gamma$, so $\eta^{2n+1}=\eta\cap(\Delta+\eta^{2n})\leq \Delta+\Gamma$. Similarly, if $\eta^{2n+1}\leq \Delta+\Gamma$, then $\Gamma+\eta^{2n+1}\leq \Delta+\Gamma$, so $\eta^{2n+2}=\eta\cap(\Gamma+\eta^{2n+1})\leq \Delta+\Gamma$. By induction, $\eta^k\leq \Delta+\Gamma$ for all $k$ (hence $\eta^{\infty}\leq \Delta+\Gamma$, too). For Item (2), the facts that (i) the $\eta$-sequence is increasing and bounded above by $\Delta+\Gamma$ and (ii) $\Delta^{2n+1}=\Delta+\eta^{2n}$ are jointly sufficient to imply that the $\Delta$-sequence is increasing and bounded above by $\Delta+\Gamma$. A similar argument proves that the $\Gamma$-sequence is increasing and bounded above by $\Delta+\Gamma$. For Item (3), $\eta\cap\Delta^{\infty} = \eta\cap\left(\bigcup \Delta^{2n+1}\right) =\bigcup (\eta\cap \Delta^{2n+1}) =\bigcup \eta^{2n+1} =\eta^{\infty}. $ Also $\eta\cap\Gamma^{\infty} = \eta\cap\left(\bigcup \Gamma^{2n}\right) =\bigcup (\eta\cap \Gamma^{2n}) =\bigcup \eta^{2n} =\eta^{\infty}. $ For the equality in Item~(4), we have $\Delta \leq \Delta^{\infty}\leq \Delta+\Gamma$ and $\Gamma \leq \Gamma^{\infty}\leq \Delta+\Gamma$. Joining these yields $\Delta+\Gamma \leq \Delta^{\infty}+\Gamma^{\infty}\leq \Delta+\Gamma$, so $\Delta^{\infty}+\Gamma^{\infty}= \Delta+\Gamma$. For the inequality in Item~(4), we have $\Delta = \Delta_{\overline{\alpha},\overline{\theta}}\leq \overline{\theta}_1\times \overline{\theta}_2$ and $\Gamma = \overline{\theta}_1\times \eta_2\leq \overline{\theta}_1\times \overline{\theta}_2$, so $\Delta+\Gamma \leq \overline{\theta}_1\times \overline{\theta}_2$. We have already established in Items~(1) and (2) that $\eta^{2m+1}\leq \eta^{2m+2}\leq \eta^{2m+3}$ and $\Delta^{2m+1}\leq \Delta^{2m+3}$. For the first part of Claim~\ref{order}~(5), it remains to show that (i) $\Delta^{2m+1}+\eta^{2m+2}=\Delta^{2m+3}$ and (ii) $\Delta^{2m+1}\cap \eta^{2m+3}=\eta^{2m+1}$. For (i), $\Delta^{2m+1}+\eta^{2m+2}=(\Delta+\eta^{2m})+\eta^{2m+2}= \Delta+(\eta^{2m}+\eta^{2m+2})=\Delta+\eta^{2m+2}=\Delta^{2m+3}$. For (ii), recall that $\eta^{2m+1}=\eta\cap \Delta^{2m+1}\leq \Delta^{2m+1}$. Intersect the inequalities $\eta^{2m+1}\leq \eta^{2m+3}\leq \eta$ throughout with $\Delta^{2m+1}$ to obtain $\eta^{2m+1}=\Delta^{2m+1}\cap \eta^{2m+1}\leq \Delta^{2m+1}\cap \eta^{2m+3}\leq \Delta^{2m+1}\cap \eta=\eta^{2m+1}$. The middle value must equal the outer value, so $\Delta^{2m+1}\cap \eta^{2m+3}=\eta^{2m+1}$. Item~(6) is a consequence of Items~(1), (2), and (5). 
For example, the fact that $\eta^8+\Gamma^0=\Gamma^8$ may be argued: \[ \begin{array}{rll} \eta^8+\Gamma^0 &=(\eta^8+\eta^2)+\Gamma^0& (1)\\ &=\eta^8+(\eta^2+\Gamma^0)&\\ &=\eta^8+\Gamma^2& (5)\\ &=\eta^8+\Gamma^4& (1)+(5)\\ &=\eta^8+\Gamma^6& (1)+(5)\\ &=\Gamma^8& (5) \end{array} \] while the fact that $\eta^8\cap \Gamma^0=0$ may be argued: \[ \begin{array}{rll} \eta^8\cap \Gamma^0 &=\eta^8\cap (\Gamma^6\cap \Gamma^0)& (2)\\ &=(\eta^8\cap \Gamma^6)\cap \Gamma^0&\\ &=\eta^6\cap \Gamma^0& (5)\\ &=\eta^4\cap \Gamma^0& (2)+(5)\\ &=\eta^2\cap \Gamma^0& (2)+(5)\\ &=0& (5) \end{array} \] \hfill\rule{1.3mm}{3mm} \bigskip From Claim~\ref{order}~(3) we have $\eta\cap \Delta^{\infty} = \eta^{\infty}= \eta\cap \Gamma^{\infty}$. It follows from this and Theorem~\ref{basic_centrality}~(8) that $\C C(\eta,\Delta^{\infty};\eta^{\infty})$ and $\C C(\eta,\Gamma^{\infty};\eta^{\infty})$ hold. The rest of the proof is devoted to proving that $\C C(\eta,\Delta^{\infty}+\Gamma^{\infty};\eta^{\infty})$ does \emph{not} hold. If we do this, then, factoring by $\eta^{\infty}$ (which is below all congruences involved), we get that in $\m d/\eta^{\infty}$ we have the following failure of the right semidistributive law: \[ [\eta/\eta^{\infty},\Delta^{\infty}/\eta^{\infty}]=0= [\eta/\eta^{\infty},\Gamma^{\infty}/\eta^{\infty}],\; \textrm{ but } [\eta/\eta^{\infty},\Delta^{\infty}/\eta^{\infty}+\Gamma^{\infty}/\eta^{\infty}]\neq 0. \] Since $\Delta^{\infty}+\Gamma^{\infty}=\Delta+\Gamma$, we can write our remaining goal as: \begin{goal} \label{goal} Show that $\C C(\eta,\Delta+\Gamma;\eta^{\infty})$ fails. \end{goal} Recall from the fourth paragraph of this proof (i.e. of the proof of Theorem~\ref{main1.5}) that $0<\overline{\theta}<\overline{\alpha}$ and $[\overline{\alpha},\overline{\theta}]=\overline{\theta}$. This puts us in a position to mimic the construction in Theorem~\ref{distributive_thm}~(2). As was the case there (with $\alpha, \beta$ there replaced by $\overline{\alpha},\overline{\theta}$ here), there is an $\overline{\alpha},\overline{\theta}$-matrix \[ \begin{bmatrix} t(\wec{a},\wec{u}) & t(\wec{a},\wec{v}) \\ t(\wec{b},\wec{u}) & t(\wec{b},\wec{v}) \end{bmatrix}= \begin{bmatrix} p & q \\ r & s \end{bmatrix}, \;\;\; \wec{a}\wrel{\overline{\alpha}}\wec{b},\;\; \wec{u}\wrel{\overline{\theta}}\wec{v}, \] with $p=q$ but $r\neq s$. The fact that this is an $\overline{\alpha},\overline{\theta}$-matrix implies, in particular, that $(p,r), (q,s)\in \overline{\alpha}$ and $(p,q), (r,s)\in \overline{\theta}$. \begin{clm} \label{special_matrix} \begin{equation} \tag{MM}\label{MMM} \begin{bmatrix} t\left((\wec{b},\wec{a}),(\wec{u}, \wec{u})\right)& t\left((\wec{b},\wec{a}),(\wec{u}, \wec{v})\right)\\ t\left((\wec{b},\wec{b}),(\wec{u}, \wec{u})\right)& t\left((\wec{b},\wec{b}), (\wec{u}, \wec{v})\right) \end{bmatrix}= \begin{bmatrix} (r, p) & (r, q) \\ (r, r) & (r, s) \end{bmatrix} \end{equation} is an $\eta, (\Delta+\Gamma)$-matrix of $\m d = \m b(\overline{\alpha})$ that is constant on the first row and not constant on the second row. \end{clm} {\it Proof of Claim~\ref{special_matrix}.} To show that the matrix given is truly an $\eta, (\Delta+\Gamma)$-matrix, we first argue that the elements of the form $(b_i,a_i)$, $(u_i,u_i)$, $(b_i,b_i)$, and $(u_i,v_i)$ belong to the algebra $\m d=\m b(\overline{\alpha})$. This is so, because $\wec{a}\wrel{\overline{\alpha}}\wec{b},\;\; \wec{u}\wrel{\overline{\theta}}\wec{v}$, $\overline{\theta}\subseteq \overline{\alpha}$, and the universe of $\m d$ is $\overline{\alpha}$. 
Next, we need to argue that $((b_i,a_i),(b_i,b_i))\in\eta=\eta_1$, and $((u_i,u_i), (u_i,v_i))\in\Delta+\Gamma$. The former is clear, since $(b_i,a_i)$ and $(b_i,b_i)$ have the same first coordinate. The latter is clear, since $\Delta = \Delta_{\overline{\alpha},\overline{\theta}}$ and $\Gamma=\overline{\theta}_1\times\eta_2$ and for each subscript $i$ we have $(u_i,v_i)\in\overline{\theta}$, so \[ (u_i,u_i)\stackrel{\Delta}{\equiv} (v_i,v_i) \stackrel{\Gamma}{\equiv} (u_i,v_i). \] We have shown that the matrix in \eqref{MMM} is truly an $\eta_1, (\Delta+\Gamma)$-matrix. The first row is constant and the second is not (since $p=q$ and $r\neq s$ as one sees in the lines before the statement of Claim~\ref{special_matrix}). \hfill\rule{1.3mm}{3mm} \bigskip Establishing Goal~\ref{goal}, that $\C C(\eta,\Delta+\Gamma;\eta^{\infty})$ fails, is equivalent to establishing that there exists some $\eta,(\Delta+\Gamma)$-matrix whose first row lies in $\eta^{\infty}$ and whose second row does not. We shall argue that the matrix in \eqref{MMM} is such a matrix. Already we know from Claim~\ref{special_matrix} that this matrix is an $\eta,(\Delta+\Gamma)$-matrix. We also know that the first row lies in $\eta^{\infty}$, since the first row is constant. The rest of the proof is devoted to showing that the second row does not belong to $\eta^{\infty}$. For this, let $V=r/\overline{\theta}$ be the $\overline{\theta}$-class of $r$ in $\m b$ and let $U=V\times V = (r, r)/(\overline{\theta}_1\times\overline{\theta}_2)$ be the $\overline{\theta}_1\times\overline{\theta}_2$-class of $(r, r)$ in $\m b(\overline{\alpha})$ and let $0_U$ be the equality relation on $U$. Since $(r,s)\in\overline{\theta}$, we have $(r,r), (r,s)\in U$. Since $r\neq s$, we also have $(r,r) \neq (r,s)$. We shall accomplish Goal~\ref{goal} by showing that $\eta^{\infty}|_U = 0_U$, so $((r,r),(r,s))\notin \eta^{\infty}$. \begin{clm} \label{U} $U$ is a union of $\Delta$-classes and a union of $\Gamma$-classes. In fact, $U$ is a union of congruence classes for each of the congruences $\Delta^{2n+1}, \Gamma^{2n}, \eta^k$ and $\Delta^{\infty}, \Gamma^{\infty}, \eta^{\infty}$. \end{clm} {\it Proof of Claim~\ref{U}.} Since $U$ is a single class of the congruence $\overline{\theta}_1\times\overline{\theta}_2$, it is a union of congruence classes of any smaller congruence. According to Claim~\ref{order}, all of the congruences $\Delta=\Delta^1$, $\Gamma=\Gamma^0$, $\Delta^{2n+1}$, $\Gamma^{2n}$, $\eta^k$, $\Delta^{\infty}$, $\Gamma^{\infty}$, $\eta^{\infty}$ are contained in $\Delta+\Gamma$, which is contained in $\overline{\theta}_1\times\overline{\theta}_2$. \hfill\rule{1.3mm}{3mm} \bigskip \begin{clm} \label{restriction_is_zero1} $\eta^1|_U= 0_U$. \end{clm} {\it Proof of Claim~\ref{restriction_is_zero1}.} This claim is proved by localizing the proof of (a)$\Rightarrow$(b) of Theorem~\ref{refinement} to the congruence class $U$. Since $0<\overline{\theta}<\overline{\alpha}$ in $\Con(\m b)$ and $0=[\overline{\theta},\overline{\alpha}]\;(\geq [\overline{\theta},\overline{\theta}])$ we get that $\overline{\theta}$ is an abelian congruence of $\m b$ and hence $\overline{\theta}_1\times \overline{\theta}_2$ is an abelian congruence of $\m b(\overline{\alpha})$. The $\overline{\theta}_1\times \overline{\theta}_2$-class $U$ is therefore a class of an abelian congruence. Recall that $\eta = \eta_1$ is the first projection kernel of $\m d = \m b(\overline{\alpha})$. 
If $((c,a),(c,b))\in\eta^1|_U = \eta|_U\cap \Delta|_U$, then since $(c,a), (c,b)\in U$ it must be that $a\stackrel{\overline{\theta}}{\equiv}c\stackrel{\overline{\theta}}{\equiv}b$. Let $T(x_1,\ldots,x_{n})$ be a Taylor term for $\mathcal V$. Consider a first-place Taylor identity $T(x,\wec{w})\approx T(y,\wec{z})$ where $\wec{w}, \wec{z}\in \{x,y\}^{n-1}$. Substitute $b$ for all occurrences of $x$ and $c$ for all occurrences of $y$. This yields $T(b,\bar{u}) = T(c,\bar{v})$ where all $u_i$ and $v_i$ are in $\{b,c\}$. The facts that $\overline{\theta}\leq \overline{\alpha}$, $[\overline{\theta},\overline{\alpha}]=0$, and $U$ is a $\overline{\theta}_1\times \overline{\theta}_2$-class imply that the diagonal of $U$ is a single $\Delta=\Delta_{\overline{\alpha},\overline{\theta}}$-class of $\m d = \m b(\overline{\alpha})$. As in the proof of Theorem~\ref{refinement}, the fact that $(T(b,\wec{u}),T(c,\wec{v}))$ lies on the diagonal of $U$ implies that $(T(a,\wec{u}),T(c,\wec{v}))=(T(b,\wec{u}),T(c,\wec{v}))$, so $T(a,\wec{u})=T(b,\wec{u})$ where each $u_i\in \{b,c\}$. By the $\overline{\theta},\overline{\theta}$-term condition, $T(a,\bar{y}) = T(b,\bar{y})$ for any $\bar{y}$ whose entries lie in the $\overline{\theta}$-class containing $a, b$ and $c$. As in the proof of Theorem~\ref{refinement}, this conclusion holds in every place of $T$. That is, \[T(y_1,\ldots,y_{i-1},a,y_{i+1},\ldots,y_n) = T(y_1,\ldots,y_{i-1},b,y_{i+1},\ldots,y_n)\] for each $i$ and any choice of values for $y_1,\ldots,y_n$ in the $\overline{\theta}$-class of $a, b$, and $c$. Using the fact that $T$ is idempotent, we have \[ a=T(a,a,\ldots,a)=T(b,a,\ldots,a)=\cdots=T(b,b,\ldots,b)=b. \] This proves that $(c,a)=(c,b)$. Since $((c,a),(c,b))\in\eta^1|_U$ was arbitrarily chosen, $\eta^1|_U = \eta|_U\cap \Delta|_U = 0_U$. \hfill\rule{1.3mm}{3mm} \bigskip \begin{clm} \label{restriction_is_zero} $\eta^{\infty}|_U = 0_U$. \end{clm} {\it Proof of Claim~\ref{restriction_is_zero}.} Since $U$ is a single $\overline{\theta}_1\times\overline{\theta}_2$-class, the restriction map from the congruence interval $I[0,\overline{\theta}_1\times\overline{\theta}_2]$ in $\Con(\m d)$ to the lattice of equivalence relations on the set $U$ is a complete lattice homomorphism. (Under this map a congruence $x$ maps to $x|_U=x\cap (U\times U)$.) If you apply this restriction map to the congruences in Figure~\ref{fig13} (excluding $\eta$, which need not be in the interval $I[0,\overline{\theta}_1\times\overline{\theta}_2]$) you will get a similarly-ordered set of equivalence relations on $U$. If you replace each congruence $x$ in Figure~\ref{fig13} with $x|_U$, then all the claims of Claim~\ref{order} remain true. In particular, the set $\{\eta^{0}|_U, \eta^{1}|_U, \eta^{2}|_U, \Gamma^0|_U, \Gamma^2|_U\}$ is a sublattice that is a quotient of a pentagon. By Claim~\ref{restriction_is_zero1}, $\eta^0|_U = 0_U = \eta^1|_U$. Since we are dealing with a quotient of a pentagon, we derive that $\Gamma^0|_U = \Gamma^2|_U$, since $\Gamma^0|_U = \Gamma^0|_U+\eta^0|_U=\Gamma^0|_U+\eta^1|_U=\Gamma^2|_U$. Then we have $\eta^2|_U = \eta^1|_U = \eta^0|_U$, since $\eta^2|_U = \eta^2|_U\cap \Gamma^2|_U=\eta^2|_U\cap \Gamma^0|_U=\eta^0|_U$. In summary, from $\eta^1|_U = \eta^0|_U$ we derive $\Gamma^2|_U = \Gamma^0|_U$ from which we derive $\eta^2|_U = \eta^1|_U$. A similar argument now allows us to derive from $\eta^2|_U = \eta^1|_U$ that $\Delta^3|_U = \Delta^1|_U$ from which we derive $\eta^3|_U = \eta^2|_U$. 
This may be continued to derive $\eta^0|_U = \eta^1|_U = \eta^2|_U = \cdots$, $\Gamma^0|_U = \Gamma^2|_U = \Gamma^4|_U = \cdots$, and $\Delta^1|_U = \Delta^3|_U = \Delta^5|_U = \cdots$. Taking the complete joins of these constant sequences we get $\eta^{\infty}|_U = \eta^0|_U = 0_U$, $\Gamma^{\infty}|_U = \Gamma^0|_U$, and $\Delta^{\infty}|_U = \Delta^1|_U$. \hfill\rule{1.3mm}{3mm} \bigskip \begin{clm} \label{centrality_fails} $\C C(\eta,\Delta+\Gamma;\eta^{\infty})$ fails. \end{clm} {\it Proof of Claim~\ref{centrality_fails}.} From Claim~\ref{special_matrix} we know that matrix \eqref{MMM} is an $\eta,(\Delta+\Gamma)$-matrix whose first row is constant, hence the elements in the first row are congruent modulo $\eta^{\infty}$. The second row is not constant but lies in $U$. Since $\eta^{\infty}|_U=0_U$, the elements in the second row are not congruent modulo $\eta^{\infty}$. Thus matrix \eqref{MMM} witnesses the failure of $\C C(\eta,\Delta+\Gamma;\eta^{\infty})$. \hfill\rule{1.3mm}{3mm} \bigskip We complete the proof of Theorem~\ref{main1.5} by reiterating ideas mentioned before the statement of Goal~\ref{goal}. By Claim~\ref{order}~(3), $\eta\cap \Delta^{\infty}=\eta\cap \Gamma^{\infty}=\eta^{\infty}$, so all of the congruences $\eta, \Delta^{\infty}, \Gamma^{\infty}$ lie above $\eta^{\infty}$. In $\Con(\m d/\eta^{\infty})$, let $x = \eta/\eta^{\infty}, y = \Delta^{\infty}/\eta^{\infty}, z = \Gamma^{\infty}/\eta^{\infty}$. We have $x\cap y = 0 = x\cap z$, so $\C C(x,y;0)$ and $\C C(x,z;0)$ hold in $\m d/\eta^{\infty}$. But we do not have $\C C(x,y+z;0)$ in $\m d/\eta^{\infty}$, since this translates to $\C C(\eta,\Delta^{\infty}+\Gamma^{\infty};\eta^{\infty})$ in $\m d$, which is the same statement as $\C C(\eta,\Delta+\Gamma;\eta^{\infty})$. We proved in Claim~\ref{centrality_fails} that $\C C(\eta,\Delta+\Gamma;\eta^{\infty})$ fails in $\m d$. \end{proof} Next we show that four of the centralizer properties from the Introduction are Maltsev definable relative to the existence of a Taylor term. The following is another of the primary results of this article. \begin{thm} \label{main3} Let $\mathcal V$ be a variety that has a Taylor term. The following are equivalent properties for $\mathcal V$: \begin{enumerate} \item[(1)] $\mathcal V$ is congruence modular. \item[(2)] The centralizer relation is symmetric in its first two places throughout $\mathcal V$. \newline {\rm ($\C C(x,y;z)\Longleftrightarrow \C C(y,x;z)$.)} \item[(3)] Relative right annihilators exist. \newline {\rm (Given $x, z$, there is a largest $y$ such that $\C C(x,y;z)$. Write $y=(z:x)_R$.)} \item[(4)] The centralizer relation is determined by the commutator throughout $\mathcal V$. \newline ({\rm $\C C(x,y;z)\Longleftrightarrow [x,y]\leq z$.}) \item[(5)] The centralizer relation is stable under lifting in its third place throughout $\mathcal V$. \newline {\rm ($\C C(x,y;z)\;\&\; (z\leq z')\Longrightarrow \C C(x,y;z')$.)} \end{enumerate} \end{thm} \begin{proof} The fact that the bi-implication in Item~(2) holds in every congruence modular variety is proved in \cite[Proposition~4.2]{freese-mckenzie}. (The bi-implication is (1)(iii)$\Leftrightarrow$(1)(iv) of Proposition~4.2 of the reference.) Hence (1)$\Rightarrow$(2). We next explain how to derive (3) from (2): It follows from Theorem~\ref{basic_centrality}~(5) that the relative \emph{left} annihilator, $(\delta:\theta)_L$, exists for any $\delta,\theta\in\Con(\m a)$ on any algebra $\m a$. 
From (2), which asserts the symmetry of the centralizer relation in its first two places, it follows that relative right annihilators must also exist and that $(\delta:\theta)_R=(\delta:\theta)_L$ for every $\delta$ and $\theta$. The fact that the bi-implication in Item~(4) holds in every congruence modular variety is (1)(iii)$\Leftrightarrow$(1)(v) of \cite[Proposition~4.2]{freese-mckenzie}. Hence (1)$\Rightarrow$(4). We also have (4)$\Rightarrow$(5), for the following reason: (4) asserts that the centralizer relation is equivalent to the relation $[x,y]\leq z$, which is a relation that is stable under lifting in $z$. Thus, (4) implies that the centralizer is stable under lifting in its third place, which is Item (5). So far we have (1)$\Rightarrow$(2)$\Rightarrow$(3) and (1)$\Rightarrow$(4)$\Rightarrow$(5). To finish the proof it will suffice to establish (3)$\Rightarrow$(1) and (5)$\Rightarrow$(1). In this paragraph we prove (3)$\Rightarrow$(1) by contradiction. Therefore, assume that (3) holds (relative right annihilators exist) and (1) fails ($\mathcal V$ is not congruence modular). We also assume throughout the proof that the global hypotheses of the theorem hold ($\mathcal V$ is a variety that has a Taylor term). If relative right annihilators (those of the form $(\delta:\theta)_R$) always exist, then ordinary right annihilators (those of the form $(0:\theta)_R$) must also exist. Since $\mathcal V$ has a Taylor term and the property that ordinary right annihilators exist throughout $\mathcal V$, it follows from Theorem~\ref{main1.5} that $\mathcal V$ has a difference term. Since we have assumed that $\mathcal V$ is not congruence modular, there will exist an algebra $\m a\in {\mathcal V}$ with a pentagon in $\Con(\m a)$. We label it as in Figure~\ref{fig4}. For this algebra we have $\C C(\theta,\beta;\delta)$ and $\C C(\theta,\delta;\delta)$ by Theorem~\ref{basic_centrality}~(7). Hence $\beta, \delta\leq (\delta:\theta)_R$, which forces $\alpha = \beta+\delta\leq (\delta:\theta)_R$, or equivalently $\C C(\theta,\alpha;\delta)$. By monotonicity in the middle place we derive $\C C(\theta,\theta;\delta)$, which implies that the critical interval of the pentagon, $I[\delta,\theta]$, is abelian. But according to Theorem~\ref{diff_char}, critical intervals of pentagons are neutral in varieties with a difference term. We have arrived at a contradiction, since nontrivial congruence intervals like $I[\delta,\theta]$ cannot be both abelian ($[\theta,\theta]_{\delta}=\delta$) and neutral ($[x,x]_{y}=x$ for $\delta\leq y\leq x\leq \theta$). This completes the proof that (1), (2), and (3) are equivalent. In this paragraph we prove (5)$\Rightarrow$(1) by contradiction. Therefore, assume that (5) holds (the centralizer is stable under lifting in its third place) and (1) fails ($\mathcal V$ is not congruence modular). We proceed as above: since we have assumed that $\mathcal V$ is not congruence modular, there will exist an algebra $\m a\in {\mathcal V}$ with a pentagon in $\Con(\m a)$, which we label as in Figure~\ref{fig4}. According to Theorem~\ref{basic_centrality}~(8), we have that $\C C(\beta,\theta;\beta\cap\theta)$ holds. Since we have assumed that the centralizer relation is stable under lifting in its third place, $\C C(\beta,\theta;\delta)$ holds. This contradicts Theorem~\ref{memoir_pentagon}. This completes the argument that Items (1), (4), and (5) are equivalent. 
\end{proof} Finally we show that the property of weak stability of the centralizer relation in its third place is Maltsev definable relative to the existence of a Taylor term. This is our last primary result. \begin{thm} \label{main4} Let $\mathcal V$ be a variety that has a Taylor term. The following are equivalent properties for $\mathcal V$: \begin{enumerate} \item[(1)] $\mathcal V$ has a difference term. \item[(2)] The centralizer relation is weakly stable under lifting in its third place throughout $\mathcal V$. {\rm ($\C C(x,y;z)\;\&\; (z\leq z'\leq x\cap y)\Longrightarrow \C C(x,y;z')$.)} \end{enumerate} \end{thm} \begin{proof} For (1)$\Rightarrow$(2), assume that $\mathcal V$ has a difference term and that some $\m a\in {\mathcal V}$ has congruences $x=\alpha, y=\beta, z=\delta, z'=\gamma$ such that $\C C(\alpha,\beta;\delta)\;\&\; (\delta\leq \gamma\leq \alpha\cap \beta)$. By \cite[Lemma~2.3~(i)$\Rightarrow$(ii)]{diff}, $\C C(\alpha,\beta;\delta)$ implies $[\alpha,\beta]_{\delta}=\delta$. It now follows from \cite[Lemma~2.4]{diff} that $[\alpha,\beta]_{\gamma}=[\alpha,\beta]+\gamma =\gamma$. By \cite[Lemma~2.3~(ii)$\Rightarrow$(i)]{diff}, $\C C(\alpha,\beta;\gamma)$ holds. This establishes the weak stability property. Now we argue that if $\mathcal V$ has a Taylor term and does not have a difference term, then the centralizer will not be weakly stable in its third place in some instances. By Lemma~\ref{commutative_plus_Taylor_lm}, $\mathcal V$ has an algebra $\m a$ with a pentagon in its congruence lattice, which satisfies the commutator conditions (1), (2), and (3) of that lemma. Take $x=\alpha, y=\theta, z=0, z'=\delta$. From the lemma, $[\alpha,\theta]=0$, so $\C C(x,y;z)$ holds. By our choices, $z\leq z'\leq x\cap y$. Since $[\alpha,\theta]_{\delta}=\theta$ we have $\neg \C C(x,y;z')$. This shows that weak stability fails. \end{proof} \section{Outro} \subsection{The intended applications} The results of this paper help to decide some cases of the following question: Given a finite algebra $\m a$ of finite type, does the variety $\mathcal V = {\sf H}{\sf S}{\sf P}(\m a)$ have commutative commutator? If $\mathcal V$ has a Taylor term, then the answer is affirmative if and only if $\mathcal V$ also has a difference term. There are known algorithms to decide whether $\mathcal V$ has a Taylor term and whether $\mathcal V$ has a difference term whenever $\mathcal V$ is generated by a finite algebra of finite type. (These algorithms are implemented in UACalc, \cite{uacalc}). This gives a path to answer the question algorithmically for finite algebras of finite type that have a Taylor term. Even in the case where $\mathcal V = {\sf H}{\sf S}{\sf P}(\m a)$ does not have a Taylor term, the results of this paper might apply. Suppose that $\m a$ is a finite algebra of finite type and $\mathcal V = {\sf H}{\sf S}{\sf P}(\m a)$ does not have a Taylor term. It is possible that some $\m b\in {\mathcal V}$ generates a subvariety ${\mathcal U} ={\sf H}{\sf S}{\sf P}(\m b)$ that has a Taylor term but does not have a difference term. In this case, the subvariety will not have commutative commutator, so $\mathcal V$ cannot have commutative commutator. If there is such a $\m b\in {\mathcal V}$, then there must exist such a $\m b$ that is free on three generators in the subvariety it generates, hence will be a quotient of the finite, relatively free algebra $\m f_{\mathcal V}(3)$. Determining whether such a $\m b$ exists is a matter of a finite amount of computation. 
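The finite computation just described can be organized as in the following schematic Python sketch. It is only an outline of the search over quotients of $\m f_{\mathcal V}(3)$: the three helper procedures are assumed to be supplied by the caller (for instance via the algorithms implemented in UACalc), and their names and signatures are illustrative placeholders, not the interface of any actual library.

\begin{verbatim}
from typing import Any, Callable, Iterable, Optional

def find_noncommutativity_witness(
    quotients_of_F3: Callable[[], Iterable[Any]],
    generates_variety_with_taylor_term: Callable[[Any], bool],
    generates_variety_with_difference_term: Callable[[Any], bool],
) -> Optional[Any]:
    """Search the finitely many quotients b of the free algebra F_V(3)
    for one such that HSP(b) has a Taylor term but no difference term.
    By the equivalence proved above (Taylor term + commutative commutator
    iff difference term), such a b generates a subvariety with
    noncommutative commutator, so the commutator is not commutative in V
    either.  The three callables are treated as black boxes."""
    for b in quotients_of_F3():
        if (generates_variety_with_taylor_term(b)
                and not generates_variety_with_difference_term(b)):
            return b   # witness found
    return None        # no witness among the quotients of F_V(3)
\end{verbatim}

The point of the reduction in the text is only that the search space is finite; all of the real work is hidden in the two term-existence tests.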
A concrete example where this happens is when $\m a$ is the semigroup $\mathbb Z_2\times \mathbb S_2\times \mathbb L_2$ and $\m b = \mathbb Z_2\times \mathbb S_2$. (Here $\mathbb Z_2$ is the $2$-element group considered as a semigroup, $\mathbb S_2$ is the $2$-element semilattice, and $\mathbb L_2$ is the $2$-element left zero semigroup.) In this example, ${\mathcal V} ={\sf H}{\sf S}{\sf P}(\m a)$ does not have a Taylor term, but one may still apply the results of this paper to derive that the commutator is not commutative in $\mathcal V$ since the subvariety ${\mathcal U} ={\sf H}{\sf S}{\sf P}(\m b)$ has a Taylor term and does not have a difference term. (Contrast with this example: the algebra $\m c = \mathbb S_2\times \mathbb L_2$ generates a variety with noncommutative commutator, but the results of this paper do not help to establish this since every subvariety of ${\sf H}{\sf S}{\sf P}(\m c)$ that has a Taylor term also has a difference term.) \bigskip Another intended application of the results of this paper is to help understand whether some theorems are expressed with optimal hypotheses. For example, in \cite{park}, \'{A}gnes Szendrei, Ross Willard and I proved Park's Conjecture for varieties with a difference term. Park's Conjecture is the conjecture that a finitely generated variety of finite type is finitely based whenever it has a finite residual bound. One question received after the publication of that paper was: How hard would it be to generalize the proof in \cite{park}, which assumes the existence of a difference term, to establish Park's Conjecture for varieties with a \emph{weak} difference term?\footnote{Note: A finitely generated variety has a weak difference term if and only if it has a Taylor term.} Our proof in \cite{park} depends on the commutativity of the commutator in some places. Thus one may ask: if one were to refine the proof in \cite{park} so that it proves Park's Conjecture for varieties that have a \emph{weak} difference term and commutative commutator, would this refinement constitute a proper generalization of the result in \cite{park}? The answer is negative, according to Theorem~\ref{commutative_plus_weak} of this paper. That is, the class of varieties which have a weak difference term and commutative commutator is exactly the same as the class of varieties with a difference term. Any proper generalization of the result in \cite{park} must apply to some varieties $\mathcal V$ in which either (i) $\mathcal V$ has no Taylor term or (ii) the commutator operation in $\mathcal V$ is not commutative. \bigskip \subsection{Some problems from \cite{lipparini}} The results of this article partially solve some problems posed by Paolo Lipparini in \cite{lipparini}. The problems I refer to are: \bigskip \noindent {\bf Problems 1.7 of \cite{lipparini}.} \begin{enumerate} \item[(a)] Find conditions implying (if possible, equivalent to) left join distributivity, right join distributivity or commutativity of the commutator. \bigskip \item[(b)] In particular, is there a (weak) Mal'cev condition strictly weaker than modularity and implying left join distributivity of the commutator? \bigskip \item[(c)] Does right join distributivity always imply left join distributivity? \bigskip \item[(e)] Answer the above questions at least in the particular cases of varieties with a (weak) difference term, $M$-permutable varieties, locally finite varieties (omitting type ${\bf 1}$ or some other type). 
\end{enumerate} \bigskip \noindent {\bf Partial solutions.} \noindent In this article we work at the level of varieties. At this level, we can say the following. Regarding Problem 1.7(a), we have characterized those varieties with a Taylor term that have left distributive, right distributive, or commutative commutator. Regarding Problem 1.7(b), Theorem~\ref{main2} implies that there is \underline{no} idempotent Maltsev condition strictly weaker than modularity that implies left distributivity of the commutator. Regarding Problem 1.7(c), Theorem~\ref{distributive_thm}~(1) shows that left distributivity of the commutator throughout a variety implies commutativity of the commutator throughout the variety. Hence left distributivity implies right distributivity in any variety. We do not know if, conversely, right distributivity implies left distributivity in every variety. Nevertheless we have shown that left and right distributivity are equivalent for varieties with a Taylor term in Theorem~\ref{dist_implies_comm}. Regarding Problem 1.7(e), if some variety $\mathcal V$ has a difference term, a weak difference term, is $M$-permutable, or is a locally finite omitting type ${\bf 1}$, then $\mathcal V$ has a Taylor term. In these settings we have classified the varieties that have left distributive, right distributive, or commutative commutator. \bigskip \subsection{Two problems from \cite{64problems}} The results of this article solve two problems from the list of 64 open problems posed at the Workshop on Tame Congruence Theory which was held at the Paul Erd\H{o}s Summer Research Center of Mathematics in 2001. \bigskip \noindent {\bf Problem 10.6 of \cite{64problems}.} Let $\mathcal V$ be a locally finite variety that omits type {\bf 1}. Is it true that if $[\alpha,\beta] = [\beta,\alpha]$ for all congruences $\alpha, \beta$ of algebras in $\mathcal V$, then $\mathcal V$ has a difference term? \bigskip A locally finite variety omits type {\bf 1} if and only if it has a Taylor term, according to Lemma~9.4 and Theorem~9.6 of \cite{hobby-mckenzie}. Thus, Problem~10.6 of \cite{64problems} asks about the truth of Theorem~\ref{main1} of this paper in the restricted setting of locally finite varieties. Theorem~\ref{main1} provides an affirmative answer. \bigskip \noindent {\bf Problem 10.7 of \cite{64problems}.} Are there natural conditions on a variety $\mathcal V$ under which the implications $$ [\alpha,\beta]=[\alpha,\gamma] \Longrightarrow [\alpha,\beta]=[\alpha,\beta+\gamma] \;\;\textrm{ and }\;\; [\beta,\alpha]=[\gamma,\alpha] \Longrightarrow [\beta,\alpha]=[\beta+\gamma,\alpha] $$ hold throughout the variety $\mathcal V$? (Consider, e.g., the condition `$\mathcal V$ has a difference term'.) \bigskip Problem~10.7 of \cite{64problems} asks for natural conditions guaranteeing the right or left semidistributivity of the commutator for varieties, and suggests that having a difference term might be such a condition. Every variety has left semidistributive commutator by Theorem~\ref{basic_centrality}~(5) and the definition of the commutator, so the nontrivial part of this problem is the question about right semidistributivity. Theorem~\ref{main1.5} proves that, for varieties with a Taylor term, the condition proposed in Problem 10.7 of \cite{64problems} (that $\mathcal V$ has a difference term) is a necessary and sufficient condition guaranteeing that $\mathcal V$ has right (and left) semidistributive commutator. 
\bigskip \subsection{A problem from \cite{novi_sad}} The results of this article solve a problem I posed at the 90th Arbeitstagung Allgemeine Algebra held at the University of Novi Sad in 2015. There I gave a talk entitled \emph{Problems on the frontier of commutator theory}. These twenty-five problems were not published formally, but the slides for the talk are posted at \cite{novi_sad}. The thirteenth problem asks \bigskip \noindent {\bf Problem.} Does $\exists$weak difference term + symmetric commutator imply $\exists$difference term? \bigskip \noindent This problem is answered affirmatively in Theorem~\ref{commutative_plus_weak} of this paper. The affirmative answer is strengthened in two ways to \begin{center} \underline{$\exists$Taylor term} + symmetric commutator \underline{$\Longleftrightarrow$} $\exists$difference term. \end{center} in Theorem~\ref{main1}. The conclusion is: If you want to prove some case of some conjecture about varieties, and you need (i) a Taylor term and (ii) a commutative commutator throughout your variety for the proof, then the assumption that the variety has a difference term guarantees both (i) and (ii) and it is the optimal hypothesis that guarantees both. \bibliographystyle{plain}
\section{Introduction} While more than two hundred planets have been detected outside our Solar System, the minority of them that transit a bright parent star have had the highest impact on our overall understanding of these objects (see review by Charbonneau et al. 2007). Indeed, the structure and atmospheric composition of the non-transiting extrasolar planets detected by radial velocity (RV) measurements remain unknown because the only information available from the RV time series are the orbital parameters (except the inclination of the orbital plane and the longitude of the ascending node), and only a lower limit for the planetary mass. Transiting planets are the only ones for which accurate estimates of mass, radius, and, by inference, composition can be obtained. The brightest of these systems can be monitored during primary and secondary transits with high-precision instruments, allowing us to characterize their composition and atmosphere, and learn what these other worlds look like. Since 2005, many exciting results on transiting planets have been obtained with the $Spitzer$ $Space$ $Telescope$: the detection of the thermal emission from four giant planets (Charbonneau et al. 2005; Deming et al. 2005, 2006; Harrington et al. 2007), the precise measurement of an infrared planetary radius (Richardson et al. 2006), the measurement of the infrared spectrum from two planets (Richardson et al. 2007; Grillmair et al. 2007), and the measurement of the phase-dependent brightness variations from HD 189733b (Knutson et al. 2007). These results have demonstrated the very high potential of the $Spitzer$ $Space$ $Telescope$ to characterize transiting planets and motivated many theoretical works (e.g. Barman et al. 2005; Burrows et al. 2005; Williams et al. 2006; Barman 2007). All of these studies have targeted gaseous giant planets. Many Neptune-mass and even Earth-class planets have been detected by the RV technique (e.g. Rivera et al. 2006; Bonfils et al. 2007; Udry et al. 2007), but until very recently none of these small mass planets had been caught in transit. Remarkably, the transits of a known Neptune-sized planet have just been detected (Gillon et al. 2007, hereafter G07). The host star, GJ 436, is a very close-by M-dwarf ($d$ = 10.2 pc). The planet itself was first detected by RV measurements (Butler et al. 2004, hereafter B04; Maness et al. 2007, hereafter M07). It is by far the closest, smallest and least massive transiting planet detected so far. Its mass is slightly larger than Neptune's at $M$ = 22.6 $\pm$ 1.9 $M_\oplus $, while its orbital period is 2.64385 $\pm$ 0.00009 days (M07). The shape and depth of the ground-based transit lightcurves show that the planet is crossing the host star disc near its limb (G07). Assuming a stellar radius $R$ = 0.44 $\pm$ 0.04 $R_\odot$, G07 measured a planet size comparable to that of Uranus and Neptune. Considering this measurement and current planet models, GJ 436b should be composed of refractory species, with a large fraction of water ice, probably surrounded by a thin H/He envelope. Nevertheless, the original uncertainty on the radius presented in G07 is too large ($\sim 10 \%$) to confidently claim the presence of the H/He envelope. Indeed, the lower end of the G07 1-sigma radius range is close to the theoretical mass-radius line for a pure water ice planet from Fortney et al. (2007). Consequently, GJ 436b could be an ice giant planet very similar to Neptune or an "ocean planet" (L\'eger et al. 
2004), and a more precise radius measurement is needed to discriminate between these two compositions. The planetary radius uncertainty in G07 was mainly due to the error on the radius of the primary. Principally because of the presence of correlated noise (Pont et al. 2006), ground-based photometry of a shallow transit like that of GJ 436b is generally not accurate enough to break the degeneracy between the impact parameter and the primary radius, and thus fails to allow an independent determination of the primary radius. A prior constraint on the stellar radius based on evolution models and/or observational constraints is needed, limiting the final precision on the planetary radius. This is no longer the case with high signal-to-noise space-based photometry. In this case, both radii can be determined very precisely from the lightcurve analysis, assuming only the primary mass or a stellar mass-radius relation. In the case of GJ 436b, the $Spitzer$ telescope is particularly well suited for this task. Indeed, the host star is rather bright in the near-infrared (K $\sim$ 6), and the weak limb-darkening at these wavelengths is a huge advantage, as demonstrated by the recent HD 209458b and HD 189733b radius measurements (Richardson et al. 2006; Knutson et al. 2007). We report here the $Spitzer$ observations of a primary transit of GJ 436b within the 8 $\mu$m band of the InfraRed Array Camera (IRAC; Fazio et al. 2004). The analysis of the observations allows us to obtain a precise measurement of the primary and planetary radii, bringing an important constraint on the composition of the planet, which turns out to be very similar to that of Neptune. Section 2 describes the observations, the reduction procedure and the resulting photometry. Our analysis of the obtained time series is described in Section 3. Our conclusions are presented in Section 4. \section{Observations and data reduction} GJ 436 was observed on June 29th UT for 3.4h covering the transit, resulting in 28480 frames. In addition, a blank field located a few arcminutes away from the target was imaged just before and after the observations in order to characterize pixel responses and to improve overall field flatness. Data acquisition was made using IRAC in its 8$\mu$m band, subarray mode, with an effective exposure time of 0.32s to avoid saturation problems due to the high brightness of GJ 436 at these wavelengths. Observing with IRAC in a single channel avoids repointing, which would reduce the observational time efficiency. Subarray mode consists of windowing the full array into a 32x32 pixel window, allowing a better time sampling. Each resulting data file consists of a set of 64 of these windowed images, delivered to the community as BCD (Basic Calibrated Data) files after having been processed by a dedicated pipeline at the Spitzer Science Center. We combine each set of 64 images using a 3-sigma clipping to get rid of transient events in the pixel grid, yielding 445 stacked images with a temporal sampling of $\sim$ 28s. Using the time stamps in the image headers, we compute the JD corresponding to the center of each integration, and add the relevant heliocentric correction to obtain the date in Heliocentric Julian Date (HJD). We convert fluxes from the Spitzer units of specific intensity (MJy/sr) to photon counts, and perform aperture photometry on the stacked images. The aperture radius giving the best compromise between the noise from the sky background and from the centroid variations is found to be 4.0 pixels.
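For concreteness, the aperture measurement just described can be prototyped in a few lines (Python with numpy; the \texttt{aperture\_flux} helper and the synthetic image below are illustrative assumptions, not the reduction code actually used):
\begin{verbatim}
import numpy as np

def aperture_flux(image, xc, yc, radius=4.0):
    """Sum of the pixel values inside a circular aperture centred on (xc, yc).
    A minimal illustration: no sub-pixel weighting of edge pixels."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    mask = (x - xc)**2 + (y - yc)**2 <= radius**2
    return float(image[mask].sum())

# Synthetic 32x32 stacked frame: flat background plus a Gaussian "star".
ny = nx = 32
y, x = np.mgrid[0:ny, 0:nx]
xc, yc, fwhm = 15.6, 16.2, 2.0
sigma = fwhm / 2.3548
star = 1.0e4 / (2 * np.pi * sigma**2) * np.exp(-((x - xc)**2 + (y - yc)**2) / (2 * sigma**2))
image = 5.0 + star
print(round(aperture_flux(image, xc, yc, radius=4.0), 1))
\end{verbatim}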
An estimate for the sky background is derived from an annulus of 12--24 pixels and subtracted from the measured flux. As already noticed in similar works (e.g. Charbonneau et al. 2005), time series obtained with IRAC in the 8$\mu$m band show a gradual detector-induced rise of the measured signal, probably due to charge trapping. Despite this instrumental effect, the eclipse can be clearly seen in the raw data (see Fig. 1). To correct for the instrumental rise effect, we zero-weight the eclipse and the first 80 points of the time series (to avoid the steepest part of the rise in our modeling of the effect) and we divide the lightcurve by the best-fitting asymptotic function with three free parameters. We then evaluate the average flux outside the eclipse and use it to normalize the time series\footnote{Our final photometric time series is available only in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/}. The $rms$ of the resulting lightcurve evaluated outside the eclipse is 0.7 mmag. \begin{figure} \label{fig:a} \centering \includegraphics[width=9.0cm]{8283fig1.ps} \caption{Raw IRAC 8$\mu$m photometric time series for the primary transit of GJ 436b (arbitrary unit). The best-fitting asymptotic function for the instrumental rise effect is superimposed (solid line).} \end{figure} \section{Time series analysis} Despite its closeness to its host star, GJ 436b exhibits a clear orbital eccentricity $e \sim 0.16$ (B04; M07). To properly analyze the primary transit, the eccentricity and the argument of the periapse have to be taken into account. In the case of a circular orbit, the formula connecting the projected separation of the centres (in units of the primary radius) $z$ and the time of observation is: \begin{equation}\label{eq:a} z = \frac{a}{R_\ast} [(\sin n t)^2 + (\cos i \cos n t)^2] ^{1/2}\textrm{,} \end{equation} where $a$ is the semi-major axis, $R_\ast$ is the stellar radius, $n$ is the mean motion $2\pi / P$, and $i$ is the orbital inclination. In the case of a non-zero orbital eccentricity $e$, equation (1) becomes: \begin{equation}\label{eq:b} z = \frac{r}{R_\ast} [(\sin n_2 t)^2 + (\cos i \cos n_2 t)^2] ^{1/2}\textrm{,} \end{equation} where $r$ is the orbital distance and $n_2$ the angular frequency at the orbital location of the transit. $r$ and $n_2$ are given by the formulae: \begin{equation}\label{eq:c} r = \frac{a (1 - e^2)}{(1 + e \cos f)}\textrm{,} \end{equation} \begin{equation}\label{eq:d} n_2 = n \frac{(1 + e \cos f)^2}{(1 - e^2)^{3/2}}\textrm{,} \end{equation} where $f$ is the true anomaly, i.e. the angle between the transit location and the periapse. The formula connecting the latter to the argument of periapse $\omega$ is simply: \begin{equation}\label{eq:e} f = \frac{\pi}{2} - \omega\textrm{.} \end{equation} Using the values for $e$ and $\omega$ from M07 and equation (2), transit profiles are fitted to the primary transit data using the Mandel \& Agol (2002) algorithm, the orbital elements in M07, and quadratic limb darkening coefficients\footnote{$a = 0.045 \pm 0.01$ and $b = 0.095 \pm 0.015$} derived from a stellar model atmosphere with $T_{eff}$ = 3500 K, $\log g$ = 4.5 and [Fe/H] = 0.0. As in G07, the mass of the star is fixed to $M = 0.44 \pm 0.04$ $M_\odot$. The free parameters of the fit are the radius of the planet $R_p$ and of the primary $R_\ast$, the orbital inclination $i$ and the central epoch of the transit $T_p$.
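To make the transit geometry explicit, here is a minimal numerical sketch of equations (2)--(5) (Python with numpy; the values of $a/R_\ast$, $i$ and $\omega$ below are illustrative placeholders rather than the parameters fitted in this work):
\begin{verbatim}
import numpy as np

P      = 2.64385            # orbital period [days] (M07)
e      = 0.16               # orbital eccentricity (B04; M07)
a_Rs   = 13.5               # assumed scaled semi-major axis a / R_*
inc    = np.radians(86.0)   # assumed orbital inclination
omega  = np.radians(340.0)  # assumed argument of periapse

n    = 2.0 * np.pi / P                                   # mean motion
f    = np.pi / 2.0 - omega                               # true anomaly at transit, eq. (5)
r_Rs = a_Rs * (1.0 - e**2) / (1.0 + e * np.cos(f))       # eq. (3), in units of R_*
n2   = n * (1.0 + e * np.cos(f))**2 / (1.0 - e**2)**1.5  # eq. (4)

def projected_separation(t):
    """z(t) in stellar radii, eq. (2); t is the time from mid-transit in days."""
    return r_Rs * np.sqrt(np.sin(n2 * t)**2 + (np.cos(inc) * np.cos(n2 * t))**2)

t = np.linspace(-0.05, 0.05, 5)       # about +/- 1.2 h around mid-transit
print(np.round(projected_separation(t), 3))
\end{verbatim}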
Starting from initial guess values for these parameters, the $\chi^2$ of the fit is computed over a large grid of values. The grid cell corresponding to the lowest $\chi^2$ is then considered as the starting point of the next step. The same process is then repeated twice, using a finer grid. At this stage, the downhill-simplex AMOEBA algorithm (Press et al. 1992) is used to reach the $\chi^2$ minimum. Figure 2 shows the best-fitting theoretical curve superposed on the data. To obtain realistic error bars on the free parameters, we have to take into account the uncertainty coming from the stellar and orbital parameters, and also the one coming from the initial guesses for the free parameters. The possible presence of correlated noise in the lightcurve also has to be considered (Pont et al. 2006). In particular, it is clear that the calibration of the detector-induced rise of the signal can produce systematic errors. To account for these effects, we use the following Monte-Carlo procedure. A large number (10,000) of lightcurve fits is performed. For each fit, we use orbital and stellar parameters randomly drawn from a normal distribution with a width equal to the published uncertainties, and initial guess values for the free parameters randomly drawn from a wide rectangular distribution. Furthermore, the residuals of the initial fit are shifted sequentially by a random number of time steps, and then added to the eclipse model. The purpose of this procedure, called the ``prayer bead'' (Moutou et al. 2004), is to take into account the actual correlated noise level of the lightcurve. We estimate the error bars from the distribution of the 10,000 derived values for the free parameters. The obtained values for $R_p$, $R_\ast$, $i$ and $T_p$ and their error bars are given in Table 1. To test the influence of the eccentricity on the obtained values for the radii and the orbital inclination, we performed a fit assuming a circular orbit, obtaining $R_p = 4.15$ $R_\oplus$, $R_\ast = 0.459$ $R_\odot$, $i = 85.97^o$ (impact parameter = 0.842), and $T_p = 2454280.78188 $ HJD. The influence of the eccentricity thus turns out to be weak. This is due to the fact that $f$ at transit is close to 90$^o$, i.e., the transit location is nearly perpendicular to the periapse. \begin{figure} \label{fig:c} \centering \includegraphics[width=9.0cm]{8283fig2.ps} \caption{$Top$: Final time series for the primary transit. The best-fit theoretical curve is superimposed. $Bottom$: The residuals of the fit ($rms$ = 0.7 mmag).} \end{figure} While fixing the primary mass in the fit is justified by the fact that spectroscopic and photometric observations bring a strong constraint on this parameter (see discussion in M07), this choice does not exploit the fact that the shape of the transit is not governed by the primary radius $R_\ast$ but by the density of the primary $\rho_\ast$ (Seager \& Mall\'en-Ornelas, 2003). Instead of assuming a primary mass, one can assume a stellar mass--radius relation: one then determines $\rho_\ast$, $R_p/R_\ast$, the impact parameter $b$ and the transit timing $T_p$ from the lightcurve, uses the assumed mass--radius relation to obtain $R_\ast$, and finally gets $R_p$ from the measured radius ratio. To test the influence of the adopted prior constraint on the host star (mass $vs$ mass--radius relation), we perform a new fit to the data considering $\rho_\ast$, $R_p/R_\ast$, the impact parameter $b$ and $T_p$ as free parameters, and use the relation $M_\ast = R_\ast$ (in solar units; see Ribas et al.
2006 and references therein) to ultimately determine the stellar and planetary radii. In the end, we obtain $R_p = 4.31$ $R_\oplus$, $R_\ast = 0.477$ $R_\odot$, $i = 85.84^o$ (impact parameter = 0.859), and $T_p = 2454280.78190 $ HJD. These values are within the 1-sigma error bars of our fit assuming a fixed value for the stellar mass. This agreement between two independent fits using a different prior constraint on the host star is a good indication of the robustness of the solution presented in Table 1. \begin{table} \begin{tabular}{l l } \hline\hline \\ Stellar Radius [$R_\odot$] & $0.463^{+0.022}_{-0.017}$ \\[4pt] Orbital inclination [$^\circ$] & $85.90^{+0.19}_{-0.18}$ \\[4pt] Impact parameter & $0.849^{+0.010}_{-0.013}$ \\[4pt] Planet Radius [$R_\oplus$]& $4.19^{+0.21}_{-0.16}$ \\[4pt] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [km]& $26720^{+1340}_{-1020}$\\[4pt] Mid-transit timing [HJD] & $2454280.78186^{+0.00015}_{-0.00008}$\\[4pt] \hline\hline & \\ \end{tabular} \caption{Parameters derived in this work for the GJ 436 system, host star and transiting planet.} \label{param} \end{table} \section{Conclusions} The exquisite quality of the $Spitzer$ photometry allows the breaking of the degeneracy between the impact parameter and the primary radius, leading to a significantly more accurate radius measurement than the one presented in G07. Comparing our new value for the planetary radius to the theoretical models of Fortney et al. (2007), and using $M = 22.6 \pm 1.9 $ $M_\oplus $, we notice that GJ 436b is now more than 3 sigma away from the "pure ice" composition mass-radius line (see Fig. 3). As noticed in G07, the presence of a significant amount of methane and ammonia in addition to water within a pure ice planet could slightly increase the radius above the theoretical value for a pure water ice planet. Nevertheless, a planet composed only of ice (and thus without any rock) and strongly enriched in ammonia and methane is highly improbable in the current paradigm: all the icy objects in the Solar System have a considerable fraction of rock and have their ice composition dominated by water. The presence of an H/He envelope can thus be considered as very likely. The first transiting hot Neptune appears to be very similar to our Neptune. Before the detection of the transits, Lecavelier des Etangs (2007) showed theoretically that GJ 436b could not be a low-mass gaseous planet. Indeed, a density lower than $\sim$ 0.7 g cm$^{-3}$ would have led to intense atmospheric evaporation. The density deduced from our radius measurement ($\sim $ 1.7 g cm$^{-3}$) is well above this limit. Despite its closeness to the host star, the H/He envelope of the planet should thus be stable and resistant to evaporation over long timescales. \begin{figure} \label{fig:d} \centering \includegraphics[width=9.0cm]{8283fig3.ps} \caption{Location of GJ 436b in a planetary mass-radius diagram, compared to those of Uranus and Neptune (open diamonds) and to the theoretical mass-radius relations from Fortney et al. (2007) for pure water ice (solid line), pure rock (dashed line) and pure iron (dotted line) planets. } \end{figure} \begin{acknowledgements} This work is based on observations made with the $Spitzer$ $Space$ $Telescope$, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407. X.B. acknowledges support from the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (Portugal) in the form of a fellowship (reference SFRH/BPD/21710/2005). B.-O.D.
acknowledges the support of the Fonds National Suisse de la Recherche Scientifique. G. Tinetti and O. Grasset are gratefully acknowledged for providing us with information about icy objects in the Solar System. We are also grateful to P. Magain and D. S\'egransan for helpful discussions, and to F. Allard for her support and suggestions. We thank A. Ofir for his contribution. We thank the referee, I. Snellen, for the fast review of the paper and for valuable suggestions. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction}\label{sec:intro} The properties of baryons have recently been studied in QCD in a systematic expansion in $1/N_c$, where $N_c$ is the number of colors\cite{dm,j,djm,jm}. In the limit $N_c \rightarrow \infty$, it has been shown that the baryon sector of QCD has an exact contracted $SU(2F)$ spin-flavor symmetry, where $F$ is the number of light quark flavors\cite{dm,gervaissakita}. This contracted spin-flavor symmetry follows from consistency conditions on meson-baryon scattering amplitudes which must be satisfied for the theory to be unitary. The spin-flavor structure of baryons for finite $N_c$ is given by studying $1/N_c$ corrections to the large $N_c$ limit. The consistency conditions severely restrict the form of subleading $1/N_c$ corrections, so definite predictions can be made at subleading orders. The $1/N_c$ expansion has been used to obtain results for baryon axial vector currents and magnetic moments up to corrections of relative order $1/N_c^2$ and for baryon masses up to relative order $1/N_c^3$ for two and three light quark flavors. Salient results include the vanishing of $1/N_c$ corrections to pion-baryon coupling ratios in a given strangeness sector and an equal spacing rule for decuplet $\rightarrow$ octet pion couplings in different strangeness sectors. For the case of three flavors, additional results have been obtained which do not assume $SU(3)$ flavor symmetry and are therefore valid to all orders in $SU(3)$ symmetry breaking. These results give insight into the structure of flavor $SU(3)$ breaking in the baryon sector. The predictions of the $1/N_c$ expansion for baryons are in good agreement with experiment, and explain the phenomenological success of spin-flavor symmetry for the baryon sector of QCD. There are two natural approaches to the study of the spin-flavor algebra of baryons for large $N_c$. One can solve the consistency conditions by constructing irreducible representations of the $N_c \rightarrow \infty$ contracted $SU(2F)$ spin-flavor symmetry\cite{djm}. These irreducible baryon representations are constructed using standard techniques from the theory of induced representations, and are very closely related to the collective coordinate quantization of the Skyrmion in the Skyrme model\cite{skyrme,anw,mattis}. One can also construct solutions to the consistency conditions using quark operators, an approach which is closely related to the non-relativistic quark model. The two methods are equivalent, since the non-relativistic quark model and Skyrme model are identical in the $N_c\rightarrow\infty$ limit \cite{am}. The quark model approach was discussed in detail in refs.~\cite{cgo,lm}, and used to derive results for large $N_c$ baryons. One nice feature of the quark approach is that it is closely tied to the intuitive picture of baryons as quark bound states, and the $1/N_c$ counting is simply related to quark Feynman diagrams. This connection is obvious when the quarks are heavy. The Skyrme and non-relativistic quark model realizations of the large $N_c$ spin-flavor algebra for baryons in QCD are identical in the $N_c \rightarrow \infty$ limit. At finite $N_c$, the Skyrme and quark representations differ in their organization of $1/N_c$ corrections, but they give equivalent results at a given order in $1/N_c$. In the Skyrme representation, the contracted spin-flavor algebra is realized exactly, which implies that the irreducible baryon representations of the contracted algebra are infinite dimensional. 
In contrast, the quark representation uses the non-relativistic quark model algebra for finite $N_c$. The baryon spectrum, in this case, is finite; it consists of a tower of baryon states that terminates at spin $N_c/2$. The Skyrme and quark representations both give rise to operator identities which eliminate redundant operators at a given order in the $1/N_c$ expansion. These identities are much simpler in the Skyrme representation than in the quark representation for two flavors\cite{djm}. (In particular, some of the operator identities used in the original analysis are not obvious in the quark description, and have not been derived using this method. The derivation is supplied in this work.) However, both the Skyrme and quark representations become quite complicated for more than two flavors. There has been no derivation of all the non-trivial operator identities for the case of three flavors using either method. These operator identities are required for a systematic analysis of the $1/N_c$ expansion for baryons in the three flavor case. In this paper, we study the spin-flavor structure of baryons for an arbitrary number of colors and flavors. We present a brief review of the quark model representation in Section~\ref{sec:qrep}, and the $1/N_c$ counting rules for baryon operators in Section~\ref{sec:ncount}. All the baryon operators are classified using the $SU(2F)$ spin-flavor symmetry group in Section~\ref{sec:classify}. All the non-trivial operator identities among the baryon operators are derived in Section~\ref{sec:derive}. The set of independent operators and the operator identities have an elegant group-theoretic classification. The operator analysis of Sections~\ref{sec:classify}--\ref{sec:derive} is done for the general case of $F$ quark flavors. The special cases of two and three light flavors which are of principal physical interest are considered explicitly in Section~\ref{sec:2and3}. The operator identities are used to derive a simple operator reduction rule in Section~\ref{sec:opanalysis}, which gives the linearly independent baryon operators at a given order in the $1/N_c$ expansion. The operator analysis for the case of baryons with three flavors without any assumption of $SU(3)$ symmetry is given in Section~\ref{sec:broken}. The operator analysis is then used to study the static baryon properties (axial currents, masses, magnetic moments, and hyperon non-leptonic decays) in the $SU(3)$ limit, to first order in perturbative $SU(3)$ breaking, and for completely broken $SU(3)$ flavor symmetry in Sections~\ref{sec:axial}--\ref{sec:nonlep}. Readers not interested in some of the details can skip Sections~\ref{sec:classify}--\ref{sec:opanalysis}, and refer only to the identities (for three flavors) in Tables~\ref{tab:su6iden},~\ref{tab:broken0} and~\ref{tab:brokennot0} and the operator reduction rules in Sections~\ref{sec:opanalysis} and~\ref{sec:broken} before proceeding to the discussion of the static baryon properties. Additional group theory required in the analysis is given in the appendices. We reproduce some of the earlier results for the baryon axial currents, masses and magnetic moments \cite{dm,j,djm,jm,cgo,lm}. In addition, new results are presented for the three flavor case in the symmetry limit, and to first order in symmetry breaking. New results are also presented for the hyperon non-leptonic decay amplitudes.
To make the results accessible to a wider audience, we will present detailed comparisons of the large $N_c$ predictions with the experimental data in another paper \cite{ddjm}. The operator analysis in this paper is discussed almost entirely using the quark representation. The connection with the Skyrme model is discussed in Section~\ref{sec:skyrme}. The quark representation uses the algebraic structure of the non-relativistic quark model to classify all the baryon operators. It is important to stress, however, that the results of this paper do not assume that the non-relativistic quark model is valid, or that the quarks in the baryon are non-relativistic. A more detailed discussion of the connection between the quark basis and large $N_c$ QCD can be found in refs.~\cite{djm,cgo,lm}. Finally, we restrict our analysis principally to the ground-state baryons. Excited baryons have been considered in ref.~\cite{cgkm}. \section{The Quark Representation}\label{sec:qrep} The quark representation of the spin-flavor symmetry of large $N_c$ baryons is based on the non-relativistic quark model picture. However, as emphasized in the introduction, using the quark model realization of the contracted spin-flavor symmetry does not mean that we are treating the quarks in the baryon as non-relativistic. The non-relativistic quark model algebra provides a convenient way of writing the results of a $1/N_c$ calculation in QCD, which are valid even for baryons with massless quarks. We will refer to the quark representation rather than the non-relativistic quark model, to emphasize this distinction. In the quark representation, one defines a set of quark creation and annihilation operators, $q^\dagger_{ \alpha}$ and $q^{\alpha}$, where $\alpha=1, \ldots, F$ represents the $F$ quark flavors with spin up, and $\alpha=F+1,\ldots, 2F$, the $F$ quark flavors with spin down. The antisymmetry of the $SU(N_c)$ color $\epsilon$-symbol and Fermi statistics implies that the ground-state baryons contain $N_c$ quarks in the completely symmetric representation of spin $\otimes$ flavor (see Fig.~\ref{fig:groundstate}), so one can omit the color quantum numbers of the quark operators for the spin-flavor analysis and treat them as bosonic objects. Thus, the quark operators satisfy the bosonic commutation relation \begin{equation}\label{IIi} \left[q^{\alpha},q^\dagger_{\beta}\right] = \delta^\alpha_\beta . \end{equation} In this work, we consider the spin-flavor structure of the ground-state baryons for $N_c$ large and finite, and odd. The completely symmetric $N_c$-quark representation of $SU(2F)$ contains baryons with spin $1/2$, $3/2$, $\ldots$ , $N_c/2$, which transform as the flavor representations shown in Table~I, respectively. For two flavors, the baryon states can be labeled by their spin $J$ and isospin $I$, $(J,I) = (1/2,1/2)$, $(3/2,3/2)$, $\ldots$, $(N_c/2,N_c/2)$. For three flavors, the spin-1/2 baryons have the weight diagram shown in Fig.~\ref{fig:weight1/2}, and the spin-3/2 baryons have the weight diagram shown in Fig.~\ref{fig:weight3/2}. Generically, the spin $J$ weight diagram has an edge with $2J+1$ weights, and an edge with $(N_c+2)/2-J$ weights. The multiplicity starts at one for the outermost weights, and increases by one as one moves inward, until one reaches the point at which the weights are triangular. From this point inwards, the multiplicity remains constant. The dimension of the representation is $ab(a+b)/2$, where $a$ and $b$ are the number of weights on the two edges. 
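The dimension formula is easy to check numerically. A minimal sketch (Python; the cross-check uses the standard stars-and-bars result that the completely symmetric $N_c$-quark representation of $SU(6)$ has dimension $\binom{N_c+5}{5}$, quoted here as an assumption) reproduces the octet and decuplet for $N_c=3$ and verifies that the tower dimensions are consistent for larger $N_c$:
\begin{verbatim}
from math import comb
from fractions import Fraction

def flavor_dim(Nc, J):
    """Dimension a*b*(a+b)/2 of the SU(3) flavor representation of the
    spin-J baryons in the completely symmetric Nc-quark tower."""
    a = 2 * J + 1                    # number of weights on one edge
    b = Fraction(Nc + 2, 2) - J      # number of weights on the other edge
    return int(a * b * (a + b) / 2)

for Nc in (3, 5, 7):
    spins = [Fraction(2 * k + 1, 2) for k in range((Nc + 1) // 2)]
    dims = {J: flavor_dim(Nc, J) for J in spins}
    # Cross-check: the tower must fill out the symmetric SU(6) representation.
    assert sum((2 * J + 1) * dims[J] for J in spins) == comb(Nc + 5, 5)
    if Nc == 3:
        print({str(J): d for J, d in dims.items()})   # {'1/2': 8, '3/2': 10}
\end{verbatim}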
The weight diagrams of the spin-$1/2$ and spin-$3/2$ baryons reduce to the baryon octet and decuplet for $N_c=3$. For $F>2$, the baryon flavor representations grow rapidly with $N_c$, and are not the same as the flavor representations for $N_c=3$. This dependence of the flavor representations on $N_c$ leads to subtleties in obtaining results for $N_c=3$. Quark operators can be classified according to whether they are zero-body, one-body, $\ldots$, or $n$-body operators. A zero-body operator contains no $q$ or $q^\dagger$. There is a unique zero-body operator, the identity operator~$\openone$. A one-body operator acts on a single quark. The one-body operators consist of the quark number operator $q^\dagger q$ and the spin-flavor adjoint $q^\dagger \Lambda^A q$, $A = 1, \ldots, (2F)^2-1$, where $\Lambda^A$ is a spin-flavor generator. Two-body operators involve two $q$'s and two $q^\dagger$'s, and act upon two quarks. Two-body operators can be written either as bilinears in the one-body operators or in normal ordered form (e.g. $q^\dagger q^\dagger q q$). Normal ordered two-body operators are ``pure'' two-body operators, in the sense that they have vanishing matrix elements on single quark states. Similarly, one can consider $n$-body operators either as polynomials of degree $n$ in the one-body operators, or in normal ordered form with $n$ $q$'s and $n$ $q^\dagger$'s. $n$-body operators acting on a $N_c$-quark state typically have matrix elements of order $N_c^n$ because of combinatoric factors associated with inserting the operator in the $N_c$-quark state. Since we eventually will be interested in classifying operators according to their spin and flavor representations, it is convenient to decompose the $SU(2F)$ adjoint one-body operator $q^\dagger \Lambda^A q$ into representations of $SU(2)\times SU(F)$, \begin{eqnarray} &&J^i = q^\dagger \left(J^i \otimes \openone \right) q\qquad (1,0),\nonumber \\ &&T^a = q^\dagger \left(\openone \otimes T^a \right) q\qquad (0,adj),\label{IIii} \\ &&G^{ia} = q^\dagger \left(J^i \otimes T^a \right) q \qquad (1, adj), \nonumber \end{eqnarray} where $J^i$ are the spin generators, $T^a$ are the flavor generators, and $G^{ia}$ are the spin-flavor generators. The transformation properties of these generators under $SU(2) \times SU(F)$ are given in eq.~(\ref{IIii}), where $adj$ denotes the $F^2-1$ dimensional adjoint representation of $SU(F)$. Throughout this work, uppercase letters ($A$, $B$, \ldots) denote indices transforming according to the adjoint representation of the $SU(2F)$ spin-flavor group, lowercase letters ($a$, $b$, \ldots) denote indices transforming according to the adjoint representation of the $SU(F)$ flavor group, and ($i$, $j$, \ldots) denote indices transforming according to the vector representation of spin. The matrices $J^i$ and $T^a$ on the right-hand side of eq.~(\ref{IIii}) are in the fundamental representations of $SU(2)$ and $SU(F)$, respectively, and are normalized so that \begin{eqnarray}\label{two} {\rm Tr}\ J^i J^j &=& {1\over 2}\, \delta^{ij},\nonumber \\ {\rm Tr}\ T^a T^b &=& {1\over 2}\, \delta^{ab}. \label{IIiii} \end{eqnarray} The spin-flavor matrices $\Lambda^A$ normalized to \begin{equation} {\rm Tr}\ \Lambda^A \Lambda^B = {1\over 2}\, \delta^{AB},\label{IIIiv} \end{equation} are $\left(J^i \otimes \openone \right)/\sqrt{F}$, $\left(\openone \otimes T^a \right)/ \sqrt2$ and $\sqrt2 \left(J^i \otimes T^a \right)$, so that the properly normalized $SU(2F)$ operators are $J^i/\sqrt{F}$, $T^a/\sqrt2$ and $\sqrt2\, G^{ia}$. 
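As a quick consistency check of these normalizations, the following sketch (Python with numpy; the explicit Pauli and Gell-Mann matrices are the standard ones and are written out here only for illustration) builds the $2F$-dimensional single-quark matrices for $F=3$ and verifies ${\rm Tr}\,\Lambda^A \Lambda^B = \delta^{AB}/2$ for the $35$ generators $J^i/\sqrt{F}$, $T^a/\sqrt2$ and $\sqrt2\,G^{ia}$:
\begin{verbatim}
import numpy as np

# Spin generators in the fundamental of SU(2): Pauli matrices / 2.
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
Jf = [s / 2 for s in sig]

# Flavor generators in the fundamental of SU(3): Gell-Mann matrices / 2.
gm = [np.array(m, complex) for m in (
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]], [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]], [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])]
gm.append(np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3))
Tf = [m / 2 for m in gm]

F = 3
I2, IF = np.eye(2), np.eye(F)

Lams  = [np.kron(J, IF) / np.sqrt(F) for J in Jf]               # J^i / sqrt(F)
Lams += [np.kron(I2, T) / np.sqrt(2) for T in Tf]               # T^a / sqrt(2)
Lams += [np.sqrt(2) * np.kron(J, T) for J in Jf for T in Tf]    # sqrt(2) G^{ia}

gram = np.array([[np.trace(A @ B) for B in Lams] for A in Lams])
assert np.allclose(gram, 0.5 * np.eye(len(Lams)))
print(len(Lams), "generators with Tr(L^A L^B) = delta^{AB}/2")  # 35 = (2F)^2 - 1
\end{verbatim}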
\section{Large $\bf N_c$ Power Counting}\label{sec:ncount} The baryons in QCD are color singlet states of $N_c$ quarks. The $N_c$-dependence of operator matrix elements in baryon states can be obtained using the double line notation of `t~Hooft~\cite{thooft}. The $N_c$ counting rules were discussed extensively by Witten~\cite{witten}, and more recently in refs.~\cite{cgo,lm}. The basic result can be given very simply using an illustrative example. Consider the baryon matrix element of a one-quark QCD operator ${\cal O}_{QCD}=\bar q \Gamma q$, where $\Gamma$ is a Dirac and flavor matrix.\footnote{Note that the quark field $q$ in the QCD operator is not the same as the quark operator $q$ of the quark representation.}\ For example, the operator could be the flavor singlet axial current, with $\Gamma=\gamma_\mu\gamma_5$, or a flavor octet vector current, with $\Gamma = T^a \gamma_\mu$, etc. The baryon matrix element of ${\cal O}_{QCD}$ is obtained by inserting the operator on any of the $N_c$ quark lines, as shown in Fig.~\ref{fig:opmatrix}a. There are $N_c$ insertions, and each graph is of order one, so that a one-quark QCD operator has a matrix element which is at most of order $N_c$. The matrix element is not necessarily of order $N_c$, however, since there may be cancellations among the $N_c$ insertions on the various quark lines. All planar graphs with additional gluon exchanges (Fig.~\ref{fig:opmatrix}b) are of the same order in $N_c$ as Fig.~\ref{fig:opmatrix}a, whereas graphs with additional exchanges of non-planar gluons are suppressed by powers of $1/N_c$ relative to Fig.~\ref{fig:opmatrix}a. The QCD operator is given by an expansion in $1/N_c$ in terms of operators in the quark representation. At leading order, the QCD operator has an expansion of the form \begin{equation} {\cal O}_{QCD}= \sum_{n,k} c^{(n)}_k {1\over N_c^{n-1}} {\cal O}^{(n)}_k , \label{IIIi} \end{equation} where the sum is over all possible $n$-body operators ${\cal O}^{(n)}_k$, $n= 0, \ldots, N_c$, with the same spin and flavor quantum numbers as ${\cal O}_{QCD}$, with coefficients $c^{(n)}_k$ of order unity. Subleading $1/N_c$ corrections to the leading order expression~(\ref{IIIi}) can be included by adding $1/N_c$ corrections to the coefficients $c^{(n)}_k$. The complicated QCD dynamics is parametrized by the unknown coefficients $c_k^{(n)}$. A comparison of the form of this expansion with the Feynman diagrams in Fig.~\ref{fig:opmatrix} is suggestive. The one-body operator can be thought of as arising from the operator insertion graphs depicted in Fig.~\ref{fig:opmatrix}a. The single gluon exchange graphs of Fig.~\ref{fig:opmatrix}b produce two-body operators with an extra factor of $1/N_c$ from the two gauge coupling constants at the gluon vertices, and so on. Non-planar gluon exchange graphs result in $1/N_c$ corrections to the operator coefficients. There are $N_c$ quarks in the baryon, so one can terminate the expansion at $N_c$-body operators. An $n$-body operator is typically of order $N_c^n$, so that all of the terms in eq.~(\ref{IIIi}) are of the same order in the $1/N_c$ expansion as the leading term. In the limit $N_c\rightarrow\infty$, one obtains an infinite series of operators which are equally important even at leading order in $1/N_c$. Since we cannot evaluate the coefficients $c^{(n)}_k$, the entire $1/N_c$ expansion would be intractable, were it not for a series of operator identities which allows the number of operators to be reduced to a finite set at a given order in $1/N_c$.
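The insertion counting discussed at the beginning of this section can be made concrete with a short sketch (Python with numpy; a toy product-state illustration for two flavors, not a dynamical calculation). The baryon matrix element of a one-body operator is the sum of $N_c$ single-quark insertions, so it is at most of order $N_c$; cancellations among the insertions can reduce it to order one, as happens for $J^3$ on the low-spin states below:
\begin{verbatim}
import numpy as np

F = 2
# Single-quark basis ordering: (up,u), (up,d), (down,u), (down,d).
Jz  = np.kron(np.diag([0.5, -0.5]), np.eye(F))              # one-quark J^3
G33 = np.kron(np.diag([0.5, -0.5]), np.diag([0.5, -0.5]))   # one-quark G^{33}

def one_body_expectation(op, quarks):
    """<B|O|B> for a product state: one insertion of `op` on each quark line."""
    return sum(float(np.real(chi.conj() @ op @ chi)) for chi in quarks)

e = np.eye(2 * F)
u_up, u_dn = e[0], e[2]

for Nc in (3, 9, 27):
    aligned  = [u_up] * Nc                                    # all quarks u, spin up
    low_spin = [u_up] * ((Nc + 1) // 2) + [u_dn] * (Nc // 2)  # spins nearly cancel
    print(Nc,
          one_body_expectation(G33, aligned),   # grows like Nc/4: generic O(Nc)
          one_body_expectation(Jz, low_spin))   # stays 1/2: J^3 of order one
\end{verbatim}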
In this paper, we derive all these operator identities, and classify the independent operators at any given order in $1/N_c$ for some quantities of interest, such as the baryon axial currents, masses, magnetic moments and non-leptonic decay amplitudes. An important feature of the above $N_c$ counting for the $n$-body quark operators is that the $N_c$ counting is preserved under commutation. The commutator of an $m$-body operator with an $n$-body operator is an $\left(m+n-1\right)$-body operator, \begin{equation} \left[{\cal O}^{(m)},{\cal O}^{(n)}\right]={\cal O}^{(m+n-1)},\label{IIIii} \end{equation} and $\left(1/N_c^{m-1}\right)\left(1/N_c^{n-1}\right)=\left(1/N_c^{(m+n-1)-1}\right)$. In contrast, the anticommutator of an $m$-body and $n$-body operator is typically an $(m+n)$-body operator. The commutator has one less $q^\dagger q$ than the anticommutator, because quark operators acting on different quark lines commute. The commutativity of quark operators acting on different quark lines forces one quark in ${\cal O}^{(m)}$ to act on the same quark line as a quark in ${\cal O}^{(n)}$ to produce a non-zero commutator, and reduces the $\left(m+n\right)$-body operator to an $\left(m+n-1 \right)$-body operator. We have given the quark counting rules for a one-quark QCD operator. Similarly, it is easy to see that a $m$-quark QCD operator is given as an expansion in terms of $n$-body operators with coefficients of order $N_c^{m-n}$. It need not be the case that $n\ge m$. For example, in $\Delta I=1/2$ weak decays, a four-quark (i.e. two-body) QCD operator can produce a one-body quark operator.\footnote{A similar result was found in the chiral quark model\cite{amhg}.} \section{Quark Operator Identities: Classification}\label{sec:classify} In this section, we classify all independent operator identities among the $n$-body operators in the quark representation. These identities have an elegant group theoretical structure. Readers not interested in the details can look at the identities for three flavors in Table~\ref{tab:su6iden}, and skip to the operator reduction rule at the end of Section~\ref{sec:opanalysis}. The general structure of the identities is that certain $n$-body operators can be reduced to linear combinations of $m$-body operators, where $m<n$. Since $n$-body operators acting on an $N_c$-quark baryon state are generically of order $N_c^n$, the coefficient of the $m$-body operator is typically of order $N_c^{n-m}$. For example, some three-body operators can be reduced to two-body operators with coefficients of order $N_c$, one-body operators with coefficients of order $N_c^2$, and zero-body operators with coefficients of order $N_c^3$. We will show that the only independent operator identities which are required are those which reduce two-body operators to linear combinations of one-body and/or zero-body operators. All identities for $n$-body operators with $n>2$ can be obtained by recursively applying two-body identities. This result leads to a tremendous simplification in the analysis, since there are only a finite number of identities which need to be written explicitly for the two-body case. Explicit expressions for the two-body identities are derived in Section~\ref{sec:derive}, and are given in Table~\ref{tab:su2fiden} for an arbitrary number of flavors, and in Tables~\ref{tab:su4iden} and \ref{tab:su6iden} for two and three flavors, respectively. 
\subsection{Zero-Body Operators} There is a unique zero-body operator, the identity operator $\openone$, which has matrix elements $N_c^0=1$. The identity operator transforms as a singlet under the spin-flavor group $SU(2F)$ and as a singlet under $SU(2)\times SU(F)$. There are no operator identities at this level. \subsection{One-Body Operators} The one-body operators transform under $SU(2F)$ as the tensor product of a quark and antiquark representation. A quark is in the fundamental representation of $SU(2F)$, and transforms as a tensor with one upper index. The antiquark transforms as an $SU(2F)$ tensor with one lower index. Thus, the one-body operators transform as \begin{equation}\label{IVi} {\rm 1-body:}\ \ \left( \overline{{\vbox{\hbox{$\sqr\thinspace$}}}} \otimes {\vbox{\hbox{$\sqr\thinspace$}}} \right) = 1 + adj = 1 + T^\alpha_\beta , \end{equation} where $T^\alpha_\beta$ is a traceless tensor which transforms as the adjoint representation of $SU(2F)$. The independent one-body operators were listed in Section~\ref{sec:qrep}; they are the quark number operator $q^\dagger q$ and the spin-flavor operators $q^\dagger \Lambda^A q$, which consist of $J^i$, $T^a$, and $G^{ia}$. The quark number operator $q^\dagger q$ is a singlet under $SU(2F)$ and $J^i$, $T^a$ and $G^{ia}$ together form the adjoint representation of $SU(2F)$, which agrees with the analysis of eq.~(\ref{IVi}). These one-body operators transform as $(0,0)$, $(1,0)$, $(0,adj)$ and $(1,adj)$, respectively, under $SU(2)\times SU(F)$. The only operator identity allowed at this stage is one relating the one-body and zero-body $SU(2F)$ singlets. This identity is trivial, \begin{equation} q^\dagger q = N_c\ \openone . \label{IVii} \end{equation} Note that this identity has the general structure stated at the beginning of this section: a one-body operator is written as $N_c$ times a zero-body operator. \subsection{Two-Body Operators} The non-trivial identities occur among two-body operators. The two-body operators transform as the tensor product of a two-quark and two-antiquark state. Since the quarks in the ground-state baryon representation are in a completely symmetric state (Fig.~\ref{fig:groundstate}), any two quarks transform according to the two-index symmetric tensor representation of $SU(2F)$, and any two antiquarks are in the complex conjugate representation. Thus, the two-body operators transform as \begin{eqnarray} {\rm 2-body:}\ \left( \overline{{\vbox{\hbox{$\sqr\sqr\thinspace$}}}} \otimes {\vbox{\hbox{$\sqr\sqr\thinspace$}}} \right) &=& 1 + adj + {\bar s s} \nonumber \\ &=& 1 + T^\alpha_\beta + T^{(\alpha_1 \alpha_2)}_{(\beta_1 \beta_2)} ,\label{IViii} \end{eqnarray} where $T^{(\alpha_1 \alpha_2)}_{(\beta_1 \beta_2)}$ is a traceless tensor which is completely symmetric in its upper and lower indices. This tensor representation of $SU(2F)$ will be called the ${\bar s s}$ representation. It is convenient to write the two-body operators as products of two one-body operators, rather than to write them in normal-ordered form directly. The quark number operator $q^\dagger q$ can be eliminated using the identity eq.~(\ref{IVii}), so we only need to consider bilinears of the $SU(2F)$ adjoint representation $q^\dagger \Lambda^A q$, which consists of $J^i$, $T^a$ and $G^{ia}$. Any product of two operators can always be written as the symmetric product (an anticommutator), or the antisymmetric product (a commutator). 
The commutator can be eliminated using the $SU(2F)$ Lie algebra commutation relations listed in Table~\ref{tab:su2fcomm}. The anticommutator transforms as the symmetric product of two $SU(2F)$ adjoints, \begin{equation}\label{IViv} \left( adj \otimes adj \right)_S = 1 + adj + {\bar a a} + {\bar s s}, \end{equation} where ${\bar a a} = T^{[\alpha_1 \alpha_2]}_{[\beta_1 \beta_2]}$ transforms as a traceless tensor which is antisymmetric in its upper and lower indices. The decomposition of the symmetric tensor product of two adjoints for an arbitrary $SU(Q)$ group, and for the special cases $Q=6$ and $Q=4$ are listed in Table~\ref{tab:adj2s}, using the Dynkin notation for the irreducible representations. Each of the representations in $\left( adj \otimes adj \right)_S$ occurs in eq.~(\ref{IViii}) except for the ${\bar a a}$ representation. The structure of all two-body identities can now be determined. We quote the results here; a detailed derivation of all the identities is presented in Sec.~\ref{sec:derive}. The two-body identities can be divided into three different sets: \begin{enumerate} \item {There is a linear combination of two-body operators which is an $SU(2F)$ singlet. This linear combination can be written as a coefficient of order $N_c^2$ times the zero-body unit operator $\openone$. The $SU(2F)$ singlet in $\left( adj \otimes adj \right)_S$ is the Casimir operator, which equals \begin{equation} \left\{q^\dagger\Lambda^A q,q^\dagger \Lambda^A q\right\} = N_c\left(N_c+2F\right)\left(1-{1\over 2F}\right) \openone, \end{equation} where the coefficient of the $\openone$ operator is the $SU(2F)$ Casimir for the completely symmetric baryon representation Fig.~\ref{fig:groundstate}.} \item {There is a linear combination of two-body operators which transforms as an $SU(2F)$ adjoint. This linear combination can be written as a coefficient of order $N_c$ times the one-body adjoint operator. The $SU(2F)$ adjoint in $\left( adj \otimes adj \right)_S$ is obtained by contraction with the $SU(2F)$ $d$-symbol $d^{ABC}$; it equals \begin{equation}\hskip-2em\label{blotz} d^{ABC} \left\{q^\dagger \Lambda^B q , q^\dagger \Lambda^C q \right\} = 2\left( N_c + F\right)\left( 1 - {1 \over F} \right)\ q^\dagger \Lambda^A q \ , \end{equation} for the completely symmetric baryon representation. The coefficient on the right-hand side of eq.~(\ref{blotz}) is the ratio of the cubic and quadratic Casimirs of the completely symmetric baryon representation Fig.~\ref{fig:groundstate}.} \item {Comparison of eqs.~(\ref{IViii}) and~(\ref{IViv}) shows that there is no ${\bar a a}$ representation for two-body operators acting on the completely symmetric baryon representation, but there is an ${\bar a a}$ representation in $\left( adj \otimes adj \right)_S$. Thus, the linear combination of bilinears in one-body operators which transforms as an ${\bar a a}$ must vanish for the completely symmetric baryon representation. This set of identities eliminates certain bilinears in $\{J^i,T^a,G^{ia}\}$ from the set of independent two-body operators. 
} \end{enumerate} \subsection{Three-Body Operators and Generalization} Three-body operators act on symmetric tensor products of three-quark states, and transform as the representations \begin{eqnarray} &&{\rm 3-body:}\ \ \left( \overline{{\vbox{\hbox{$\sqr\sqr\sqr\thinspace$}}}} \otimes {\vbox{\hbox{$\sqr\sqr\sqr\thinspace$}}} \right) \nonumber \\ &&= 1 + T^\alpha_\beta + T^{(\alpha_1 \alpha_2)}_{(\beta_1 \beta_2)} + T^{(\alpha_1\alpha_2\alpha_3)}_{(\beta_1\beta_2\beta_3)}.\label{IVvii} \end{eqnarray} The only new tensor occurring at three-body is the traceless tensor $T^{(\alpha_1\alpha_2\alpha_3)}_{(\beta_1\beta_2\beta_3)}$. Any three-body operator can be written as a trilinear in the one-body operators. The one-body quark number operator can be trivially replaced by $N_c \openone$, so only the adjoint one-body operators need to be considered. Any trilinear product of adjoint one-body operators which is not completely symmetric can be reduced to two-body operators using the $SU(2F)$ commutation relations given in Table~\ref{tab:su2fcomm}, so one only needs to consider completely symmetric trilinears in the adjoint one-body operators. The decomposition of $\left(adj \otimes adj \otimes adj \right)_S$ is given in Table~\ref{tab:adj3s} for a general $SU(Q)$ group and for the special cases $Q=6$ and $Q=4$. First consider operator identities which relate the three-body singlet, adjoint and ${\bar s s}$ representations to zero-body, one-body, and two-body operators times coefficients of order $N_c^3$, $N_c^2$, and $N_c$, respectively. In normal ordered form, it is easy to see that these three-body identities are obtained by contraction of a pair of $q^\dagger$, $q$ indices. Contraction of pairs of quark indices is already described by the two-body $\rightarrow$~one-body and two-body $\rightarrow$~zero-body identities, so judicious application of these identities yields the required three-body identities. Table~\ref{tab:adj3s} shows that there are ten irreducible $SU(2F)$ representations in $\left(adj \otimes adj \otimes adj\right)_S$ for $F \ge 2$, of which only four are present in eq.~(\ref{IVvii}). Thus, in principle, there are six sets of identities which vanish identically for the three-body case. (For $F =2$, there are eight irreducible $SU(2F)$ representations in $\left(adj \otimes adj \otimes adj\right)_S$, and thus, in principle, four sets of identities which vanish identically for the three-body case.) It is clear, however, that not all of these identities are really new, since at least some of the three-body identities are simply products of two-body identities in the ${\bar a a}$ representation and one-body operators in the adjoint representation, $J^i$, $T^a$, or $G^{ia}$. The tensor product of ${\bar a a}$ with the adjoint representation is given in Table~\ref{tab:aaadj}. Comparison of Tables~\ref{tab:adj3s} and~\ref{tab:aaadj} shows that all the representations in $\left(adj\otimes adj \otimes adj\right)_S$ which are not present in eq.~(\ref{IVvii}) occur in $\left({\bar a a}\otimes adj\right)$. This observation is not sufficient to conclude that all the three-body identities are given in terms of two-body identities, however. The three-body representations in Table~\ref{tab:adj3s} are contained in the completely symmetric tensor product of three adjoints, $\left(adj\otimes adj \otimes adj\right)_S$. 
The representations in $\left({\bar a a} \otimes adj\right)$ of Table~\ref{tab:aaadj} are in the tensor product $\left(adj\otimes adj \right)_S\otimes adj$, since ${\bar a a}$ is contained in $\left(adj\otimes adj\right)_S$. To find the representations which reduce to the two-body ${\bar a a}$ identities, one has to impose the additional constraint that the three adjoints in $\left({\bar a a}\otimes adj\right)$ are completely symmetric. Not all the irreducible representations of Table~\ref{tab:aaadj} survive when this constraint is imposed, but all the irreducible representations of Table~\ref{tab:adj3s} which are not present in eq.~(\ref{IVvii}) do survive, as can be checked explicitly. This observation leads to the conclusion that there are no new vanishing three-body identities which are not simply products of the one-body and two-body identities which have already been determined. This conclusion can be generalized to $n$-body operators. There are no new vanishing identities for $n$-body operators, $n\ge3$, which are not products of the original ${\bar a a}$ two-body identities and one-body operators. The ${\bar a a}$ two-body identities result because any product of two adjoint one-body operators which is antisymmetric in creation or annihilation operators must vanish when it acts on the completely symmetric baryon representation. Similarly, the vanishing $n$-body identities result from products of $n$ one-body operators which are not completely symmetric in the $n$ creation and $n$ annihilation operators. Any representation of the permutation group which is not completely symmetric in all the quark creation (or annihilation) operators must be antisymmetric in at least one pair. Two one-body operators containing an antisymmetric quark pair vanish by the two-body identities derived earlier, so $n$-body operators containing an antisymmetric quark pair automatically vanish by the two-body identities, and there are no new identities which vanish for $n\ge3$. In summary, we have classified all non-trivial operator identities for $SU(2F)$ quark operators. For $n$-body quark operators, the only representations of the $SU(2F)$ group which are allowed are $1 + T^{\alpha_1}_{\beta_1} + T^{(\alpha_1\alpha_2)}_{(\beta_1\beta_2)} + \ldots + T^{(\alpha_1\alpha_2\ldots \alpha_n)}_{(\beta_1\beta_2\ldots\beta_n)}$. All other representations can be eliminated using the two-body ${\bar a a}$ operator identities. Furthermore, the only ``purely'' $n$-body representation is $T^{(\alpha_1\alpha_2\ldots \alpha_n)}_{(\beta_1\beta_2\ldots\beta_n)}$. The non-vanishing two-body operator identities can be used to write $n$-body operators that transform as $T^{(\alpha_1\alpha_2\ldots \alpha_m)}_{(\beta_1\beta_2\ldots\beta_m)}$ ($m<n$) as $m$-body operators times coefficients of order $N_c^{n-m}$. \section{Two-Body Quark Identities: Derivation}\label{sec:derive} In this section, all of the non-trivial two-body operator identities are derived explicitly for an arbitrary number of light quark flavors. The identities are listed in Table~\ref{tab:su2fiden}. The $SU(2F)$ group theory which is needed for the computation is given in appendix~\ref{app:suq}. \subsection{Two-Body $\bf \rightarrow$ Zero-Body Identity} The $SU(2F)$ singlet in the symmetric product of two $SU(2F)$ generators is the Casimir operator $\Lambda^A\Lambda^A$, which is a constant for a given irreducible representation. 
The Casimir for an $SU(Q)$ irreducible representation $R$ (see \cite{grosstaylor}) is \begin{equation}\label{VIIi} C_2(R) = {1 \over 2}\left( NQ - {N^2\over Q} + \sum_i r_i^2 - \sum_i c_i^2 \right), \end{equation} where $r_i$ is the number of boxes in the $\imath^{\rm th}$ row of the Young tableau, $c_i$ is the number of boxes in the $\imath^{\rm th}$ column of the Young tableau, and $N=\sum_i r_i=\sum_i c_i$ is the total number of boxes. The Casimir for the completely symmetric $SU(2F)$ baryon representation with a single row of $N_c$ boxes (Fig.~\ref{fig:groundstate}) is \begin{equation} C_2 = {1\over2}N_c\left(N_c+2F\right)\left(1-{1\over 2F}\right),\label{VIIii} \end{equation} so the Casimir identity\footnote{The Casimir operator in the completely symmetric baryon representation can also be computed directly using the quark operators and the Fierz identity eq.~(\ref{Aix}).} is \begin{equation}\label{VIIci} \left\{ q^\dagger \Lambda^A q,\ \ q^\dagger \Lambda^A q \right\} = N_c\left(N_c+2F\right)\left(1-{1\over 2F}\right) \ \openone. \end{equation} The Casimir operator $\Lambda^A\Lambda^A$ equals $J^2/F + T^2/2 + 2\,G^2$ using the properly normalized $SU(2F)$ generators. Combining this relation with eq.~(\ref{VIIci}) gives the first identity in Table~\ref{tab:su2fiden}. Note that the coefficient of the zero-body operator is of order $N_c^2$, as expected. \subsection{Two-Body $\bf \rightarrow$ One-body Identity} The linear combination of two-body operators which transforms as an $SU(2F)$ adjoint reduces to the adjoint one-body operator \begin{equation}\label{VIIiii} d^{ABC} \left\{ q^\dagger \Lambda^A q, q^\dagger \Lambda^B q\right\} =D(R)\ q^\dagger \Lambda^C q , \end{equation} where $D(R)$ is a constant which must be determined for the completely symmetric baryon representation. The two-body operator in eq.~(\ref{VIIiii}) can be written as \begin{eqnarray} &&2\,d^{ABC}\ \sum_{r,s=1}^{N_c} q^\dagger_{r \alpha} \left(\Lambda^A\right)^\alpha_\beta q^\beta_{r} \ q^\dagger_{s \gamma} \left(\Lambda^B\right)^\gamma_\zeta q^\zeta_{s} \nonumber\\ &&\ \ =2\,d^{ABC}\ \left(\Lambda^A\right)^\alpha_\beta \left(\Lambda^B\right)^\gamma_\zeta\sum_{r,s=1}^{N_c}\ q^\dagger_{r \alpha} q^\beta_{r} q^\dagger_{s \gamma} q^\zeta_{s}, \nonumber \end{eqnarray} where $q_{r}$ denotes the quark annihilation operator acting on the $r^{\rm th}$ quark in the baryon, and the sums on $r$ and $s$ run over the $N_c$ quarks in the baryon. Using the $SU(2F)$ identity \begin{eqnarray} d^{ABC}\left(\Lambda^A\right)^\alpha_\beta&& \left(\Lambda^B\right)^\gamma_\zeta = -{1\over 2F}\left[ \delta^\alpha_\beta \left(\Lambda^C\right)^\gamma_\zeta + \left(\Lambda^C\right)^\alpha_\beta \delta^\gamma_\zeta\right]\nonumber\\ &&\ + {1\over2}\left[\delta^\alpha_\zeta \left(\Lambda^C\right)^\gamma_\beta + \left(\Lambda^C\right)^\alpha_\zeta \delta^\gamma_\beta\right],\label{VIIiv} \end{eqnarray} the two-body operator can be rewritten as \begin{eqnarray} &&-{2N_c\over F} q^\dagger\Lambda^C q + \sum_{r,s} \left(\Lambda^C\right)^\gamma_\beta q^\dagger_{r \alpha} q^\beta_{r} q^\dagger_{s \gamma} q^\alpha_{s} \nonumber\\ &&\ \ +\sum_{r,s} \left(\Lambda^C\right)^\alpha_\zeta q^\dagger_{r \alpha} q^\beta_{r} q^\dagger_{s \beta} q^\zeta_{s} \ .\label{VIIv} \end{eqnarray} The first term is in the form required by eq.~(\ref{VIIiii}), but the last two terms need further simplification. The summation over $r$ and $s$ for these terms can be divided into a sum over $r=s$, and over $r\not=s$. 
The terms with $r=s$ in eq.~(\ref{VIIv}) are \begin{equation}\label{VIIvi} \sum_{r} \left(\Lambda^C\right)^\gamma_\beta q^\dagger_{r \alpha} q^\beta_{r} q^\dagger_{r \gamma} q^\alpha_{r} +\sum_{r} \left(\Lambda^C\right)^\alpha_\zeta q^\dagger_{r \alpha} q^\beta_{r} q^\dagger_{r \beta} q^\zeta_{r}. \end{equation} The normal ordered version of this operator vanishes, since each quark in the baryon is only singly occupied (any two annihilation operators $q_{r}^\alpha q_{r}^\beta$ acting on the same quark line must vanish (even if $\alpha\not=\beta$)). Normal ordering eq.~(\ref{VIIvi}) using the quark commutator eq.~(\ref{IIi}) yields \begin{eqnarray}\label{VIIvii} &&\sum_{r} \left(\Lambda^C\right)^\gamma_\beta q^\dagger_{r \alpha} q^\alpha_{r} \delta^\beta_\gamma +\sum_{r} \left(\Lambda^C\right)^\alpha_\zeta q^\dagger_{r \alpha} q^\zeta_{r} \delta^\beta_\beta \\ && = 2F\sum_{r} q^\dagger_{r \alpha} \left(\Lambda^C\right)^\alpha_\zeta q^\zeta_{r} \nonumber =2F\ q^\dagger \Lambda^C q \nonumber \end{eqnarray} where the first term vanishes because $\Lambda^C$ is traceless and $\delta^\beta_\beta=2F$ for the second term. Finally, the contribution of the last two terms in eq.~(\ref{VIIv}) with $r\not=s$ must be evaluated. Since $q_{r}$ and $q^\dagger_{s}$ act on different quark lines, they can be treated as commuting operators. Thus the two sums in eq.~(\ref{VIIv}) with $r\not=s$ are equal to each other (as can be seen by exchanging the dummy indices $r$ and $s$), and their sum is equal to \begin{equation} 2 \sum_{r\not=s} \left(\Lambda^C\right)^\gamma_\beta q^\dagger_{r \alpha} q^\dagger_{s \gamma} q^\beta_{r} q^\alpha_{s}.\nonumber \end{equation} This operator acts on the $(r,s)$ quark pair in the initial baryon. The initial baryon is completely symmetric in flavor, so one can exchange the flavor labels $\alpha$ and $\beta$ on the initial quark pair. This gives the equivalent operator \begin{eqnarray} &&2 \sum_{r\not=s} \left(\Lambda^C\right)^\gamma_\beta q^\dagger_{r \alpha} q^\dagger_{s \gamma} q^\alpha_{r} q^\beta_{s} = 2 \sum_{r\not=s} q^\dagger_{s \gamma}\left(\Lambda^C\right)^\gamma_\beta q^\beta_{s} q^\dagger_{r \alpha} q^\alpha_{r}\nonumber\\ &&\ \ = 2\left(N_c-1\right) q^\dagger \Lambda^C q ,\label{VIIviii} \end{eqnarray} since there are $(N_c-1)$ values of $r\not=s$ for a given value of $s$, and $q^\dagger_{r} q_{r}$ is unity since each quark line is singly occupied. Combining eqs.~(\ref{VIIv})--(\ref{VIIviii}) with eq.~(\ref{VIIiii}) gives the final form of the identity \begin{equation}\label{VIIix} d^{ABC} \left\{ q^\dagger\Lambda^A q,q^\dagger \Lambda^B q\right\} = 2\left(N_c + F\right)\left(1-{1\over F}\right) q^\dagger \Lambda^C q. \end{equation} The identity eq.~(\ref{VIIix}) for the $SU(2F)$ adjoint two-body operator can be decomposed under $SU(2)\times SU(F)$ into the three representations $(1,0)$, $(0,adj)$ and $(1,adj)$. The identities for these three $SU(2)\times SU(F)$ representations can be obtained by substituting in the properly normalized $SU(2F)$ generators $J^i/\sqrt F$, $T^a/\sqrt 2$, and $\sqrt2\, G^{ia}$ into eq.~(\ref{VIIix}), and using the decomposition of the $SU(2F)$ $d$-symbol under $SU(2)\times SU(F)$ given in eq.~(\ref{Bv}). Eq.~(\ref{VIIix}) then yields the three identities in the second block of Table~\ref{tab:su2fiden}. \subsection{Vanishing Two-Body Operators} The final set of identities is obtained by combining the two adjoint one-body operators into the ${\bar a a}$ representation, and setting it equal to zero. 
The ${\bar a a}$ representation can be obtained by using the $SU(2F)$ projection operators discussed in appendix~\ref{app:suq}. The decomposition of the ${\bar a a}$ representation into irreducible $SU(2) \times SU(F)$ representations is given in eq.~(\ref{Bi}). Another method for obtaining the vanishing two-body identities is to simplify anticommutators of $J^i$, $T^a$ and $G^{ia}$ using the same techniques applied in the derivation of eq.~(\ref{VIIix}). One then finds linear combinations of the anticommutators which vanish. The ${\bar a a}$ identities are contained in the third block of Table~\ref{tab:su2fiden}. The simplest method for obtaining the vanishing two-body identities uses a trick. The baryon $SU(2)\times SU(F)$ representations in the completely symmetric $SU(2F)$ representation have identical Young tableaux for the spin and flavor subgroups (see Table~\ref{tab:su2f->suf}). Eq.~(\ref{VIIi}) then implies that the $SU(F)$ Casimir operator $T^a T^a$ of an arbitrary baryon flavor representation is simply related to its $SU(2)$ Casimir $J^i J^i$, \begin{equation}\label{VIIx} T^2 = J^2 + {1\over {4F}}N_c\left(N_c+2F\right)\left(F-2\right). \end{equation} This relation, which transforms as an $SU(2)\times SU(F)$ singlet, is a linear combination of the Casimir identity eq.~(\ref{VIIci}) and the $(0,0)$ element of the ${\bar a a}$ representation of $SU(2F)$. The singlet of the ${\bar a a}$ representation is obtained by finding the linear combination of eq.~(\ref{VIIx}) and the Casimir identity which is orthogonal to the Casimir identity. This linear combination, \begin{eqnarray} &&4F\left(2-F\right)\ \left\{G^{ia},G^{ia}\right\} + 3 F^2\ \left\{T^a, T^a\right\}\nonumber\\ &&\qquad + 4\left(1-F^2\right)\ \left\{J^i,J^i\right\}=0,\label{VIIxi} \end{eqnarray} is the first identity in the third block of Table~\ref{tab:su2fiden}. All the other elements of the $SU(2F)$ ${\bar a a}$ irreducible representation can be obtained by applying raising and lowering operators to the $(0,0)$ element, eq.~(\ref{VIIxi}). In our case, all other elements of the ${\bar a a}$ representation can be obtained by commuting the identity eq.~(\ref{VIIxi}) with the generators $J^i$, $T^a$ and $G^{ia}$. Since $J^i$ and $T^a$ are spin and flavor generators, they do not produce identities which are in new $SU(2)\times SU(F)$ representations. Thus, only commutators of $G^{ia}$ with eq.~(\ref{VIIxi}) need to be evaluated to obtain the other $SU(2)\times SU(F)$ identities in the $SU(2F)$ ${\bar a a}$ representation. Applying successive commutators, and projecting onto definite $SU(2) \times SU(F)$ channels gives the remaining identities in Table~\ref{tab:su2fiden}. For ease of notation, we have not explicitly written some of the spin and flavor projectors in Table~\ref{tab:su2fiden}, but have simply indicated that both sides of a given equation are to be projected into the relevant channel. For example, $\left\{G^{ia}, G^{ja}\right\} (J=2)$ indicates that only the spin-two piece is to be retained, and is a shorthand notation for $\left\{G^{ia}, G^{ja}\right\}- (1/3)\delta^{ij}\left\{G^{ka}, G^{ka}\right\}$. \section{The Operator Identities for Two and Three Flavors}\label{sec:2and3} The group theoretic structure of the independent quark operators and the complete set of non-trivial quark operator identities were derived in the previous two sections for an arbitrary number of flavors. Specialization of these results to two and three flavors is useful for application of this formalism to QCD.
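Before specializing, the identities derived above can be checked numerically on the completely symmetric representation. The following minimal sketch (Python with numpy; two flavors and a small odd $N_c$, purely for illustration) realizes the symmetric $N_c$-quark space as a bosonic Fock space, builds $J^i$, $T^a$ and $G^{ia}$ as one-body operators, and verifies the Casimir identity together with the $F=2$ case of eq.~(\ref{VIIx}), where it reduces to $T^2=J^2$:
\begin{verbatim}
import itertools
import numpy as np

F, Nc = 2, 5                 # two flavors and a small odd Nc
d = 2 * F                    # single-quark spin-flavor states

# Completely symmetric Nc-quark space = bosonic occupation numbers summing to Nc.
basis = [n for n in itertools.product(range(Nc + 1), repeat=d) if sum(n) == Nc]
index = {n: k for k, n in enumerate(basis)}
dim = len(basis)

def one_body(mat):
    """Matrix of q^dag_a mat_{ab} q_b on the symmetric Nc-quark space."""
    M = np.zeros((dim, dim), dtype=complex)
    for k, n in enumerate(basis):
        for b in range(d):
            if n[b] == 0:
                continue
            for a in range(d):
                m = list(n)
                m[b] -= 1
                amp = np.sqrt(n[b])
                m[a] += 1
                amp *= np.sqrt(m[a])
                M[index[tuple(m)], k] += mat[a, b] * amp
    return M

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
I2 = np.eye(2)

Ji  = [one_body(np.kron(s, I2) / 2) for s in sig]               # J^i
Ta  = [one_body(np.kron(I2, s) / 2) for s in sig]               # T^a (isospin)
Gia = [one_body(np.kron(s, t) / 4) for s in sig for t in sig]   # G^{ia}

J2 = sum(O @ O for O in Ji)
T2 = sum(O @ O for O in Ta)
G2 = sum(O @ O for O in Gia)

# Two-body -> zero-body (Casimir) identity:
assert np.allclose(J2 / F + T2 / 2 + 2 * G2,
                   0.5 * Nc * (Nc + 2 * F) * (1 - 1 / (2 * F)) * np.eye(dim))

# For F = 2 the relation T^2 = J^2 + Nc(Nc+2F)(F-2)/(4F) reduces to T^2 = J^2.
assert np.allclose(T2, J2)
print("Casimir identity and T^2 = J^2 verified for F = 2, Nc =", Nc)
\end{verbatim}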
There is considerable simplification in the results for two flavors, since many of the $SU(F)$ representations vanish for $F=2$. There is also some simplification for three flavors. \subsection{Two Flavors} For two light quark flavors, the zero---three-body operators transform according to the $SU(4)$ irreducible representations \begin{eqnarray} &{\rm 0-body:}&\ \ \left( 0 \times 0 \right)= 1, \nonumber\\ &{\rm 1-body:}&\ \ \left( 4 \times \overline 4 \right)= 1 + 15, \nonumber\\ &{\rm 2-body:}&\ \ \left( \overline{10} \otimes 10 \right) = 1 + 15 + 84 , \\ &{\rm 3-body:}&\ \ \left( \overline{20} \otimes 20 \right) = 1 + 15 + 84 + 300.\nonumber \end{eqnarray} The symmetric product of two adjoints (see Table~III) transforms as \begin{equation} \left( 15 \otimes 15 \right)_S = 1 + 15 + 20 + 84, \end{equation} so the vanishing two-body identities transform in the $20$-dimensional representation of $SU(4)$. The $SU(4)$ singlet, adjoint and ${\bar a a}$ two-body identities are listed in Table~VII. The $SU(2) \times SU(2)$ decompositions of the $15$, $20$ and $84$ are given in appendix~B. \subsection{Three Flavors} For three light quark flavors, the zero---three-body operators transform according to the $SU(6)$ irreducible representations \begin{eqnarray} &{\rm 0-body:}&\ \ \left( 0 \times 0 \right)= 1, \nonumber\\ &{\rm 1-body:}&\ \ \left( 6 \times \overline 6 \right)= 1 + 35, \nonumber\\ &{\rm 2-body:}&\ \ \left( \overline{21} \otimes 21 \right) = 1 + 35 + 405, \\ &{\rm 3-body:}&\ \ \left( \overline{56} \otimes 56 \right) = 1 + 35 + 405 + 2695. \nonumber \end{eqnarray} The symmetric product of two adjoints (see Table~III) transforms as \begin{equation} \left( 35 \otimes 35 \right)_S = 1 + 35 + 189 + 405 , \end{equation} so the vanishing two-body identities transform in the $189$-dimensional representation of $SU(6)$. The $SU(6)$ singlet, adjoint and ${\bar a a}$ two-body identities are listed in Table~VIII. The $SU(2) \times SU(3)$ decompositions of the $35$, $189$ and $405$ are given in appendix~B. The ${\bar a s}+{\bar s a}$ and ${\bar s s}$ representations of $SU(3)$ are the $10+\overline{10}$ and $27$, respectively. The ${\bar a a}$ representation doesn't exist for the $SU(3)$ flavor group. \section{Operator Analysis for the $\bf 1/N_c$ Expansion}\label{sec:opanalysis} We now analyze the spin-flavor structure of baryons in large-$N_c$ QCD for $N_c$ finite and odd. Any QCD operator which transforms as an irreducible representation of $SU(2) \times SU(F)$ can be written as an expansion in $n$-body quark operators, $n=0, \ldots, N_c$, which transform according to the same irreducible representation (see eq.~(\ref{IIIi})). The quark operator identities can be used to construct a linearly independent and complete operator basis of $n$-body operators with the correct transformation properties. The operator basis for any $SU(2) \times SU(F)$ representation contains a finite number of operators. The $1/N_c$ expansion for any QCD operator can be simplified by retaining only those operators in the operator basis which contribute at a given order in $1/N_c$. In this section, the generic structure of the $1/N_c$ expansion for the ground-state baryons is discussed. Sections~\ref{sec:axial} through~\ref{sec:nonlep} derive $1/N_c$ expansions for certain static properties of baryons. The analysis for two flavors is straightforward and reproduces earlier results. The analysis for three (or more) flavors is much more subtle and leads to many new results, as well as reproducing some old results. 
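As a bookkeeping check on the $SU(4)$ and $SU(6)$ decompositions of the previous section, one can verify that the dimensions on the two sides of each decomposition agree. The sketch below does only this arithmetic; the dimension lists are transcribed from the equations above, and the symmetric product of two adjoints has dimension $d(d+1)/2$ with $d=15$ or $35$.
\begin{verbatim}
# Dimension bookkeeping for the SU(4) (F = 2) and SU(6) (F = 3) decompositions.
checks = {
    "SU(4) 1-body":      (4 * 4,        [1, 15]),
    "SU(4) 2-body":      (10 * 10,      [1, 15, 84]),
    "SU(4) 3-body":      (20 * 20,      [1, 15, 84, 300]),
    "SU(4) (15 x 15)_S": (15 * 16 // 2, [1, 15, 20, 84]),
    "SU(6) 1-body":      (6 * 6,        [1, 35]),
    "SU(6) 2-body":      (21 * 21,      [1, 35, 405]),
    "SU(6) 3-body":      (56 * 56,      [1, 35, 405, 2695]),
    "SU(6) (35 x 35)_S": (35 * 36 // 2, [1, 35, 189, 405]),
}
for name, (total, parts) in checks.items():
    assert total == sum(parts), name
print("all dimension sums agree")
\end{verbatim}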
In the $N_c\rightarrow\infty$ limit, it has been shown that the baryon states form degenerate irreducible representations of the $SU(2F)$ spin-flavor algebra generated by spin, flavor, and the space components of the axial flavor currents\cite{dm,djm}. Matrix elements of the axial currents within a given irreducible baryon representation are of order $N_c$, whereas matrix elements of the axial currents between different irreducible representations are at most of order $\sqrt{N_c}$. The mass of the degenerate baryon multiplet is of order $N_c$. The degeneracy of the baryon spectrum is broken by $1/N_c$ corrections, and it has been shown that the $1/N_c$ correction to the baryon masses is proportional to $J^2$ (in the flavor symmetry limit)\cite{j}. This baryon mass spectrum is depicted in Fig.~\ref{fig:spectrum}. The degenerate baryon $SU(2F)$ multiplet splits into a tower of states with spin $1/2, \ldots , N_c/2$. Mass splittings between baryon states at the bottom of the tower ($J$ of order one) are of order $1/N_c$, whereas mass splittings between baryon states at the top of the tower ($J$ of order $N_c$) are of order one. The mass difference between the baryon states at the bottom and top of the towers is of order $N_c$, and is of the same order in $N_c$ as the average mass of the baryon multiplet. The $1/N_c$ correction to the baryon masses is of order $N_c$ near the top of the tower, and is not a small perturbation. However, it is small near the bottom of the tower, where the baryons have spins which are of order one. The $1/N_c$ expansion is therefore valid for the lowest spin states in the $SU(2F)$ baryon representation. Thus, our analysis considers baryons in the limit $N_c\rightarrow\infty$ with $J$ fixed. The $1/N_c$ expansion of a QCD operator is in terms of a basis of $n$-body quark operators, where $n=0, \ldots, N_c$. A generic $n$-body operator can be written as a polynomial of homogeneous degree $n$ in the one-body operators $J^i$, $T^a$ and $G^{ia}$, \begin{equation}\label{on} {\cal O}^{(n)} =\sum_{\ell,m} \left(J^i\right)^\ell \left(T^a\right)^m \left(G^{ia}\right)^{n-\ell-m}, \end{equation} so the expansion of a QCD one-body operator has the form \begin{equation}\label{ope} {\cal O}_{QCD} =\sum_{\ell,m,n} c^{(n)} {1\over N_c^{n-1}} \left(J^i\right)^\ell \left(T^a\right)^m \left(G^{ia}\right)^{n-\ell-m}, \end{equation} where summation over different $n$-body operators ${\cal O}^{(n)}_k$ is implied. The generalization of eq.~(\ref{ope}) to an $m$-body QCD operator is clear from the discussion in Section~\ref{sec:ncount}. An important feature of the operator expansion~(\ref{ope}) can now be explained. We argued in Sec.~\ref{sec:ncount} that the matrix elements of one-body operators are typically of order $N_c$, though they can be smaller if there are cancellations between insertions of the operator on the various quark lines. There is an important example of such a cancellation: the baryon states with a valid $1/N_c$ expansion are restricted to those states for which the matrix element of the one-body operator $J$ is of order one, not of order $N_c$. As a result, every factor of $J$ on the right hand side of eq.~(\ref{ope}) comes with a $1/N_c$ suppression. The quark operator identities can be used to eliminate redundant operators from the expansion eq.~(\ref{ope}). A complete and independent operator basis can be constructed by recursively applying the two-body quark operator identities of Secs.~\ref{sec:classify} and~\ref{sec:derive}. 
The anticommutator of two one-body operators occurs in the irreducible $SU(2)\times SU(F)$ representations given in the second column of Table~\ref{tab:opleft}. For example, the anticommutator of $J^i$ with $J^j$ is a flavor singlet which transforms in the symmetric tensor product of two spin ones, i.e. it is either a $(0,0)$ or a $(2,0)$. A similar analysis yields all the other entries in the second column of Table~\ref{tab:opleft}. Some of these operators can be eliminated using the operator identities listed in Table~\ref{tab:su2fiden}. There are a total of 15 identities which can be used to eliminate 15 operator products in Table~\ref{tab:opleft}, leaving only the representations listed in the third column of the table. There is a simple structure to the operator products which remain. Consider the operator products $\{T^a,T^b\}$, $\{T^a,G^{ib}\}$, and $\{G^{ia},G^{jb}\}$ which each have two adjoint indices. These indices can be contracted using $\delta^{ab}$ to give a flavor singlet, or with $d^{abc}$ or $f^{abc}$ to give flavor adjoints. All these contractions are eliminated by the identities. The spin indices in $\{G^{ia},G^{jb}\}$ can be contracted with $\delta^{ij}$ to give spin zero, or with $\epsilon^{ijk}$ to give spin one. These contractions are also eliminated using the identities. In addition, the $(1,{\bar a a})$ in $\{T^a,G^{ib}\}$ and $(2,{\bar a a})$ in $\{G^{ia},G^{jb}\}$ can be removed. All other products (including all operator products involving $J$) remain. To summarize, the reduction of operator products is given by the \medskip \noindent{\bf Operator Reduction Rule:\ } All operator products in which two flavor indices are contracted using $\delta^{ab}$, $d^{abc}$ or $f^{abc}$, or two spin indices on $G$'s are contracted using $\delta^{ij}$ or $\epsilon^{ijk}$ can be eliminated.\footnote{Operators such as $f^{acg}d^{bch} \left\{T^g,G^{ih}\right\}$ (which contains $i(\bar s a - \bar a s)$) are different from $\left\{T^a,G^{ib}\right\}$ (which contains $\bar s a + \bar a s$), and are not removed, since the two indices on $\left\{T^g,G^{ih}\right\}$ are not contracted using a $f$ or $d$-symbol. Many combinations in which two adjoint indices $g$ and $h$ are contracted with $f^{acg}d^{bch}$, $f^{acg}f^{bch}$, or $d^{acg}d^{bch}$ can be eliminated using eqs.~(\ref{Axv})--(\ref{Axxii}).} In addition, the $(1,{\bar a a})$ in $\{T^a,G^{ib}\}$ and the $(2,{\bar a a})$ in $\{G^{ia},G^{jb}\}$ can be eliminated. \medskip \noindent The last two exceptional cases are not important for examples of physical interest, since there is no ${\bar a a}$ representation for $F=2$ or $3$. Note that it is possible to choose a different set of independent operators using the two-body identities. The operator reduction rule we have chosen is appealing because it has a nice physical interpretation, which is discussed in Section~\ref{sec:axial}. The two-flavor case is special, since the $d$-symbol vanishes for $SU(2)$, so there are some additional simplifications. There is a symmetry between spin and isospin for the two-flavor case. The operator reduction rule for two flavors becomes: All operators in which two spin or isospin indices are contracted with a $\delta$ or $\epsilon$-symbol can be eliminated, with the exception of $J^2$. Note that the inclusion of $J^2$, but not $I^2$, in the set of independent operators does not break the symmetry between spin and isospin, because of the identity $I^2 = J^2$. 
In Sections~\ref{sec:axial} through~\ref{sec:nonlep}, the operator reduction rule is used to construct $1/N_c$ expansions for various static properties of baryons. These include baryon axial vector currents, masses, magnetic moments and hyperon non-leptonic decay amplitudes. \section{Operator Analysis For Completely Broken $\bf SU(3)$ Symmetry}\label{sec:broken} The $1/N_c$ expansion also provides information about the spin-flavor structure of baryons to all orders in $SU(3)$ symmetry breaking if the operator analysis is performed for completely broken $SU(3)$ flavor symmetry. In this section, the quark operator identities and the classification of independent $n$-body operators is analyzed for $SU(2) \times U(1)$ flavor symmetry. For this analysis, it is necessary to decompose the one-body operators $J^i$, $T^a$ and $G^{ia}$ which transform under $SU(3)$ flavor symmetry into operators with definite isospin and strangeness. We define new one-body operators \begin{eqnarray} &&I^a = T^a\ \ (a=1,2,3),\nonumber\\ &&G^{ia} = G^{ia}\ \ (a=1,2,3), \nonumber\\ &&t^\alpha = s^\dagger q^\alpha\ \ ,\nonumber\\ &&Y^{i\alpha} = s^\dagger J^i q^\alpha\ \ ,\\ &&N_s = s^\dagger s, \nonumber\\ &&J_s^i = s^\dagger J^i s,\nonumber \end{eqnarray} where $I^a$ and $G^{ia}$ are isospin one operators, and $t^\alpha$ and $Y^{i\alpha}$ are isospin-$1/2$ operators. The spin and strangeness quantum numbers of these operators are obvious from the above definitions. The independent one-body operators for completely broken $SU(3)$ symmetry are $J^i$, $I^a$, $t^\alpha$, $t^\dagger_\alpha$, $N_s$, $G^{ia}$, $Y^{i\alpha}$, $Y^{\dagger i}_\alpha$, and $J_s^i$. This set of operators replaces the $SU(3)$ one-body operators $J^i$, $T^a$ and $G^{ia}$. Note that $t^\alpha$ and $Y^{i\alpha}$, $\alpha=1,2$, correspond to $T^a$ and $G^{ia}$ for $a=4-i5,6-i7$, respectively. The strange quark number operator $N_s$ and the strange quark spin operator $J_s^i$ originate from $T^8$ and $G^{i8}$, \begin{eqnarray}\label{t8} &&T^8= {1 \over {2\sqrt{3}}}\left( N_c - 3 N_s \right),\nonumber\\ &&G^{i8} = {1 \over {2\sqrt{3}}}\left( J^i - 3 J^i_s \right). \end{eqnarray} The operator Lie algebra of Table~\ref{tab:su2fcomm} is unaffected by $SU(3)$ breaking, so the commutation relations of the one-body operators for $SU(2) \times U(1)$ flavor symmetry can be obtained directly from the $SU(3)$ flavor commutation relations using the above identifications. The commutation relations for $SU(2) \times U(1)$ flavor symmetry are listed in Table~\ref{tab:brokencomm}. Note that the spin-flavor algebra contains an $SU(4)$ spin-flavor subgroup with generators $J_{ud}^i$, $I^a$ and $G^{ia}$, where $J_{ud}^i$, the spin operator for the $u$ and $d$ quarks, is linearly related to the spin operators $J^i$ and $J_s^i$, \begin{equation} J_{ud}^i = u^\dagger J^i u + d^\dagger J^i d = J^i - J_s^i. \end{equation} The commutation relations in Table~\ref{tab:brokencomm} are written in terms of $J_{ud}^i$ and $J_s^i$, rather than $J^i$ and $J_s^i$ so that the $SU(4)$ spin-flavor symmetry is manifest. The full spin-flavor symmetry of the algebra is $SU(4) \times SU(2) \times U(1)$, where the $SU(2)$ factor is strange quark spin and the $U(1)$ factor is the number of strange quarks. 
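For reference, the substitutions in eq.~(\ref{t8}) follow directly from the explicit form of the eighth Gell-Mann matrix, $\lambda^8 = {\rm diag}(1,1,-2)/\sqrt3$, assuming the standard definition $T^a = q^\dagger \left(\lambda^a/2\right) q$: \[ T^8 = {1\over 2\sqrt3}\left(u^\dagger u + d^\dagger d - 2\, s^\dagger s\right) = {1\over 2\sqrt3}\left(N_c - 3 N_s\right), \] since $u^\dagger u + d^\dagger d + s^\dagger s = N_c$ on the baryon states. The expression for $G^{i8}$ follows in the same way, with an additional insertion of $J^i$ on each quark line, which replaces the quark number operators by the corresponding spin operators.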
The two-body operator identities for $SU(4) \times SU(2) \times U(1)$ spin-flavor symmetry can be obtained by decomposing the $SU(6)$ identities in Table~\ref{tab:su6iden} into irreducible representations with definite spin $J$, isospin $I$, and strange quark number $S$\footnote{Note that $S$ is defined as strange quark number $N_s$, not strangeness. Since the strangeness of an $s$-quark is $-1$, $S$ is the negative of strangeness.}. The resulting identities are given in Tables~\ref{tab:broken0} and~\ref{tab:brokennot0}. The identities are denoted by their $SU(2) \times SU(2) \times U(1)$ quantum numbers $(J,I)_S$. Table~\ref{tab:broken0} contains the $S=0$ identities and Table~\ref{tab:brokennot0} contains the $S=1$ and $S=2$ identities. (The $S=-1$ and $S=-2$ identities are the conjugates of the identities in Table~\ref{tab:brokennot0}.) The identities are written most easily in terms of the spin operators $J^i_{ud}$ and $J_s^i$. The anticommutators of two one-body operators occur in the irreducible $SU(2)\times SU(2) \times U(1)$ representations given in the second column of Table~\ref{tab:brokenleft}. The tables of identities contain a total of 33 operator identities which can be used to eliminate 33 different representations appearing in the second column of Table~\ref{tab:brokenleft}. The operator products which remain appear in the third column of the table. There are several simplifications which occur as a result of operator reduction. All $Y Y^\dagger$, $Y t^\dagger$, $t Y^\dagger$ and $t t^\dagger$ anticommutators can be eliminated using the operator identities. This implies that independent $n$-body operators with $\Delta S = 1$ ($\Delta S = -1$) contain only one factor of $t$ or $Y$ ($t^\dagger$ or $Y^\dagger$). It also implies that $\Delta S=0$ operators can be simplified so that they do not contain $t$, $t^\dagger$, $Y$ or $Y^\dagger$. All operator combinations in which two isovector indices are contracted with $\delta^{ab}$ or $\epsilon^{abc}$, or in which an isovector and isospinor index are contracted with $\left({\tau^a \over 2}\right)^\alpha_\beta$, can be eliminated. In addition, the product of two $G$'s or two $Y$'s or a $G$ and $Y$ in which the spin indices are contracted with a $\delta^{ij}$ or $\epsilon^{ijk}$ can be eliminated. A few other operators also can be eliminated. To summarize, the reduction of operator products for the spin $\otimes$ flavor group $SU(2) \times SU(2) \times U(1)$ obeys the \medskip \noindent{\bf Operator Reduction Rule II:\ } \begin{enumerate} \item All products of the form $t^\alpha t^\dagger_\beta$, $t^\alpha Y^{\dagger i}_\beta$, $Y^{i\alpha} t^\dagger_\beta$ and $Y^{i\alpha} Y^{\dagger j}_\beta$ can be eliminated. \item All operator products in which two isovector indices are contracted using $\delta^{ab}$ or $\epsilon^{abc}$, or an isovector and an isospinor index are contracted with $\left({\tau^a \over 2}\right)^\alpha_\beta$, can be eliminated. \item All operator products $G^{ia} G^{jb}$, $Y^{i\alpha} G^{jb}$, and $Y^{i\alpha} Y^{j\beta}$ (and their conjugates) in which two spin indices are contracted using $\delta^{ij}$ or $\epsilon^{ijk}$ can be eliminated. \item The operators $\{ J_s^i, J_s^i \}$, $\{ G^{ia}, J_{ud}^i \}$, $\{ Y^{i\alpha}, J_s^i \}$ and $i\epsilon^{ijk} \{Y^{i\alpha}, J_s^j\}$ (and their conjugates) can be eliminated. \end{enumerate}. 
Note that there are operators with two isospinor indices contracted with $\epsilon_{\alpha\beta}$ which can {\it not} be eliminated, namely $\epsilon_{\alpha\beta}\{t^\alpha, t^\beta\}$ and $i\epsilon_{\alpha\beta}\{t^\alpha, Y^{i\beta}\}$. Again, it is possible to choose a different set of independent operators using the two-body identities. The operator reduction rule chosen here is one of the simplest. \section{Axial Currents and Meson Couplings}\label{sec:axial} The results of the previous sections will be used to obtain $1/N_c$ expansions for the baryon axial currents and meson couplings, masses, magnetic moments and hyperon non-leptonic decay amplitudes. The axial currents and meson couplings are considered in this section. We begin by deriving the $1/N_c$ expansion for the baryon axial vector current in the $SU(F)$ flavor symmetry limit. Expansions for the axial currents to first order in $SU(3)$ breaking and for $SU(2) \times U(1)$ flavor symmetry are derived in subsequent subsections. Only the space components of the axial current have non-zero matrix elements at zero recoil, so the axial vector current $A^{ia}$ transforms as $(1,adj)$ under $SU(2) \times SU(F)$. The $1/N_c$ expansion for $A^{ia}$ derived in this section can be applied to any other baryon operator which transforms as a $(1, adj)$, such as the magnetic moment operator. The $n$-body quark operators in the $1/N_c$ expansion of $A^{ia}$ reduce to operators containing only a single factor of $G^{ia}$ or $T^a$ by the operator reduction rule, which eliminates all operator products with contracted flavor indices. The only one-body operator is $G^{ia}$. There are two allowed two-body operators, which can be written as \begin{eqnarray}\label{VIIIii} {\cal O}_2^{ia} &=& \epsilon^{ijk}\left\{ J^j, G^{ka} \right\} = i \left[ J^2, G^{ia} \right] ,\\ {\cal D}_2^{ia} &=& J^i T^a .\label{VIIIiii} \end{eqnarray} There are two three-body operators, which can be written as \begin{eqnarray} {\cal O}_3^{ia} &=& \left\{ J^2, G^{ia} \right\} - \frac12 \left\{ J^i, \left\{ J^j, G^{ja} \right\} \right\} ,\label{VIIIiv}\\ {\cal D}_3^{ia} &=& \left\{ J^i, \left\{ J^j, G^{ja} \right\} \right\} . \label{VIIIv} \end{eqnarray} All remaining $n$-body operators can be obtained recursively by applying anticommutators of $J^2$ to the above operators. For $n \ge 2$, the independent $(n+2)$-body operators are given by \begin{eqnarray} {\cal O}_{n+2}^{ia} &=& \left\{ J^2, {\cal O}_n^{ia} \right\},\label{VIIIvi}\\ {\cal D}_{n+2}^{ia} &=& \left\{ J^2, {\cal D}_n^{ia} \right\}. \label{VIIIvii} \end{eqnarray} The set of operators $G^{ia}$, ${\cal O}_n^{ia}$ and ${\cal D}_n^{ia}$, $n=2,3,\ldots,N_c$, forms a complete set of linearly independent spin-one adjoints. The operators ${\cal D}_n^{ia}$ are diagonal operators, in the sense that they have non-zero matrix elements only between states with the same spin. The operators ${\cal O}_n^{ia}$ are purely off-diagonal, in the sense that they only connect states with different spin. The operator $G^{ia}$ is neither diagonal nor off-diagonal. The operators $G^{ia}$, ${\cal D}_n^{ia}$ and ${\cal O}_{2m+1}^{ia}$, $m=1,2,\ldots$, are odd under time reversal, whereas the operators ${\cal O}_{2m}^{ia}$, $m=1,2,\ldots$ are even under time reversal. Since $A^{ia}$ is $T$-odd, the operators ${\cal O}_{2m}^{ia}$ do not occur in the $1/N_c$ expansion of the axial vector current.
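The two forms of ${\cal O}_2^{ia}$ in eq.~(\ref{VIIIii}) are equivalent because $G^{ia}$ is a vector under spin, $\left[J^j,G^{ia}\right]=i\epsilon^{jik}G^{ka}$ (see Table~\ref{tab:su2fcomm}), so that \[ i\left[J^2,G^{ia}\right] = i\left(J^j\left[J^j,G^{ia}\right]+\left[J^j,G^{ia}\right]J^j\right) = -\epsilon^{jik}\left\{J^j,G^{ka}\right\} = \epsilon^{ijk}\left\{J^j,G^{ka}\right\}. \] The recursion in eqs.~(\ref{VIIIvi}) and~(\ref{VIIIvii}) exploits the same structure: anticommuting with the spin singlet $J^2$ adds two factors of $J$ without changing the $SU(2)\times SU(F)$ quantum numbers of the operator.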
The $1/N_c$ expansion for $A^{ia}$ is \begin{equation}\label{VIIIviii} A^{ia} = a_1 G^{ia} + \sum_{n=2,3}^{N_c} b_n {1 \over N_c^{n-1}}{\cal D}_n^{ia} + \sum_{n=3,5}^{N_c} c_n {1\over N_c^{n-1}}{\cal O}_n^{ia}, \end{equation} where the coefficients $a_1$, $b_n$ and $c_n$ have expansions in powers of $1/N_c$ and are order unity at leading order in the $1/N_c$ expansion. The expansion for $A^{ia}$ eq.~(\ref{VIIIviii}) has a simple physical interpretation in terms of quark line diagrams (see Fig. 4). An insertion of the axial current operator on a quark line $r$ gives $J^i_r T^a_r$ acting on that quark line. (The subscript $r$ implies that the operator acts only on the $r^{\rm th}$ quark.) Summation over all quark lines yields the operator \[ \sum_r J^i_r T^a_r = G^{ia}. \] Spin-independent gluon exchange renormalizes the operator $G^{ia}$. Spin-dependent gluon exchange produces additional factors of $J$ acting on different quark lines. The most general operator structure is a flavor matrix $T^a_r$ on some quark line $r$, and a product of $J$'s on one or more different quark lines $s_1,\ldots s_\ell$, summed over all possible choices for $r,s_1,\ldots,s_\ell$. Products of $J_r$ on a single quark line can be reduced to at most one factor of $J_r$ since \[ J_r^i J_r^j = \frac 14 \delta^{ij} + i\frac12\epsilon^{ijk} J_r^k \] for the spin-1/2 operator $J_r^i$. If $r$ is not equal to any of the $s_k$'s, the operator produced after summing over all possible quark combinations is \[ \sum_{r,s_1,\ldots,s_\ell} T^a_r J_{s_1} \ldots J_{s_\ell} = T^a J\ldots J, \] where the indices on the $J$'s are combined to form a spin-one operator. If $r$ is equal to one of the $s_k$'s, the operator produced after summing over all possible quark combinations is \[ \sum_{r,s_1,\ldots,s_\ell} T^a_r J_r J_{s_1} \ldots J_{s_\ell} = G^{ia} J\ldots J, \] where the indices on $G^{ia}$ and the $J$'s are combined to form a spin-one operator. The operator expansion eq.~(\ref{VIIIviii}) has this form. The above diagrammatic argument is similar to the one given in ref.~\cite{lm}. The expansion for $A^{ia}$ eq.~(\ref{VIIIviii}) can be reduced to a finite operator series using the fact that each factor of $J$ comes with a $1/N_c$ suppression. The truncation of the operator series is different for the two flavor and three flavor cases. \subsection{Meson Couplings in the Flavor Symmetry Limit} \subsubsection{Two Flavors and the $\bf I=J$ Rule}\label{sec:twoflavor} In the two-flavor case, the isospin $I$ is of order one for baryons with spin $J$ of order one, since $I=J$. Thus every factor of $I$ and $J$ in eq.~(\ref{VIIIviii}) brings a $1/N_c$ suppression. The matrix elements of the operator $G^{ia}$ are of order $N_c$, as are the matrix elements of ${\cal O}_m^{ia}$ and ${\cal D}_m^{ia}$, $m$ odd. The matrix elements of ${\cal D}_2^{ia}= J^i I^a$, and ${\cal D}_m^{ia}$, $m$ even, are of order one. Thus, the operator expansion eq.~(\ref{VIIIviii}) can be truncated after the one-body operator $G^{ia}$, \begin{equation}\label{VIIIix} A^{ia} = a_1\ G^{ia} \left[ 1 +{\cal O}\left({1\over N_c^2}\right) \right]. \end{equation} Eq.~(\ref{VIIIix}) implies that there are no $1/N_c$ corrections to ratios of pion-baryon couplings for two quark flavors\cite{dm}. It also implies that there are no $1/N_c$ corrections to ratios of pion-baryon couplings amongst baryons in a given strangeness sector for three quark flavors\cite{djm}.
In addition, one can prove that pion couplings to baryons are purely $p$-wave in the large $N_c$ limit, which is an example of the $I=J$ rule of Mattis and collaborators~\cite{mattis}. For arbitrary $N_c$, the baryons form a tower of states with $J$ ranging from $1/2$ to $N_c/2$. Neglecting baryon recoil (since the baryon mass is of order $N_c$), the only pseudoscalar meson coupling between the spin-1/2 and spin-3/2 states is a $p$-wave ($\ell=1$) coupling. However, higher spin baryons can have couplings in other angular momentum channels, such as $\ell=3,5,\ldots$, where $\ell$ must be odd by parity. A meson coupling in the $\ell^{\rm th}$ partial wave is given by an operator $A^{(i_1,\ldots, i_\ell) a}$ which is completely symmetric and traceless in the $\ell$ spin indices. The operator reduction rule implies that at leading order in $N_c$, this operator is proportional to $G^{i_1a} J^{i_2} J^{i_3}\ldots J^{i_\ell}/N_c^{\ell-1}$, completely symmetrized in the spin indices, with a correction of relative order $1/N_c^2$. Since matrix elements of $G^{ia}$ are of order $N_c$, and matrix elements of $J^i$ are of order one, we conclude that the $\ell=1$ coupling is of order $N_c$ (which is just eq.~(\ref{VIIIix})), the $\ell=3$ coupling is of order $1/N_c$, the $\ell=5$ coupling is of order $1/N_c^3$, etc. In the large $N_c$ limit, all of the higher partial waves vanish, and the pion coupling to baryons is purely $p$-wave, isospin one, i.e. it has $I=J$. Mattis et al.\ \cite{mattis} originally derived the $I=J$ rule in the Skyrme model. The $I=J$ rule is true in large $N_c$ QCD because $G^{ia}/N_c$ which has $I=J=1$ is of order one, whereas the matrix elements of $I/N_c$ and $J/N_c$, which have $|I-J|=1$, are of order $1/N_c$. We also get the additional result that the higher partial wave pion-couplings are of order $1/N_c^{\ell-2}$ in the $1/N_c$ expansion. In general, pion couplings which violate the $I=J$ rule are suppressed by a factor of $1/N_c^{|I-J|}$ relative to the $I=J$ coupling, which is of order $N_c$. A similar result also holds for other meson-baryon couplings. For example, $p$-wave kaon couplings (which have $J=1$ and $I=1/2$) are of order $\sqrt{N_c}$, and $p$-wave $\eta$ couplings (which have $J=1$ and $I=0$) are of order one~\cite{djm}; the spin-one $\rho$ coupling and spin-zero $\omega$ couplings are of order $N_c$; and the spin-zero $\rho$ coupling and spin-one $\omega$ coupling are of order one~\cite{mattis,cgo}. \subsubsection{Three Flavors} The analysis of the $1/N_c$ expansion of the axial current for three or more flavors is essentially the same. We employ the familiar language of $SU(3)$ symmetry here for concreteness. For three flavors, matrix elements of the flavor generators $T^a$ and matrix elements of the spin-flavor generators $G^{ia}$ are not the same order in the $1/N_c$ expansion everywhere in the flavor weight diagram. Consider, for example, the weight diagram of the spin-$1/2$ baryons (Fig.~2). Baryons with strangeness of order $N_c^0$ (near the top of the weight diagram) have matrix elements of $T^a$, $a=1,2,3$ and $G^{i8}$ of order one; matrix elements of $T^a$ and $G^{ia}$, $a=4,5,6,7$, of order $\sqrt{N_c}$; and matrix elements of $T^8$ and $G^{ia}$, $a=1,2,3$ of order $N_c$. In other regions of the weight diagram, matrix elements of different linear combinations of the $T$'s and $G$'s are of order $N_c$, $\sqrt{N_c}$ and one.
This non-trivial $N_c$-dependence of the matrix elements of $T^a$ and $G^{ia}$ makes the analysis of the $1/N_c$ expansion for three (or more) flavors more complicated than that for two flavors, because matrix elements of the flavor generators $T^a$ are not suppressed relative to $G^{ia}$. In fact, matrix elements of $T^a$ can be a factor of $N_c$ larger than matrix elements of $G^{ia}$ (for some values of the index $a$) in some regions of the flavor weight diagrams. The $1/N_c$ expansion of the axial vector current is tractable for three or more flavors because of the operator reduction rule and because matrix elements of the spin $J$ are suppressed for our choice of limit. The general form of the $1/N_c$ expansion for the axial current eq.~(\ref{ope}) contains terms with arbitrary powers of $T^a/N_c$, $G^{ia}/N_c$ and $J^i/N_c$. Factors of $T^a/N_c$ and $G^{ia}/N_c$ are of order one somewhere in the weight diagram, but $J^i/N_c$ is of order $1/N_c$ everywhere. Thus, operators with arbitrary powers of $T^a/N_c$ and $G^{ia}/N_c$ are all equally important and must be retained. The operator reduction rule, however, allows us to reduce this set of operators to the subset with one factor of $T^a$ or $G^{ia}$, eq.~(\ref{VIIIviii}). The operator expansion eq.~(\ref{VIIIviii}) can be truncated after the first two terms, \begin{equation}\label{VIIIx} A^{ia} = a_1\ G^{ia} + b_2\ {1\over N_c}{\cal D}_2^{ia} = a_1\ G^{ia} + b_2 {J^i T^a\over N_c}, \end{equation} since all other terms in eq.~(\ref{VIIIviii}) are suppressed by at least $1/N_c^2$ relative to the terms which have been retained {\it everywhere} in the weight diagram. From the operator definitions eqs.~(\ref{VIIIii})--(\ref{VIIIvii}), it is easy to see that: (i) The operators ${{\cal O}_{n}^{ia}/N_c^{n-1}}$, $n$ odd, are of order $1/N_c^{n-1}$ relative to $G^{ia}$ everywhere in the weight diagram; (ii) The operators ${{\cal D}_{n}^{ia}/N_c^{n-1}}$, $n$ odd, are suppressed by $1/N_c^{n-1}$ relative to $G^{ia}$ everywhere in the weight diagram; and (iii) The operators ${{\cal D}_{n}^{ia}/N_c^{n-1}}$, $n$ even, are suppressed by $1/N_c^{n-2}$ relative to ${\cal D}_2^{ia}/N_c$ everywhere in the weight diagram. Although the second term in eq.~(\ref{VIIIx}) contains an explicit factor of $1/N_c$, this term is not necessarily suppressed relative to the first term, and must be retained for a valid truncation. In eq.~(9.3) of ref.~\cite{djm}, we defined the $SU(3)$-invariant meson couplings $\cal M$ and $\cal N$, and proved that \begin{equation}\label{VIIIxi} {{\cal N}\over{\cal M}} = {1\over2} + {\alpha\over N_c} + {\cal O}\left({1\over N_c^2}\right). \end{equation} Eq.~(\ref{VIIIx}) implies \begin{equation}\label{VIIIxii} \alpha = - {3 \over 2}\left( 1 + {b_2 \over a_1} \right) , \end{equation} so that $b_2/a_1$ determines the parameter $\alpha$. Eq.~(\ref{VIIIx}) is the quark representation analog of eq.~(9.3) of ref.~\cite{djm} for the meson couplings in the $SU(3)$ limit. We are interested in the baryon couplings for the physical case of $N_c=3$. In the $SU(3)$ limit, the octet baryon couplings are described by the parameters $D$ and $F$, the decuplet to octet transition couplings are described by ${\cal C}$, and the decuplet couplings are described by ${\cal H}$\cite{baryonxpt}.
These parameters can be determined by taking matrix elements of eq.~(\ref{VIIIx}) in quark model baryon states, \begin{eqnarray}\label{dfch} D &=& {1 \over 2} a_1, \nonumber\\ F&=& {1 \over 3} a_1 + {1 \over 6} b_2, \nonumber\\ {\cal C} &=& -a_1,\\ {\cal H} &=& -{3 \over 2} \left( a_1 + b_2 \right), \nonumber \end{eqnarray} where we have set $N_c=3$. Note that the diagonal operator ${\cal D}_2$, which is the product of spin and flavor generators, only affects the diagonal couplings $F$ and ${\cal H}$. The parameter $b_2$ produces deviations from the $SU(6)$ prediction. One can eliminate the unknown parameters to get the two relations, \begin{equation}\label{ch} {\cal C}= - 2D,\qquad {\cal H} = 3 D - 9 F. \end{equation} The first relation is an $SU(6)$ relation. The $F/D$ ratio is given by \begin{equation}\label{fdeqn} {F\over D}= \frac23 + \frac{b_2}{3a_1}. \end{equation} One cannot determine the accuracy in $1/N_c$ of $F/D$ from eq.~(\ref{fdeqn}), since this equation is only valid at $N_c=3$. One can also analyze the meson couplings for arbitrary $N_c$, and study the limit as $N_c\rightarrow 3$. The $1/N_c$ dependence of meson-baryon couplings depends on the location of the baryon in the $SU(3)$ weight diagram. If one considers baryons with strangeness of order $N_c^0$, the second term in eq.~(\ref{VIIIx}) is of order $1/N_c^2$ relative to the first term for $a=1,2,3$, ($\pi$ couplings); of relative order $1/N_c$ for $a=4,5,6,7$, ($K$ couplings); and of the same order for $a=8$ ($\eta$ couplings). For baryons with strangeness of order $N_c^0$, the unknown ratio $b_2/a_1$ affects pion couplings at order $1/N_c^2$, kaon couplings at order $1/N_c$, and $\eta$ couplings at order one~\cite{djm}. If one defines $F/D$ using ratios of pion couplings between baryons of strangeness $N_c^0$, one concludes that $F/D=2/3 +{\cal O}(1/N_c^2)$. Similarly, a determination using $K$ and $\eta$ couplings gives $F/D=2/3 +{\cal O}(1/N_c)$ and $F/D=2/3 +{\cal O}(1)$, respectively. The ratio of $\pi$ couplings was used in ref.~\cite{djm} since it gives the most accurate determination of $F/D$. For a more detailed discussion of the meson couplings in the $SU(3)$ limit and the $F/D$ ratio, see Sections~IX and~XI of ref.~\cite{djm}. At next order in the $1/N_c$ expansion, the meson couplings have the form \begin{equation}\label{mesonsubleading} A^{ia} = a_1\ G^{ia} + b_2 {J^i T^a\over N_c}+b_3 {{\cal D}_3^{ia}\over N_c^2} + c_3 {{\cal O}_3^{ia}\over N_c^2}. \end{equation} Since there are four linearly independent operators in eq.~(\ref{mesonsubleading}), the four parameters $D$, $F$, ${\cal C}$ and ${\cal H}$ are all independent at this order in the $1/N_c$ expansion, and there is no prediction. \subsection{Meson Couplings with Perturbative $\bf SU(3)$ Breaking}\label{sec:pert} In this section, we compute the $1/N_c$ expansion for a $(1,8)$ operator (such as the axial current or meson coupling) to first order in $SU(3)$ breaking. Flavor $SU(3)$ breaking in QCD is due to the light quark masses and transforms as a flavor octet. We will neglect isospin breaking, and work to linear order in the $SU(3)$ symmetry breaking perturbation $\epsilon {\cal H}^8$. The $SU(3)$ symmetry-breaking correction to the axial current is computed to linear order in $SU(3)$ symmetry breaking from the tensor product of the axial current, which transforms as $(1,8)$, with the perturbation, which transforms as $(0,8)$. This tensor product contains a $(1,0)$, two $(1,8)$'s, a $(1,10 + \overline{10})$, and a $(1,27)$.
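Returning for a moment to the flavor-symmetric couplings, the relations eq.~(\ref{ch}) and eq.~(\ref{fdeqn}) follow from eq.~(\ref{dfch}) by eliminating $a_1$ and $b_2$. A short symbolic sketch of this elimination is given below; it uses sympy, with the symbols a1 and b2 standing for the coefficients $a_1$ and $b_2$ of eq.~(\ref{VIIIx}).
\begin{verbatim}
# Check that eqs. (ch) and (fdeqn) follow from the N_c = 3 matrix elements of eq. (dfch).
import sympy as sp

a1, b2 = sp.symbols('a1 b2')
D = a1 / 2                          # D, F, C, H as given in eq. (dfch)
F = a1 / 3 + b2 / 6
C = -a1
H = -sp.Rational(3, 2) * (a1 + b2)

assert sp.simplify(C - (-2 * D)) == 0                                  # C = -2 D
assert sp.simplify(H - (3 * D - 9 * F)) == 0                           # H = 3 D - 9 F
assert sp.simplify(F / D - (sp.Rational(2, 3) + b2 / (3 * a1))) == 0   # F/D ratio
print("eqs. (ch) and (fdeqn) reproduced")
\end{verbatim}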
The form of these operators is determined by a straightforward application of the operator reduction rule. The series ${\cal O}_n$ for the $(1,0)$ operators starts with the one-body operator ${\cal O}_1=J^i$. Consecutive terms in the series are generated by ${\cal O}_{n+2}= \left\{J^2,{\cal O}_n\right\}$. The higher order terms in the expansion of the meson couplings are all suppressed by at least $1/N_c^2$ relative to the leading operator $J^i$, which gives the symmetry breaking contribution \begin{equation}\label{pertsinglet} \delta A^{ia}_1 \propto \delta^{a8} J^i \ . \end{equation} The $(1,8)$ operator has the same form as eq.~(\ref{VIIIx}). The $(1,8)$ $SU(3)$-breaking correction to the axial currents is of the form \begin{equation}\label{pertoctet} \delta A^{ia}_8 \propto d^{ab8}\left[ c_1\ G^{ib} + c_2 {J^i T^b\over N_c}\right]\ . \end{equation} A similar term with the $d$-symbol replaced by an $f$-symbol is forbidden by time reversal. Neglected operators are suppressed by $1/N_c^2$ relative to the operators we have retained. There is a second $(1,8)$ operator of the form \begin{equation}\label{fepsop} \delta A^{ia}_8 \propto f^{ab8} \epsilon^{ijk} \{ J^j, G^{kb} \} \ . \end{equation} Neglected operators contain additional factors of $J^2/N_c^2$, and are suppressed by at least $1/N_c^2$ relative to eq.~(\ref{fepsop}). The operator eq.~(\ref{fepsop}) can be written as \begin{equation} \delta A^{ia}_8 \propto \left[J^2,\left[T^8,G^{ia}\right]\right], \end{equation} which shows that it contributes only to amplitudes which change both $J$ and strangeness ($T^8$). The operator reduction rule implies that the $(1,10+\overline{10})$ ($(1,{\bar a s}+{\bar s a})$) representation is given by (see Table~\ref{tab:opleft} and Appendix~\ref{app:suq}) \begin{equation}\label{eqassa} \{G^{ia},T^b\}-\{G^{ib},T^a\}-{2\over3} f^{abc} f^{cgh} \{G^{ig},T^h\}. \end{equation} Additional operators have at least two more factors of $J$, and are suppressed by $1/N_c^2$ relative to the operator in eq.~(\ref{eqassa}). Note that \begin{equation}\label{doublecomm} \left[J^2,\left[T^b,G^{ia}\right]\right] = \left[T^2,\left[T^b,G^{ia}\right]\right] = f^{abc} f^{cgh} \{G^{ig},T^h\}, \end{equation} where the first equality follows from eq.~(\ref{VIIx}), which implies that $J^2-T^2$ equals a constant. Substituting eq.~(\ref{doublecomm}) into eq.~(\ref{eqassa}) gives \begin{equation}\label{pertassa} \delta A^{ia}_{10+\overline{10}} \propto \{G^{ia},T^8\}-\{G^{i8},T^a\}- {2\over3}\left[J^2,\left[T^8,G^{ia}\right]\right]\ . \end{equation} The $(1,27)$ symmetry-breaking correction can be built out of the spin-zero $27$ in $\{T^a,T^b\}$; the spin-one $27$ in $\{G^{ia},T^b\}+\{G^{ib},T^a\}$; and the spin-two $27$ in $\{G^{ia},G^{jb}\}+\{G^{ja},G^{ib}\}$ (see Table~\ref{tab:opleft}). The spin-zero and spin-two $27$'s must be combined with additional factors of $J/N_c$ to form a $(1,27)$, so to leading order in $1/N_c$, only the spin-one $27$ needs to be retained. We do not need to subtract off the singlet and octet parts of $\{G^{ia},T^b\}+\{G^{ib},T^a\}$ because they can be absorbed into the symmetry-breaking singlet and octet operators (\ref{pertsinglet}) and~(\ref{pertoctet}) which have already been included. Thus, the symmetry-breaking $(1,27)$ correction is proportional to \begin{equation}\label{pert27} \delta A^{ia}_{27} \propto \{G^{ia},T^8\}+\{G^{i8},T^a\} \ . \end{equation} The neglected higher order $27$ operators are only suppressed by $1/N_c$ relative to the operator we have retained.
The leading order $1/N_c$ expansion for the axial current to first order in $SU(3)$ symmetry-breaking is given by the lowest order $SU(3)$ symmetry result eq.~(\ref{VIIIx}) and the sum of the perturbations eqs.~(\ref{pertsinglet})--(\ref{pert27}). The perturbations involving $T^8$ and $G^{i8}$ can be simplified using eq.~(\ref{t8}). The final expression for the axial current to linear order in symmetry breaking is \begin{eqnarray}\label{axpluspert} A^{ia} + \delta A^{ia} &=& \left(a + \epsilon c_1 d^{ab8}\right) G^{ib} + \left(b + \epsilon c_2 d^{ab8} \right) {J^i T^b\overN_c}\nonumber\\ && + \epsilon c_3 {\left\{G^{ia},N_s\right\}\over N_c} +\epsilon c_4{\left\{J_s^i,T^a\right\}\over N_c} \\ && + \epsilon{c_5\over 3N_c} \left[J^2,\left[N_s,G^{ia}\right]\right] +\epsilon c_6 \delta^{a8} J^i,\nonumber \end{eqnarray} where the parameter $\epsilon$ emphasizes which terms result from symmetry breaking, and the parameters $a$ and $b$ contain terms of order $\epsilon$ from the substitutions for $T^8$ and $G^{i8}$. The double commutator term in eq.~(\ref{axpluspert}) only contributes to processes which change both spin and strangeness. The coefficient $c_6$ in eq.~(\ref{axpluspert}) can be constrained due to the following considerations. We focus on baryons containing no strange quarks. The matrix elements of the isovector axial current ($\bar q \gamma^\mu\gamma_5 \tau^a q$) are of order $N_c$ in the strangeness zero sector, since the expansion for the isovector axial current involves the one-body operator $G^{ia}$, which has matrix elements of order $N_c$. The flavor singlet axial current ($\bar q \gamma^\mu\gamma_5 q$) has matrix elements of order one in the strangeness zero sector, since matrix elements of the one-body operator $J^i$ are of order one. The expansion of the strange axial current ($\bar s \gamma^\mu\gamma_5 s$) involves the one-body operator $J^i$, so naively one expects the matrix elements of the strange axial current also to be of order one. However, in the strangeness-zero sector, $\bar s \gamma^\mu\gamma_5 s$ can only couple to the baryon through a closed $s$-quark loop, which is accompanied by a $1/N_c$ suppression factor. Thus, matrix elements of the strange axial current are of order $1/N_c$, not order one, for strangeness-zero baryons. $SU(3)$ breaking effects in the strangeness-zero sector involve closed $s$-quark loops, and so $SU(3)$ breaking has a $1/N_c$ suppression factor. Diagrams involving a closed loop, such as those in Fig.~(\ref{fig:sloop}), lead to quark mass dependence in axial current matrix elements. Graphs, such as in Fig.~(\ref{fig:sloop}a), in which the axial current is not inserted in the closed loop depend only on ${\rm Tr} M$, where $M$ is the quark mass matrix, and yield matrix elements which are quark mass-dependent, but which do not violate $SU(3)$. (An example of this kind is the pion-nucleon sigma term.) $SU(3)$ violation arises from diagrams such as Fig.~(\ref{fig:sloop}b) in which the axial current operator is inserted in the closed quark loop. This diagram can only produce $SU(3)$ violation in the baryon axial currents at order $1/N_c$. Thus, $SU(3)$ violation in the axial currents must be order $1/N_c$ for strangeness-zero baryons. Imposing this constraint on the expansion eq.~(\ref{axpluspert}) fixes the coefficient $c_6$. 
In the strangeness zero sector, eq.~(\ref{axpluspert}) reduces to \begin{equation} \left(a + \epsilon c_1 d^{ab8}\right) G^{ib} + \left(b + \epsilon c_2 d^{ab8} \right) {J^i T^b\over N_c}+\epsilon c_6 \delta^{a8} J^i.\nonumber \end{equation} Requiring that there is no $SU(3)$ symmetry breaking between the $\pi$ and $\eta$ couplings at order one gives the constraint \begin{equation}\label{constraint} 3 c_6 = c_1 + c_2, \end{equation} which reduces the number of parameters in eq.~(\ref{axpluspert}) to six. Eqs.~(\ref{axpluspert}) and (\ref{constraint}) yield the following expressions for the pion, kaon and $\eta$ couplings. The pion couplings are given by \begin{eqnarray}\label{pionpert} \pi^{ia} &=& \tilde a G^{ia} + \tilde b {J^i T^a\over N_c}\nonumber\\ && + \epsilon c_3 {\left\{G^{ia},N_s\right\}\over N_c} +\epsilon c_4{\left\{J_s^i,T^a\right\}\over N_c}, \end{eqnarray} where \[ \tilde a = a + \frac1{\sqrt3}\epsilon c_1,\ \ \ \tilde b = b + \frac1{\sqrt3}\epsilon c_2. \] If one considers baryons with strangeness of order $N_c^0$, the matrix elements of $G^{ia}$ are of order $N_c$, whereas the matrix elements of $J^i T^a$ and $J_s^i T^a$ are of order unity. To the order we are working, it is consistent to retain only \begin{equation}\label{pionpertii} \pi^{ia} = \tilde a G^{ia} + \epsilon c_3 {\left\{G^{ia},N_s\right\}\over N_c} . \end{equation} Eq.~(\ref{pionpertii}) determines the pion couplings of baryons with a given strangeness to relative order $1/N_c^2$, and it implies that pion couplings have a linear dependence on strangeness at relative order $1/N_c$~\cite{djm}. The kaon couplings are given by \begin{eqnarray}\label{kaonpert} K^{ia} &=& \left(\tilde a -\frac{\sqrt3}{2} \epsilon c_1\right) G^{ia} + \left(\tilde b -\frac{\sqrt3}{2} \epsilon c_2 \right) {J^i T^a\over N_c}\nonumber\\ && + \epsilon c_3 {\left\{G^{ia},N_s\right\}\over N_c} +\epsilon c_4{\left\{J_s^i,T^a\right\}\over N_c} \\ && + \epsilon{c_5\over 3N_c} \left[J^2,\left[N_s,G^{ia}\right]\right],\nonumber \end{eqnarray} where $\left[N_s,G^{ia}\right]=\pm G^{ia}$ depending on whether one is looking at terms which annihilate $K^+,K^0$ or $K^-,\bar K^0$. The double commutator term only contributes to processes which change $J^2$. The $\eta$ couplings are given by \begin{eqnarray} \eta^i &=& \frac1{2\sqrt3}\left(\tilde a+\tilde b\right) J^i + \left(-\frac{\sqrt3}{2}\tilde a + \epsilon c_1+ \frac1{\sqrt{3}} \epsilon c_4\right) J_s^i \nonumber \\ && + \left( -\frac{\sqrt3}{2}\tilde b + \epsilon c_2+\frac1{\sqrt{3}} \epsilon c_3\right) \frac{N_s}{N_c} J^i \nonumber\\ &&\qquad -\sqrt3 \epsilon \left(c_3+c_4\right) \frac{N_s}{N_c} J^i_s. \label{etapert} \end{eqnarray} A detailed comparison of these equations with the experimental data is given in ref.~\cite{ddjm}. They provide a good description of the data on decuplet $\rightarrow$ octet decays (such as $\Delta\rightarrow N\pi$), and baryon semileptonic decays. \subsection{Meson Couplings Without $SU(3)$ Symmetry} The $1/N_c$ expansion can also be used to obtain meson couplings without assuming $SU(3)$ symmetry. For the $1/N_c$ expansion to be valid, one must work with states which differ in strangeness by order unity. We will work near the top of the $SU(3)$ weight diagram, where the number of strange quarks in the baryon is of order one, and take the large-$N_c$ limit with the number of strange quarks of the baryon held fixed. This form of the $1/N_c$ expansion was discussed in detail in ref.~\cite{djm}.
We rederive the results obtained there using the quark representation, and then compare with the perturbative symmetry breaking results obtained in the previous subsection. \subsubsection{Pion Couplings} The isovector axial current and $p$-wave pion couplings have $(J,I)_S$ quantum numbers $(1,1)_0$. The second operator reduction rule implies that all $t$, $t^\dagger$, $Y$, and $Y^\dagger$ operators can be eliminated using the identities, so the pion coupling can be written as a function of $G^{ia}$, $I^a$, $J^i$ (or $J_{ud}^i$), $J_s^i$ and $N_s$. Operators with contracted isospin indices can be eliminated, so the operators which remain have either one $G^{ia}$ and no $I$'s or one $I^a$ and no $G$'s. There are five operator series with the correct time-reversal properties. They are given by the operators \begin{eqnarray} G^{ia}\ ,\nonumber\\ {1 \over N_c}{J_{ud}^i I^a}\ ,\nonumber\\ {1 \over N_c}{J_{s}^i I^a}\ ,\label{pionbroken}\\ {1 \over N_c^2}{ \left\{ J_{ud}^i, \left\{G^{ka},J_s^k\right\}\right\}} \ ,\nonumber\\ {1 \over N_c^2}{ \left\{ J_{s}^i, \left\{G^{ka},J_s^k\right\}\right\}} \ , \nonumber \end{eqnarray} times polynomials \begin{equation} {\cal P}\left( {N_s \over N_c}, {J_{ud}^2 \over N_c^2}, {{J_{ud}\cdot J_s} \over N_c^2} \right) \nonumber \end{equation} in the arguments $N_s/N_c$, $J_{ud}^2/N_c^2$, and $J_{ud}\cdot J_s/N_c^2$. Each operator series involves a different polynomial. Once the operator series has been determined, it is more convenient to rewrite the polynomials as functions of $N_s/N_c$, $J^2/N_c^2$ and $I^2/N_c^2$, using $I^2=J_{ud}^2$ and $J^2 = (J_{ud} + J_s)^2$. For baryons with strangeness of order unity, the matrix elements of $G^{ia}$ are order $N_c$, and the matrix elements of $I$, $J$ (or $J_{ud}$), and $J_s$ are order one, so the dominant operator series is the series generated by $G^{ia}$. The four other operators in eq.~(\ref{pionbroken}) are suppressed by a factor of $1/N_c^2$ relative to $G^{ia}$. The first operator series contains the operators $G^{ia}$ and $N_s G^{ia}/N_c$ up to terms of relative order $1/N_c^2$ compared to the leading operator $G^{ia}$. Thus, eq.~(\ref{pionbroken}) produces the same result as the perturbative breaking formula eq.~(\ref{pionpertii}) to relative order $1/N_c^2$. The derivation of this formula using only $SU(2) \times U(1)$ flavor symmetry implies that the equal spacing rule for pion-baryon couplings is valid to all orders in $SU(3)$ symmetry breaking \cite{djm}. \subsubsection{Kaon Couplings} The kaon couplings transform as $(1,1/2)_1$ and contain either one $t$ and no $Y$, or one $Y$ and no $t$. There are six basic operator series which contribute; they are generated by the operators \begin{eqnarray}\label{kaonbroken} Y^{i\alpha}\ ,\nonumber\\ {1 \over N_c} \left\{t^\alpha, J_{ud}^i\right\}\ ,\nonumber\\ {1 \over N_c} \left\{t^\alpha, J_{s}^i \right\}\ ,\nonumber\\ {1 \over N_c} i\epsilon^{ijk}\left\{Y^{j\alpha},J_{ud}^k\right\}\ , \\ {1 \over N_c^2} \left\{ J_{ud}^i, \left\{Y^{k\alpha},J_{ud}^k\right\}\right\}\ ,\nonumber\\ {1 \over N_c^2} \left\{ J_{s}^i, \left\{Y^{k\alpha},J_{ud}^k\right\}\right\}\ ,\nonumber \end{eqnarray} times polynomials of $N_s/N_c$, $I^2/N_c^2$ and $J^2/N_c^2$. For baryons with strangeness of order unity, the matrix elements of $Y^{i\alpha}$ and $t^\alpha$ are order $\sqrt N_c$. Thus, the leading order operator for the kaon couplings is $Y^{i\alpha}$. There are four additional operators which contribute at relative order $1/N_c$. 
They are $\{N_s, Y^{i\alpha} \}/ N_c$ and the three operators proportional to $1/N_c$ in eq.~(\ref{kaonbroken}). The perturbative $SU(3)$ breaking expansion eq.~(\ref{kaonpert}) has the same structure as that outlined above. The commutation relations in Table~\ref{tab:brokencomm} imply that the double commutator term in eq.~(\ref{kaonpert}) is equal to $i\epsilon^{ijk}\left\{Y^{j\alpha},J^k\right\}$. This operator can be rewritten in terms of $J_{ud}$ and $J_s$. The piece involving $J_s$ reduces to a linear combination of the other operators in eq.~(\ref{kaonpert}) by the operator identities. Thus, eq.~(\ref{kaonpert}) contains the same five operators as the completely broken analysis to relative order $1/N_c^2$. The perturbative breaking formula determines four of the five kaon coefficients in terms of the coefficients for the $\pi$ and $\eta$ couplings. This relationship is lost for completely broken $SU(3)$ symmetry. \subsubsection{Eta Couplings} The isoscalar axial current transforms as $(1,0)_0$. The second operator reduction rule implies that the general expansion is generated by the operators \begin{eqnarray}\label{etabroken} J_{ud}^i\ ,\nonumber\\ J_{s}^i\ , \end{eqnarray} times polynomials of $N_s/N_c$, $I^2/N_c^2$ and $J^2/N_c^2$. Eq.~(\ref{etapert}) gives the same expansion up to terms of order $1/N_c^2$ as eq.~(\ref{etabroken}). There is no relationship between the $\pi$ and $\eta$ coefficients in eq.~(\ref{pionpertii}) and eq.~(\ref{etapert}) for perturbatively broken or for completely broken $SU(3)$ symmetry. \section{Baryon Masses}\label{sec:masses} In this section, we study the baryon masses in the flavor $SU(3)$ limit, to first order in $SU(3)$ breaking, and for $SU(2) \times U(1)$ flavor symmetry. \subsection{Baryon Masses in the Flavor Symmetry Limit} The $1/N_c$ expansion for the baryon masses in the $SU(F)$ flavor symmetry limit can be obtained using the operator reduction rule. The general form of the quark operator expansion of the baryon Hamiltonian is given by eq.~(\ref{ope}). The Hamiltonian is a spin and flavor singlet. The expansion involves the zero-body operator $\openone$ and polynomials in the one-body operators $J^i$, $T^a$ and $G^{ia}$ which transform as the $(0,0)$ representation of $SU(2) \times SU(F)$. To obtain a flavor singlet, all flavor indices on the $T$'s and $G$'s must be contracted using $SU(F)$ tensors with adjoint indices, which can be written as products of the $SU(F)$ invariant tensors $\delta^{ab}$, $d^{abc}$ and $f^{abc}$. All such objects can be removed by the operator reduction rule. Thus, the Hamiltonian can be written purely as an expansion in $J^i$, and, by rotation invariance, it can only be a function of $J^2$. Thus, the baryon mass operator is given by \begin{equation}\label{masssym} M = N_c\ {\cal P}\left({J^2\over N_c^2}\right) \end{equation} where $\cal P$ is a polynomial. This result reproduces the form of the baryon mass expansion obtained previously\cite{j,djm,cgo,lm}. \subsection{Baryon Masses with Perturbative $SU(3)$ Breaking} Flavor $SU(3)$ symmetry is broken because the light quarks have different masses. The perturbation transforms as the $(0, adj)$ irreducible representation of $SU(2) \times SU(F)$. The dominant $SU(3)$ breaking transforms as the eighth component of a flavor octet. Isospin breaking effects are much smaller, and will be neglected here. The quark operator expansion for a $(0, adj)$ QCD operator is of the form given in eq.~(\ref{ope}).
The operator reduction rule implies that only $n$-body operators with a single factor of either $T^a$ or $G^{ia}$ need to be retained. There is only one one-body operator, \begin{equation}\label{adjonebody} {\cal O}^a_1 = T^a, \end{equation} and there is only one two-body operator, \begin{equation}\label{adjtwobody} {\cal O}^a_2 = \left\{ J^i, G^{ia} \right\}, \end{equation} allowed by the operator reduction rule. In general, there is only one independent $n$-body operator for each $n$. All of these operators can be generated recursively from operators~${\cal O}^a_1$ and~${\cal O}^a_2$ by anticommuting with $J^2$, \begin{equation} {\cal O}^a_{n+2} = \left\{J^2, {\cal O}^a_n \right\}\ . \end{equation} The set of operators ${\cal O}^a_n$, $n=1,2,\ldots,N_c$, forms a complete set of linearly independent spin-zero adjoints. Thus, the flavor symmetry breaking component of the Hamiltonian has the expansion \begin{equation}\label{IXv} {\cal H}^a=\sum_{n=1}^{N_c} b_n {1\over N_c^{n-1}} {\cal O}^a_n, \end{equation} where $b_n$ are unknown coefficients. Since $J$ is of order one, the contribution of ${\cal O}_{n+2}$ to the baryon mass in eq.~(\ref{IXv}) is suppressed by $1/N_c^2$ relative to that of ${\cal O}_n$. Thus, the expansion of the symmetry breaking perturbation can be truncated after the first two terms, up to corrections of relative order $1/N_c^2$. The expansion for the baryon masses, including $SU(3)$ breaking perturbatively to linear order, is \begin{equation} M = a_0 N_c\ + a_2 {J^2\over N_c}\label{epsmasses} + \epsilon b_1 T^8 + \epsilon {1 \over N_c} b_2 \left\{ J^i, G^{i8} \right\} \end{equation} up to terms of order $1 / N_c^2$. An explicit factor of $\epsilon$ appears in front of the last two terms in eq.~(\ref{epsmasses}) to emphasize which terms arise from symmetry breaking. Note that $\epsilon$ should not be regarded as an additional parameter, since $b_1$ and $b_2$ are unknowns. The general expansion for the masses to linear order in symmetry breaking has the form \begin{eqnarray} M &&= N_c\ {\cal P}_0\left({J^2\over N_c^2}\right) + \epsilon{\cal P}_1\left( {J^2\over N_c^2} \right) T^8 \nonumber\\ &&+ \epsilon {1 \over N_c} {\cal P}_2\left( {J^2\over N_c^2} \right) \left\{ J^i, G^{i8} \right\}\ , \end{eqnarray} where ${\cal P}_i$ are arbitrary polynomials in their argument. Eq.~(\ref{epsmasses}) can be rewritten using the substitutions~(\ref{t8}) for $T^8$ and $G^{i8}$, \begin{equation}\label{epsmasstwo} M = a_0 N_c\ + a_2 {J^2\over N_c} + \epsilon b_1 N_s + \epsilon {1 \over N_c} b_2 \left\{ J^i, J^i_s \right\}, \end{equation} where pieces of the last two terms in eq.~(\ref{epsmasstwo}) have been reabsorbed into the first two terms. Note that \begin{equation} \left\{ J^i, J^i_s \right\} = J^2 + J_s^2 - I^2 \end{equation} since $J^i-J_s^i = J_{ud}^i$ and $J_{ud}^2 = I^2$.
The masses of the eight isomultiplets of the spin-$1/2$ octet and spin-$3/2$ decuplet baryons are written in terms of four unknown parameters in eq.~(\ref{epsmasses}), so there are four mass relations which are satisfied to linear order in symmetry breaking and to order $1/N_c^2$ in the $1/N_c$ expansion, \begin{eqnarray} &&{1 \over 3}\left( \Sigma + 2 \Sigma^* \right) - \Lambda = {2 \over 3} \left( \Delta - N \right),\label{massone} \\ &&\Sigma^* - \Sigma = \Xi^* - \Xi, \label{masstwo}\\ &&{3 \over 4}\Lambda + {1 \over 4}\Sigma - {1 \over 2}\left( N + \Xi \right) = 0, \label{massthree}\\ &&\left(\Sigma^* - \Delta \right) = \left(\Xi^* - \Sigma^* \right) ,\label{massfour}\\ &&\left( \Xi^* - \Sigma^* \right)= \left( \Omega - \Xi^* \right) ,\label{massfive} \end{eqnarray} where only four of the above five relations are linearly independent. The first two relations are spin-flavor relations. The third relation is the Gell-Mann--Okubo formula for the baryon octet, and the last two relations are the equal spacing rule of the decuplet. The Gell-Mann--Okubo formula and the equal spacing rule are consequences of $SU(3)$ symmetry alone, but the other two relations connect mass splittings in the octet and decuplet, and depend on the spin-flavor structure of the mass splittings. \subsection{Baryon Masses with Completely Broken $SU(3)$ Symmetry} The analysis of the baryon masses can be performed using only $SU(2) \times U(1)$ flavor symmetry. Such an analysis yields baryon mass relations which are valid to all orders in $SU(3)$ breaking even for large nonperturbative $SU(3)$ breaking. The completely broken $SU(3)$ analysis constructs a $1/N_c$ expansion of all quark operators transforming as spin, isospin and strangeness singlets, $(0,0)_0$. Application of the second operator reduction rule yields the mass expansion, \begin{equation}\label{brokenmass} M = a_0 N_c\ + a_1 N_s + a_{21} {J^2\over N_c} + a_{22} {I^2\over N_c} + a_{23} {N_s^2 \over N_c} , \end{equation} which is valid up to relative order $1 / N_c^3$. Eq.~(\ref{brokenmass}) yields three linearly independent mass relations. Relations~(\ref{massone}) and~(\ref{masstwo}) are still valid. The last three relations are replaced by \begin{eqnarray} &&{3 \over 4}\Lambda + {1 \over 4}\Sigma - {1 \over 2}\left( N + \Xi \right) = -{1 \over 4} \left( \Omega - \Xi^* - \Sigma^* + \Delta\right), \nonumber\\ &&{1 \over 2}\left(\Sigma^* - \Delta \right) - \left(\Xi^* - \Sigma^* \right) + {1 \over 2}\left(\Omega - \Xi^* \right)=0 . \label{masseight} \end{eqnarray} The first relation relates breaking of the Gell-Mann--Okubo formula to breaking of one linear combination of the two equal spacing rule relations. The second relation is the other linear combination of the two equal spacing rule relations. The above results were derived previously in ref.~\cite{djm}, and the reader is referred to this work for additional discussion. Comparison of eq.~(\ref{brokenmass}) and the perturbative formula with linear symmetry breaking eq.~(\ref{epsmasstwo}) shows that the completely broken case replaces the operator $\left\{ J^i, J_s^i \right\}$ by two independent operators, $I^2$ and $N_s^2$. The general form of the mass expansion for completely broken flavor $SU(3)$ symmetry to all orders in the $1/N_c$ expansion is \begin{equation}\label{genmassbroken} M = N_c\ {\cal P}\left({N_s \over N_c}, {J^2\over N_c^2}, {I^2\over N_c^2} \right)\ , \end{equation} where ${\cal P}$ is an arbitrary polynomial in its arguments.
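The mass relations quoted above can be confirmed directly from eq.~(\ref{epsmasstwo}). The sketch below does this symbolically with sympy; the factor of $\epsilon$ is absorbed into the coefficients, and the $(J,I,N_s,J_s)$ assignments are the standard quark-model quantum numbers of the $N_c=3$ ground-state baryons, which we take as input. With these inputs, relations~(\ref{massone})--(\ref{massfive}) hold identically in the four coefficients.
\begin{verbatim}
# Symbolic check that eq. (epsmasstwo) implies the relations (massone)-(massfive).
import sympy as sp

a0, a2, b1, b2 = sp.symbols('a0 a2 b1 b2')   # epsilon absorbed into b1, b2
half, Nc = sp.Rational(1, 2), 3

def M(J, I, Ns, Js):
    # M = a0 Nc + a2 J^2/Nc + b1 N_s + (b2/Nc){J.Js}, with {J.Js} = J^2 + Js^2 - I^2
    J2, I2, Js2 = J * (J + 1), I * (I + 1), Js * (Js + 1)
    return a0 * Nc + a2 * J2 / Nc + b1 * Ns + b2 * (J2 + Js2 - I2) / Nc

# (J, I, N_s, J_s): assumed quark-model values for the octet and decuplet baryons
N,     Lam  = M(half, half, 0, 0),        M(half, 0, 1, half)
Sig,   Xi   = M(half, 1, 1, half),        M(half, half, 2, 1)
Delta, SigS = M(3*half, 3*half, 0, 0),    M(3*half, 1, 1, half)
XiS,   Om   = M(3*half, half, 2, 1),      M(3*half, 0, 3, 3*half)

relations = [
    (Sig + 2*SigS)/3 - Lam - sp.Rational(2, 3)*(Delta - N),
    (SigS - Sig) - (XiS - Xi),
    sp.Rational(3, 4)*Lam + sp.Rational(1, 4)*Sig - (N + Xi)/2,
    (SigS - Delta) - (XiS - SigS),
    (XiS - SigS) - (Om - XiS),
]
assert all(sp.simplify(r) == 0 for r in relations)
print("relations (massone)-(massfive) hold for arbitrary coefficients")
\end{verbatim}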
\section{Magnetic Moments}\label{sec:magnetic} The baryon magnetic moments were analyzed in ref.~\cite{jm} using the $1/N_c$ expansion in the Skyrme representation. In this section, we compare those results with the analysis in the quark representation. The baryon magnetic moments transform as $(1,8)$ under $SU(2)\times SU(3)$, so the expansions derived in Section~\ref{sec:axial} for the axial couplings can be applied to the magnetic moments as long as one remembers that the coefficients of the operators in the $1/N_c$ expansion can have different values for the magnetic moments than for the axial couplings. The magnetic moments are proportional to the quark charge matrix ${\cal Q} = {\rm diag} ( 2/3, -1/3, -1/3)$, so they can be separated into isovector and isoscalar components. The analysis of Section~\ref{sec:axial} showed that the $1/N_c$ expansions of the isovector and isoscalar axial currents are unrelated for perturbatively broken $SU(3)$ symmetry, and that the same expansions are obtained for perturbatively broken and completely broken $SU(3)$ symmetry. Thus, the analysis of the magnetic moments can be restricted to the cases of exact $SU(3)$ symmetry and completely broken $SU(3)$ symmetry. The analysis of the magnetic moments in the $SU(3)$ symmetry limit is completely analogous to the analysis of the meson couplings in the flavor symmetry limit, and will not be repeated here. The magnetic moments for $N_c=3$ are parametrized by four $SU(3)$ invariants, $\mu_D$, $\mu_F$, $\mu_{\cal C}$ and $\mu_{\cal H}$ \cite{cptmagmom}. These parameters satisfy equations which are the analogues of eqs.~(\ref{dfch}--\ref{fdeqn}) for the meson couplings $D$, $F$, ${\cal C}$, and ${\cal H}$. The isovector and isoscalar magnetic moments for completely broken $SU(3)$ symmetry are given by the operator expansions eqs.~(\ref{pionbroken}) and eq.~(\ref{etabroken}). Retaining terms to order $1/N_c^2$ in these expansions yields \begin{eqnarray} \mu^{i3} &=& c_1 G^{i3} + c_2\frac{N_s}{N_c} G^{i3},\label{mageqn}\\ \mu^{i8} &=& d_1 J^i + d_2 J_s^i + d_3 \frac{N_s}{N_c} J^i + d_4 \frac{N_s}{N_c} J_s^i,\nonumber \end{eqnarray} which is precisely the same expansion used in ref.~\cite{jm}, with the quark operator $G^{ia}$ of the quark representation replacing the analogous Skyrme representation operator $N_c X^{ia}$. The eight isoscalar magnetic moment relations $S1$--$S8$ of ref.~\cite{jm} hold without change, since the matrix elements of $J^i$ and $J^i_s$ are identical in the Skyrme and quark representations. All eight relations are valid in the non-relativistic quark model. Relations $S1$--$S6$ are valid to order $1/N_c^2$. Only one of these relations, $S1$, is measured experimentally; it holds to $4\pm5\%$. Relation $S7$, \begin{equation} 5(p+n)-(\Xi^0+\Xi^-)=4(\Sigma^{+}+\Sigma^-) \ , \nonumber \end{equation} is true in large-$N_c$ QCD at leading order, but is violated at order $1/N_c$. It works experimentally to $22\pm4\%$. Relation $S8$, \begin{equation} (p+n)-3\Lambda=\frac12(\Sigma^{+}+\Sigma^-)-(\Xi^0+\Xi^-)\ , \nonumber \end{equation} is true in large-$N_c$ QCD at leading order. It is also valid at order $1/N_c$ if one imposes $SU(3)$ symmetry in the $1/N_c$ terms, without imposing $SU(3)$ symmetry in the leading order terms. (This restriction corresponds to neglecting the order $\epsilon$ terms in $J^i N_s/N_c$ and $J^i_s N_s/N_c$ in eq.~(\ref{etapert}), but retaining them in $J^i$ and $J^i_s$, which eliminates the $J^i N_s/N_c$ operator from the expansion.) 
Thus, the violation of this relation is suppressed by $SU(3)$ symmetry breaking/$N_c$. Relation $S8$ holds experimentally to $7\pm1\%$. Thus, the $1/N_c$ expansion provides some understanding of why one quark model relation works better than the others. This example also shows that the $1/N_c$ expansion is not the same as the non-relativistic quark model, even in the quark representation. Ten isovector relations were derived in ref.~\cite{jm}. Relations $V1$--$V7$ hold without change in the quark representation to relative order $1/N_c^2$. Relations $V8$--$V10$ had slightly different forms in the non-relativistic quark model, and in the $1/N_c$ expansion using the Skyrme representation. The isovector relations derived using eq.~(\ref{mageqn}) of the quark representation are identical to those in the quark model, i.e.\ one obtains relations $V8_2$, $V9_2$ and $V10_4$. Relations $V8_2$ and $V9_2$ are valid to relative order $1/N_c^2$ using the quark representation. Relation $V10_4$ is valid at leading order in the $1/N_c$ expansion and at first subleading order in the $1/N_c$ expansion in the $SU(3)$ limit using the quark representation, and is the analogue of relations $V10_2$ and $V10_3$ using the Skyrme representation. The difference between the Skyrme and quark representation forms of the isovector relations is due to a difference of order $1/N_c^2$ in the matrix elements of the two different representations. Since we have neglected terms of order $1/N_c^2$, the difference between the results is not significant to this order. In particular, one of the conclusions of ref.~\cite{jm}, that the $1/N_c$ expansion gives a better prediction than the quark model for the $\Delta^+\rightarrow p\gamma$ transition moment, is incorrect; the difference in the two predictions is an order $1/N_c^2$ effect. The relation $S/V_1$ between the isoscalar and isovector magnetic moments is valid in both the Skyrme and quark representations in the $SU(3)$ limit to two orders in $1/N_c$. Note that the matrix element of $G^{ia}$ in the proton is $(N_c+2)/12$ for our definition of $G^{ia}$. Finally, note that eq.~(\ref{kaonpert}) can be used to obtain the weak magnetism form factors in terms of the baryon magnetic moments. \section{Hyperon Non-Leptonic Decays}\label{sec:nonlep} Hyperon non-leptonic decays are $\Delta S=-1$ non-leptonic weak interaction processes\footnote{Recall that $S$ is defined to be strange quark number, not strangeness, in this work.}. The weak Hamiltonian is proportional to \begin{equation} \left( \bar s \gamma^\mu P_L u \right) \left( \bar u \gamma_\mu P_L d \right)\ \end{equation} at the weak scale. At leading order in $1/N_c$, factorization is exact, so that the weak Hamiltonian can be written as the product of currents. As a result, large $N_c$ considerations do not seem to lead to the $\Delta I=1/2$ rule for $K$ decays, one of the most striking features of non-leptonic weak interactions. It has been suggested that a naive application of $1/N_c$ counting is incorrect, however, because of large logarithms of the form $\ln M_{\rm W}/\Lambda_{\rm QCD}$ from renormalization group scaling (see refs.~\cite{buras,cfg} for discussion of this issue). We do not consider the issue of whether the $\Delta I = 1/2$ rule can be derived in large $N_c$ in this work. We will assume octet dominance and $\Delta I=1/2$ enhancement in the following analysis of the hyperon non-leptonic decay amplitudes in the $1/N_c$ expansion. This assumption appears to be valid experimentally. 
The general form of the decay amplitude for spin-1/2 baryons is~\cite{pdg} \begin{equation}\label{hnldi} {\cal M} = G_F m^2_{\pi^+} \bar u_{B_f}\left(A-B\gamma_5\right) u_{B_i}, \end{equation} where $A$ and $B$ are the parity-violating $s$-wave and parity-conserving $p$-wave decay amplitudes. The decay rates and asymmetry parameters are given in terms of the amplitudes $s$ and $p$ which are related to $A$ and $B$ by \begin{equation} s=A,\quad p = B \frac{\left|{\bf p}_f\right|}{E_f+M_f}, \end{equation} where ${\bf p}_f$, $E_f$ and $M_f$ are the three-momentum, energy and mass of the final baryon. The dimensionless amplitudes $s$ are given in Table~\ref{tab:hnlds}, where we quote the experimental values extracted in ref.~\cite{ej:hnld}. The assumption of octet dominance or $\Delta I = 1/2$ enhancement leads to three isospin relations amongst the seven decay amplitudes for the spin-$1/2$ octet baryons, \begin{eqnarray}\label{hnlisospin} &&{\cal A}\left(\Sigma^+\rightarrow n \pi^+\right) \!=\!\sqrt2{\cal A}\left(\Sigma^+\rightarrow p \pi^0\right) \!+\!{\cal A}\left(\Sigma^-\rightarrow n \pi^-\right),\nonumber\\ &&{\cal A}\left(\Lambda^0\rightarrow p \pi^-\right)=- \sqrt2\, {\cal A}\left(\Lambda^0\rightarrow n \pi^0\right),\\ &&{\cal A}\left(\Xi^-\rightarrow \Lambda^0 \pi^-\right)= -\sqrt2\,{\cal A}\left(\Xi^0\rightarrow \Lambda^0 \pi^0\right),\nonumber \end{eqnarray} where the relations in eq.~(\ref{hnlisospin}) apply to both the $s$- and $p$-wave amplitudes. There are two isospin relations amongst the five $\Omega^-$ $p$-wave amplitudes, \begin{eqnarray} &&{\cal A}\left(\Omega^-\rightarrow \Xi^0 \pi^-\right)= \sqrt2\,{\cal A}\left(\Omega^-\rightarrow \Xi^- \pi^0\right),\nonumber\\ &&{\cal A}\left(\Omega^-\rightarrow \Xi^{*0} \pi^-\right)= -\sqrt2\,{\cal A}\left(\Omega^-\rightarrow \Xi^{*-} \pi^0\right). \end{eqnarray} These isospin relations are evident in the experimental data. \subsection{$S$-Wave Decay Amplitudes} The $s$-wave hyperon non-leptonic decay amplitude does not vanish at zero momentum, and can be obtained using a soft pion theorem, \begin{equation} {\cal A}(B_i\rightarrow B_f + \pi^a) = \frac{i}{f_\pi} \left\langle B_f\right| \left[Q_5^a,{\cal H}_W\right]\left|B_i\right\rangle, \end{equation} where ${\cal H}_W$ is the weak Hamiltonian, $Q_5^a$ is the axial charge, and $f_\pi$ is the pion decay constant. Since the weak Hamiltonian transforms as $(8,1)$ under chiral $SU(3)_L\times SU(3)_R$, $\left[Q_5^a,{\cal H}_W\right]= \left[Q^a,{\cal H}_W\right]$, where the vector charge $Q^a = I^a$. Thus, the $s$-wave non-leptonic weak decay amplitudes, which are obtained from matrix elements of $\left[Q^a,{\cal H}_W\right]$, involve the $1/N_c$ expansion for the weak Hamiltonian. Assuming octet dominance, the weak Hamiltonian transforms as the $(6+i7)$ component of a $(0,8)$ representation of $SU(2)\times SU(3)$. The $1/N_c$ expansion for a $(0,adj)$ operator in the $SU(3)$ symmetry limit was derived in Section~\ref{sec:masses}. There are two operator series, given by the operators $T^a$ and $\left\{J^i, G^{ia}\right\}/N_c$ times polynomials in $J^2/N_c^2$. Thus, the weak Hamiltonian has the expansion, \begin{equation}\label{hweak} {\cal H}_W = b_1 T^{6+i7} + b_2 { {\left\{J^i, G^{i(6+i7)}\right\} } \over N_c }, \end{equation} up to corrections of relative order $1/N_c^2$. 
For baryons with strangeness of order $N_c^0$, $T^{6+i7}$ is of order $\sqrt{N_c}$ and $\left\{J^i, G^{i(6+i7)}\right\}/N_c$ is of order $1/\sqrt{N_c}$, so the second term in the expansion eq.~(\ref{hweak}) is suppressed by a factor of $1/N_c$ relative to the first term. Thus, to leading order in the $1/N_c$ expansion, the $s$-wave non-leptonic decay amplitudes are given by the matrix elements of $[ I^a, T^{6+i7} ]$, and are purely $F$-coupling. At order $1/N_c$, the second term in eq.~(\ref{hweak}) produces a $1/N_c$-suppressed $D$-coupling. It is known that an $SU(3)$ symmetric fit works well for the $s$-wave amplitudes, with the $D$ coupling smaller than the $F$ coupling by about a factor of three~\cite{amhg}. At leading order in $1/N_c$, the $s$-wave amplitudes are pure $F$, and there are three relations amongst the four independent amplitudes, \begin{eqnarray} &&{\cal A}\left(\Sigma^+\rightarrow n \pi^+\right)=0,\nonumber\\ &&{\cal A}\left(\Xi^-\rightarrow \Lambda^0 \pi^-\right)= -{\cal A}\left(\Lambda^0\rightarrow p \pi^-\right),\\ &&{\cal A}\left(\Lambda^0\rightarrow p \pi^-\right)=\sqrt{\frac32}{\cal A}\left(\Sigma^-\rightarrow n \pi^-\right).\nonumber \end{eqnarray} These three relations are valid up to a correction of relative order $1/N_c$. The one-parameter fit to the $s$-wave amplitudes is given in the third column of Table~\ref{tab:hnlds}. The fit agrees with the experimental data at the $30\%$ level, which is consistent with the level expected for a $1/N_c$ correction. When the subleading $1/N_c$ correction in eq.~(\ref{hweak}) is included, there are two relations valid to relative order $1/N_c^2$ amongst the four independent $s$-wave amplitudes. These relations are ${\cal A}\left(\Sigma^+\rightarrow n \pi^+\right)=0$, and the Lee-Sugawara relation, \begin{eqnarray} &&\sqrt{\frac32}\,{\cal A}\left(\Sigma^-\rightarrow n \pi^-\right)+ {\cal A}\left(\Lambda^0\rightarrow p \pi^-\right)\nonumber\\ &&\qquad\qquad +2\, {\cal A}\left(\Xi^-\rightarrow \Lambda^0 \pi^-\right)=0. \end{eqnarray} This two-parameter fit to the $s$-wave amplitudes is given in the fourth column of Table~\ref{tab:hnlds}. The fit agrees with the experimental data at the $10\%$ level, which is the level expected for $1/N_c^2$ corrections. The analysis of the $s$-wave amplitudes can be repeated adding perturbative $SU(3)$ symmetry breaking or using only $SU(2) \times U(1)$ flavor symmetry. The completely broken $SU(3)$ flavor symmetry case yields the same operator expansion as for perturbative symmetry breaking to linear order. The completely broken analysis is supplied here, since it gives results which are valid to all orders in $SU(3)$ symmetry breaking. The weak Hamiltonian for completely broken $SU(3)$ symmetry has the $1/N_c$ expansion, \begin{equation}\label{hweakbroken} {\cal H}_W = c_1 t^{\dagger}_2 + c_2 { {\left\{J_{ud}^i,Y^{\dagger\,i}_2 \right\} }\over N_c } + c_3 { { \{ N_s, t^{\dagger}_2 \} } \over N_c} , \end{equation} up to corrections of relative order $1/N_c^2$. Note that $t^\dagger_2 = T^{6+i7}$, so the leading operator in eq.~(\ref{hweakbroken}) is the same as the leading operator in the $SU(3)$ symmetric expansion. At relative order $1/N_c$, the single operator in the $SU(3)$ symmetric expansion is replaced by two operators. These two operators are contained in $\{ J^i, G^{i(6+i7)} \} = \{ J_{ud}^i + J_s^i, Y^{\dagger\,i}_2 \}$, since $\{ J_s^i, Y^{\dagger\, i}_2 \}$ reduces to a linear combination of $t^\dagger_2$ and $\{ N_s, t^\dagger_2 \}$ by the operator identities. 
There are three parameters in the completely broken $SU(3)$ case, so there is one relation amongst the four independent $s$-wave amplitudes, ${\cal A}\left(\Sigma^+\rightarrow n \pi^+\right)=0$. This amplitude can be non-zero only due to corrections to the soft-pion limit. Finally, note that the subdominant $(0, 27)$ component of the weak Hamiltonian, which contains a $\Delta I = 3/2$ piece, is given by the two-body operator $\{t^\dagger_1, I^+ \}$ at leading order in the $1/N_c$ expansion, so it is suppressed by one factor of $1/N_c$ relative to the leading $(0,8)$ one-body operator $t^\dagger_\alpha$. \subsection{$P$-Wave Decay Amplitudes} The $p$-wave decay amplitude vanishes at zero momentum, and so soft-pion theorems cannot be used to simplify the calculation. The weak Hamiltonian transforms as $(0,8)$, and the pion coupling transforms as part of an $SU(3)$ octet. Thus, the $p$-wave hyperon non-leptonic amplitude transforms as $(1,8\otimes 8)=(1,1)+(1,8)+(1,8)+(1,10+\overline{10})+ (1,27)$. The operators contributing to the $p$-wave amplitudes were classified previously in the analysis of meson couplings with perturbative $SU(3)$ breaking in Sec.~\ref{sec:axial}. The form of the $1/N_c$ expansion for the $p$-wave amplitudes in the $SU(3)$ symmetry limit is the same as the expansion for the meson couplings with perturbative $SU(3)$ symmetry breaking, except that the weak Hamiltonian transforms as $T^{6+i7}$, whereas the symmetry breaking Hamiltonian transforms as $T^8$. Thus, the expression derived in Sec.~\ref{sec:axial} must be rotated to the $6+i7$ direction. The result for the $p$-wave amplitudes can be written as \begin{eqnarray}\label{pwaveeq} P^{ia} &=& d^{a\,(6+i7)\,c} \left( c_1 G^{ic} + c_2 {J^i T^c\over N_c} \right) \nonumber\\ && + c_3 {\left\{G^{ia}, T^{6+i7} \right\}\over N_c} +c_4{\left\{T^a, G^{i\,(6+i7)} \right\}\over N_c} \\ && + {c_5\over N_c} \left[J^2,\left[T^{6+i7},G^{ia}\right]\right] \nonumber\\ &&+c_6 \delta^{a\,(6+i7)} J^i,\nonumber \end{eqnarray} where $a$ denotes the flavor of the pion (or kaon). The term proportional to $c_6$ does not contribute to any of the observed $p$-wave decay amplitudes. The double commutator term requires the initial and final baryons to have different spin, so it does not contribute to any of the octet baryon decay amplitudes, but does contribute to the $p$-wave $\Omega^-$ decays to octet baryons. The analysis of $p$-wave non-leptonic decay amplitudes in the chiral quark model~\cite{amhg} resembles the $1/N_c$ expansion (\ref{pwaveeq}). The chiral quark model also predicts the $p$-wave amplitudes in terms of five one-body and two-body operators. There is a significant contribution to the $p$-wave decay amplitudes from pole graphs, which are sensitive to $SU(3)$ breaking. This introduces additional calculable operators beyond those given in eq.~(\ref{pwaveeq}). The analysis of the $p$-wave amplitudes including pole graphs is complicated, and will be given elsewhere~\cite{ddjm}. \section{The Skyrme Representation}\label{sec:skyrme} The large-$N_c$ consistency conditions derived in ref.~\cite{dm,j,djm} can be analyzed by constructing irreducible representations of the contracted spin-flavor algebra using the theory of induced representations. 
This construction naturally leads to a description of large-$N_c$ baryons in terms of the Skyrme representation\footnote{We will refer to the Skyrme representation rather than the Skyrme model to emphasize that we are using the Skyrme model to provide an operator basis for the $1/N_c$ expansion of baryons in QCD, not in the Skyrme model.}. The analysis of large-$N_c$ baryons using the Skyrme representation is discussed in detail in ref.~\cite{djm}, and will not be repeated here. In this section, we elucidate some connections between the quark and Skyrme representations of large-$N_c$ baryons. In the Skyrme representation, the space components of the axial currents are proportional to $N_c$ times \begin{equation} X^{ia} = 2\ {\rm Tr}\ A T^i A^{-1} T^a,\label{sk:1} \end{equation} where $A$, the Skyrmion collective coordinate, is an element of $SU(F)$ in the $F$-flavor case. The spin operators $J^i$ generate the right transformations of $A$, \begin{equation} A \rightarrow A U^{-1},\label{sk:2} \end{equation} where $U$ is an element of a $SU(2)$ subgroup of $SU(F)$, and the flavor operators $T^a$ generate left transformations \begin{equation} A \rightarrow U A,\label{sk:3} \end{equation} where $U$ is an element of $SU(F)$. The Skyrme representation gives an exact realization of the contracted spin-flavor algebra since $X^{ia}$ is a c-number which satisfies the commutation relation $[ X^{ia}, X^{jb}]=0$ to all orders in the $1/N_c$ expansion. The Skyrme and quark representations are equivalent in the large-$N_c$ limit, but differ at subleading orders in the $1/N_c$ expansion. The equivalence is most transparent for the two-flavor case; for three or more flavors, there are additional subtleties. The operator structure of the Skyrme representation is particularly simple for the case of two light flavors. The Skyrme representation operators $J^i$, $I^a$ and $X^{ia}$ have well-defined, $N_c$-independent matrix elements, whereas the quark representation operator $G^{ia}$ has matrix elements of order $N_c$. The equivalence of the Skyrme and quark representations follows from the identification \begin{equation} X^{ia} = \lim_{N_c\rightarrow\infty}\ -{4\over {N_c + 2}}\, G^{ia} + {\cal O}\left({1 \over N_c^2}\right).\label{sk:5} \end{equation} The Skyrme representation identities derived in ref.~\cite{djm} are reproduced by taking the limit eq.~(\ref{sk:5}) of the quark representation identities in Table~\ref{tab:su4iden}, and dropping subleading terms in $1/N_c$, \begin{eqnarray} X^{ia} X^{ia} &=&3, \nonumber \\ X^{ia} J^i &=& - I^a, \nonumber \\ X^{ia} I^a &=& - J^i, \nonumber \\ \epsilon^{ijk} \epsilon^{abc} X^{ia} X^{jb} &=& 2 X^{ic}, \nonumber \\ I^2 &=& J^2, \label{sk:4} \\ X^{ia} X^{ib} &=& \delta^{ab},\nonumber \\ \epsilon^{ijk} \left\{J^j,X^{ka}\right\} &=& \epsilon^{abc}\left\{I^b,X^{ic}\right\}, \nonumber\\ X^{ia} X^{ja} &=& \delta^{ij}. \nonumber \end{eqnarray} Thus, the operator structure of the Skyrme and quark representations is identical at leading order in the $1/N_c$ expansion. Either operator basis can be used to expand QCD operators in a $1/N_c$ expansion. The two expansions will have different coefficients at subleading order in $1/N_c$, but will give identical predictions for physical quantities. The operator structure of the Skyrme representation is simpler than that of the quark representation for two flavors, since the Skyrme representation operators and operator identities do not depend on $N_c$. 
It is important to note, however, that the baryon spectrum in the Skyrme representation contains more states than in the quark representation. The baryon spectrum in the quark representation is a tower of states with $(J,I) = (1/2,1/2),\ (3/2,3/2),\ldots, (N_c/2,N_c/2)$. The spectrum in the Skyrme representation is an infinite tower of states $(J,I) = (1/2,1/2),\ (3/2,3/2),\ldots$. The extra states in the Skyrme model are sometimes regarded as ``spurious'' states from the point of view of the quark model. They have the quantum numbers of hadrons containing $N_c$ quarks plus some $\bar q q$ pairs. The existence of extra states in the Skyrme representation does not affect the conclusion that the quark and Skyrme representations yield equivalent operator bases since any operator of finite spin (such as the mass operator with spin zero, or the axial current with spin one) does not couple states at the bottom of the tower to these additional states with spin of order $N_c$. For two flavors, the quark representation operator $G^{ia}$ can be written explicitly in terms of the Skyrme representation operator $X^{ia}$ to all orders in $1/N_c$. The matrix elements of $X^{ia}$ between baryon states are known~\cite{anw}. The matrix elements of $G^{ia}$ between baryon states can be computed using the method of ref.~\cite{karlpaton}. Baryons with $I_3=1/2$ are made of $(N_c+1)/2$ $u$-quarks combined into a state with spin $J_u$, and $(N_c-1)/2$ $d$-quarks combined into a state with spin $J_d$, where $J_u$ and $J_d$ are combined to form a state with total spin $J$. The matrix elements of $J_u$ and $J_d$ can be computed using standard methods~\cite{edmonds} to obtain the matrix elements of $G^{ia}$. Writing the matrix elements of $G^{ia}$ in terms of $X^{ia}$, one obtains \begin{eqnarray}\label{gxexpand} G^{ia} &=& -\frac{N_c+2}{4}\ \sqrt{1-\hat z}\ X^{ia} \\ &&\quad + \frac1{N_c+2} \left(\frac{1}{1+\sqrt{1- \hat z}}\right) J^i I^a,\nonumber \end{eqnarray} where $\hat z$ is the operator \begin{equation} \hat z = \frac2{\left(N_c+2\right)^2} Ad_+ J^2, \end{equation} and $Ad_+ J^2$ is the operator defined by \begin{equation} \left(Ad_+ J^2\right) {\cal O} \equiv \left\{J^2,{\cal O} \right\}. \end{equation} Eq.~(\ref{gxexpand}) is valid for matrix elements of $G^{ia}$ between states with $J\le N_c/2$. Matrix elements of $G^{ia}$ in which at least one state has $J > N_c/2$ vanish. The comparison of the Skyrme and quark representations is more interesting when the number of flavors is greater than two. We will concentrate on the three-flavor case in this discussion, since all the subtleties already occur for this case. The baryon spectrum of the Skyrme representation contains additional states which couple to baryon states with low spin. The baryon spectrum is determined by quantization of the collective coordinate $A$ in eq.~(\ref{sk:1}). The collective coordinate $A$ must be quantized subject to the constraint that the body-centered hypercharge is $N_c/3$ (i.e. $T^8$ is $N_c/\sqrt{12}$) \cite{guad}, as dictated by the Wess-Zumino term. It is important to note that this constraint depends on $N_c$. Many errors in the Skyrme model literature arise from quantizing the Skyrmion with the hypercharge set to its value for $N_c=3$, $Y=1$, while expanding in $1/N_c$. Because of this constraint, the spectrum of the Skyrme model depends explicitly on $N_c$, in contrast to the two-flavor case. The spectrum of the Skyrme representation contains the same tower of states as the quark representation. 
These representations, which are given in Table~\ref{tab:su2f->suf}, consist of Young tableaux with $N_c$ boxes for spin and flavor. In addition, the Skyrme representation contains states which consist of Young tableaux with $N_c+3$ boxes, $N_c+6$ boxes, etc. \cite{am}. These ``spurious'' states can have low spin, such as $1/2$, $3/2$, etc., and they can couple to the standard spin-$1/2$ and spin-$3/2$ baryons via operators of finite spin, such as the axial currents. These additional states have the quantum numbers of a state composed of $N_c$ quarks and $\bar qq$ pairs. In large-$N_c$ QCD, pair creation of an additional $\bar q q$ pair is suppressed by a factor of $1/\sqrt{N_c}$, since pair creation plus annihilation produces a closed fermion loop, which is down by $1/N_c$. It is straightforward to check by explicit computation of Clebsch-Gordan coefficients for arbitrary $N_c$ that the matrix elements of operators between the ``normal'' and ``spurious'' states have this $1/\sqrt{N_c}$ suppression. For example, the amplitude for a spin-1/2 baryon with $N_c$ boxes to couple via the axial current to a baryon with $N_c+3$ boxes (i.e. a baryon with one extra $\bar q q$ pair) is suppressed by $1/\sqrt{N_c}$. Thus, the additional states of the Skyrme representation affect the couplings of the $N_c$-quark baryon states only at subleading order. The Skyrme operators $J^i$, $T^a$ and $X^{ia}$ can be used to obtain a $1/N_c$ expansion for three flavors. Since the matrix elements of the Skyrme operator $X^{ia}$ now have an $N_c$-dependence, the relation eq.~(\ref{sk:5}) between the quark and Skyrme operators $G^{ia}$ and $X^{ia}$ is no longer valid, and the Skyrme model operator identities are not given by taking the $N_c\rightarrow\infty$ limit of the quark model identities. The $N_c$-dependence of the matrix elements of the operators $T^a$ and $X^{ia}$ is different in different regions of the weight diagram. This non-trivial $N_c$-dependence of operator matrix elements is what made the analysis in ref.~\cite{djm} of the $SU(3)$ flavor symmetry limit complicated. In ref.~\cite{djm}, the coupling of baryons to octet mesons was given in terms of two invariant amplitudes $\cal M$ and $\cal N$, with \begin{equation} {\cal N \over M} = {1\over 2} + {\alpha\over N_c} + {\cal O}\left({1\over N_c^2}\right),\label{sk:6} \end{equation} where $\alpha$ is an undetermined parameter. In the quark representation, one can show that the operator $G^{ia}$ implies that \begin{equation} {\cal N \over M} = {1\over2}{N_c-1\over N_c+2} = {1 \over 2}-{3 \over {2N_c}}+\ldots, \end{equation} so that $\alpha=-3/2$ in the non-relativistic quark model. Similarly, the ratio $\cal N/M$ can be calculated for arbitrary $N_c$ for the Skyrme model operator $X^{ia}$, \begin{equation}\label{sk:7} {\cal N \over M} = {1\over2}\,{\left(N_c-1\right)\left(N_c+9\right)\over N_c^2+8N_c+9} = {1 \over 2} + {\cal O}\left( {1 \over N_c^2} \right), \end{equation} so that $\alpha=0$ in the Skyrme model. In the $1/N_c$ expansion of the meson couplings, the operator $J^i T^a /N_c$ changes the prediction for the parameter $\alpha$ in either the quark or Skyrme representation away from these quark and Skyrme model values. The coefficient of $J^i T^a/N_c$ must be different in the $1/N_c$ expansions in the quark and Skyrme representations to produce a given value of $\alpha$ since $G^{ia}$ and $X^{ia}$ give different contributions to $\alpha$. 
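Both expansions of ${\cal N}/{\cal M}$ can be checked symbolically; the short sketch below (in Python, assuming the SymPy library is available) expands the two closed-form ratios in $1/N_c$ and reproduces $\alpha=-3/2$ for the quark representation and $\alpha=0$ for the Skyrme representation.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', positive=True)      # x = 1/N_c
Nc = 1 / x

ratio_quark  = sp.Rational(1, 2) * (Nc - 1) / (Nc + 2)
ratio_skyrme = sp.Rational(1, 2) * (Nc - 1) * (Nc + 9) / (Nc**2 + 8*Nc + 9)

for name, ratio in (("quark", ratio_quark), ("Skyrme", ratio_skyrme)):
    # expand N/M = 1/2 + alpha/N_c + O(1/N_c^2) and read off alpha
    expansion = sp.series(sp.simplify(ratio), x, 0, 3)
    alpha = expansion.removeO().coeff(x, 1)
    print(name, expansion, "  alpha =", alpha)
\end{verbatim}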
Using eq.~(\ref{VIIIxii}), one finds the non-trivial relation \begin{equation}\label{sk:8} X^{ia} = -{4\over{N_c+2}}\left[ G^{ia} - {J^i T^a\over N_c}\right]+ {\cal O}\left({1\over N_c^2}\right), \end{equation} between the Skyrme and quark model operators,\footnote{The left-hand side of Eq.~(\ref{sk:8}) is the Skyrme model $X^{ia}$ projected onto the set of states which exist in the quark model. There are extra states in the Skyrme model. The full Skyrme model operator $X^{ia}$ cannot be written as an expansion in terms of quark operators since it has matrix elements between the quark model states and the extra Skyrme model states.} where the overall coefficient in front of the parentheses is determined by requiring that the Skyrme model identity $X^{ia} T^a = -J^i$ is satisfied to this order. Eq.~(\ref{sk:8}) implies that some of the operator identities in the Skyrme and quark model are different, even at leading order in $N_c$. For instance, the Skyrme model identity $X^{ia} T^a = - J^i$ is valid in the case of two or three flavors, but the analogous quark model identity $\left\{T^a,G^{ia}\right\}=(N_c +F)(1 - 1/F) J^i$ has a different coefficient of proportionality for two and three flavors, even at leading order in $1/N_c$. This occurs because the $J^i T^a/N_c$ term in eq.~(\ref{sk:8}) for $a=8$ is unsuppressed relative to $G^{ia}$, and it produces a leading order change in the $X^{ia} T^a$ identity. Using eq.~(\ref{sk:8}) and eq.~(\ref{VIIx}), one finds that \begin{eqnarray} X^{ia} T^a &=& -{4\over{N_c+2}}\left[ G^{ia} T^a -{1 \over N_c} J^i T^2 \right] + \ldots\nonumber\\ &=& -{4\over{N_c+2}}\left[ {1 \over 2}\left(N_c +F\right)\left(1 - {1 \over F}\right)\right. \nonumber\\ &&\qquad\left.-{1 \over {4F}}\left( N_c + 2F \right)\left( F - 2 \right) \right] J^i + \ldots \\ &=&-J^i,\nonumber \end{eqnarray} for any number of flavors $F$. We will not work out the operator identities in the Skyrme basis. Some of these identities are given in ref.~\cite{djm}. \section{Conclusions}\label{sec:conc} The $1/N_c$ expansion allows one to compute properties of baryons in a systematic expansion of QCD. There are only a few terms in the expansion at any given order in $1/N_c$ once redundant operators are eliminated using the operator reduction rule. Results which hold to two orders in $1/N_c$ typically work at the 10--15\% level. For $N_c=3$, the $1/N_c$ expansion usually cannot be carried beyond second order because the number of independent operators becomes too large to make any non-trivial predictions. The $1/N_c$ expansion explains which features of the quark and Skyrme models follow directly from the spin-flavor symmetry structure of QCD. \acknowledgements This work was supported in part by the Department of Energy under grant DOE-FG03-90ER40546 and by the National Science Foundation under grant NSF PHY90-21984. E.J. was supported in part by NYI award PHY-9457911 from the National Science Foundation. A.M. was supported in part by PYI award PHY-8958081 from the National Science Foundation.
\section{\label{sec:1}Introduction} \setcounter{equation}{0} The progress in modern observational cosmology at scales much smaller than the cell of uniformity size (see, e.g., \cite{Sand1,Kar et al,Kar2003,Sand2,Kar2008,Kar2012}) enables us to use the new observational data to test different cosmological models. With the help of these data, we can reconstruct the history of galaxies, groups and clusters of galaxies as well as predict their future. For example, we can explain the formation of the Hubble flows in the vicinity of a group of galaxies \cite{Chernin 2012} or predict a possible collision of the Milky Way and Andromeda in the future \cite{CoxLoeb}. According to recent astronomical observations, there is no clear evidence of spatial homogeneity up to sizes $\sim$ 150 Mpc \cite{Labini}. Deep inside such scales and at late stages of evolution, the Universe consists of a set of discrete inhomogeneities (galaxies, groups and clusters of galaxies) which disturb the background Friedmann Universe\footnote{Of course, such objects as galaxies have their own structure. However, at distances much bigger than the characteristic size of these objects, we can consider them as discrete inhomogeneities.}. Hence, classical mechanics of discrete objects provides a more adequate approach than hydrodynamics with its continuous flows. In our previous paper \cite{EZcosm1}, we have elaborated this approach for an arbitrary number of randomly distributed inhomogeneities on the cosmological background and found the gravitational potential of this system. We have shown that this potential has the most natural form in the case of the Friedmann-Robertson-Walker metric with hyperbolic space. Therefore, having the gravitational potential of an arbitrary system of inhomogeneities, we can investigate their motion taking into account both the gravitational attraction between them and the cosmological expansion of the Universe. In the present paper, we continue this investigation. First, we obtain the general system of equations of motion for such a system and apply these equations to abstract groups of galaxies to show the effects of gravitational attraction and cosmological expansion. Then, we consider our Local Group to investigate the mutual motion of the Milky Way and Andromeda. Here, we distinguish two different models. For the first one, we do not take into account the influence of the Intra-Group Matter (IGrM). Contrary to the conclusions of the paper \cite{CoxLoeb}, we show in this case that for the currently known parameters of this system, the collision is hardly plausible in the future because of the angular momentum. These galaxies will approach a minimum distance of about 290 Kpc in 4.44 Gyr from the present, and then begin to move away from each other irreversibly. For the second model, we take into account the dynamical friction due to the IGrM. Here, we find a characteristic value of the IGrM particle velocity dispersion $\tilde \sigma = 2.306$. For $\tilde \sigma \leq 2.306$, the merger will take place, but for larger values of $\tilde\sigma$ the merger can be problematic because the galaxies approach a region where the dragging effect of the dynamical friction can be too small to force the galaxies to converge. If the temperature of the IGrM particles is $10^5$ K, then this characteristic value of $\tilde\sigma$ corresponds to an IGrM particle mass of 17 MeV. Therefore, for lighter masses (and, accordingly, larger values of $\tilde\sigma$) the merger becomes problematic. 
Then, we define the region in the vicinity of our Local Group where the formation of the Hubble flows starts. For such processes, the zero-acceleration surface (where the gravitational attraction is balanced by the cosmological accelerated expansion) plays the crucial role. We take into account the geometry of the system consisting of two giant galaxies (MW and M31) at the distance 0.78 Mpc. Obviously, if this surface exists, it does not have a spherical shape. We show that such surface is absent for the Local Group. Instead, we find two points and one circle with zero acceleration. Nevertheless, there is a nearly closed area around the MW and M31 where the absolute value of the acceleration is approximately equal to zero. The Hubble flows are formed outside of this area. One of the main conclusions of our work is that cosmological effects become significant already at the scale of the Local Group, i.e. of the order of 1 Mpc. Therefore, we should take them into account when we consider the dynamics of the Local Group at these distances. The paper is organized as follows. In section 2 we obtain the general system of equations of motion for arbitrary distributed inhomogeneities in the open Universe. We apply these equations to abstract systems of galaxies consisting of three and four galaxies in section 3. In section 4, we investigate the mutual motion of the Milky Way and Andromeda in future. In section 5, we define the zero-acceleration region for our Local Group. The main results are summarized in concluding section 6. \section{\label{sec:2}General setup} \setcounter{equation}{0} In our recent paper \cite{EZcosm1}, we have shown that the "comoving" gravitational potential for a system of gravitating masses $m_i$ is \be{2.1} \varphi=-G_N\sum\limits_i m_{i}\frac{\exp(-2l_i)}{\sinh l_i}+\frac{4\pi G_N\overline\rho}{3}\, , \ee where $G_N$ is the Newtonian gravitational constant, $\overline \rho=\mbox{const}$ is the comoving average rest mass density and $l_i$ denotes the comoving geodesic distance between the i-th mass $m_{i}$ and the point of observation in the open Universe, i.e. in the hyperbolic space. This formula has a number of advantages with respect to the flat and spherical space cases \cite{EZcosm1}. First, this potential is finite at any point of space (excluding, of course, the positions of the particles with $l_i=0$). Second, the presence of the exponential function enables us to avoid the gravitational paradox (the Neumann-Seeliger paradox). The $\overline \rho$-term does not spoil this property because the averaged gravitational potential $\overline \varphi$ is equal to zero. Third, the gravitating masses can be distributed completely arbitrarily. It is worth noting that, for different reasons, the arguments in favour of the open Universe were also provided in the recent paper \cite{Barrow}. We consider the potential \rf{2.1} against the cosmological background. 
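For orientation, the potential \rf{2.1} is straightforward to evaluate numerically for an arbitrary configuration of masses. The following minimal Python sketch assumes only that NumPy is available and uses schematic units with $G_N=1$; for $l_i\ll 1$ the factor $\exp(-2l_i)/\sinh l_i$ behaves as $1/l_i$ at leading order, i.e. the usual Newtonian form is recovered.
\begin{verbatim}
import numpy as np

def comoving_potential(l, masses, rho_bar, G_N=1.0):
    """Potential of eq. (2.1):
       -G_N sum_i m_i exp(-2 l_i)/sinh(l_i) + 4 pi G_N rho_bar / 3.

    l       -- comoving geodesic distances l_i from the observation point
    masses  -- the corresponding masses m_i
    rho_bar -- constant comoving average rest-mass density
    (units are schematic here; G_N = 1 by default)
    """
    l = np.asarray(l, dtype=float)
    m = np.asarray(masses, dtype=float)
    return (-G_N * np.sum(m * np.exp(-2.0 * l) / np.sinh(l))
            + 4.0 * np.pi * G_N * rho_bar / 3.0)

# two equal masses at small comoving distances l_1 = 0.01, l_2 = 0.02
print(comoving_potential([0.01, 0.02], [1.0, 1.0], rho_bar=0.0))
\end{verbatim}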
In the $\Lambda$CDM model, the scale factor of the Universe reads \cite{EZcosm1} \be{2.2} \tilde a=\left(\frac{\Omega_M}{\Omega_{\Lambda}}\right)^{1/3}\left[\left(1+\frac{\Omega_{\Lambda}}{\Omega_M}\right)^{1/2} \sinh\left(\frac{3}{2}\Omega_{\Lambda}^{1/2}\tilde t\right)+\left(\frac{\Omega_{\Lambda}}{\Omega_M}\right)^{1/2}\cosh\left(\frac{3}{2}\Omega_{\Lambda}^{1/2}\tilde t \right)\right]^{2/3}\, , \ee where we introduced dimensionless variables $\tilde a = a/a_0$, $\tilde t = H_0 t$, and $a_0$, $H_0$ are the values of the scale factor $a$ and the Hubble "constant" $H\equiv \dot a/a\equiv (da/dt)/a$ at the present time $t=t_0$ (without loss of generality, we can put $t_0=0$). The standard density parameters are \be{2.3} \Omega_M=\frac{\kappa\overline\rho c^4}{3H_0^2a_0^3},\quad \Omega_{\Lambda}=\frac{\Lambda c^2}{3H_0^2}\, , \ee where $\kappa\equiv 8\pi G_N/c^4$ and $\Lambda$ is the cosmological constant. It is worth noting that the solution \rf{2.2} is common for any value of the curvature parameter $\mathcal K$ because the density parameter for the curvature $|\Omega_{\mathcal K}|=|\mathcal K|c^2/(a_0^2H_0^2) \ll 1$ \cite{7WMAP} (where $\mathcal K=-1,0,+1$ for open, flat and closed Universes, respectively), and we can drop the spatial curvature term from the Friedmann equation. According to the seven-year WMAP observations \cite{7WMAP}, $H_0\approx70\, \mbox{km/sec/Mpc}\approx 2.3\times 10^{-18}\mbox{sec}^{-1}\approx (13.7\times 10^9)^{-1}\mbox{yr}^{-1}$, $\Omega_M\approx0.27$ and $\Omega_{\Lambda}\approx0.73$. Below, we shall consider astrophysical objects (galaxies and their groups) deep inside of the cell of uniformity, i.e. for physical distances $R \lesssim $ 150 Mpc. As we have shown in \cite{EZcosm1}, the comoving distances in the cell of uniformity are much less than 1: $l_i \ll 1$. For such small distances, we can use the Cartesian coordinates. Then, eq. \rf{2.1} reads \be{2.4} \varphi ({\bf r})=-G_N \sum_{i}\frac{m_i}{|{\bf r}-{\bf r}_i|}+ \frac{4\pi G_N\overline\rho}{3}\, , \ee where ${\bf r}_i$ is the comoving radius-vector of the $i$-th gravitating mass $m_i$. Its Lagrange function is \cite{EZcosm1} \be{2.5} \mathcal{L}_i=-\frac{m_i\varphi_i}{a}+\frac{m_i a^2 v_i^2}{2}\, , \ee where \be{2.6} \varphi_i ({\bf r}_i)=-G_N\sum_{j\neq i}\frac{m_j}{|{\bf r}_i-{\bf r}_j|}+\frac{4\pi G_N\overline\rho}{3} \ee is the gravitational potential created by all the remaining masses at the point ${\bf r}={\bf r}_i$. In eq. \rf{2.5}, \be{2.7} v_i^2=\dot x_i^2+\dot y_i^2+\dot z_i^2\, , \ee and $v_i$ is the comoving peculiar velocity of the $i$-th gravitating source. It has the dimension $(time)^{-1}$. We would remind that we work in the weak-field limit where physical peculiar velocities are much less than the speed of light: $a v_i\ll c$. 
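In the numerical examples below, the background cosmology enters only through the dimensionless scale factor \rf{2.2} and its second derivative. A minimal Python sketch of eq. \rf{2.2} (assuming NumPy, with the density parameters quoted above) is given here; note that $\tilde a(0)=1$ by construction, and a crude finite-difference estimate of $-(d^2\tilde a/d\tilde t^2)/\tilde a$ at $\tilde t=0$ reproduces the value $q_0\approx \Omega_M/2-\Omega_{\Lambda}\approx -0.595$ used below in the analysis of the Local Group.
\begin{verbatim}
import numpy as np

Omega_M, Omega_L = 0.27, 0.73          # density parameters quoted in the text

def a_tilde(t):
    """Dimensionless scale factor of eq. (2.2); t is the dimensionless time H_0 t."""
    r = Omega_L / Omega_M
    s = 1.5 * np.sqrt(Omega_L) * t
    return r**(-1.0/3.0) * ((1.0 + r)**0.5 * np.sinh(s) + r**0.5 * np.cosh(s))**(2.0/3.0)

print(a_tilde(0.0))                    # 1.0: the scale factor is normalized to a_0 today

h = 1.0e-4                             # crude finite-difference check of the acceleration
q0 = -(a_tilde(h) - 2.0*a_tilde(0.0) + a_tilde(-h)) / h**2 / a_tilde(0.0)
print(q0)                              # close to Omega_M/2 - Omega_L = -0.595
\end{verbatim}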
It can be easily verified that, with respect to the physical coordinates \be{2.8} X_i=ax_i,\quad Y_i=ay_i,\quad Z_i=az_i\, , \ee the Lagrange function is \be{2.9} \mathcal{L}_i=-\frac{m_i\varphi_i}{a}+\frac{m_i}{2a^2}\left[\left(\dot X_i a- \dot a X_i\right)^2+\left(\dot Y_i a-\dot a Y_i\right)^2+\left(\dot Z_i a-\dot a Z_i\right)^2\right]\, , \ee where \be{2.10} \varphi_i=-G_Na\sum_{j\neq i}\frac{m_j}{|{\bf R}_i-{\bf R}_j|}+\frac{4\pi G_N\overline\rho}{3},\quad |{\bf R}_i-{\bf R}_j|=\sqrt{\left(X_i-X_j\right)^2+\left(Y_i-Y_j\right)^2+\left(Z_i-Z_j\right)^2}\, , \ee and the Lagrange equations for the $i$-th mass take the form \ba{2.11} -G_N\sum_{j\neq i}\frac{m_j\left(X_i-X_j\right)}{\left[\left(X_i-X_j\right)^2+\left(Y_i-Y_j\right)^2+\left(Z_i-Z_j\right)^2\right]^{3/2}} &=&\frac1{a}\left(\ddot X_i a-\ddot a X_i\right)\, ,\\ -G_N\sum_{j\neq i}\frac{m_j\left(Y_i-Y_j\right)}{\left[\left(X_i-X_j\right)^2+\left(Y_i-Y_j\right)^2+\left(Z_i-Z_j\right)^2\right]^{3/2}} &=&\frac1{a}\left(\ddot Y_i a-\ddot a Y_i\right)\label{2.12}\, , \\ -G_N\sum_{j\neq i}\frac{m_j\left(Z_i-Z_j\right)}{\left[\left(X_i-X_j\right)^2+\left(Y_i-Y_j\right)^2+\left(Z_i-Z_j\right)^2\right]^{3/2}} &=&\frac1{a}\left(\ddot Z_i a-\ddot a Z_i\right)\label{2.13}\, . \ea Now, we can apply these equations to real astrophysical systems such as a group or cluster of galaxies. To illustrate this, we consider first a number of abstract simplified examples. \section{\label{sec:3}Illustrative examples} \setcounter{equation}{0} In this section, we consider simplified examples where all gravitating masses are on the same plane $Z=0$, i.e. all $Z_i=0$. Then, eqs. \rf{2.11}-\rf{2.13} take the form \ba{3.1} \frac{d^2\tilde X_i}{d\tilde t^2}&=&-\frac{1}{\overline m}\sum_{j\neq i}\frac{m_j(\tilde X_i-\tilde X_j)}{[(\tilde X_i-\tilde X_j)^2+(\tilde Y_i-\tilde Y_j)^2]^{3/2}} + \frac{1}{\tilde a}\frac{d^2\tilde a}{d\tilde t^2}\tilde X_i\, , \quad i,j=1,\ldots ,N \, , \\ \label{3.2} \frac{d^2\tilde Y_i}{d\tilde t^2}&=&-\frac{1}{\overline m}\sum_{j\neq i}\frac{m_j(\tilde Y_i-\tilde Y_j)}{[(\tilde X_i-\tilde X_j)^2+(\tilde Y_i-\tilde Y_j)^2]^{3/2}} + \frac{1}{\tilde a}\frac{d^2\tilde a}{d\tilde t^2}\tilde Y_i\, , \quad i,j=1,\ldots ,N\, , \ea where $N$ is a total number of masses and we introduced the dimensionless variables \ba{3.3} \tilde X_i&=&X_i\left(\frac{H_0^2}{G_N\overline{m}}\right)^{1/3}= \frac{X_i}{0.95\mbox{Mpc}}\left(\frac{10^{12}M_\odot}{\overline m}\right)^{1/3}\, ,\nn \\ \tilde Y_i&=&Y_i\left(\frac{H_0^2}{G_N\overline{m}}\right)^{1/3}= \frac{Y_i}{0.95\mbox{Mpc}}\left(\frac{10^{12}M_\odot}{\overline m}\right)^{1/3}\, ,\nn \\ \tilde a&=&\frac{a}{a_0},\quad \tilde t=H_0t=\frac{t}{13.7\times 10^9\mbox{yr}} \ea and $\overline m=\sum_{i=1}^N m_i /N$ is the average mass of the system. Obviously, the first terms in the right hand side of eqs. \rf{3.1} and \rf{3.2} are due to the gravitational attraction between masses and the second terms originate from the cosmological expansion of the Universe, which is described by eq. \rf{2.2}. We consider the stage of the accelerated expansion, i.e. $d^2\tilde a/d\tilde t^{\, 2} >0$. Therefore, the competition between these two mechanisms defines the dynamical behavior of the masses. Depending on the initial conditions, they either collide with or move off each other. Let us demonstrate this with two particular examples. \subsection{Three gravitating masses: $N=3$} Here, we study dynamics of three gravitating masses $(N=3)$ for two different cases. 
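Both cases are obtained by a direct numerical integration of eqs. \rf{3.1} and \rf{3.2}. A minimal Python sketch of such an integration is given below; it assumes that SciPy is available, repeats the scale factor \rf{2.2}, and evaluates $(1/\tilde a)\,d^2\tilde a/d\tilde t^2$ from the standard acceleration equation for dust plus a cosmological constant (equivalent to differentiating \rf{2.2} twice). The initial data shown correspond to the first configuration considered below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Omega_M, Omega_L = 0.27, 0.73

def a_tilde(t):
    """Dimensionless scale factor, eq. (2.2)."""
    r = Omega_L / Omega_M
    s = 1.5 * np.sqrt(Omega_L) * t
    return r**(-1.0/3.0) * ((1.0 + r)**0.5 * np.sinh(s) + r**0.5 * np.cosh(s))**(2.0/3.0)

def acc_over_a(t):
    """(1/a) d^2 a/dt^2 in dimensionless form (acceleration equation, dust + Lambda)."""
    return Omega_L - 0.5 * Omega_M / a_tilde(t)**3

def rhs(t, y, masses):
    """Right-hand side of eqs. (3.1)-(3.2); y packs positions first, then velocities."""
    n = len(masses)
    pos = y[:2 * n].reshape(n, 2)
    vel = y[2 * n:].reshape(n, 2)
    acc = acc_over_a(t) * pos                    # cosmological term
    mbar = np.mean(masses)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[i] - pos[j]
                acc[i] -= (masses[j] / mbar) * d / np.linalg.norm(d)**3  # attraction
    return np.concatenate([vel.ravel(), acc.ravel()])

# first three-mass configuration: equal masses, zero initial velocities
pos0 = np.array([[0.0, -1.5], [1.5, 0.0], [-1.5, 1.5]])
vel0 = np.zeros_like(pos0)
masses = np.ones(3)
y0 = np.concatenate([pos0.ravel(), vel0.ravel()])
sol = solve_ivp(rhs, (0.0, 2.0), y0, args=(masses,), rtol=1e-8, atol=1e-10)
print(sol.y[:6, -1].reshape(3, 2))               # positions at dimensionless time 2
\end{verbatim}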
First, we consider the case with the initial coordinates $(0,-1.5)$, $(1.5,0)$ and $(-1.5,1.5)$ for the first ($i=1$), second ($i=2$) and third ($i=3$) gravitating masses, respectively, and with zero initial velocities\footnote{The components of the velocity are given by the formula $\dot X_i=a\dot x_i+HX_i$. Therefore, to get a zero value for this expression at some moment $t_0$, the peculiar velocity should compensate the Hubble velocity at this moment.} $d\tilde X_i/d\tilde t|_{\tilde t=0}=0,\, d\tilde Y_i/d\tilde t|_{\tilde t=0}=0,\, i=1,2,3$. For simplicity, we also take $m_1=m_2=m_3=\overline m$. The numerical solution of eqs. \rf{3.1} and \rf{3.2} shows (see figure \ref{triangle}, the left panel) that in this case the cosmological expansion prevails over the gravitational attraction for all three masses and all of them move away from each other. The red (inner) solid triangle corresponds to the initial positions at the initial moment $\tilde t=0$. Then, we depict green (middle) and blue (outer) triangles at the moments $\tilde t=1$ and $\tilde t=2$, respectively. Solid triangles take into account both cosmological expansion and gravitational attraction, while the dashed triangles correspond to pure cosmological expansion. Obviously, the dashed triangles are similar but the gravitational attraction spoils this similarity (see the solid ones). Additionally, the solid triangles are inside the corresponding dashed ones because the gravitational attraction slows the recession. In the second case, the initial coordinates are $(0,-1)$, $(1,0)$ and $(-1,1)$, respectively, with the same zero initial velocities at $\tilde t=0$. Then, the results of the numerical solutions for times $\tilde t=1$ (green triangles) and $\tilde t=2$ (blue triangles) are shown in figure \ref{triangle}, the right panel. Similar to the previous case, solid (dashed) triangles correspond to the presence (absence) of the gravitational attraction. Therefore, the dashed triangles again form similar triangles. The solid green and blue triangles demonstrate that the gravitational attraction between the first $(i=1)$ and second $(i=2)$ masses prevails over the cosmological expansion, and these masses approach each other. However, the cosmological expansion dominates for the third $(i=3)$ mass and this mass moves away from the other two. \begin{figure*}[htbp] \centerline{\includegraphics[width=2.5in,height=2.5in]{3pointsexpansion.eps} \includegraphics[width=2.5in,height=2.5in]{3pointscollision.eps}} \caption {Dynamics of three gravitating masses with zero initial velocities. The solid red triangles describe the initial positions at $\tilde t=0$. The green and blue triangles correspond to the positions at the moments $\tilde t=1$ and $\tilde t=2$, respectively. Solid green and blue triangles take into account both cosmological expansion and gravitational attraction, while the corresponding dashed triangles disregard this attraction. Depending on the initial conditions, the gravitating masses move away from each other because the cosmological expansion prevails over the gravitational attraction (the left panel), or some of the masses can collide with each other in the case of the prevalence of the attraction (the right panel). \label{triangle}} \end{figure*} \subsection{Four gravitating masses: $N=4$} Here, we study the dynamical behavior of four gravitating masses $(N=4)$. The results of the numerical solutions of eqs. \rf{3.1} and \rf{3.2} are depicted in figure \ref{square}. 
The left panel demonstrates the situation when the cosmological expansion prevails over the gravitational attraction and all masses move away from each other. The initial coordinates here are $(1.5,1.5)$, $(1.5,-1.5)$, $(-1.5,-1.5)$ and $(-1.5,1.5)$. The right panel corresponds to the opposite case when the gravitational attraction dominates and the masses approach each other. Here, the initial coordinates are $(0.5,0.5)$, $(0.5,-0.5)$, $(-0.5,-0.5)$ and $(-0.5,0.5)$. The meaning of the colors and types of lines is the same as for the previous example, i.e. the red squares correspond to the initial positions of the masses at $\tilde t=0$ and green and blue squares show their positions at times $\tilde t=1$, $\tilde t=2$ (the left panel) and $\tilde t=0.3$, $\tilde t=0.6$ (the right panel), respectively. Solid green and blue squares take into account both cosmological expansion and gravitational attraction, while the corresponding dashed squares disregard this attraction. In contrast to the previous three-mass example, here we allow the masses to rotate clockwise by setting the following initial velocities $(d\tilde X_i/d\tilde t,d\tilde Y_i/d\tilde t)_{\tilde t=0}$: $(0,-1)$, $(-1,0)$, $(0,1)$, $(1,0)$ (the left panel) and $(0,-0.25)$, $(-0.25,0)$, $(0,0.25)$, $(0.25,0)$ (the right panel), respectively. Additionally, the orange curved line depicts the trajectory of one of the masses. \begin{figure*}[htbp] \centerline{\includegraphics[width=2.5in,height=2.5in]{4pointsexpansionvelocities_1.eps} \includegraphics[width=2.5in,height=2.5in]{4pointscollisionvelocities_1.eps}} \caption {Dynamics of four gravitating masses with non-zero initial velocities. The solid red squares correspond to the initial positions of the masses at $\tilde t=0$, and green and blue squares show their positions at times $\tilde t=1$, $\tilde t=2$ (the left panel) and $\tilde t=0.3$, $\tilde t=0.6$ (the right panel). Solid green and blue squares take into account both cosmological expansion and gravitational attraction, while the corresponding dashed squares disregard this attraction. The orange curved line depicts the trajectory of one of the masses. The left panel demonstrates the situation when the cosmological expansion prevails over the gravitational attraction and all masses move away from each other. The right panel corresponds to the opposite case when the gravitational attraction dominates and the masses approach each other. \label{square}} \end{figure*} \section{\label{sec:4}Collision between Milky Way and Andromeda} \setcounter{equation}{0} \subsection{Free-fall approximation} Now, we want to apply our method to real astrophysical objects. For this purpose, we consider our local group of galaxies, which consists of two giant galaxies (our Milky Way (MW) and Andromeda (M31)) and approximately 40 dwarf galaxies. At the present time, these giant galaxies are located at a distance of 0.78 Mpc\footnote{It is worth noting that our local group of galaxies forms a region of overdensity located inside an underdensity area. We can easily estimate the size/radius of this area from the formula $R\sim [3M/(4\pi \bar \rho_{phys})]^{1/3}$, where $M$ is the total mass of the group and $\bar \rho_{phys}$ is the average mass density of matter in the Universe. For our group this is a few megaparsecs, e.g., $R\sim 2.54$ Mpc for $M\approx 2.6\times 10^{12}M_{\odot}$ and $\bar \rho_{phys}\approx 0.2556\times 10^{-29}\mbox{g}/\mbox{cm}^3$. This radius can be enlarged if we include the mass of the Intra-Group Matter (IGrM). 
According to \cite{WhiteNelson}, IGrM can contain up to 30\% of the group mass. In this case, for our local group $R\sim 2.77$ Mpc. Therefore, MW and M31 with their separation distance 0.78 Mpc are deep inside the underdensity region.} and move towards each other with a speed of 120 km/sec \cite{CoxLoeb}. Therefore, they may collide in the future. The collision time was estimated recently in the paper \cite{CoxLoeb}, where the authors used the hydrodynamic approach. They found that the average time for the first passage is 2.8 Gyr and for the final merger is 5.4 Gyr. It is of interest to estimate this time using our mechanical approach as well. We consider these two galaxies as point-like gravitating masses. Obviously, such an approach is valid at distances greater than the sizes of the galaxies. For these two galaxies, we can apply our method up to a separation distance of the order of 100 Kpc, where the merger process starts \cite{CoxLoeb}\footnote{According to the simulations carried out in \cite{Deason1}, this distance can be extended up to 120-150 Kpc.}. The intergalactic/intragroup medium density is estimated as $5\div200$ times the average density of the Universe \cite{Freeland,Fang}. So, we split our investigation into two steps. First, in this subsection, we neglect the Intra-Group Matter (IGrM) in our calculations. Then, in the next subsection we take into account the dynamical friction caused by the IGrM. {}From eq. \rf{2.9}, it can be easily seen that the two-particle Lagrange function for two gravitating masses/galaxies (marked as the points A and B) is \ba{4.1} \mathcal{L}_{AB}&=&G_N\frac{m_A m_B}{\left\vert{\bf R}_A-{\bf R}_B\right\vert}+\frac{m_A}{2a^2} \left[\left(\dot X_Aa-\dot a X_A\right)^2+\left(\dot Y_Aa-\dot a Y_A\right)^2+\left(\dot Z_Aa-\dot a Z_A\right)^2\right]+\nonumber\\ &+&\frac{m_B}{2a^2}\left[\left(\dot X_Ba-\dot a X_B\right)^2+\left(\dot Y_Ba-\dot a Y_B\right)^2+\left(\dot Z_Ba-\dot a Z_B\right)^2\right]\, . \ea Let us introduce the projections $L_X,L_Y$ and $L_Z$ of the distance between these masses and the coordinates of the center of mass: \ba{4.2} X_A-X_B&=& L_X\, , \\ \label{4.3}\cfrac{m_AX_A+m_BX_B}{m_A+m_B}&=&X_0\, \ea and similar expressions for $L_Y,L_Z$ and $Y_0,Z_0$. Therefore, the absolute value of the distance is $\left\vert{\bf R}_A-{\bf R}_B\right\vert=\sqrt{L_X^2+L_Y^2+L_Z^2}=L>0$. Then, eq. \rf{4.1} reads \ba{4.4} &{}&\mathcal{L}_{AB}=G_N\frac{m_A m_B}{L}+\frac{1}{2a^2}\left\{\frac{m_Am_B}{m_A+m_B}\left(\dot{L}_X^2 a^2-2\dot{L}_XL_X\dot a a+\dot a^2 L_X^2+\right.\right.\nonumber\\ &{}&+\left.\dot{L}_Y^2 a^2-2\dot{L}_Y L_Y\dot a a+\dot{a}^2 L_Y^2+\dot{L}_Z^2a^2-2\dot{L}_ZL_Z\dot a a+\dot a^2L_Z^2\right)+\nonumber\\ &{}&+(m_A+m_B)\left.\left[\left(\dot X_0 a- \dot a X_0\right)^2+\left(\dot Y_0 a-\dot a Y_0\right)^2+\left(\dot Z_0 a- \dot a Z_0\right)^2\right]\right\}\, , \ea or, in spherical coordinates, \ba{4.5} &{}&\mathcal{L}_{AB}=G_N\frac{m_A m_B}{L}+\frac1{2}\frac{m_A m_B}{m_A+m_B}\left(\frac{\dot a^2}{a^2} L^2-2\frac{\dot a}{a}\dot L L+\dot L^2+L^2\dot\theta^2+L^2\sin^2\theta\dot\psi^2\right)+\nonumber\\ &{}&+\frac1{2a^2}(m_A+m_B)\left[\left(\dot X_0 a- \dot a X_0\right)^2+\left(\dot Y_0 a-\dot a Y_0\right)^2+\left(\dot Z_0 a- \dot a Z_0\right)^2\right]\, . 
\ea It can be easily verified that $X_0$ satisfies the equation \be{4.6} a\ddot X_0-\ddot a X_0=0\, \ee with the following solution: \be{4.7} \dot X_0=H X_0 +\frac{a_0}{a}\left(\dot X_{0(in)}-H_0 X_{0(in)}\right)\, , \ee where $a_0=a(t_0), H_0=H(t_0), X_{0(in)}=X_{0}(t_0)$ and $\dot X_{0(in)}=\dot X_{0}(t_0)$ are values at the initial time $t=t_0$. Therefore, $\dot X_0$ satisfies asymptotically (with increasing $a$) the Hubble law. The same conclusion takes place for $\dot Y_0$ and $\dot Z_0$. Let us investigate now the relative motion of the galaxies. For this motion, the Lagrange function is \be{4.8} \tilde{\mathcal{L}}_{AB}=G_N\frac{m_A m_B}{L}+\frac1{2}\frac{m_A m_B}{m_A+ m_B}\left(\frac{\dot a^2}{a^2}L^2-2\frac{\dot a}{a}\dot L L+\dot L^2+L^2\dot\psi^2\right)\, , \ee where without loss of generality we put $\theta=\pi/2$. Therefore, the Lagrange equation for the separation distance is \be{4.9} \ddot L=-G_N\frac{2\overline{m}}{L^2}+\frac{M^2}{\mu^2 L^3}+\frac{\ddot a}{a}L\, , \ee where we introduced the reduced mass, the average mass and the angular momentum: \be{4.10} \frac{m_Am_B}{m_A+m_B}\equiv\mu\, ,\quad \overline{m}=\frac{m_A+m_B}{2}\, ,\quad \mu L^2\dot\psi\equiv M=\mbox{const}\, . \ee The first term in the right hand side of \rf{4.9} is due to the gravitational attraction, the second term is the centrifugal force and the third term originates from the cosmological expansion of the Universe. To integrate eq. \rf{4.9}, we rewrite it with respect to the dimensionless quantities similar to ones in \rf{3.3}: \ba{4.11} \tilde L&=&L\left(\frac{H_0^2}{G_N\overline{m}}\right)^{1/3}\approx \frac{L}{0.95\mbox{Mpc}}\left(\frac{10^{12}M_\odot}{\overline m}\right)^{1/3}\, ,\nn \\ \tilde V&=&V\left(\frac{1}{H_0G_N\overline{m}}\right)^{1/3}\approx \frac{V}{67\mbox{km/sec}}\left(\frac{10^{12}M_\odot}{\overline m}\right)^{1/3}\, , \label{4.11}\\ \tilde M&=&M\frac{\overline{m}}{\mu}\left(\frac{H_0}{G_N^2\overline{m}^5}\right)^{1/3}\approx \frac{L}{0.95\mbox{Mpc}}\; \frac{V_{\perp}}{67\mbox{km/sec}}\left(\frac{10^{12}M_\odot}{\overline m}\right)^{2/3}\, ,\nn \ea where the radial velocity $V= \dot L\to \tilde V = d\tilde L/d\tilde t$ and the transverse velocity $V_{\perp}= L\dot \psi$. Then, eq. \rf{4.9} reads \be{4.12} \frac{d^2\tilde L}{d\tilde t^2}=-\frac{2}{\tilde L^2} + \frac{\tilde M^2}{\tilde L^3}+\frac{1}{\tilde a}\frac{d^2\tilde a}{d\tilde t^2}\tilde L\, . \ee Now, we integrate eq. \rf{4.12} for parameters corresponding to the galaxies MW and M31. The masses of MW and M31 are of the order of $10^{12}M_\odot$ and $1.6\times 10^{12}M_\odot$ respectively \cite{CoxLoeb}\footnote{We take these values because we want to compare our results with the conclusions of this paper. More recent publications indicate both a little bit higher values of the mass of MW \cite{McMillan,Boylan} and lower values \cite{Deason}.}. The separation distance at present time\footnote{Without loss of generality we may put $t_0=0,\ \tilde t_0=0$.} $t=t_0$ is $L_0\approx 0.78\,\mbox{Mpc}\to \tilde L_0\approx 0.753$, and the galaxies approach each other with the radial velocity $V_0\approx -120\,\mbox{km/sec}\to \tilde V_0 \approx -1.633$ \cite{CoxLoeb}. First, let us consider briefly the case of the zero angular momentum $M=0$. 
If $\ddot a>0$, as it happens at the present stage of the Universe evolution, we can introduce a distance of zero acceleration $L_{cr}$ where $\ddot L=0$:
\be{4.13} \tilde L_{cr}(\tilde t)=\left(\frac{2}{-q(\tilde t)}\right)^{1/3}\, , \ee
where the deceleration parameter $q=-(1/H^2)(\ddot a/a)=-(d^2\tilde a/d\tilde t^2)/\tilde a$. In the $\Lambda$CDM model, we get from the Friedmann equations that at present time $q_0\approx \Omega_M/2 -\Omega_{\Lambda} \approx -0.595$, and the zero acceleration distance is $\tilde L_{cr}(\tilde t_0)\approx 1.5$. If the relative velocity at $t=t_0$ is $\dot L(t_0)=0$, then the gravitating masses run away from each other in the future for $L(t_0)>L_{cr}(t_0)$ and collide with each other for $L(t_0)<L_{cr}(t_0)$. Obviously, the separation distance $\tilde L_0 \approx 0.753$ between MW and Andromeda is less than $\tilde L_{cr}(\tilde t_0)$. Additionally, they have a non-zero radial velocity towards each other. Therefore, they will collide with each other. The result of the numerical solution of this collision for $M=0$ is shown in figure \ref{andromeda}, the left panel. The solid blue line takes into account both gravitational attraction and cosmological expansion while the red dashed line disregards the cosmological expansion. This picture demonstrates that the effect of the expansion is very small for the relative motion of MW and M31. For example, the time to collision (from present) is $\tilde t\approx 0.2670\to 3.68$ Gyr and $\tilde t\approx 0.2636\to 3.63$ Gyr for the blue and red lines, respectively. We recall that our approach works only down to the separation distance $\tilde L\approx 0.1 \to 100$ Kpc, where the stage of the galaxy merger starts.
\begin{figure*}[htbp] \centerline{\includegraphics[width=3.0in,height=2.5in]{Andromeda_checked.eps} \includegraphics[width=3.0in,height=2.5in]{Andromeda_checked_M100.eps}} \caption {These figures show the change with time of the separation distance between the Milky Way and Andromeda starting from the present ($\tilde t=0$) in the case of the absence of the dynamical friction. The initial (i.e. at present time) separation distance and radial relative velocity are $0.78$ Mpc and $-120$ km/sec, respectively ($0.753$ and $-1.633$ in dimensionless units). The solid blue lines take into account both gravitational attraction and cosmological expansion, while the red dashed lines disregard the cosmological expansion. The transverse velocity is absent in the left panel. Here, the collision between galaxies takes place in $\tilde t \approx 0.267\to t\approx 3.68$ Gyr from present (the blue line). In the right panel, the transverse velocity is equal to 100 km/sec. Here, the collision is absent and the smallest separation distance is $\tilde L \approx 0.28 \to L\approx 290$ Kpc at time $\tilde t \approx 0.324 \to t\approx 4.44$ Gyr from present. For both pictures, the effect of the cosmological expansion is very small for the considered period of time. \label{andromeda}} \end{figure*}
Let us now turn to the case of non-zero angular momentum. The observations indicate the proper motion of Andromeda perpendicular to our line of sight. This transverse velocity $V_{\perp 0}=V_{\perp}(t_0)$ is less than 200 km/sec \cite{Peebles}. In \cite{Loeb}, the authors found an even smaller estimate: $V_{\perp 0} \sim 100$ km/sec. In our calculations, we will adhere to this value, and with the help of eq. \rf{4.11} we get $\tilde M \approx 1.029$. It can be easily seen from eq.
\rf{4.12} that a fall onto the center is impossible because of the centrifugal barrier ($M\neq 0$). Therefore, the collision of the galaxies is possible if the smallest separation distance between them (which corresponds to the turning point) is less than the merger distance $100-150$ Kpc. The result of the numerical integration of eq. \rf{4.12} is shown in figure \ref{andromeda}, the right panel. The solid blue line takes into account both gravitational attraction and cosmological expansion, while the red dashed line disregards the cosmological expansion. Similar to the previous case, the effect of the cosmological expansion is very small for the considered period of time. This picture demonstrates that for the given transverse velocity, the smallest separation distance is $\tilde L \approx 0.28 \to 290$ Kpc at time $\tilde t \approx 0.324 \to 4.44$ Gyr from present. This distance is much larger than the merger distance. Therefore, for the chosen initial conditions, the collision between the Milky Way and Andromeda is absent. The collision may take place for a smaller transverse velocity. For example, if $V_{\perp 0} \approx 60$ km/sec, then the smallest separation distance is 100 Kpc, which can be sufficient to start the merger.
\subsection{Dynamical friction}
Let us now take into account the Intra-Group Matter (IGrM). It is well known that a massive body with a mass $M$ moving through surrounding matter, which consists of discrete particles of mass $m$, will lose its momentum and kinetic energy due to gravitational interaction with these particles. Such an effect is called dynamical friction. The force of the dynamical friction is given by the Chandrasekhar formula \cite{Binney}:
\be{4.14} \frac{d{\bf V}_M}{dt} =- \frac{4\pi Q \; G_N^2 M\rho_{ph,m}}{V^3_M}\left[\mbox{erf}(\chi) -\frac{2\chi}{\sqrt{\pi}}\exp\left(-\chi^2\right)\right]{\bf V}_M\, , \ee
where ${\bf V}_M$ is the physical velocity of the mass $M$, $\rho_{ph,m}$ is the physical rest mass density of IGrM, $\chi \equiv V_M/(\sqrt{2} \sigma)$ and erf is the error function. Here, $Q \equiv (1/2) \ln\left(1+\lambda^2\right)$ is the so-called Coulomb logarithm defined by the largest impact parameter $b_{max}$, the initial relative velocity $V_0$ and the masses $M$ and $m$: $\lambda = b_{max} V_0^2/[G_N(M+m)]\approx b_{max} V_0^2/(G_N M)$. The formula \rf{4.14} is defined with respect to a frame where the IGrM particles have the Maxwellian speed distribution with the dispersion $\sigma = \sqrt{kT/m}$. The typical value of the IGrM temperature in the Local Group is \cite{CoxLoeb} $T\sim 10^5 \mbox{K} \to kT \sim 8.6$ eV. Therefore, the Milky Way and Andromeda should slow down while moving through the IGrM because of the dynamical friction \rf{4.14}.
Then, equations \rf{3.1} and \rf{3.2} describing the dynamics of the galaxies MW and M31 (labelled as A and B, respectively) are modified as follows:
\ba{4.15} &{}&\frac{d^2\tilde X_i}{d\tilde t^2}=-\frac{1}{\overline m}\frac{m_j(\tilde X_i-\tilde X_j)}{[(\tilde X_i-\tilde X_j)^2+(\tilde Y_i-\tilde Y_j)^2]^{3/2}}+ \frac{1}{\tilde a}\frac{d^2\tilde a}{d\tilde t^2}\tilde X_i \\ &{}&- \frac{3\, Q\, m_i\, \alpha}{2\, \overline{m}\, \tilde v_{pec,i}^3}\left[\mathrm{erf}\left(\tilde\chi_i\right)- \frac{2\tilde \chi_i}{\sqrt{\pi}} \exp\left(-\tilde\chi_i^2\right)\right] \left(\frac{d \tilde X_i}{d\tilde t}-\frac{1}{\tilde a}\frac{d\tilde a}{d\tilde t}\tilde X_i\right)\, ,\quad i,j=A,B;\; i\neq j\, ,\nn\ea
\ba{4.17} &{}&\frac{d^2\tilde Y_i}{d\tilde t^2}=-\frac{1}{\overline m}\frac{m_j(\tilde Y_i-\tilde Y_j)}{[(\tilde X_i-\tilde X_j)^2+(\tilde Y_i-\tilde Y_j)^2]^{3/2}}+ \frac{1}{\tilde a}\frac{d^2\tilde a}{d\tilde t^2}\tilde Y_i \\ &{}&- \frac{3\, Q\, m_i\, \alpha}{2\, \overline{m}\, \tilde v_{pec,i}^3}\left[\mathrm{erf}\left(\tilde\chi_i\right)- \frac{2\tilde \chi_i}{\sqrt{\pi}} \exp\left(-\tilde\chi_i^2\right)\right] \left(\frac{d \tilde Y_i}{d\tilde t}-\frac{1}{\tilde a}\frac{d\tilde a}{d\tilde t}\tilde Y_i\right)\, ,\quad i,j=A,B;\; i\neq j\, , \nn \ea
where we assume that the IGrM particles have the Maxwellian speed distribution in a frame comoving with the Hubble flow. In this case, ${\bf V}_M$ in \rf{4.14} is the peculiar velocity of MW and M31:
\ba{4.19} \tilde{{\bf v}}_{pec,i}&=&\left(\frac{d\tilde X_i}{d\tilde t}-\frac{1}{\tilde a}\frac{d\tilde a}{d\tilde t}\tilde X_i,\frac{d\tilde Y_i}{d\tilde t}- \frac{1}{\tilde a}\frac{d\tilde a}{d\tilde t}\tilde Y_i\right)\, , \\ \label{ 4.20} \tilde v_{pec,i}&=&\left[\left(\frac{d\tilde X_i}{d\tilde t}-\frac{1}{\tilde a}\frac{d\tilde a}{d\tilde t}\tilde X_i\right)^2+ \left(\frac{d\tilde Y_i}{d\tilde t}-\frac{1}{\tilde a}\frac{d\tilde a}{d\tilde t}\tilde Y_i\right)^2\right]^{1/2}\, ,\quad i=A,B\, . \ea
As in equations \rf{3.3} and \rf{4.11}, the tilde denotes dimensionless quantities. Additionally, $\tilde \chi_i =\tilde v_{pec,i}/(\sqrt{2}\tilde \sigma)$ and $\tilde \sigma=(\sigma/H_0)\left[H_0^2/(G_N\overline{m})\right]^{1/3}$. Regarding the physical rest mass density of IGrM, we define it in terms of the critical density: $\rho_{ph,m}=\alpha\rho_{cr}=\alpha \left(3H_0^2\right)/(8\pi G_N)$. For the IGrM in the Local Group, we shall take $\alpha \sim 10$ \cite{CoxLoeb}. To estimate the Coulomb logarithm $Q$ for the Local Group, first, we should take into account that the matter density $\rho_{ph,m}$ begins to decrease at scales where the cosmological expansion starts to dominate over the gravitational attraction, i.e. approximately at 1 Mpc. Here, the parameter $\alpha \sim 5$ \cite{CoxLoeb}. Therefore, $b_{max} \sim 1$ Mpc. Then, taking for $V_0$ the typical peculiar velocity 100 km/sec and for $M$ the value $10^{12}M_\odot$, we get that $Q\sim 1$. Let $X_{i}(t)$ and $Y_{i}(t)$ be the barycentric coordinates of the MW ($i=A$) and M31 ($i=B$), that is, the origin of coordinates is at the center of mass of MW and M31. In this case, the initial values of the coordinates and velocities are: $X_A(t_0)=m_B L_0/(m_A+m_B)$, $X_B(t_0)=-m_A L_0/(m_A+m_B)$, $Y_A(t_0)=0$, $Y_B(t_0)=0$ and $\dot X_A(t_0)=m_B V_0/(m_A+m_B)$, $\dot X_B(t_0)=-m_A V_0/(m_A+m_B)$, $\dot Y_A(t_0)= m_B V_{\perp 0} /(m_A+m_B)$, $\dot Y_B(t_0)= -m_A V_{\perp 0} /(m_A+m_B)$, where we use the notations from the previous subsection.
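To illustrate the structure of the friction term entering eqs. \rf{4.15} and \rf{4.17}, a minimal sketch of the dimensionless dynamical-friction acceleration is given below (Python); the numerical values in the usage example are illustrative only.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def friction_accel(v_pec, sigma, m_i, m_bar, Q=1.0, alpha=10.0):
    # dimensionless drag term of eqs. (4.15), (4.17); it acts against the
    # peculiar velocity v_pec of the galaxy with mass m_i
    v = np.linalg.norm(v_pec)
    chi = v / (np.sqrt(2.0) * sigma)
    bracket = erf(chi) - 2.0 * chi / np.sqrt(np.pi) * np.exp(-chi**2)
    return -1.5 * Q * m_i * alpha / (m_bar * v**3) * bracket * v_pec

# example: MW-like galaxy with an illustrative peculiar velocity
print(friction_accel(np.array([-1.6, 1.0]), sigma=1.17, m_i=1.0, m_bar=1.3))
\end{verbatim}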
Now, taking the values of the parameters from the previous subsection (the case of the nonzero angular momentum, i.e., $V_0=-120$ km/sec and $V_{\perp 0}=100$ km/sec), we can integrate numerically the equations \rf{4.15} and \rf{4.17}. We take into account both the cosmological expansion and the gravitational attraction, although, as we have seen above, the influence of the expansion is not significant within the scales of interest $L \leq 1$ Mpc. The result of these calculations is depicted in figures \ref{graphmin} and \ref{graphmax} where the left panels show the change in time of the separation distance and the right panels describe the trajectories of the galaxies. These figures demonstrate that, for fixed values of the galaxy masses, their initial conditions and the parameter $\alpha$, the relative dynamical behavior depends on the dispersion parameter $\sigma = \sqrt{kT/m}$ which in turn is defined by the ratio of the temperature $T$ and the mass $m$ of the particles of IGrM. Comparatively little is known about the truly intergalactic medium. Most probably, it is a mixture of baryonic matter (mainly in the form of ionized hydrogen) and dark matter. There is a great variety of candidates for dark matter with masses ranging from $\mu$eV$\div$eV (e.g., axions) to TeV (e.g., WIMPs). Therefore, in the formula for the dispersion $\sigma$, the parameters $T$ and $m$ are some effective values. It makes sense not to specify them separately, but to consider their ratio, i.e. $\sigma^2$. As we mentioned above, our approach works only up to the first touch of the galaxies, which occurs approximately at a separation distance of 100 Kpc between their centers. In the left panels, this event is marked by the green points on the bottom red lines. In this case, the merger of the galaxies will take place. Our approach does not describe this process. The continuations of the lines (the separation distance) after the first touch are very schematic. In the right panels, this event corresponds to the touch of two red circles. The distance between their centers is equal to 100 Kpc. We do not continue the trajectories after this first touch. We found two characteristic values for the dimensionless parameter $\tilde \sigma$. The first one is $\tilde \sigma_1 = 1.17$ and corresponds to the situation when the first close passage occurs at the separation distance $L=100$ Kpc (see the green point in the left panel and two touching red circles in the right panel of figure \ref{graphmin}), which corresponds to the touch of the galaxies. Obviously, for all $\tilde \sigma < \tilde \sigma_1$, this distance will be less than 100 Kpc and the first touch of the galaxies will take place during the first passage. For bigger values of $\tilde \sigma$, the first passage occurs at a separation distance larger than 100 Kpc. The second characteristic value is $\tilde \sigma_2 = 2.306$ and describes the situation when the galaxies, after the first close passage, move apart up to the turning point at a separation distance of 1 Mpc from each other (see the yellow point on the upper red line in figure \ref{graphmax}, the left panel). At these and greater distances, the rest mass density of IGrM decreases and the dragging effect of the dynamical friction can be too small to force the galaxies to converge again. Therefore, for $\tilde \sigma > \tilde\sigma_2$, the merger of the galaxies becomes problematic. For $\tilde \sigma_1 < \tilde \sigma < \tilde \sigma_2$, the touch will take place during the second passage.
It is of interest to estimate the masses of the IGrM particles which correspond to these characteristic values of $\tilde \sigma$. The masses $m$ can be expressed via the temperature $T$ and the dimensionless dispersion $\tilde \sigma$ as follows: $m (\mbox{MeV})\approx \left\{\left[kT(\mbox{erg})/8.464\times 10^{13}(\mbox{cm}^2/\mbox{sec}^2)\right]\times 0.5604\times 10^{27}(\mbox{MeV}/\mbox{g})\right\}/\tilde\sigma^2$. The temperature of IGrM in the Local Group is usually estimated as $T\sim 10^5$ K \cite{CoxLoeb}. Then, for this value of $T$, we get $m_1 \sim 67$ MeV and $m_2 \sim 17$ MeV for $\tilde \sigma_1$ and $\tilde \sigma_2$, respectively. Therefore, for the chosen initial conditions and the value of $T$, the touch of the galaxies will take place during the first passage for the IGrM particle masses $m \geq 67$ MeV and the merger can be problematic for masses lighter than $17$ MeV.
\begin{figure*}[htbp] \centerline{\includegraphics[width=3.0in,height=2.5in]{graphmin.eps} \includegraphics[width=3.0in,height=1.3in]{graphmintra.eps}} \caption {These figures show the change with time of the separation distance between the Milky Way and Andromeda (the left panel) and the corresponding trajectories for the MW (the blue line) and M31 (the green line) in the right panel in the case of dynamical friction. The initial conditions are chosen as in the right panel of the figure \ref{andromeda}. The dynamical friction is calculated for the dispersion parameter $\tilde\sigma =\tilde\sigma_1 = 1.17$. For this value of $\tilde\sigma$, the first close passage occurs at the separation distance $L=100$ Kpc (see the green point in the left panel and two touching red circles in the right panel), which corresponds to the touch of the galaxies. For this and smaller values of $\tilde\sigma$, the touch of the galaxies will take place during the first passage.\label{graphmin}} \end{figure*}
\begin{figure*}[htbp] \centerline{\includegraphics[width=3.0in,height=2.5in]{graphmax.eps} \includegraphics[width=3.0in,height=3.1in]{graphmaxtra.eps}} \caption {These figures are drawn in the case of the dynamical friction with the dispersion parameter $\tilde\sigma =\tilde\sigma_2 = 2.306$. For this value of $\tilde\sigma$, there is no touch of the galaxies during the first passage because the closest separation distance here is larger than 100 Kpc (the bottom red line in the left panel). After that, the galaxies move apart up to the turning point at a separation distance of 1 Mpc from each other (see the yellow point on the upper red line in the left panel). At these and greater distances, the rest mass density of IGrM decreases and the dragging effect of the dynamical friction can be too small to force the galaxies to converge again. Therefore, for $\tilde \sigma > \tilde\sigma_2$, the merger of the galaxies becomes problematic. \label{graphmax}} \end{figure*}
\section{\label{sec:5}Formation of Hubble flows in the vicinity of the Local Group}
\setcounter{equation}{0}
To study the formation of the Hubble flows in the vicinity of our group of galaxies, we need to determine the spatial distribution of vectors of acceleration of astrophysical objects (e.g., dwarf galaxies) in the gravitational field of the two giant galaxies, taking into account the cosmological expansion of the Universe. Obviously, near the galaxies, the acceleration vector must be oriented towards the galaxies due to the gravitational attraction, and with increasing distance from the galaxies it has to turn in the opposite direction due to the accelerated cosmological recession.
Let us investigate this effect for our Local Group. For this purpose, we consider a test particle/dwarf galaxy in the gravitational field of Andromeda and Milky Way. We study the picture at present time when the separation distance between M31 and MW is $L_0=0.78$ Mpc, and we do not take into account the relative motion of these galaxies. Of course, we can also consider dynamical evolution of this system but this effect is out of the scope of this section. It can be easily seen from the Lagrange function \rf{2.9} that the Lagrange equation for a test particle is \be{5.1} \frac{d}{dt}\left({\bf V}-\frac{\dot a}{a}{\bf R}\right)=-\frac{1}{a}\frac{\partial\varphi}{\partial{\bf R}}+\frac{\dot a^2}{a^2}{\bf R}-\frac{\dot a}{a}{\bf V} \ee or, equivalently, \be{5.2} \dot {\bf V}-\frac{\ddot a}{a}{\bf R} =-\frac{1}{a}\frac{\partial\varphi}{\partial{\bf R}}\, . \ee If the gravitational field is absent (i.e. $\varphi \equiv 0$) then this equation has the following solution: \be{5.3} {\bf V}=\frac{\dot a}{a}{\bf R}+\frac{{\bf const}}{a}\, , \ee where the first term is the Hubble velocity and the second term is the peculiar velocity. In dimensionless variables (see \rf{3.3} and \rf{4.11}) and without peculiar velocity this solution reads \be{5.4} \tilde {\bf V} = \frac{1}{\tilde a}\frac{d\tilde a}{d\tilde t}\tilde{\bf R}=\frac{H}{H_0}\tilde{\bf R}\, . \ee In the case of our Local Group, the gravitational potential is (see eq. \rf{2.10}): \be{5.5} \varphi=\varphi_A+\varphi_B,\quad \varphi_A=-aG_N\frac{m_A}{\left\vert{\bf R}_A-{\bf R}\right\vert},\quad \varphi_B=-aG_N\frac{m_B}{\left\vert{\bf R}_B-{\bf R}\right\vert}\, , \ee where, similar to the previous section, we mark MW and M31 by letters A and B, respectively. Here, we omitted the constant term $\sim \overline \rho$ because it does not contribute to equations of motion. For numerical solution, we rewrite eq. \rf{5.2} in dimensionless variables \rf{3.3} and \rf{4.11} as \be{5.6} \tilde {\bf W}=\frac{d\tilde {\bf V}}{d\tilde t}=\frac{1}{\tilde a}\frac{d^2\tilde a}{d\tilde t^2}\tilde {\bf R}-\frac{1}{\tilde a}\frac{\partial\tilde\varphi}{\partial \tilde {\bf R}}\, , \ee where we introduced additionally the dimensionless potential \be{5.7} \tilde \varphi=\frac{1}{a_0\left(H_0 G_N\overline{m}\right)^{2/3}}\varphi\, . \ee It makes sense to rewrite \rf{5.6} in components. For example, for the $X$-component we have \be{5.8} \tilde W_x=\frac{d\tilde V_x}{d\tilde t}=\frac{1}{\tilde a}\frac{d^2\tilde a}{d\tilde t^2}\tilde X-\frac{1}{\tilde a}\frac{\partial\tilde\varphi}{\partial \tilde X}\ee with \ba{5.9} \frac{1}{\tilde a}\frac{\partial \tilde\varphi}{\partial\tilde X}&=&\frac{m_A}{\overline{m}}\frac{\tilde X-\tilde X_A}{\left[\left(\tilde X-\tilde X_A\right)^2+\left(\tilde Y-\tilde Y_A\right)^2+\left(\tilde Z-\tilde Z_A\right)^2\right]^{3/2}}\nn \\ &+&\frac{m_B}{\overline{m}}\frac{\tilde X-\tilde X_B}{\left[\left(\tilde X-\tilde X_B\right)^2+\left(\tilde Y-\tilde Y_B\right)^2+\left(\tilde Z-\tilde Z_B\right)^2\right]^{3/2}} \ea and similar for the $Y$- and $Z$-components. The absolute value of the dimensionless acceleration is \be{5.10} |\tilde {\bf W}|=\left|\frac{d\tilde {\bf V}}{d\tilde t}\right|=\frac{1}{(H_0^4G_N\overline{m})^{1/3}}\left|\frac{d {\bf V}}{d t}\right|=\sqrt{\left(\tilde W_x\right)^2+\left(\tilde W_y\right)^2+\left(\tilde W_z\right)^2}\, . \ee Eqs. \rf{5.6} and \rf{5.8} demonstrate how the cosmological expansion competes with the gravitational attraction. 
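A minimal sketch of the computation of the acceleration field \rf{5.6} in the plane $Z=0$ is the following (Python). It uses the barycentric configuration and the values of the masses and of the present separation quoted in this section; the scan range along the $X$-axis and the grid resolution are illustrative only.
\begin{verbatim}
import numpy as np

m_A, m_B = 1.0, 1.6                 # MW and M31 in units of 10^12 solar masses
m_bar = 0.5 * (m_A + m_B)
L0 = 0.753                          # dimensionless present separation (0.78 Mpc)
X_A = L0 * m_B / (m_A + m_B)        # barycentric positions on the X-axis
X_B = -L0 * m_A / (m_A + m_B)
q0 = -0.595                         # (1/a) d^2a/dt^2 = -q0 in units of H_0^2 today

def accel(X, Y):
    # dimensionless acceleration W = (W_x, W_y) of eq. (5.6) in the plane Z = 0
    W = -q0 * np.array([X, Y])
    for m, Xg in ((m_A, X_A), (m_B, X_B)):
        r3 = ((X - Xg)**2 + Y**2) ** 1.5
        W -= (m / m_bar) * np.array([X - Xg, Y]) / r3
    return W

# coarse scan for the point where W_x changes sign on the MW side of the X-axis
xs = np.linspace(1.0, 2.5, 1501)
Wx = np.array([accel(x, 0.0)[0] for x in xs])
print(xs[np.argmin(np.abs(Wx))])    # ~1.6, i.e. ~1.65 Mpc from the barycenter
\end{verbatim}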
There is a characteristic region defined by the condition
\be{5.11} \left|\frac{1}{\tilde a}\frac{d^2\tilde a}{d\tilde t^2}\tilde {\bf R}\right| \sim \left|\frac{1}{\tilde a}\frac{\partial\tilde\varphi}{\partial \tilde {\bf R}}\right|\, , \ee
where the gravitational attraction is balanced by the cosmological expansion. The gravitational attraction is stronger inside this region (for smaller distances), and the cosmological expansion prevails over this attraction outside of this area (for larger distances). This takes place both for positive and negative values of the cosmological acceleration $\ddot a$.\footnote{Here, we mean just the absolute values of accelerations. Obviously, in the decelerated Universe, both the acceleration caused by the gravitational attraction and the acceleration due to the cosmological expansion are negative and they never compensate each other. Nevertheless, there is a region where the absolute value of the cosmological acceleration becomes bigger than the absolute value of the gravitational one. In this region, if the initial velocity ${\bf V}_0$ of a test object (e.g., a dwarf galaxy) is equal to the Hubble velocity $H{\bf R}_0$, then this object will continue to follow the Hubble flow. On the other hand, if its initial velocity ${\bf V}_0$ with respect to an observer at the origin is equal to zero (i.e. its peculiar velocity is equal to minus the Hubble velocity, see footnote 2), then this test object will approach the origin after being released due to the total negative acceleration. This will happen at any separation distance between the test object and the observer (see the corresponding discussion in \cite{dec1,dec2,dec3}).} In the case of the accelerated expansion $\ddot a >0$, we have $|{\bf W}| \approx 0$ in this characteristic area, which is called the region of zero acceleration. Obviously, the Hubble flows are formed outside of this area. For our Local Group consisting of two giant galaxies MW and M31, we choose the origin of coordinates at the barycenter of these galaxies and the $X-$axis along the line connecting MW and M31. Therefore, $X_A=L_0 m_B/(m_A+m_B),\, X_B=-L_0 m_A/(m_A+m_B)$, and $Y_A=Z_A=0$, $Y_B=Z_B=0$. Additionally, due to the rotational symmetry around the $X-$axis, it is sufficient to consider the plane $Z=0$. The 3-D picture can be easily reconstructed by rotation around this axis. Therefore, we investigate the distribution of the test body acceleration in the plane $Z=0$. For the masses of MW and M31, we take the values from the previous section: $m_A\approx 10^{12}M_\odot$ and $m_B\approx 1.6\times 10^{12}M_\odot$. In figure \ref{camel}, we depict the absolute value of the acceleration \rf{5.10} of the test body in the plane $Z=0$. This modulus decreases from large values near the positions of MW and M31 to nearly zero (a red area around the peaks), and then it begins to increase again with the distance from the barycenter. The red area describes approximately the region of zero acceleration. Figure \ref{hedgehog} depicts the vector field of the acceleration \rf{5.6}. It demonstrates how these vectors turn from pointing towards MW and M31 (in the vicinity of these galaxies) to pointing outwards (with increasing distance from the galaxies). The central region (near MW and M31) is empty because we cut off the vectors with magnitude $|\tilde {\bf W}|>3$. The yellow and green lines correspond to the conditions $\tilde W_x=0$ and $\tilde W_y=0$, respectively.
This figure shows that the vectors change their directions in the vicinity of the region $|\tilde {\bf W}|\approx 0$.
\begin{figure*}[htbp] \centerline{\includegraphics[width=4.0in,height=3in]{camel2.eps}} \caption {This plot shows the absolute value of the acceleration of dwarf galaxies in the Local Group. The red area around the peaks corresponds approximately to the zero acceleration region. \label{camel}} \end{figure*}
\begin{figure*}[htbp] \centerline{\includegraphics[width=3.0in,height=3.0in]{hedgehog3.eps}} \caption {This figure shows the vector field of the dwarf galaxy acceleration $\tilde {\bf W}$. These vectors are directed towards the MW and M31 (the black points) in the vicinity of the galaxies and turn outwards with the distance from the galaxies. The yellow and green lines correspond to the conditions $\tilde W_x=0$ and $\tilde W_y=0$, respectively. \label{hedgehog}} \end{figure*}
To define the structure of the zero-acceleration surface more precisely, we draw figure \ref{Lagrangepoints}. The yellow and green lines correspond to the conditions $W_x=0$ and $W_y=0$, respectively. Black points define the positions of the Milky Way (the right point) and Andromeda (the left point). Red points are defined by the condition $W_x=W_y=0 \to |{\bf W}|=0$. The right panel takes into account both the gravitational attraction and the cosmological expansion, while the left panel disregards the cosmological expansion. Obviously, in the case of only the gravitational attraction (the left panel), we have only one zero acceleration point between MW and M31, which is the analog of the Lagrange point $L_1$. A much richer picture emerges in the presence of the accelerated cosmological expansion, which competes with the gravitational attraction (the right panel). However, this panel shows that, strictly speaking, the zero acceleration surface is absent. Here, we have two additional red points on the $X-$axis and two points in the vertical direction. Clearly, due to the rotational symmetry, the latter two points are just the section of a zero acceleration circle by the plane $Z=0$. Nevertheless, we can speak about the approximate zero acceleration surface because the elliptic-like green and yellow lines are very close to each other and they define the region where $|{\bf W}|\approx 0$. We can also see that there are two regions where this surface has a discontinuity. It is clear from the rotational symmetry that these two regions belong to a round belt (they are the section of this belt by the plane $Z=0$). Therefore, inside the approximate zero acceleration surface the gravitational attraction is stronger than the cosmological expansion, while outside of this surface the cosmological expansion prevails over the gravitational attraction. Obviously, the Hubble flows are formed in the latter region. Additionally, we can see that there is an asymmetry between the directions along the $X$ and $Y$ axes. The characteristic distances from the barycenter to the zero acceleration surface are $|\tilde X|\approx 1.6 \to |X|\approx 1.65$ Mpc and $|\tilde Y|\approx 1.45 \to |Y|\approx 1.5$ Mpc. As follows from figures \ref{hedgehog} and \ref{Lagrangepoints}, the distance $R$ where the accelerated cosmological expansion begins to prevail over the gravitational attraction is approximately $R\approx 1.6$ Mpc, which is of the order of the scale of our Local Group. In other words, the cosmological constant is significant on these scales. It is worth noting that in this section we did not take into account the IGrM in the Local Group.
Obviously, taking this matter into account will lead to a slight increase of the characteristic distance $R$ to the zero acceleration surface.
\begin{figure*}[htbp] \centerline{\includegraphics[width=2.95in,height=2.65in]{Onelagrangepoint3_1.eps} \includegraphics[width=3.0in,height=2.7in]{Lagrangepoints3_1.eps}} \caption {Here, we depict the contour plot of the absolute value of the acceleration $|\tilde {\bf W}|$ in our Local Group (left and right black points are M31 and MW, respectively). The yellow and green lines correspond to the conditions $W_x=0$ and $W_y=0$, respectively. Red points define the positions of the zero acceleration: $W_x=W_y=0 \to |W|=0$. The right panel takes into account both the gravitational attraction and the cosmological expansion, while the left panel disregards the cosmological expansion. The elliptic-like green curve together with the neighboring yellow curve defines the region where $|\tilde {\bf W}|\approx 0$ (the right panel). The vertical bars show the correspondence between the contour plot color and the absolute value of the acceleration. \label{Lagrangepoints}} \end{figure*}
\section{Conclusion}
In this paper, we have considered the motion of astrophysical objects deep inside of the cell of uniformity where both the gravitational attraction between them and the cosmological expansion of the Universe play a role. To describe this, we obtained the general system of equations of motion for arbitrarily distributed inhomogeneities in the open Universe. To show the competitive effects of gravitational attraction and cosmological recession, we considered two illustrative abstract examples where the systems of galaxies consist of three and four galaxies. Then, we investigated our Local Group, which consists of two giant galaxies, the Milky Way and Andromeda, and approximately 40 dwarf galaxies. According to recent observations, these giant galaxies move towards each other with the relative velocity $\sim 100$ km/sec and may collide in the future. Such a process was investigated, e.g., in \cite{CoxLoeb} where the authors used the hydrodynamic approach. They found that the average time for the first passage is 2.8 Gyr and for the final merger is 5.4 Gyr. In our paper, we distinguished two different models. For the first one, we did not take into account the influence of the Intra-Group Matter. In this case, our mechanical approach has shown that for the currently known parameters of this system, the collision is hardly plausible in the future because of the angular momentum. These galaxies will reach a minimum distance of about 290 Kpc in 4.44 Gyr from the present, and then begin to recede irreversibly from each other. For the second model, we took into account the dynamical friction due to the IGrM. We found a characteristic value of the IGrM particle velocity dispersion $\tilde \sigma = 2.306$. For $\tilde \sigma \leq 2.306$, the merger will take place, but for bigger values of $\tilde\sigma$ the merger can be problematic because the galaxies approach a region where the dragging effect of the dynamical friction can be too small to force the galaxies to converge. If the temperature of the IGrM particles is $10^5$ K, then this characteristic value of $\tilde \sigma$ corresponds to an IGrM particle mass of 17 MeV. Therefore, for lighter masses the merger is problematic.
We have also shown that for the chosen initial conditions and the value of $T$, the touch of the galaxies will take place during the first passage of MW and M31 for the IGrM particle masses $m \geq 67$ MeV. We have also defined the region in the vicinity of our Local Group where the Hubble flows start to form. For such processes, the zero acceleration surface (where the gravitational attraction is balanced by the accelerated cosmological expansion) plays a crucial role. We have taken into account that the two giant galaxies MW and M31 are located at a distance of 0.78 Mpc from each other. Obviously, if this surface exists, it does not have a spherical shape for the given geometry. We have shown that such a surface is absent for the Local Group. Instead, we have found two points and one circle with zero acceleration. Nevertheless, there is a nearly closed area around MW and M31 where the absolute value of the acceleration is approximately equal to zero. The Hubble flows are formed outside of this area. After finishing this article, we became aware of a recent paper \cite{new3}, which also considers the collision between the Milky Way and Andromeda. This paper is based on the authors' measurements of the proper motion of the galaxy M31 \cite{new1}. According to these measurements, the authors found in \cite{new2} the radial and transverse velocities of M31 with respect to the Milky Way. They are $V_{rad,M31} \equiv V_0 = -109.3$ km/sec and $V_{tan,M31} \equiv V_{\perp 0}= 17.0$ km/sec, respectively. These values are less than those we used for our simulation. It is clear that if the transverse velocity is so small, the merger of the galaxies MW and M31 is inevitable, even without dynamical friction. As far as we can judge from the references cited in our paper, there is a wide spread in the observational data for our Local Group. The advantage of our mechanical approach lies in the fact that our equations enable us to easily calculate the dynamical evolution of the Local Group for any set of initial conditions.
\acknowledgments
This work was supported in part by the ``Cosmomicrophysics-2'' programme of the Physics and Astronomy Division of the National Academy of Sciences of Ukraine. We want to thank the referee for his/her comments which have considerably improved the motivation of our approach and the presentation of the results. A.Zh. acknowledges the Theory Division of CERN for the hospitality during the final stage of the preparation of this paper.
\section{INTRODUCTION} In the past few decades, the increasing consumption of fossil energy has led to public interest and technical developments in utilizing various distributed energy sources. Distributed generations (DGs) with flexible operation modes have been proposed as a general strategy for improving energy efficiency, lowering curtailment, and enabling peak shaving and shifting. DGs with high penetration contribute significantly to power variations; however, they also make it challenging to operate the power grid and maintain its stability. To solve this problem, microgrids (MGs) have been viewed as an applicable solution to integrate various DGs, energy storage devices and loads, which are connected to the power grid as a single controllable unit. In the area of energy, the economic dispatch (ED) of MGs is critical and has received extensive attention from both research and industry.
In order to operate the MG safely and economically, several related studies have been conducted over the last decade. In \cite{bib1}, a dynamic programming-based algorithm was derived to solve the unit commitment problem in the MG, including photovoltaic-based generators to reduce the economic cost. In \cite{bib3} and \cite{bib4}, day-ahead optimization for gas and power systems was studied, which also considered the partial differential constraints in natural gas transmission. Regarding the system models, although the aforementioned studies contribute considerably to MG optimization problems, they did not consider the dynamic energy flow constraints of cogeneration units. Neglecting the dynamic process may result in infeasible optimal solutions, since the cogeneration unit is constrained by its physical dynamic transitions. Therefore, we consider that a more precise model of the MG is critical for obtaining feasible solutions. In this paper, we construct a practical MG system that consists of a \emph{combined-cycle gas turbine (CCGT) plant}. Specifically, the CCGT plant is a typical cogeneration unit with a pronounced dynamic process, which makes it suitable for validating the effectiveness of the proposed method under dynamic energy flow constraints.
In this paper, we propose to use intra-day optimization strategies and solve the multi-period decision problem via dynamic programming (DP). However, the classical DP usually suffers from the ``three curses of dimensionality'' \cite{bib10} when handling high-dimensional state and action spaces. In this context, \emph{approximate dynamic programming (ADP)} is a promising real-time optimization method \cite{bib9}, which makes a trade-off between solvability and optimality of solutions. In the ADP framework, the large-scale optimization problem is viewed as a \emph{Markov decision process (MDP)} and is divided into small sub-problems that are solved sequentially. The ADP algorithm has been demonstrated to be effective in resource allocation problems \cite{bib10} and energy storage management \cite{bib11}, but these works focused on power systems. Therefore, how to apply ADP to the operation of an MG with a CCGT plant still remains an open problem. To deal with this issue, this paper proposes a novel ADP algorithm for the economic dispatch of an integrated heat and power microgrid. Specifically, an autoregressive moving average (ARMA) multi-parameter identification model of the CCGT thermodynamic process is considered in the MG system model.
We also design an ADP approach based on value function approximation (VFA), in which a post-decision state is employed to achieve a near-optimal solution that minimizes the total operational cost over one day. Overall, compared to the existing works, the main contributions of this paper are listed as follows: \begin{itemize} \item A finite-horizon MDP formulation is developed, which incorporates CCGT thermodynamic constraints. To keep the system Markovian, an augmented state vector of CCGT is introduced so that the principle of optimality holds. \item A VFA-based ADP is proposed, which solves Bellman's equation forward in time iteratively and achieves near-optimal solutions to the MDP model. \item Numerical experiments on the proposed ADP method are conducted with comparisons to the traditional myopic policy and the MPC policy, thus validating its effectiveness. \end{itemize} The rest of this paper is organized as follows. Section 2 presents the MG system and the formulated MDP model in detail. Section 3 introduces the ADP solution and the VFA design. The experimental settings and comparisons are presented in Section 4. In Section 5, we draw conclusions and outline promising directions for future work.
\section{MODEL OF ECONOMIC DISPATCH FOR MICROGRID} This paper considers an integrated heat and power MG system, as shown in Figure~\ref{MG}, which consists of several dispatchable DGs: a CCGT plant, a gas boiler (GB), a heat pump (HP), a fuel cell (FC) and a storage device. Wind turbines (WT), which are renewable and non-dispatchable sources, are also included in this MG system. We assume that the MG runs in a grid-connected mode and can trade with the upper-level grid according to the real-time electricity price. The heat and electricity demands on the user side are aggregated into two load nodes, respectively, which facilitates information collection and the comprehensive dispatch of the MG. Considering the intra-day operation of the integrated MG, we specify a finite time horizon $\mathcal{T}$, indexed by \{$\Delta t$, 2$\Delta t$,\ldots,$T$\}, where the time interval for each time step is $\Delta t$, and we set $\Delta t=15$ min, $T=96\Delta t=24$ h in a day. \begin{figure} \centering \includegraphics[width=\hsize]{MG.png} \caption{The schematic diagram of the MG system.} \label{MG} \end{figure} \subsection{The Markov Decision Process Formulation} In the MG system, the real-time economic dispatch problem is a typical multi-period optimization problem, which can be decomposed into multiple sequential sub-problems and solved iteratively in the MDP framework. The basic elements of the MDP are defined and introduced in this subsection. The state variables form a minimally dimensioned description of the MG that is necessary to compute the decision function and the evolution of the system.
The state vector $S_{t}$ at time step $t$ is defined in Equations~(\ref{eq1})-(\ref{eq3}), including both the power system state variables $S_t^E$ and the heat system state variables $S_t^H$ as follows: \begin{equation} \label{eq1} S_{t}= \left\lbrace S_t^E,S_t^H \right\rbrace \end{equation} \begin{equation} \begin{aligned} \label{eq2} S_t^E = \left\lbrace P_{t-\Delta t}^{FC}, P_t^{CCGT}, SOC_t, P_t^{WT,a}, D_t^E, p_t \right\rbrace \end{aligned} \end{equation} \begin{equation} \label{eq3} S_t^H= \left\lbrace Q_{t-\Delta t}^{GB}, Q_{t-\Delta t}^{HP}, \bar{Q}_t^{CCGT}, D_t^Q \right\rbrace \end{equation} where at time $t$, $D_t^E$ and $D_t^Q$ represent the electricity and heat demands of the system, $P_t^{CCGT}$ represents the active power output of the CCGT, $SOC_t$ represents the state of charge of the power storage device, $P_t^{WT,a}$ represents the available wind power, $p_t$ represents the electricity price in the real-time market, $\bar{Q}_t^{CCGT}$ represents the augmented state vector of the CCGT thermal output $Q_t^{CCGT}$ (see Subsection~\ref{2.2}). Some past system decisions are also included in $S_t$, e.g., $P_{t-\Delta t}^{FC}$, $Q_{t-\Delta t}^{GB}$ and $Q_{t-\Delta t}^{HP}$, since operational ramping constraints are considered in this paper. In this context, the feasible output of the units at $t$ is constrained by the previous states. The decision variables at time $t$ include: the output of all dispatchable generators, for example $P_t^{FC}$, $Q_t^{GB}$ and $Q_t^{HP}$; the natural gas input flow of the CCGT $g_t^{CCGT}$; the charge and discharge states $u_t^{c}$, $u_t^{d}$ and powers $P_t^c$, $P_t^d$; the active power exchanged by the MG with the upper-level grid $P_t^{Grid}$; and the curtailments of WT power $P_t^{wcur}$ and of the loads $P_t^{cur}$, $Q_t^{cur}$. This paper only focuses on the active power balance, since the MG system runs in the grid-connected mode, which ensures the stability of node voltages and phase angles. Hence, the decision vector $x_t$ is described in Equation~(\ref{eq4}) as follows: \begin{equation} \begin{aligned} \label{eq4} x_t = \{ & P_t^{FC}, g_t^{CCGT}, P_t^{Grid}, P_t^c, P_t^d, u_t^{c}, u_t^{d}, \\ & P_t^{wcur}, P_t^{cur}, Q_t^{cur}, Q_t^{GB}, Q_t^{HP} \} \end{aligned} \end{equation} The exogenous information represents the stochastic factors in the system \cite{bib9}. In this paper, the exogenous information vector $W_t$ consists of the day-ahead forecast errors of the wind power generation $\hat P_t^{WT}$, the real-time electricity price $\hat p_t$ and the demands $\hat D_t^E$, $\hat D_t^Q$. $W_t$ is given by Equation~(\ref{eq5}) as follows: \begin{equation} \label{eq5} W_t = \{ \hat P_t^{WT}, \hat D_t^E, \hat p_t, \hat D_t^Q\} \end{equation} In the time sequence, the exogenous information $W_t$ arrives after the previous time step $t-\Delta t$ and before the current decision making at time $t$. Therefore, the decision process evolves as Equation~(\ref{eq6}).
\begin{equation} \label{eq6} MG_t = \{ S_0, x_0, W_{\Delta t}, \ldots, S_{t-\Delta t}, x_{t-\Delta t}, W_t, S_t\} \end{equation} According to $S_t$, $x_t$ and $W_{t+\Delta t}$, the state transition function $S^M(S_t, x_t, W_{t+\Delta t})$ is determined by the following equations: \begin{equation} \label{eq7} S_{t+\Delta t}^E(1) = P_t^{FC} \end{equation} \begin{equation} \label{eq8} S_{t+\Delta t}^E(2) = a_0 + b_0\cdot g_t^{CCGT} \end{equation} \begin{equation} \label{eq9} S_{t+\Delta t}^E(3) = S_t^E(3)+(P_t^c\eta^c-\frac{P_t^d}{\eta^d})\cdot\Delta t \end{equation} \begin{equation} \label{eq10} S_{t+\Delta t}^E(k)= P_{t+\Delta t}^F(k-3)+W_{t+\Delta t}(k-3),k\in\{4,5,6\} \end{equation} \begin{equation} \label{eq11} S_{t+\Delta t}^H(1) = Q_t^{GB} ,\quad S_{t+\Delta t}^H(2)= Q_t^{HP} \end{equation} \begin{equation} \label{eq12} S_{t+\Delta t}^H(3) = \boldsymbol A\cdot \bar Q_t^{CCGT} + \boldsymbol B\cdot g_t^{CCGT} \end{equation} \begin{equation} \label{eq14} S_{t+\Delta t}^H(4) = P_{t+\Delta t}^F(4)+W_{t+\Delta t}(4) \end{equation} where $P_{t+\Delta t}^F=\{ P_{t+\Delta t}^{WT,F}, D_{t+\Delta t}^{E,F}, p_{t+\Delta t}^F, D_{t+\Delta t}^{Q,F} \}$ represents the day-ahead forecasts of the exogenous information. Most units are modeled based on the popular energy hub model~\cite{bib10}, while the state transition function of the CCGT is reformulated from the identified ARMA model~(\ref{eq17}), which describes the dynamic process of the CCGT. $\bar Q_t^{CCGT}$ represents the augmented states of the CCGT, and $\boldsymbol A$, $\boldsymbol B$ represent the corresponding coefficient matrices, respectively. The objective function $V_t^{*}(\cdot)$ is defined to minimize the total operation cost of the MG over the finite horizon $\mathcal{T}$. In time period $t$, the operation cost~(\ref{eq15}) is denoted by $C_t(\cdot)$, including the fuel and operation cost $C_t^{f}(\cdot)$, the cost of trading with the grid $C_t^{tr}(\cdot)$, and the penalties on curtailment $C_t^{cur}(\cdot)$. Following Bellman's optimality principle, we define the optimal value function based on the state vector $S_{t}$, decision vector $x_t$ and exogenous information vector $W_t$ as follows: \begin{equation} \label{eq15} C_t(S_t,x_t)= C_t^{f}(S_t,x_t)+C_t^{tr}(S_t,x_t)+C_t^{cur}(S_t,x_t) \end{equation} \begin{equation} \label{eq16} \begin{aligned} V_t^{*} & = \mathop{\min}_{x_t\in\mathcal{X}_t} \mathbb{E}\{\sum_{t=\Delta t}^T C_t(S_t, x_t) \} \\ & = \mathop{\min}_{x_t\in\mathcal{X}_t} (C_t(S_t, x_t)+\mathbb{E}[V_{t+\Delta t}(S_{t+\Delta t})|S_t,x_t]) \end{aligned} \end{equation} where $\mathcal{X}_t$ is the set of feasible decisions, and $\mathbb{E}(\cdot)$ is the conditional expectation. \subsection{Dynamic Process of CCGT}\label{2.2} The CCGT plant in the microgrid consists of the gas turbine, heat recovery system, steam turbine and corresponding controllers, etc. Obviously, the thermal power response of the CCGT is slower than that of the electric power due to the complex transient flows in the system. The consideration of this dynamic process makes our optimization work more reliable and distinct from the existing energy dispatch strategies for MGs. Some related work \cite{bib15} proposed an ARMA identification model considering the different response times of the CCGT plant. This paper transforms the ARMA model into high-order difference constraints, as shown in Equation~(\ref{eq17}), which are then integrated into the energy dispatch optimization model.
\begin{equation} \label{eq17} Q^{CCGT}(k)\!=\!\sum_{m=1}^4 a_mQ^{CCGT}(k-m)+b_mg^{CCGT}(k-m-3) \end{equation} where $a_m$, $b_m$ are parameters estimated by means of a system identification technique. $Q^{CCGT}(i)$ represents the heat output of the CCGT at sampling point $i$, $g^{CCGT}(j)$ represents the natural gas flow input of the CCGT at sampling point $j$. The sampling interval is $50$ s; thus, there are 18 sampling points within one time period $t$. To make the decision process Markovian, which is the prerequisite for applying DP, we reformulate the state variables in Equation~(\ref{eq3}) by adopting the augmented states $\bar{Q}^{CCGT}(k)$ as follows~\cite{bib16}: \begin{equation} \label{eq18} \bar{Q}_t^{CCGT}(k) = \left[ x_1(k)\quad x_2(k)\cdots x_6(k)\quad x_7(k) \right]^{\mathrm{T}} \end{equation} \begin{small} \begin{equation} \begin{aligned} \label{eq19} \bar{Q}_t^{CCGT}(k+1) = \left[ \begin{matrix} \boldsymbol 0 & \boldsymbol I\\ 0&\boldsymbol A_1 \end{matrix} \right] \!\cdot\! \bar{Q}^{CCGT}(k)+\left[ \begin{matrix} \boldsymbol{0}^{\mathrm{T}}\quad 1 \end{matrix} \right]^{\mathrm{T}}\!\cdot\! g_t^{CCGT}(k) \end{aligned} \end{equation} \end{small} \begin{equation} \label{eq20} Q_t^{CCGT}(k) = \left[ \begin{matrix} b_4&b_3&b_2&b_1&0&0&0 \end{matrix} \right]\bar{Q}^{CCGT}(k) \end{equation} where $k=1,2,\cdots,18$ in each time period $t$, $\boldsymbol I$ is the $6 \! \times \! 6$ identity matrix, $\boldsymbol 0\!=\![0\quad0\quad0\quad0\quad0\quad0]^{\mathrm{T}}$, $\!\boldsymbol A_1=[0\quad 0\quad a_4 \quad a_3 \quad a_2 \quad a_1]$, respectively. \subsection{Constraints} In addition to the above thermodynamic constraints of the CCGT, the objective function is subject to the following constraints: \begin{equation} \begin{aligned} \label{eq21} & P_t^{FC}+P_t^{CCGT}+P_t^{Grid}+(P_t^{d}\cdot u_t^d-P_t^{c}\cdot u_t^c)\\ &+(P_t^{WT,a}-P_t^{wcur})-P_t^{HP}+P_t^{cur}=D_t^E \end{aligned} \end{equation} \begin{equation} \label{eq22} Q_t^{GB}+Q_t^{CCGT}+Q_t^{HP}+Q_t^{cur}=D_t^Q \end{equation} \begin{equation} \label{eq23} \underline {P_t^i} \leq P_t^i \leq \overline{P_t^i}, i\in\{FC,CCGT,Grid\} \end{equation} \begin{equation} \label{eq24} \underline {Q_t^j} \leq Q_t^j \leq \overline{Q_t^j}, j\in\{GB,HP,CCGT\} \end{equation} \begin{equation} \label{eq25} R_t^{i,down}\cdot\Delta t \leq P_t^i-P_{t- \Delta t}^i \leq R_t^{i,up}\cdot\Delta t \end{equation} \begin{equation} \label{eq26} R_t^{j,down}\cdot\Delta t \leq Q_t^j-Q_{t- \Delta t}^j \leq R_t^{j,up}\cdot\Delta t \end{equation} \begin{equation} \label{eq27} u_t^{c}\cdot\underline {P_t^c} \leq P_t^c \leq u_t^{c}\cdot\overline {P_t^c} \end{equation} \begin{equation} \label{eq28} u_t^{d}\cdot\underline {P_t^d} \leq P_t^d \leq u_t^{d}\cdot\overline {P_t^d} \end{equation} \begin{equation} \label{eq29} u_t^{c}+u_t^{d}\leq1, u_t^{c},u_t^{d}\in\{0,1\} \end{equation} \begin{equation} \label{eq30} \underline{SOC}\leq SOC_t \leq \overline{SOC} \end{equation} \begin{equation} \label{eq31} 0 \leq P_t^{wcur}\leq P_t^{WT,a} \end{equation} \begin{equation} \label{eq32} 0 \leq P_t^{cur} \leq D_t^E, 0 \leq Q_t^{cur} \leq D_t^Q \end{equation} where Equations~(\ref{eq21}) and (\ref{eq22}) are the power and heat balance constraints of the MG, respectively. The power generated from the dispatchable DGs and traded with the grid are limited by their lower and upper boundaries $\underline {P_t^i}$, $\overline{P_t^i}$, $\underline {Q_t^j}$, $\overline{Q_t^j}$, as indicated by Equations~(\ref{eq23})-(\ref{eq24}).
Note that if $P_t^{Grid}$ is positive, the MG purchases electricity from the grid; otherwise, the MG sells surplus energy to the grid. The ramping rates of the DGs are limited by Equations~(\ref{eq25})-(\ref{eq26}). The constraints of the energy storage device are shown in Equations~(\ref{eq27})-(\ref{eq30}), where $u_t^{c}$ and $u_t^{d}$ are binary variables. The curtailment constraints for renewable power and demands are shown in Equations~(\ref{eq31})-(\ref{eq32}), respectively. All the aforementioned constraints in Equations~(\ref{eq18})-(\ref{eq32}) should be satisfied for all $t\in\mathcal{T}$. \section{APPROXIMATE DYNAMIC PROGRAMMING SOLUTION} In the MDP framework, the multi-period decision problem can be solved recursively by dynamic programming. However, DP solves Bellman's equation backward through time and explores every possible state at every time period, and therefore suffers from the curses of dimensionality for large state and action spaces. To solve this problem, an improved alternative, ADP, is developed in this section. According to \cite{bib12}, ADP based on value function approximation (VFA) has been applied to obtain a near-optimal policy. By approximating the value function around post-decision state variables $S_t^x$, the expectation form in Equation~(\ref{eq16}) is rephrased and Bellman's equation is reformulated as a deterministic minimization problem as follows: \begin{equation} \label{eq34} V_t(S_t)=\mathop{\min}_{x_t\in\mathcal{X}_t} (C_t(S_t, x_t)+\bar{V}_t^x(S_t^x)) \end{equation} where $S_t^x$ is the state after the decision $x_t$ has been made but before the new exogenous information $W_{t+\Delta t}$ has arrived; $\bar{V}_t^x(S_t^x)$ is the VFA around $S_t^x$. Based on Equation~(\ref{eq34}), ADP is developed to solve the MDP problem forward in time at each time step. It is worth noting that the computation of the value function $V_t^x(S_t^x)=\mathbb{E}[V_{t+\Delta t}(S_{t+\Delta t})|S_t,x_t] $ is time-consuming and intractable. Therefore, a proper approximation $\bar{V}_t^x(S_t^x)$ to the optimal value function $V_t^x(S_t^x)$ is desired to guarantee a near-optimal policy $x_t^{*}$ based on the current state information of the system. With this analysis, this paper proposes a piecewise linear function based ADP, which learns the slopes of the optimal value function at the heat output state $Q_t^{CCGT,x}$. \subsection{Piecewise Linear Function Approximation} The approximate value function quantifies the long-term influence of the current decision $x_t$. In this paper, a convex piecewise linear function (PLF) is used to estimate the value of the heat output of the CCGT according to \cite{bib13}, as presented by Equation~(\ref{eq35}). \begin{equation} \label{eq35} \bar{V}_t^x(S_t^x)=\bar{V}_t^x(Q_t^{CCGT,x})=\sum_{a=1}^{N_t} d_{t,a}r_{t,a}, a\in\{1,\cdots,N_t\} \end{equation} where the slopes $d_{t,a}$ should be monotonically non-decreasing, i.e., $d_{t,a}\leq d_{t,a+1}$. Actually, preserving convexity keeps the optimization problem a linear program, which helps us handle the high-dimensional state space and accelerates convergence. Figure~\ref{PLF_ADP} illustrates the exact optimal value function and the conducted approximation.
\begin{figure} \centering \includegraphics[height=5cm]{PLF_ADP.png} \caption{Optimal and approximate value function} \label{PLF_ADP} \end{figure} In time period $t$, the post-decision state $Q_t^{CCGT,x}$ equals the heat output of the CCGT at the last sampling point of this period, which can be calculated from the augmented state Equations~(\ref{eq18})-(\ref{eq20}). The range of the post-decision state $Q_t^{CCGT,x}$ is then divided into $N_t$ segments of equal width, thus: \begin{equation} \label{eq36} Q_t^{CCGT,x} = Q_t^{CCGT}(18) \end{equation} \begin{equation} \label{eq37} 0\leq r_{t,a}\leq (\overline{Q_t^{CCGT}}-\underline {Q_t^{CCGT}})/N_t \end{equation} \par Substituting~(\ref{eq35}) into the approximated Bellman's equation~(\ref{eq34}), the near-optimal solution at time $t$ can be obtained by solving a deterministic optimization problem as follows: \begin{equation} \label{eq38} x_t^{*} = \arg\mathop{\min}_{x_t\in\mathcal{X}_t,r_{t,a}\in\mathcal{R}_t} (C_t(S_t, x_t)+\sum_{a=1}^{N_t} d_{t,a}r_{t,a}) \end{equation} where $\mathcal{R}_t$ is limited by Equations~(\ref{eq36}) and~(\ref{eq37}). Note that each time period is approximated by an independent PLF. \subsection{The Updating Process of PLF-ADP} In order to make the decisions as close to optimal as possible, the slopes of each segment in each time period should be updated iteratively until convergence. In this paper, we introduce the superscript $n$ to represent the value of a variable in the $n$th iteration. Therefore, $\bar{V}_t^{x,n-1}$ represents the approximate value function obtained in the $(n-1)$th iteration, which can be utilized to make decisions in the $n$th iteration as follows: \begin{equation} \begin{aligned} \label{eq39} V_t(S_t^n)&=\mathop{\min}_{x_t\in\mathcal{X}_t} (C_t(S_t^n, x_t^n)+\bar{V}_t^{x,n-1}(S_t^{x,n}))\\ &=\mathop{\min}_{x_t\in\mathcal{X}_t} (C_t(S_t^n, x_t^n)+\sum_{a=1}^{N_t} d_{t,a}^{n-1}r_{t,a}^n) \end{aligned} \end{equation} To update the slopes of each segment, a sample observation of the marginal value $\hat{d}_{t-\Delta t,a}^n(Q_{t-\Delta t}^{CCGT,x,n})$ is needed, as indicated by Equation~(\ref{eq40}). \begin{equation} \begin{aligned} \label{eq40} &\hat{d}_{t-\Delta t,a}^n(Q_{t-\Delta t}^{CCGT,x,n})=\hat{d}_{t,a}^n(Q_t^{CCGT,n})\\ &=V_t^{*}(Q_t^{CCGT,n})-V_t^{*}(Q_t^{CCGT,n}-\rho) \end{aligned} \end{equation} Then the slopes of $\bar{V}_{t-\Delta t}^{x,n}(Q_{t-\Delta t}^{CCGT,x,n})$ can be updated as follows: \begin{equation} \begin{aligned} \label{eq41} &d_{t-\Delta t,a}^n(Q_{t-\Delta t}^{CCGT,x,n})=\alpha^{n-1}\hat{d}_{t,a}^n(Q_t^{CCGT,n})\\ &\qquad\qquad+(1-\alpha^{n-1})d_{t-\Delta t,a}^{n-1}(Q_{t-\Delta t}^{CCGT,x,n}) \end{aligned} \end{equation} where $\alpha^{n-1}$ is the stepsize that weights the new information against the existing knowledge about the state value. There are several methods to decide the stepsizes, such as deterministic and stochastic stepsizes. In this work, a generalized harmonic stepsize rule is adopted to improve the rate of convergence. Note that Equation~(\ref{eq41}) only updates the slope for the $a$th segment of $\bar{V}_{t-\Delta t}^{x,n}(Q_{t-\Delta t}^{CCGT,x,n})$. Besides, we apply the SPAR algorithm in \cite{bib9} to ensure that the slopes remain monotonically non-decreasing after the update.
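As an illustration of the VFA machinery described above, a minimal sketch of the PLF evaluation~(\ref{eq35}) and of the slope update~(\ref{eq41}) is given below (Python). The segment slopes, the number of segments and the stepsize constant are illustrative only, and the monotonicity repair shown here is a simple clipping projection rather than the full SPAR algorithm of \cite{bib9}.
\begin{verbatim}
import numpy as np

def plf_value(q_post, slopes, q_min, q_max):
    # convex piecewise linear VFA of Eq. (35): equal-width segments,
    # non-decreasing slopes, evaluated at the post-decision state q_post
    n = len(slopes)
    width = (q_max - q_min) / n
    r = np.clip(q_post - q_min - np.arange(n) * width, 0.0, width)
    return float(slopes @ r)

def update_slope(slopes, seg, d_hat, n_iter, a0=20.0):
    # smooth the sampled marginal value d_hat into segment seg (Eq. (41))
    # with a generalized harmonic stepsize, then restore the monotone
    # ordering by clipping neighbouring slopes (SPAR pools them instead)
    alpha = a0 / (a0 + n_iter - 1)
    s = slopes.copy()
    s[seg] = alpha * d_hat + (1.0 - alpha) * s[seg]
    s[:seg] = np.minimum(s[:seg], s[seg])
    s[seg + 1:] = np.maximum(s[seg + 1:], s[seg])
    return s

slopes = np.array([-40.0, -25.0, -10.0, 5.0, 20.0])   # illustrative, N_t = 5
print(plf_value(32.0, slopes, q_min=15.0, q_max=50.0))
print(update_slope(slopes, seg=2, d_hat=8.0, n_iter=3))
\end{verbatim}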
\begin{table}[!t] \renewcommand\arraystretch{1} \centering \caption{Parameters of Generators} \label{parameter1} \begin{tabular}{c|c|c|c|c} \hhline \multirow{2}*{Unit} & $P_{min}$ & $P_{max}$ & Ramp Rate & CC\\ ~ & (MW) & (MW) & (MW/h) & (\$/MWh) \\ \hline FC & 0.8 & 7 & 7 & 65\\ \hline CCGT & 6 & 43 & 38 & 92 \\ \hline SOC & -3 & 3 & - & - \\ \hline WT & 0 & 3.6 & - & - \\ \hline Grid & -6 & 6 & 6 & $p_t$\\ \hhline \end{tabular} \end{table} \begin{table}[!t] \centering \caption{Parameters of the Heat Generators} \label{parameter2} \begin{tabular}{c|c|c|c|c} \hhline \multirow{2}*{Unit} & $Q_{min}$ & $Q_{max}$ & Ramp Rate & CC\\ ~ & (MW) & (MW) & (MW/min) & (\$/MWh) \\ \hline GB & 1 & 15 & 3 & 300\\ \hline HP & 0 & 5 & 5 & - \\ \hline CCGT & 15 & 50 & 0.5 & - \\ \hhline \end{tabular} \end{table} \begin{table}[!t] \renewcommand\arraystretch{1} \centering \caption{Parameters of CCGT} \label{parameter3} \begin{tabular}{c|c|c|c|c} \hhline \multirow{4}{*}{Parameters} & $a_1$ & $a_2$ & $a_3$ & $a_4$ \\ \cline{2-5} ~ & 1.6301 & -0.6292 & -0.3266 & 0.2570\\ \cline{2-5} & $b_1$ & $b_2$ & $b_3$ & $b_4$ \\ \cline{2-5} ~ & 0.2087 & 0.06311 & 0.3656 & 0.4031 \\ \hhline \end{tabular} \end{table} \section{Experiments} In this section, the significance of considering the thermodynamic process of the CCGT and the performance of the proposed PLF-ADP algorithm are validated by numerical experiments on an integrated heat and power microgrid system, as shown in Figure~\ref{MG}. The MDP model and associated constraints of the microgrid are available in Section 2. The parameters of the microgrid are partially shown in Tables~\ref{parameter1}-\ref{parameter3}, where CC represents the cost coefficients of DGs in the optimization problem. The initial energy stored in the device is set to 7.5 MWh; meanwhile, the capacity limits and cycle efficiency of the storage device are $\underline{SOC}=1.5$ MWh, $\overline{SOC}=15$ MWh and $\eta_t^c=\eta_t^d=0.9$. The penalties of curtailments $P_t^{wcur}$, $P_t^{cur}$ and $Q_t^{cur}$ are set to 200 \$/MWh, 150 \$/MWh and 350 \$/MWh, respectively. \begin{figure}[t] \centering \includegraphics[width=8cm]{Demand.png} \caption{The Prediction of WT and Demands} \label{Demand} \end{figure} The day-ahead predicted power demand, heat demand and wind power are shown in Figure~\ref{Demand}. The wind power data in this paper come from a real-world WT system in Turkey. The day-ahead prediction of the market electricity price follows a tiered pricing scheme. All the experiments are conducted with Python on an Intel Core i5 2.80GHz Windows-based PC with 8GB RAM. \begin{figure}[t] \centering \includegraphics[width=8cm]{Stacked_P_++.png} \caption{Day-ahead power dispatch based on MILP} \label{MILP_Power} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=8cm]{Stabel_Dynamic_Contrast.png} \caption{The Heat Output Curves of CCGT} \label{Stabel_Dynamic_Contrast} \end{figure} Firstly, the classical mixed integer linear programming (MILP) algorithm is implemented to obtain the operation strategy for the MG with the thermodynamic process of the CCGT. The augmented-state model of $\bar{Q}_t^{CCGT}$ is used to constrain the feasible region of the input $g_t^{CCGT}$. Thus, the dynamic operation curve of the heat output $Q_t^{CCGT}(k)$ is recorded every 50 seconds, i.e., there are in total 1728 sample points in 24 h. The experimental results are shown in Figures~\ref{MILP_Power}-\ref{Stabel_Dynamic_Contrast}.
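To illustrate how the augmented-state model~(\ref{eq18})-(\ref{eq20}) produces the 50-second samples of the heat output mentioned above, a minimal sketch is given below (Python). It uses the identified parameters of Table~\ref{parameter3}; the constant gas input and the zero initial state are illustrative only.
\begin{verbatim}
import numpy as np

# identified ARMA parameters of the CCGT thermal response (Table III)
a = [1.6301, -0.6292, -0.3266, 0.2570]          # a1..a4
b = [0.2087, 0.06311, 0.3656, 0.4031]           # b1..b4

# companion-form matrices of Eqs. (18)-(20):
#   x(k+1) = A x(k) + B g(k),   Q(k) = C x(k)
A = np.zeros((7, 7))
A[:6, 1:] = np.eye(6)                           # upper block [0  I]
A[6, 3:] = [a[3], a[2], a[1], a[0]]             # last row    [0  A1]
B = np.zeros((7, 1)); B[6, 0] = 1.0
C = np.array([[b[3], b[2], b[1], b[0], 0.0, 0.0, 0.0]])

# step response over one 15-minute dispatch period (18 samples of 50 s)
x = np.zeros((7, 1))
g_in = 1.0                                      # illustrative constant gas input
for k in range(18):
    x = A @ x + B * g_in
    print(k + 1, (C @ x).item())                # heat output at each sampling point
\end{verbatim}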
It is obvious that the fluctuating electricity and heat demands are mainly served by the CCGT and the grid, while the potential thermal-electric coupling makes the auxiliary units necessary to satisfy Equations~(\ref{eq21})-(\ref{eq22}). After midnight (0:00-6:00), the market electricity price is quite low and the MG accordingly increases its power purchase from the grid. Meanwhile, the energy storage device discharges to reduce the cost. In time periods 40-60 (10:00-15:00), the demands gradually rise and the market price is relatively high, so the MG sells as much power to the grid as possible while still meeting the demands. Simultaneously, the power storage device is charged in advance to meet the load demand and reduce the load curtailment during the load peak hours (time periods 73-80). The gas boiler only generates heat in time periods 46-52 and 82-86, during which the heat output of the CCGT almost reaches its upper limit. Figure~\ref{Stabel_Dynamic_Contrast} shows the heat output curves of the CCGT based on the energy hub model \cite{bib14} and the identification model, respectively; the former only considers the stable transition state of the CCGT. The two curves are not exactly the same: the curve with the dynamic process is smoother and more practical for the CCGT, since the stable curve may conflict with the ramping constraints when the demands fluctuate quickly, which demonstrates the significance of considering the thermodynamic process of the CCGT. \begin{figure}[t] \centering \includegraphics[width=8cm]{ADP_Power.png} \caption{Intra-day power dispatch based on ADP} \label{ADP_Power} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=8cm]{ADP_CCGT.png} \caption{Heat output of CCGT based on ADP} \label{ADP_CCGT} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=8cm]{ADP_Iteration.png} \caption{The convergence curve of ADP} \label{ADP_Iteration} \end{figure} Second, the performance of the proposed PLF-ADP algorithm is presented in Figures~\ref{ADP_Power}-\ref{ADP_Iteration}. The generation output of the CCGT and the power exchange between the MG and the grid are shown in Figure~\ref{ADP_Power} and Figure~\ref{ADP_CCGT}. The MG starts selling electricity to the grid as early as time period 24. The convergence process of the ADP is depicted in Figure~\ref{ADP_Iteration}, where the ADP converges in less than 40 iterations. To demonstrate the effectiveness, a myopic policy and model predictive control (MPC) are used as baselines for comparison. The experimental results show that the ADP solution performs better than the myopic policy and the MPC algorithm, achieving a 5\% cost reduction, although its computation time is longer due to the iterative process. In summary, based on the augmented-state value approximation, the proposed PLF-ADP algorithm is effective for the economic dispatch of the integrated microgrid. \section{CONCLUSIONS} In this paper, we propose a novel ADP algorithm based on a Markov decision process for the economic dispatch problem of a real-world microgrid that contains both heat and power distributed generators. Specifically, we integrate the CCGT thermodynamic process into the approximate dynamic programming with augmented states. In the experimental section, we validate the effectiveness of the proposed algorithm with comparisons to conventional optimization strategies. However, there remain some limitations in our work.
Building on the present work, we intend to further improve both the performance and the efficiency of the algorithm, and to better handle the uncertainty encountered in real applications, so as to make the proposed ADP more feasible and more widely applicable for automatic economic dispatch. \balance
\section{Introduction} The existence of picture number is a central characteristic of the RNS formalism of superstring theory~\cite{Friedan:1985ge}. Picture number complicates the construction of RNS string field theories\footnote{Various steps towards the construction of superstring field theory can be found in~\cite{Witten:1986cc,Witten:1986qs,Wendt:1987zh,Preitschopf:1989fc, Arefeva:1989cm,Arefeva:1989cp,Berkovits:1995ab,Berkovits:2000hf, Berkovits:2001im,Arefeva:2002mb,Michishita:2004by,Berkovits:2005bt, Berkovits:2009gi,Kroyter:2009zj,Kroyter:2009zi,Kroyter:2009bg}. A review of recent progress in string field theory, whose section 8 includes an introduction to superstring field theory is~\cite{Fuchs:2008cc}.}. The fact that the pictures of the NS fields are integer, while those of the Ramond fields are half-integer implies that one should either put a relative picture changing operator between the two sectors, as is done in the modified theory~\cite{Preitschopf:1989fc}, or use two (or more) Ramond fields with a constraint, as in the non-polynomial version of the theory~\cite{Michishita:2004by}. Both these resolutions are problematic. The former suffers from collisions of picture changing operators~\cite{Wendt:1987zh,Kroyter:2009zi}\footnote{Witten's theory also does not seem to support some desired classical solutions~\cite{DeSmet:2000je}.}, while the constraint of the latter cannot be derived from a covariant action. We review the current status of RNS superstring field theories, focusing on the cubic theories, in section~\ref{sec:SSFTintro}. In this work we propose a new open RNS string field theory, which does not suffer from these problems. Our starting point in defining this action relies on two observations made in~\cite{Berkovits:2001us}. The first is that the cohomology of $Q$ at any given picture and ghost numbers in the small Hilbert space is the same as the cohomology of $Q-\eta_0$ at the same picture and ghost numbers in the large Hilbert space. The second observation is that relaxing the constraint of a given picture number does not change the cohomology, unlike the case of $Q$ in the small Hilbert space, where considering two picture numbers at once results in a doubling of the cohomology. We explain these issues in section~\ref{sec:QetaPXi}. The theory itself is introduced in section~\ref{sec:SSFT}. The action is cubic and is defined in the large Hilbert space, \begin{equation} \label{action} S=-\oint \cO \Big(\frac{1}{2}\Psi \tQ\Psi+\frac{1}{3}\Psi^3\Big). \end{equation} Here, $\Psi$ is the string field and we abbreviated \begin{equation} \label{Qtilde} \tQ\equiv Q-\eta_0\,. \end{equation} The string fields are multiplied, as always, using Witten's star product~\cite{Witten:1986cc}, which we keep implicit. The integration symbol appearing in~(\ref{action}) is used throughout this paper to represent the CFT expectation value in the large Hilbert space, while the standard integration symbol represents the expectation value in the small Hilbert space. The mid-point insertion $\cO$ is defined using a superposition of all multi-picture-changing operators. We introduce these operators and discuss their properties. In particular, it is proven that they can always be represented by primary fields. Another novel property of the action~(\ref{action}), which we present in section~\ref{sec:Ramond}, is that the NS and Ramond sector fields are simply unified into a single string field, \begin{equation} \Psi\rightarrow \Psi+\al\,. 
\end{equation} Now, $\Psi$ is the NS string field, which is formed by a linear combination of string fields with arbitrary integer picture number and $\al$ is the Ramond string field, which is formed by a linear combination of string fields with half-integer picture numbers. Expanding the action~(\ref{action}) in terms of the new $\Psi$ and $\al$ gives, \begin{equation} S= -\oint \cO\Big(\frac{1}{2}\Psi \tQ \Psi+\frac{1}{3}\Psi^3 +\frac{1}{2}\al \tQ \al+\Psi \al^2\Big). \end{equation} This is very similar in form to the usual cubic RNS superstring field theory action~\cite{Preitschopf:1989fc}. The most striking difference is that the same insertion $\cO$ multiplies all terms in the action. The consistent inclusion of the Ramond sector enables the covariant description of all sectors of general D-brane systems in the new formalism. The field--antifield (BV) formulation of the theory is presented in section~\ref{sec:BV}, where we also comment on gauge fixing. The supersymmetry properties of the theory are discussed in section~\ref{sec:SUSY}. Conclusions and some open problems are presented in section~\ref{sec:conc}. Note added: While this work was nearing completion, I learned of the work~\cite{SchnablGrassi}, which has some similarities to our construction. It would be interesting to understand the interrelation, if any, between the two theories. \newpage \section{Cubic superstring field theory} \label{sec:SSFTintro} In this section we recall the construction of the existing cubic RNS string field theories and their problems. The first proposal for such a theory was presented by Witten~\cite{Witten:1986qs}, following his bosonic theory~\cite{Witten:1986cc}. While the structure in the two cases is almost the same, there was one essential new feature that had to be addressed, namely the picture number. Since in the small Hilbert space, where the theory was constructed, CFT expectation values are non-zero for picture number $-2$, it seemed quite sensible to define the NS string field to carry the ``natural'' $-1$ picture, in order to get a standard form for the kinetic term in the action. The interaction term, on the other hand, had to be appended with a $+1$ picture number, in order to allow for a non-trivial result. This was achieved by an explicit insertion of the picture-changing operator $X$ in the action. The only consistent way of inserting the picture changing operator was as a mid-point insertion. Any other choice would have destroyed the associativity of the star product and the gauge invariance of the action. However, it was soon realized that the facts that the mid-point is invariant under the star product and that the OPE of $X$ with itself is singular imply the emergence of singularities that ruin the consistency of the theory~\cite{Wendt:1987zh}. The resolution of~\cite{Preitschopf:1989fc,Arefeva:1989cp} (henceforth, ``the modified theory'') was to change the picture number of the NS string field to zero and insert an overall factor of $Y_{-2}$ in front of the action. The action of the NS sector of this construction is given by, \begin{equation} \label{NSaction} S_{NS}=-\int Y_{-2}\big(\frac{1}{2}\Psi Q\Psi+\frac{1}{3}\Psi^3\big)\,. \end{equation} Now, the gauge transformation does not contain any picture changing operators and in perturbation theory the factors of $Y_{-2}$ come (at least for trees) next to factors of $\big(Y_{-2}\big)^{-1}$ from the propagator. Therefore, singularities do not emerge.
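Explicitly, the gauge invariance of the NS action~(\ref{NSaction}) is the standard cubic one, free of picture changing operators; it is just the NS part of the full transformation~(\ref{cubGauge}) written below, \begin{equation} \delta \Psi=Q\La+[\Psi,\La]\,. \end{equation}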
While this solved the problem with picture changing collisions, other criticism on this theory remained. A common claim was that the equation of motion derived from the action is not the one needed, due to the non-trivial kernel of $Y_{-2}$, \begin{equation} Y_{-2}(Q\Psi+\Psi^2)=0 \quad \stackrel{?}{\Longleftrightarrow} \quad Q\Psi+\Psi^2=0\,. \end{equation} In fact, not only the kernel of $Y_{-2}$, but also the space of operators whose OPE with $Y_{-2}$ is singular, is potentially problematic. However, these problems may arise only for string fields having these types of operators as local insertions at the string mid-point. String fields of this sort suffer from problems regardless of the existence of the mid-point insertion in the action and (at least) most of them should be discarded from the construction of the space of string fields\footnote{See, for example, section 2 of~\cite{Kroyter:2009zj} for further related discussion. Also note that a cubic theory, which avoids insertions of operators with a non-trivial kernel, exists and it seems that it is classically equivalent to the modified theory~\cite{Berkovits:2009gi,Kroyter:2009zj}. However, this formalism does not solve the problems with the Ramond sector. Moreover, the space of operators whose OPE with the insertion of this theory is singular, is non-trivial. Thus, even if one insists, in contrast to what we suggest here, that mid-point insertions should be allowed, this theory is still in no way better than the modified one.}. While the construction of a space of string fields is generally lacking, it does not seem that it would be more complicated in the case of the modified theory. We therefore believe that this issue does not form a ground for discarding the theory. Another issue of this theory is the abundance of possible $Y_{-2}$ insertions. However, it was shown in~\cite{Kroyter:2009bg} that theories based on different choices of $Y_{-2}$'s are classically equivalent. Hence, this is also not much of an obstruction to the theory. Moreover, cubic superstring field theory is very successful in many ways. In particular, analytical solutions describing the tachyon vacuum and marginal deformations are known in the theory~\cite{Erler:2007xt,Aref'eva:2008ad,Fuchs:2008zx}. There is, however, one serious problem with this formalism related to the inclusion of the Ramond sector. The Ramond sector carries half-integer pictures. Hence, it is impossible (with a Ramond field of a well defined picture) to write an interacting action with a common $Y_{-2}$ factor. The proposal of~\cite{Preitschopf:1989fc} was to generalize the action~(\ref{NSaction}) to, \begin{equation} \label{PTYaction} S=-\int \Big(Y_{-2}\big(\frac{1}{2}\Psi Q\Psi+\frac{1}{3}\Psi^3\big)+ Y\big(\frac{1}{2}\al Q \al+\Psi \al^2\big)\Big), \end{equation} where $\al$ is the Ramond field. That this formulation is problematic can be observed already when deriving the equations of motion from the action~(\ref{PTYaction}). These take the form (after acting on them with picture changing operators), \begin{subequations} \label{CubEOM} \begin{eqnarray} \label{eomCubA} Q \Psi + \Psi^2 + X \al^2 &=& 0\,,\\ \label{eomCubAl} Q \al + [\Psi,\al] &=& 0\,. \end{eqnarray} \end{subequations} The first of these equations implies that mid-point insertions are to be allowed for at least one of the string fields $\Psi$ and $\al$, since otherwise the Ramond sector becomes trivial. 
It is not clear what sort of an operator could have been inserted on $\al$, in order to produce a $Y$ (that is needed to cancel the $X$) on $\al^2$. Hence, we assume that the insertion is on $\Psi$. We cannot assign $X$ to $\Psi$, since then the term $\Psi^2$ would be singular. Hence, we decompose $\Psi$ as, \begin{equation} \Psi=\Psi_0+\xi \Psi_1\,, \end{equation} with $\Psi_0$ being an odd component and $\Psi_1$ being an even one~\cite{Kroyter:2009zi}. This still does not solve the problem, since singular terms of the form $\xi X$ from the $\Psi Q\Psi$ term appear now in the action. One may claim that the $X$ should be canceled against ``half the $Y_{-2}$'' before it is multiplied by the $\xi$. This means, however, that all string fields in the theory should be regularized somehow, already at the classical level. In fact, the theory itself should be regularized. A similar observation can be made by considering the gauge transformation of this theory, \begin{subequations} \label{cubGauge} \begin{eqnarray} \delta \Psi &=& Q\La+[\Psi,\La]+X[\al,\chi]\,,\\ \delta \al &=& Q\chi+[\al,\La]+[\Psi,\chi]\,, \end{eqnarray} \end{subequations} where $\La$ is the NS gauge string field and $\chi$ is the Ramond gauge string field, both of which are even string fields. While the second equation is benign, the first one explicitly shows that an $X$ mid-point insertion should be allowed for $\Psi$, as it is not clear how one could avoid it without moving the problem to $\al$. In fact, when iterated,~(\ref{cubGauge}) inevitably leads to singularities due to collisions of $X$~\cite{Kroyter:2009zi}. It might be possible to regularize the theory by moving the local insertions away from the mid-point. However, such a regularization would break the associativity of the star product and the gauge invariance of the theory. It is not clear whether those could be restored in the limit in which the regularization is removed, nor is it clear whether the singularities can be avoided in this limit. Another option would be to look for an insertion-free formulation. Such a formulation exists, namely the non-polynomial theory~\cite{Berkovits:1995ab,Michishita:2004by}. Here, however, one faces another problem. The theory is defined using two constrained Ramond fields. If one insists on a covariant formulation of the theory, the constraint cannot be derived from an action. One might try to use two Ramond fields in the description of the cubic theory, in analogy with the situation in the non-polynomial theory. While the resulting theory has some nice properties, a constraint relating the two Ramond fields should be introduced~\cite{Kroyter:2009zi}. Nonetheless, it is not understood how to derive a proper constraint from an action without introducing explicit mid-point insertions. Thus, the problems described above persist. We conclude that a more fundamental revision of the theory is needed. We derive a cubic theory with the desired properties in section~\ref{sec:SSFT}. \section{Cohomology in the large Hilbert space} \label{sec:QetaPXi} In this section we review the various representations of vertex operators in the RNS formalism. Vertex operators are important to us, since string fields are their off-shell generalizations. Different choices of representations of the vertex operators naturally lead to different string field theory formulations. While in their most familiar representation, the RNS vertex operators live in the small Hilbert space, some other useful representations use the large Hilbert space. 
The large Hilbert space consists of two copies of the small Hilbert space, with one of the copies being multiplied by $\xi_0$ \cite{Friedan:1985ge}. Within the copy without the $\xi_0$ insertion, i.e., within the small Hilbert space, the on-shell condition for a vertex operator $V$ takes the form, \begin{equation} \label{smallOS} Q V=0\,. \end{equation} The requirement of being in the small Hilbert space is enforced by, \begin{equation} \label{smallOsEta} \eta_0 V=0\,. \end{equation} To these relations one has to add the identification of states that differ by an exact element, \begin{equation} \label{cohoEquiv} V\approx V+Q\La\,, \end{equation} where $\La$ also obeys~(\ref{smallOsEta}). This is the way that the standard cohomology problem is formulated in the large Hilbert space. The cubic theories described in section~\ref{sec:SSFTintro} are off-shell extensions of this representation of the vertex operators. If one wishes to consider the second copy of the small Hilbert space, i.e., the one with the $\xi_0$ insertion, the on-shell condition for a vertex operator $V$ can be written as, \begin{equation} \label{largeOS} Q\eta_0 V=0\,. \end{equation} Here $\eta_0$ removes the $\xi_0$ insertion and the remaining equation is the same as~(\ref{smallOS}). Two solutions should be considered equivalent if they differ by a term of the form, \begin{equation} \label{XicopyEquiv} \delta V=\xi_0 Q\eta_0 \La\,. \end{equation} This relation mimics~(\ref{cohoEquiv}) for the $\xi_0$ copy of the small Hilbert space. Hence,~(\ref{largeOS}) and~(\ref{XicopyEquiv}) correctly define the equivalence classes in this space, despite the fact that they do not look like a cohomology problem. The expression~(\ref{largeOS}) can also be used for the whole large Hilbert space. To that end we have to append to the equivalence relation~(\ref{XicopyEquiv}) the equivalence to zero of the small Hilbert space. Changing the basis of equivalence generators, this can be written as, \begin{equation} \label{2LinGaugeTrans} \delta V=Q\La_Q+\eta_0 \La_\eta\,. \end{equation} There is a natural correspondence between $\xi_0$-based states at ghost and picture numbers $g$ and $p$ and small Hilbert space states at ghost number $g+1$ and picture number $p-1$. The change of quantum numbers comes from using $\eta_0$ and $\xi_0$ as the canonical isomorphism mappings. Hence, NS string fields in the ``natural'', $p=-1$, $g=1$ picture, correspond to $p=g=0$ string fields that extend the vertex operators represented by~(\ref{largeOS}) and~(\ref{2LinGaugeTrans}). While the former case leads to Witten's theory~\cite{Witten:1986qs}, which suffers from divergences, the latter naturally leads to the non-polynomial theory of Berkovits~\cite{Berkovits:1995ab}, in which these problems do not arise. One might wish at this stage to consider also the cohomology of $Q$ in the large Hilbert space. Another possibility would be to consider the cohomology of $\eta_0$, since using\footnote{Here and elsewhere, $[A,B]$ stands for the graded commutator, e.g., $[Q,\eta_0]\equiv Q\eta_0+\eta_0 Q$.} \begin{equation} \label{Qeta0} [Q,\eta_0]=0\,, \end{equation} one realizes that~(\ref{largeOS}) and~(\ref{2LinGaugeTrans}) are exactly symmetric upon interchanging $Q$ and $\eta_0$. However, in the large Hilbert space the cohomology of both operators is trivial, as we turn now to show while further illustrating the symmetry between both operators\footnote{The symmetry between $Q$ and $\eta_0$ was noted already by Berkovits and Vafa~\cite{Berkovits:1994vy}.
In the $N=4$ language $J_B$ and $\eta$ are the currents $G^+$ and $\tilde G^+$ respectively. Also, note that one can completely exchange the roles of $Q$ and $\eta_0$ and define a ``dual small Hilbert space'', by considering the $Q$-closed subspace of the large Hilbert space and studying the cohomology of $\eta_0$ in this space. This cohomology is the same as that of $Q$ in the ``ordinary small Hilbert space'', since both ($Q$'s cohomology in one space and $\eta_0$'s in the other) are relative cohomologies that are defined in the same way in the large Hilbert space: $\eta_0 V=Q V=0$, $V\approx V+Q\eta_0\La$.}. For an arbitrary derivation $d$, the triviality of its cohomology can most easily be shown if a state $A$ exists such that $dA=\One$, where $\One$ is the identity element of the algebra on which $d$ acts\footnote{Such a state is called a ``contracting homotopy''. In the context of string field theory this structure was used for proving that the cohomology around Schnabl's solution~\cite{Schnabl:2005gv} is trivial~\cite{Ellwood:2006ba} (see also~\cite{Ellwood:2001ig,Erler:2007xt,Fuchs:2008zx}).}. The proof is then straightforward. Let $V$ be a closed state, i.e., $dV=0$, then $V$ is exact. Specifically, $V=d(AV)$, since \begin{equation} d(AV)=(dA)V+(-)^{(d)(A)}AdV=\One V+0=V\,, \end{equation} where $(d)$ and $(A)$ in the exponent stand for the parities of the derivation and the state $A$. Such a state exists for $\eta_0$. In fact, a family of such states exists, namely $\xi(z)$. It is less trivial to find it, but a similar family exists for $Q$~\cite{NarganesQuijano:1988gb}, namely\footnote{It is easy to read from $P$ the part of $Q$, which has $\phi$-momentum two. However, the complete $Q$ is related to this part by a similarity transformation~\cite{Acosta:1999hi} and the generator of this transformation leaves $P$ invariant.} \begin{equation} P(z)=-c\xi\partial\xi e^{-2\phi}(z)\,. \end{equation} Hence, we write, \begin{subequations} \label{QP1EtaXi1} \begin{align} \label{QP1} Q P(z) &=1\,,\\ \label{EtaXi1} \eta_0 \xi(z) &=1\,, \end{align} \end{subequations} where $1$ here stands for $\One(z)$, the insertion of unity at $z$, which does not change the state on which it acts and is of course $z$-independent. This completes the proof of the triviality of $Q$ and $\eta_0$. It is also interesting to consider, \begin{subequations} \begin{align} \label{X} X(z)&\equiv Q \xi(z)= \big(c\partial\xi +e^\phi G_m+e^{2\phi}b\partial \eta +\partial(e^{2\phi}b \eta)\big)(z)\,,\\ Y(z)&\equiv \eta_0 P(z)= \big(c\partial\xi e^{-2\phi}\big)(z)\,. \end{align} \end{subequations} Here, $G_m$ is the superconformal matter generator, which in flat background takes the form\footnote{We use the same conventions as in~\cite{Fuchs:2008cc}.}, \begin{equation} G_m=i\psi_\mu \partial X^\mu\,. \end{equation} The expression~(\ref{X}) is, however, universal and holds regardless of the existence of a specific background. These definitions imply that all the quantum numbers of $X$ and $Y$ are trivial except for their picture numbers, which are $1$ and $-1$ respectively. In particular, they are zero weight conformal primaries. These operators are also $Q$-closed. For $X$ it follows from the fact that it is exact (in the large Hilbert space), while for $Y$ it follows from the (graded) Jacobi identity and the relations~(\ref{Qeta0}) and~(\ref{QP1}). Similarly, these operators are closed with respect to $\eta_0$. All in all we can write, \begin{equation} \label{XYQeta} QX=QY=\eta_0 X=\eta_0 Y=0\,. 
\end{equation} It is also possible to write $Y$ explicitly as an exact state in the large Hilbert space, \begin{equation} \label{YcY} Y=Q\cY\,. \end{equation} This statement is in a sense trivial, since we already proved that it is closed and any closed state can be written in the large Hilbert space as an exact state using $P$. However, $P$ has a singular OPE with $Y$, which complicates the explicit construction. The most straightforward resolution of the OPE singularities would have been the replacement of the operator product by a normal ordered product. This strategy does not work, since it leads to a vanishing result, due to the zeros at the $bc$ and $\xi\eta$ sectors. We can remedy that by defining $\cY$ as the leading regular term in the OPE, \begin{equation} \cY_0(w)\equiv \oint \frac{dz}{2\pi i} \frac{P(z)Y(w)}{z-w}\,. \end{equation} This is almost what we want. This operator is local and obeys~(\ref{YcY}). However, it has some non-trivial quantum numbers other than the needed ghost and picture numbers, meaning that, while it carries zero conformal weight, it is not a primary conformal field. Since this property will turn out to be of importance to us, we would like to suggest a conformal primary candidate for $\cY$, \begin{equation} \label{YetaPot} \cY=\frac{1}{5}\,c\,\xi\partial\xi e^{-3\phi}G_m -\xi e^{-2\phi}\,. \end{equation} Similarly, $X$ can be written, in the large Hilbert space, as a local $\eta_0$ exact state. From the discussion above it follows that, \begin{equation} \label{PYxiX} P=\xi Y\,,\qquad \xi=PX\,. \end{equation} The last two equalities in~(\ref{XYQeta}) imply that $X$ and $Y$ reside in the small Hilbert space. We can now conclude that these operators are ``picture changing operators'', namely that they define homomorphisms between the cohomologies (in the small Hilbert space) of $Q$ at picture numbers $p$ and $p\pm 1$. They are also each other's inverse in the sense of the OPE, \begin{equation} X(z)Y(0)\sim 1\,, \end{equation} as is implied by~(\ref{PYxiX}). Nevertheless, as already stated, these operators suffer from singularities in their OPE's with themselves, \begin{equation} \begin{aligned} \label{OPEsing} X(z)X(0)\sim &\frac{(\cdots)}{z^2}\,,\qquad \xi(z)X(0)\sim\frac{(\cdots)}{z^2}\,,\\ Y(z)Y(0)\sim &\frac{(\cdots)}{z^2}\,,\qquad P(z)Y(0)\sim\frac{(\cdots)}{z^2}\,. \end{aligned} \end{equation} If locality is not important, one can plug several operators at different values of $z$ and get singularity-free multi-picture changing operators in this way. Otherwise, the singular parts of the OPE's can be simply removed, since they correspond to $Q$-exact terms. With the understanding of picture changing operators one can define more ways for representing the cohomology problem in the large Hilbert space~\cite{Berkovits:2001us}. In these new representations, the physical space could reside in the small Hilbert space and not in its $\xi_0$ copy as before. Hence, the ghost and picture numbers are the same as for the $Q$ cohomology. The cohomology operator that one should use is $\tQ$ of~(\ref{Qtilde}). Let $V$ be a closed state with respect to $\tQ$, i.e., \begin{equation} \label{tQV0} \tQ V=0\,, \end{equation} and let it have a given picture number $p$. Then, since $Q V$ and $\eta_0 V$ have different picture numbers,~(\ref{tQV0}) implies that, \begin{equation} Q V=\eta_0 V=0\,, \end{equation} i.e., $V$ lives in the small Hilbert space and is closed. 
Suppose we add to $V$ a term of the form $\tQ \La$, subject to the constraint that the result still has picture number $p$. Such a $\La$ can be decomposed into picture numbers $p$ and $p+1$ plus a term $\La_{triv}$ obeying, \begin{equation} Q\La_{triv}=\eta_0\La_{triv}=0\,. \end{equation} Such a term does not contribute to $\delta V$ and so can be discarded. The requirement that $\delta V$ has picture number $p$ implies, \begin{equation} Q \La_{p+1}=0\,,\qquad \eta_0 \La_p=0\,. \end{equation} The triviality of $Q$ and $\eta_0$ in the large Hilbert space then implies, \begin{equation} \La_{p+1}=Q \La^Q_{p+1}\,,\qquad \La_p=\eta_0 \La^\eta_{p+1}\,. \end{equation} All in all we can write, \begin{equation} \delta V=\tQ \La=(Q-\eta_0)\big(Q \La^Q_{p+1}+\eta_0 \La^\eta_{p+1}\big)= Q\eta_0\big(\La^Q_{p+1}+\La^\eta_{p+1}\big)\,, \end{equation} that is, two states are identified if they differ by a state which is $Q$-exact in the small Hilbert space. Given such a variation, \begin{equation} \delta V=Q\La\,, \end{equation} with $\La$ in the small Hilbert space, one can choose \begin{equation} \La_p=\La\,,\qquad \La_{p+1}=0\,, \end{equation} and get this variation as a variation by a $\tQ$-exact term in the large Hilbert space. Hence, the cohomology of $Q$ in the small Hilbert space is the same as that of $\tQ$ in the large Hilbert space at the same picture and ghost numbers, as stated. Unlike the case of the usual cohomology problem, the cohomology does not change upon relaxing the fixed picture condition. Assume that the state $V$ carries a picture number bounded between $p_{min}$ and $p_{max}$. We write, \begin{equation} V=\sum_{p=p_{min}}^{p_{max}} V_p\,. \end{equation} Consider the gauge transformation generated by\footnote{Having string field theory in mind, we refer to a change of the representative of a given cohomology as a gauge transformation.}, \begin{equation} \label{gaugeUp} \La=\xi(z)V_{p_{min}}\,. \end{equation} It induces the variation, \begin{equation} \delta V=(Q-\eta_0)\xi(z)V_{p_{min}}=(X-\xi Q) V_{p_{min}}-V_{p_{min}} +\xi\eta_0 V_{p_{min}}\,. \end{equation} The first term has picture number $p_{min}+1$, the second term eliminates the original $V_{p_{min}}$, while the last term drops out, since it is the lowest picture component of~(\ref{tQV0}). Thus, this gauge transformation removes the lowest picture of the state. If one considers instead the gauge transformation generated by \begin{equation} \label{gaugeDown} \La=-P(z)V_{p_{max}}\,, \end{equation} one removes the highest picture component, since now \begin{equation} \delta V=(Y-P\eta_0) V_{p_{max}}-V_{p_{max}}\,. \end{equation} Using these transformations, one can reduce the picture number range of any state to a single arbitrary picture number $p$ in the range, \begin{equation} p_{min}\leq p \leq p_{max}\,. \end{equation} For this case, we have already shown that the $\tQ$ cohomology is equivalent to the $Q$ cohomology. Moreover, the equivalence to the $Q$ cohomology is independent of the choice of the final $p$. To show that, consider the next-to-last stage in the sequence of gauge transformations, where two non-trivial picture numbers $p$ and $p+1$ are left. Using the gauge transformations~(\ref{gaugeUp}) or~(\ref{gaugeDown}), we can get to either of, \begin{subequations} \begin{align} V_p+V_{p+1}& \rightarrow V_p+YV_{p+1}-P\eta_0 V_{p+1}\,,\\ V_p+V_{p+1}& \rightarrow X V_p+V_{p+1}-\xi Q V_p\,.
\end{align} \end{subequations} For the first two terms on the r.h.s. of the two equations, it is obvious that they are picture changed versions of the same state. For the last term it follows from~(\ref{PYxiX}) and \begin{equation} QV_p-\eta_0 V_{p+1}=0\,, \end{equation} which is the $p$-picture component of~(\ref{tQV0}) for this case. We can now conclude that the cohomology of $\tQ$ over the space of states whose picture number is arbitrarily bounded from both sides is canonically isomorphic to that of $Q$ at any fixed picture number. Assume now that the picture number is unbounded. By indefinitely repeating the procedure defined above, one can send the picture to arbitrarily high or low values. Hence, the state component at any given picture will eventually be zero. Explicitly, consider the multi-picture changing operators $X_n$ and $Y_{-n}$, for $n\geq 0$. These operators are the regularized versions of the powers of $X$ and $Y$, which are otherwise divergent and will be properly defined in the next section. In particular, \begin{equation} X_1=X\,,\qquad Y_{-1}=Y\,,\qquad X_0=Y_0=1\,. \end{equation} Among other properties, these operators satisfy the relations, \begin{align} \label{XnYnProp} Q X_n&=Q Y_{-n}=\eta_0 X_n=\eta_0 Y_{-n}=0\,,\\ \label{XYnYXn} X Y_{-n}&\sim Y_{-(n-1)}\,,\qquad Y X_n\sim X_{n-1}\,. \end{align} Using these operators we can show that any $\tQ$-closed state is also exact, since the following contracting homotopy operators exist, \begin{equation} \label{QtContractHomo} \La_-=\xi\sum_{n=1}^\infty Y_{-n}\,,\qquad \La_+=-P\sum_{n=1}^\infty X_n\,. \end{equation} One might mistakenly conclude that the cohomology of $\tQ$ over the space of states with unbounded picture number is trivial. However, there is always more than one way to define spaces using infinite sums of objects living in some constituent spaces. In the case at hand, we assumed that convergence in the all-picture space is defined ``point-wise'', i.e., as the union of the limits at every picture. There are certainly other ways to define limits. Let us use an analogy with the following sequence of vectors, $(1,0,...),\ (1,1,0,...),\ (1,1,1,0,...),...$ The point-wise limit of this sequence exists and equals $(1,1,1,...)$. However, its limit in the $L_1$ norm, for instance, does not exist, since the norm of the ``would-be limit'' diverges. One could look for the analogy of the above for the case of vertex operators. Any physical vertex operator can be represented in all possible pictures. Moreover, there would also be many exact terms that could be added to it. We can construct a $\mathbb Z$-sequence for any infinite sum of representatives of the physical vertex operator in the following way. We identify the location along the vector with the picture number. We choose a representation for the vertex operator at some given picture number and associate it with the vector $(...,0,0,1,0,0,...)$. Next, we identify all the exact states with the zero vector and let the operators $X$ and $Y$ shift the vectors to the left and to the right. These rules establish a unique assignment of vectors. We now want to constrain the space of vectors by imposing some norm on it. The $L_n$ norms are probably not what we are after, since it is natural to expect that the norm commutes with the operations $X$ and $Y$. Instead, consider the absolute value of the sum of all entries, provided it is well defined regardless of the summation order.
This is only a semi-norm, since there are many elements whose ``norm'' is zero, i.e., all the exact states and all the states that are represented by vectors with entries that sum up to zero. Nonetheless, the semi-norm is enough for defining the space that we need. One can, as usual, divide this space by the trivial space and obtain a genuine norm. The resulting normed space would be one-dimensional and it would correspond to the cohomology problem of this vertex. We do not know how to generalize this construction to cover the whole space of string fields, since we do not know how the relative normalization of different physical vertices should work, nor do we know what to do with non-closed, i.e., off-shell, states. This is, however, exactly the usual problem with the definition of the space of string fields, which cannot be accomplished due to a lack of a natural norm. We conclude that in this respect the democratic theory is not better, but also not worse than any other string field theory. We would also like to point out that the correct definition of the space of string fields would probably differ from the one obtained using the semi-norm presented above, even when restricted to a given physical vertex, due to non-linearities. Specifically, we see below that the gauge transformation associated with picture changing should be modified in the case of an interacting theory. We saw that depending on the exact definition of the space of string fields with unbounded picture number, $\tQ$ has a trivial cohomology, or the same cohomology as for the bounded case. Presumably it might also be possible to get other results for its cohomology. However, since we do not have a complete definition for the space of string fields, we follow the ``standard'' practice in string field theoretical research and ignore this problem, implicitly assuming that this space is somehow defined in a proper way. \section{Constructing the theory} \label{sec:SSFT} Here, we want to define an RNS string field theory, which generalizes the $\tQ$-description of vertex operators. It turns out that a single picture number, as well as any bounded range of picture numbers, is an inadequate choice. Hence, the theory will generalize the case where all picture numbers are allowed. We say that the theory is defined in the ``democratic picture'', since within this construction, all string fields, regardless of picture numbers, have equal opportunity to influence the physics\footnote{The idea that in some cases all pictures should contribute is not new~\cite{Callan:1988wz}. However, we suggest that it ``might be a feature, not a bug''.}. The construction of the theory is almost straightforward. The only subtlety is that a non-trivial insertion is required. We start this section by deriving the form of the theory in~\ref{sec:SSFTform} and devote the major part of the section to the derivation and to the study of the insertion in~\ref{sec:SSFTInsertion}. \subsection{Constructing the form of the theory} \label{sec:SSFTform} Let us start by writing the free action for the NS sector. The equation of motion we are after is \begin{equation} \label{freeEOM} \tQ\Psi=0\,, \end{equation} and the action should be invariant under the gauge transformation, \begin{equation} \label{LinSFTgauge} \delta \Psi=\tQ \La\,.
\end{equation} From the discussion of the previous section we infer that $\Psi$ lives in the large Hilbert space, has ghost number one and its picture number is either bounded, or unbounded with some (unknown) restriction on the behaviour of $\Psi$'s components as a function of the picture number. It is natural to use $\Psi \tQ \Psi$ for the construction of the free action. Since the string field lives in the large Hilbert space, we have to use the large Hilbert space CFT expectation value for the integration over the space of string fields. We can also consider some linear operation to be performed before the integration. Hence, the free action should be of the form, \begin{equation} \label{linAction} S_{free}=-\frac{1}{2}\oint\cO \big(\Psi \tQ \Psi\big)\,, \end{equation} where the action has been written with a canonical normalization despite the fact that we haven't specified $\cO$ yet. From the experience we have with other theories, we know that the equation of motion and gauge symmetry can be naturally extended to the non-linear level by writing, \begin{equation} \label{Action} S=-\oint \cO\Big( \frac{1}{2}\Psi \tQ \Psi+\frac{1}{3}\Psi^3\Big). \end{equation} The infinitesimal gauge symmetry related to this action is, \begin{equation} \label{gauge} \delta \Psi=\tQ\La+[\Psi,\La]\,, \end{equation} and its finite form is, \begin{equation} \Psi\rightarrow e^{-\La}\big(\tQ+\Psi\big)e^\La\,. \end{equation} An important consequence of the above is that if the original picture number is allowed to be non-zero, the picture number cannot be bounded and we are led to consider the space of (properly restricted) string fields with arbitrary picture number. \subsection{Constructing the mid-point insertion} \label{sec:SSFTInsertion} One might wonder whether a non-trivial $\cO$ is really needed. Let us give two arguments in favor of a non-trivial $\cO$: \begin{itemize} \item The ghost number of $\Psi \tQ \Psi$ equals three, but the integration we are using is the large Hilbert space integration, which is non-vanishing only for ghost number two string fields. Hence, the action would be identically zero without an $\cO$ insertion and it would not imply the desired equation of motion. \item The large Hilbert space integral picks up the copy of the small Hilbert space that is multiplied by $\xi_0$. Nevertheless, as we showed in the previous section, it is the small Hilbert space without this insertion that carried the physical information when $\tQ$ is used. \end{itemize} The only consistent form for $\cO$ is that of a mid-point insertion of a zero-weight conformal primary\footnote{We refer, again, to section 2 of~\cite{Kroyter:2009zj}, for discussion on this subject. Also, note that there are two ``mid-points'', due to the doubling-trick. One may hope at this stage that both $z=\pm i$ will be proven to be equivalent or that some specific superposition of the two will be forced on us by requiring reality of the action, or by some other principle.}. The remarks above suggest that this insertion should contain the $\xi$-field. Another immediate restriction on $\cO$ is that it has to commute with the kinetic operator, since otherwise the equations of motion and gauge transformations will not work out correctly, \begin{equation} \label{QcO0} \tQ\cO=0\,. 
\end{equation} Decomposing the mid-point insertion with respect to the picture number, \begin{equation} \cO=\sum_{n \in {\mathbb Z}}\cO_n\,, \end{equation} turns the relation~(\ref{QcO0}) into a recursion relation for the $\cO_n$'s, \begin{equation} \label{recursion} Q\cO_n=\eta_0\cO_{n+1}\,. \end{equation} In order to be able to use the recursion relations, an initial condition is also needed. For finding an appropriate initial condition we invoke the ``correspondence principle''. Let us fix the picture-related gauge symmetry by restricting the string field to carry zero picture number and to live in the small Hilbert space. This partial gauge fixing reduces the action to that of the modified theory, provided that we choose, \begin{equation} \cO_{-1}=\xi Y_{-2}=\cY\,, \end{equation} where $\cY$ is given by~(\ref{YetaPot}). This choice of $\cO_{-1}$ implies that all the classical solutions of the modified theory~\cite{Erler:2007xt,Aref'eva:2008ad,Fuchs:2008zx} are also solutions, with the same action and cohomology, of the new theory. Moreover, it is also possible to generalize the construction of boundary states~\cite{Kiermaier:2008qu} to the modified theory~\cite{Kroyter:2009bg} and thus, also to our case. All that gives much credibility to the construction. Substituting $\cO_{-1}$ into the recursion relation~(\ref{recursion}) immediately leads to, \begin{subequations} \label{UniqueO01} \begin{align} \cO_0=& \xi Y=P\,,\\ \cO_1=& \xi\,. \end{align} \end{subequations} We see that in the three examples above, the insertion is given by the product of $\xi$ and the picture changing operators $Y_{-2},Y,1$. This is the case since the picture changing operators themselves are invariant under $Q$ and $\eta$, while upon acting on $\xi$, $Q$ produces the picture changing operator $X$. This state of affairs cannot continue to $|p|>1$ picture numbers without a modification, due to the OPE singularities~(\ref{OPEsing}). A straightforward resolution is to consider only the non-singular part\footnote{Note that the fact that our insertion includes all possible pictures is important, since it enables a non-trivial value for the action for string fields of arbitrary picture number.}, \begin{equation} \label{On} \cO_n= \left\{\begin{array}{ll} {\displaystyle \oint_w \frac{dz}{2\pi i} \frac{\xi(z)X_{n-1}(w)}{z-w}}& n>1\\ \\ {\displaystyle \oint_w \frac{dz}{2\pi i} \frac{P(z)Y_n(w)}{z-w}} \qquad \qquad & n<-1\ \,,\\ \end{array}\right. \end{equation} where $X_p$ and $Y_p$ are the multi-picture-changing operators. One might wonder whether this construction is reliable, e.g., whether multi-picture-changing operators exist at all and whether they are unique in some sense. In fact, they are. To explain this claim, let us recall some more known facts about the RNS string~\cite{Horowitz:1988ip,Lian:1989cy}: \begin{itemize} \item The BRST operator $Q$ commutes with the ghost and picture generators. Hence, the cohomology can be decomposed into definite ghost and picture numbers. \item The cohomologies at the same ghost numbers and different picture numbers are isomorphic. \item For non-zero momentum the cohomology is concentrated in ghost numbers one and two, which are in fact (Poincar\'e) dual. \item Unique non-trivial elements exist in the cohomologies at zero momentum at ghost numbers zero and three (and all picture numbers). \end{itemize} These facts imply that picture changing operators exist for all integer picture numbers.
We can also deduce that these multi-picture-changing operators are unique, up to the addition of $Q$-exact terms. We denote these operators by $X_n$, that is we define, \begin{equation} \label{XnDef} X_n\equiv \left\{ \begin{array}{cc} X_n &\qquad n>0\\ 1 &\qquad n=0\\ Y_n &\qquad n<0 \end{array} \right.\,. \end{equation} The OPE $X_n X_m$ might generally contain singular terms. These singular terms, however, must be $Q$-exact, as can be seen from plugging the OPE into a general expectation value and using the observations above. For similar reasons, the regular term must equal $X_{n+m}$, \begin{equation} \label{XnXmXnm} X_n X_m = X_{n+m}+Q\mbox{-exact}\,. \end{equation} Even more explicitly we may write, \begin{equation} \label{XnDef2} X_n(w)\equiv \left\{\begin{array}{ll} {\displaystyle \oint_w \frac{dz}{2\pi i} \frac{X_{n-1}(z)X(w)}{z-w}}& n>1\\ \\ {\displaystyle \oint_w \frac{dz}{2\pi i} \frac{Y_{n+1}(z)Y(w)}{z-w}} \qquad \qquad & n<-1\\ \end{array}\right.\,. \end{equation} With the definitions~(\ref{On}),~(\ref{XnDef}) and~(\ref{XnDef2}), the insertion we propose can be schematically written as, \begin{equation} \label{NonPrimO} \cO\simeq \xi\sum_{n=-\infty}^\infty X_n\,. \end{equation} It is amusing to notice that if we use~(\ref{XnXmXnm}) for replacing $X_n$ by $X^n$, forgetting for the moment about the ($Q$-exact) OPE divergences and about convergence radius issues, we can write, \begin{equation} \cO\simeq \xi\Big(\frac{1}{1-X}+\frac{1}{1-X^{-1}}-1\Big)=0\,. \end{equation} A more relevant and more accurate observation is that, in the language of~\cite{Berkovits:1999in,Berkovits:1994vy}, our insertion is the ``unique $U(1)$-neutral extension of the vertex operator 1''. This is a very natural insertion for a democratic-picture formalism. While we do not think that this is of a particular importance, we note that the proposed mid-point insertion probably does not have a non-trivial kernel. The existence of a regular string field whose OPE with $\cO$ is zero at all the picture numbers, seems to us very unlikely. So far we were ignoring the fact that $\cO$ has to be a primary field. We would now like to prove that a primary solution to the recursion relation exists. As a bonus, we will also prove that all the picture changing operators have primary representatives. The proof goes by induction for $p>1$ and for $p<0$. The respective initial conditions are $\cO_1=\xi$ and $\cO_0=P$, which are completely fixed by their quantum numbers and are indeed primaries. Acting on a zero-weight primary field with either $Q$ or $\eta_0$ gives back a primary. Hence, we can assume that the multi-picture changing operators are primaries and we only have to prove that the $\cO_p$'s are also primaries. Given that $\cO_p$ is primary for some $p>0$, it follows from~(\ref{On}) and~(\ref{XnDef2}) that $X_p=Q\cO_p$ is also primary. The $\cO_{p+1}$ defined by~(\ref{On}) is not a primary. In fact, for $n\geq 0$ we can write, \begin{equation} \label{LnO} L_n \cO_{p+1}=\oint \frac{dz}{2\pi i}z^{n+1} \frac{dw}{2\pi i} T(z)\frac{\xi(w)}{w}X_p(0)= \oint \frac{dw}{2\pi i}w^{n} \partial \xi (w)X_p(0)= -n \xi_n X_p (0)\,. \end{equation} Here, the first equality is a mere substitution. In the second equality we used the fact that $X_p$ is a primary. Integration by parts leads to the final result. We note, that while $\cO_{p+1}$ is not a primary, the problematic terms introduced by acting on it with the Virasoro operators all lie in the small Hilbert space. 
Hence, we could try to cancel them by adding to $\cO_{p+1}$ a term, which resides in the small Hilbert space. Such a term would not lead to a change of $X_p$, so the recursion relations at lower orders are intact. Let us prove that such a term exists. Let $\cS$ be the set of non-zero elements obtained from repeatedly acting on $\cO_{p+1}$ with positive Virasoro operators. The level of this set is bounded by some integer $N$. Define $\cS_N$ to be the set of all possible level-$N$ combinations of Virasoro operators acting on $\cO_{p+1}$, \begin{equation} \cS_N=\{L_N \cO_{p+1},L_{N-1}L_1 \cO_{p+1},\ldots\}\,. \end{equation} Some of the elements of $\cS_N$ are actually zero, but let us consider the more general case, in which the $\cO_{p+1}$ operator is replaced by an arbitrary (non-primary) zero-weight operator, whose maximal level under the action of multiple Virasoro operators is $N$. The size of $\cS_N$ is $P(N)$, the number of partitions of $N$ into positive integers. Let $\cC_N$ be the set of all elements that can be formed by acting on $\cS_N$ with an arbitrary number of negative Virasoro operators of total level $-N$. The set $\cC_N$ contains $P(N)^2$, a-priori independent, zero-weight, small Hilbert space elements. These elements are annihilated by the action of Virasoro operators of total level greater than $N$, but not by operators of level $N$ or lower. Consider now \begin{equation} \cO_{p+1}^{(N)}=\cO_{p+1}+\sum_{k=1}^{P(N)^2} \al_k C_k\,, \end{equation} where $C_k\in \cC_N$ and the $\al_k$'s are coefficients. Act on $\hat \cO_{p+1}$ by all ($P(N)$) possible level N Virasoro operators and require that the result vanishes. This gives $P(N)$ equations, where each equation contains the $P(N)$ elements of $\cS_N$, which should vanish separately. All in all, we get $P(N)^2$ linear non-homogeneous equations in $P(N)^2$ variables. In fact, it is easy to see that the matrix describing these $P(N)^2$ equations factorizes into $P(N)$ identical blocks of size $P(N)\times P(N)$. Each block is nothing but the level-N Kac-matrix with $c=0$ and $h=-N$. The Kac determinant at level $N$ equals up to a constant to the product of terms of the form, \begin{equation} P_{r,s}=h-h_{r,s}\,,\qquad h_{r,s}=\frac{c-1}{24}+F_{r,s}^2\,, \end{equation} with $F^2_{r,s}$ known, $c$-dependent positive constants. In our case $h$ is a negative integer, while \begin{equation} h_{r,s}\geq -\frac{1}{24}\,. \end{equation} Hence, the Kac-determinant is non-zero and a solution exists and is unique. The solution defines $\cO_{p+1}^{(N)}$ as a zero-weight operator that differs from our original $\cO_{p+1}$ only in the small Hilbert space. Acting on $\cO_{p+1}^{(N)}$ with Virasoro operators of total level greater than $N-1$ gives zero by construction. We can now repeat the procedure, defining $\cO_{p+1}^{(N-1)}$ that removes the $N-1$ terms and so on. The operator $\cO_{p+1}^{(1)}$ is then primary by construction. The fact that many of the elements of $\cC_N$ are zero does not modify any of the arguments we made and for $p>1$ the proof is completed. The proof for $p<0$ is almost identical. All that is needed is to replace $Q$ by $\eta_0$, $\xi(z)$ by $P(z)$ and the small Hilbert space by the dual small Hilbert space. While our proof implies using $P(N)^2$ elements for eliminating $\cS_N$, we can use the explicit form~(\ref{LnO}) in order to work in practice with smaller sets. In fact, generalizing~(\ref{LnO}), we see that all the elements of $\cS_N$ are equal up to a constant. 
Hence, we can replace the $P(N)^2$ elements of $\cC_N$ by the $P(N)$ elements obtained by acting on $\xi_N X_p(0)$ with total-level $-N$ Virasoro operators. We now get a set of at most $P(N)$ equations with at most $P(N)$ variables. The matrix of coefficient is again the Kac matrix, which leads to a unique solution. This is a simplified (but less general) version of the theorem above. Let us now consider $\cO_2$ as an example. The theorem implies that it is given by, \begin{equation} \cO_2=-c\xi\xi'+\xi e^\phi G_m +\Big(2 b \eta \xi \phi '+\eta \xi b'-2 b \xi \eta '\Big)e^{2\phi} +\tilde \cO_2\,, \end{equation} where $\tilde \cO_2$ resides in the small Hilbert space. Here, we truncated the expression one gets using~(\ref{On}) to its $\xi_0$ component, as the rest of it lies in the small Hilbert space, which we can freely change. Applying the Virasoro operators, we see that $N=2$ in this case and that \begin{equation} \label{O2CT} -L_1^2\cO_2=L_2\cO_2=4b e^{2\phi}\,. \end{equation} Considering the two terms $L_{-1}^2 b e^{2\phi}$ and $L_{-2} b e^{2\phi}$ gives two equations, which we can solve. The solution, however, is quite cumbersome when written explicitly. Moreover, while the solution is universal by construction, it is not manifestly universal, e.g., for the standard matter fields we get different coefficients of $T_X$ and $T_\psi$ in various terms of the solution. A way around it is to note that in order to prove the theorem, we do not really have to use the full energy momentum tensor. All that is needed is to have $T_{\xi\eta}$, as well as the part of the energy momentum tensor associated with the fields that appear in~(\ref{O2CT}), i.e., we can work with $T_g$ instead of with the total energy momentum tensor in the case at hand. Working with $T_g$ gives again two equations. Solving the equations leads to a $\tilde \cO_2$, which contains six, manifestly universal terms. Actually, playing a bit with coefficients reveals an even simpler solution, \begin{equation} \tilde \cO_2=-\Big(29b''+51b'\phi' +2b \phi '^2\Big)\frac{e^{2\phi}}{86}\,. \end{equation} This illustrates the fact that while a solution of the form used in the proof above is unique, there are generally many other (universal) solutions. It is interesting to note that although we explicitly removed only the second order poles, we got a primary field, without actually going to the next stage. It is now straightforward to calculate $X_2$, which we did in a particular background (one-dimensional linear-dilaton) for simplicity. The resulting expression is a sum of 150 terms (in this background) and is not particularly illuminating. This $X_2$ is primary by construction and can be used to define $\cO_3$. Now, $N=6$ and terms with various $\phi$-momenta appear. The $e^{4\phi}$-terms are found all the way to $N=6$, while the $e^{3\phi}$-terms and $e^{2\phi}$-terms have $N=4$ and $N=3$ respectively. It is clear that all these terms would not be removed in a single step. We examined a 371-parameter universal ansatz, containing the terms that are dictated by the theorem, and found a 94-parameter family of solutions. In a relatively simple case a primary $\cO_3$ can be written as a sum of 336 terms. Having settled the issue of being primary, we now want to discuss the amount of freedom in solving the recursion relation~(\ref{recursion}), without the restriction to primary operators. Too much freedom in choosing $\cO$ would lead to an embarrassment of riches, i.e., having several, presumably inequivalent, theories. 
This state of affairs was exactly one of the grounds for criticizing the modified theory. Then, in~\cite{Kroyter:2009bg}, we showed that, at least classically, all these theories are equivalent, regardless of the exact form of the insertion and regardless of the distribution of the insertion among the two mid-points ($\pm i$). The only thing we need in order to make the same assertion here is to show that the difference $\delta \cO$ of two candidate $\cO$ insertions is given by a $\tQ$-exact term. It is clear from~(\ref{QcO0}) that \begin{equation} \tQ\delta \cO=0\,. \end{equation} However, while $Q$ and $\eta_0$ are separately trivial in the large Hilbert space, their difference, $\tQ$, is not. Hence, this equation is not enough for our proof. Nonetheless, the initial condition for the non-unique $\cO_{-1}$ fixes the insertions $\cO_0$ and $\cO_1$ uniquely~(\ref{UniqueO01}), at least as long as we are considering only chiral insertions. This can be seen simply by examining all possible insertions with those quantum numbers. Consider now $\cO_2$. The recursion relation~(\ref{recursion}) implies that it is defined up to an $\eta_0$-closed term. Since the cohomology of $\eta_0$ is trivial in the large Hilbert space, we can write, \begin{equation} \delta \cO_2=-\eta_0 \Upsilon_3\,. \end{equation} This implies at the next picture number the identity, \begin{equation} \eta_0\delta \cO_3=Q\delta\cO_2=\eta_0 Q \Upsilon_3\,, \end{equation} whose general solution is, \begin{equation} \delta \cO_3=Q \Upsilon_3 + \eta_0 \Upsilon_4\,. \end{equation} We see that the total effect of $\Upsilon_3$ is to induce a change in $\cO$, \begin{equation} \delta \cO=\tQ \Upsilon_3\,, \end{equation} and the analysis can then be repeated for $\Upsilon_4$ and so on. A similar analysis can be applied for negative picture numbers. We conclude that in general, \begin{equation} \delta \cO=\tQ \Upsilon\,. \end{equation} Hence, all the theories that are defined by solutions of the recursion relation~(\ref{recursion}) with the initial conditions~(\ref{UniqueO01}) are classically equivalent. This also implies that, at least for the purpose of evaluating the action of a classical solution, we do not have to find an explicit primary conformal representative of $\cO$. The regular part of~(\ref{NonPrimO}) will do. Note, however, that the two parts of the action would have to be evaluated in the same coordinate system. This is not really a restriction, since for solutions one simply evaluates, \begin{equation} S=-\oint \cO\Big( \frac{1}{2}\Psi \tQ \Psi+\frac{1}{3}\Psi(-\tQ \Psi)\Big)= -\frac{1}{6}\oint \cO\Psi \tQ \Psi\,. \end{equation} Then, it is possible to perform the calculations using~(\ref{XnDef2}) rather than with the cumbersome explicit expressions of the conformal primaries. \section{The Ramond sector and general D-brane systems} \label{sec:Ramond} In order to complete this work, we have to incorporate the Ramond sector into the formalism. This sector leads to various singularities within the modified theory. Nonetheless, the construction of the action of the modified theory, as well as of Witten's theory, relied on correct principles. Hence, if we could manage to construct an action that generalizes these constructions, without suffering from their problems, we would know that we are on the right track.
Thus, we search for an action that reduces to, \begin{equation} S= -\int Y_{-2}\Big(\frac{1}{2}\Psi Q \Psi+\frac{1}{3}\Psi^3\Big) -\int Y\Big(\frac{1}{2}\al Q \al+\Psi \al^2\Big), \end{equation} when the NS and Ramond string fields live in the small Hilbert space and carry picture numbers zero and $-\frac{1}{2}$, respectively. From the inconsistency of the modified theory we can infer that the above set of restrictions does not form a consistent set of gauge conditions. We return to the issue of gauge fixing after the theory is constructed. We know already that the first part of this action generalizes to~(\ref{Action}). Then, applying the same philosophy as before, we replace $Q$ with $\tQ$, the $Y$ insertion with $\cO^R$, and the integration with integration in the large Hilbert space. The correspondence principle implies that \begin{equation} \cO^R_0=P\,. \end{equation} Having $\tQ$ as the kinetic operator implies again the recursion relations~(\ref{recursion}) and hence, \begin{equation} \cO^R=\cO^{NS}\,. \end{equation} It is straightforward to see that the resulting action can be simply written in terms of a single string field and that it takes exactly the simple form~(\ref{Action}), provided we redefine, \begin{equation} \Psi\rightarrow\Psi+\al\,. \end{equation} Thus, we have unified the NS and Ramond string fields. The resulting string field $\Psi$ includes all possible integer (NS) and half-integer (R) picture numbers at ghost number one. The full gauge symmetry of the action is~(\ref{gauge}), where the even gauge string field $\La$ carries ghost number zero and an arbitrary integer (NS) or half-integer (R) picture number. There are no picture changing operators or other mid-point insertions in the definition of the gauge symmetry. Hence, no singularities can emerge and the gauge symmetry is well defined. The inclusion of the various sectors (NS$\pm$, R$\pm$) of a general D-brane configuration, described by some Chan-Paton factors, is straightforward and should be done in the same way as in~\cite{Arefeva:2002mb} (see also~\cite{Berkovits:2000hf}), i.e., the space of string fields should be tensored with a matrix space representing the Chan-Paton factors, as well as with the ``internal Chan-Paton space'' of two-by-two matrices. The NS+ string field is tensored with an internal Chan-Paton factor of $\sigma_3$ (a factor also assigned to $Q$), while the NS$-$ string field is tensored with $i\sigma_2$. Since the Ramond string fields are added to the NS+ string field in our formulation, they should also be tensored with $\sigma_3$. A single Chan-Paton entry cannot contain both R sectors. Hence, there is no problem in assigning the same factor to both R$\pm$. This assignment is also consistent with our discussion on this subject in~\cite{Kroyter:2009zi}. The fact that $Q$ gets a $\sigma_3$ factor implies that $\eta_0$ gets the same factor (again, in accordance with the discussion in~\cite{Fuchs:2008zx,Kroyter:2009zi}). This implies that $\xi$ and $P$ insertions should also be tensored with $\sigma_3$, which suggests that the whole $\cO$ insertion is to be tensored with this factor. This, in a sense, clarifies the origin of the $\sigma_3$ insertion on $Y_{-2}$ in the modified theory: it is a remnant of integrating out the $\xi$ insertion, which has to carry this factor, when going from the large Hilbert space to the small one. The gauge string fields should all carry a unity factor, except for the NS$-$ gauge string field, which carries a $\sigma_1$ factor.
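As a simple illustration (our own, and assuming only that GSO parities multiply in the obvious way, NS$+\,\times\,$NS$-=$NS$-$ and NS$-\,\times\,$NS$-=$NS$+$), the Pauli-matrix bookkeeping implied by these assignments can be verified numerically for a gauge transformation of the schematic form $\delta\Psi=\tQ\La+[\Psi,\La]$. The sketch below checks only this bookkeeping; the full set of axioms is of course more involved.
\begin{verbatim}
# Consistency sketch for the internal Chan-Paton factors (illustration only).
# String fields: NS+ -> sigma_3, NS- -> i*sigma_2; gauge fields: NS+ -> 1,
# NS- -> sigma_1; the kinetic operator carries sigma_3.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

field = {'+': s3, '-': 1j*s2}      # internal factors of the string field
gauge = {'+': I2, '-': s1}         # internal factors of the gauge field

def proportional(A, B):
    """True if A = z*B for some non-zero complex number z."""
    z = np.trace(B.conj().T @ A) / np.trace(B.conj().T @ B)
    return not np.isclose(z, 0) and np.allclose(A, z*B)

for a in '+-':                     # GSO parity of the string field
    for b in '+-':                 # GSO parity of the gauge field
        prod = '+' if a == b else '-'
        # kinetic term (sigma_3 acting on the gauge field factor)
        assert proportional(s3 @ gauge[b], field[b])
        # both orderings appearing in the commutator term
        assert proportional(field[a] @ gauge[b], field[prod])
        assert proportional(gauge[b] @ field[a], field[prod])
print("internal Chan-Paton assignments are consistent")
\end{verbatim}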
With these assignments all the axioms needed continue to hold and all sectors of an arbitrary (not necessarily BPS) D-brane system can be uniformly and covariantly described. \section{Field--antifield formulation and gauge fixing} \label{sec:BV} The gauge symmetry of our theory is infinitely reducible and closes only ``using the equations of motion'', i.e., only up to trivial gauge transformations. This state of affairs calls for the use of the (covariant) field--antifield (BV) formalism~\cite{ZinnJustin:1974mc,Batalin:1981jr,Batalin:1984jr, Batalin:1984ss,Voronov:1982cp} (see~\cite{Henneaux:1989jq,Henneaux:1992ig,Gomis:1994he} for reviews). This formalism replaces the gauge symmetry by a BRST symmetry, which can be fixed at a later stage using a ``gauge-fixing fermion''. Generally speaking, the BV construction is anything but trivial. Luckily, the algebraic structure at hand is identical to that of the bosonic theory. The BV formulation of this theory was studied by Thorn and by Bochicchio~\cite{Thorn:1986qj, Bochicchio:1986bd,Bochicchio:1986zj,Thorn:1988hm}. All we have to do, then, is to use the following substitutions, \begin{equation} Q_{bos} \rightarrow \tQ=Q_{RNS}-\eta_0\,,\qquad \int_{bos} \rightarrow \oint \cO\,,\qquad \Psi_{cl,bos} \rightarrow \Psi_{cl,RNS}\,, \end{equation} where, as already explained, the classical RNS field $\Psi_{cl,RNS}$ carries ghost number one and all (integer and half-integer) picture numbers. Mimicking the construction of~\cite{Thorn:1986qj, Bochicchio:1986bd,Bochicchio:1986zj,Thorn:1988hm} for the case at hand is straightforward, due to the identical algebraic structure (the properties of $\tQ$ and $\oint \cO$ as well as the form of the redundant gauge symmetry). The construction leads (before gauge fixing) to an action identical in form to~(\ref{Action}), only with the string field $\Psi_{cl,RNS}$ replaced by the string field $\Psi_{BV,RNS}$, which contains all possible picture and ghost numbers, as well as both (NS and R) sectors of the theory. This is very satisfactory from an aesthetic point of view, since now the theory is defined by a string field that uses the whole ``Hilbert space''. Moreover, for the case of a general D-brane system, the string field lives in the maximal space (in terms of sectors and ghost and picture numbers) consistent with the system. One possible subtlety with this construction is the implicit assumption that the integration measure that we use induces a non-degenerate inner product on the space of string fields. While the analogous assertions in some other cases are pretty much CFT axioms, our case might depend on the definition of the space of string fields. The reason for the difference is that, in theories with no explicit insertions, the space is naturally decomposed into subspaces with fixed ghost and picture numbers and with fixed conformal weights. Each such space is dual to another such space with respect to the inner product. Hence, the string field can be decomposed into a direct sum of spaces, and within each of these spaces it can be further decomposed into a finite sum of component fields. Each component field has as its anti-field another component field that lives in its dual space. All this implies that the BV theory can be equally well formulated in terms of the component fields and in terms of string fields. When mid-point insertions are included, they tend to couple fields with various conformal weights.
Moreover, in our case, there is a decomposition only with respect to the ghost number, while all picture numbers are coupled, due to the presence of the $\cO$ insertion, which includes all integer picture numbers. The inner products between the elements of two dual spaces at ghost numbers $g$ and $3-g$ can be collected into an infinite matrix. The question of degeneracy then depends on the space of allowed vectors and dual vectors (ghost number $g$ and $3-g$ string fields, respectively). One observation is that this matrix contains infinitely many infinite-size blocks of constant entries. One could fear that these blocks, originating from sets of physically equivalent vertex operators, imply that the measure is degenerate. This is not necessarily the case, since the elements generating such a block also couple to other, in particular off-shell, states. We would like to stress that it would be wrong to try and ``invert'' $\cO$, treating it separately from the measure, as is often done with the $Y_{-2}$ of the modified theory. In particular, the inverse might not be defined over the correct space of string fields, i.e., the formal object ``$\cO^{-1} \Psi$'' would probably not be part of the space of string fields for any non-zero $\Psi$. We believe that the subtleties related to the definition of the space of string fields have a resolution, and that the BV construction is reliable and can be used for gauge fixing. To that end, a gauge-fixing fermion should be introduced. The gauge-fixing fermion is an odd functional of the string fields, with (second quantized) ghost number $-1$. It depends on some auxiliary fields, e.g., non-minimal sets of variables that serve as Lagrange multipliers. The set of non-minimal fields needed for our case is known~\cite{Gomis:1994he}. However, there are many ways of constructing gauge-fixing fermions, not all of which are admissible. Moreover, it is not clear whether the subtleties with the quantum master equation of the bosonic theory persist in our theory. Even if it were possible to show that the quantum master equation holds in our case, the construction of an admissible gauge-fixing fermion would probably still be non-trivial. In calculating RNS loop scattering amplitudes, subtleties related to the measure on supermoduli spaces appear~\cite{D'Hoker:2002gw,Morozov:2008wz,DuninBarkowski:2009ej}. It might be too optimistic to expect that these issues can be avoided by using string field theory for the evaluation of amplitudes. On the other hand, if one could show that superstring field theory is consistent at the quantum level, this could be an alternative definition for the superstring, not relying on the subtleties of supermoduli spaces! We leave the resolution of these important questions to future work. A somewhat naive alternative to the discussion above would be to simply enforce some auxiliary conditions that are supposed to remove the gauge-related redundancy. For example, when the theory is restricted to the NS sector, it is possible to constrain the string field to carry zero picture number and to live in the small Hilbert space. The action then reduces to that of~\cite{Preitschopf:1989fc,Arefeva:1989cp}, where a further gauge fixing is needed, e.g., Siegel gauge, Schnabl gauge, a linear $b$-gauge~\cite{Kiermaier:2007jg}, or an $a$-gauge~\cite{Asano:2006hk,Asano:2008iu}. Trying to restrict the NS sector to other picture numbers seems to lead to inconsistent results.
At picture number $-1$, one might argue that Witten's theory is obtained, while at other picture numbers other inconsistent theories seem to emerge\footnote{After the first version of this paper appeared, it was discovered in~\cite{Kroyter:2010rk} that gauge fixing the theory to picture number $-1$ does not lead to Witten's theory, but to another, classically consistent, theory, which is the $\Ztwo$ dual of the modified theory in the sense of~\cite{Berkovits:1994vy,Berkovits:1996bf}. Moreover, it was found there that the democratic theory can be reduced using another gauge fixing to the non-polynomial theory and that this partial gauge fixing can also be extended to the Ramond sector. Finally, it is also argued there that gauge fixing at other fixed picture numbers is probably inconsistent. These new results give strong evidence in favour of the democratic theory.}. It would be very interesting to consider universal gauge fixings that do not concentrate at fixed picture numbers. We leave this interesting project to future work. \section{Supersymmetry} \label{sec:SUSY} Up to this point, our discussion was universal, i.e., it did not depend in any way on the BCFT used in the definition of the theory. Now, however, we would like to study supersymmetry within our theory, which is a background-dependent notion. Hence, we specify that we work in the standard ten-dimensional RNS flat space, i.e., the matter system is composed of ten world-sheet scalars $X^\mu$ and ten world-sheet fermions $\psi^\mu$. The space-time supersymmetry generators of the RNS formalism carry half-integer picture numbers. In a fixed picture number theory, this implies that picture changing operators should be appended to the definition of the supersymmetry transformation. For consistency, these operators have to be inserted at the string mid-point, which leads to singularities and probably takes the string field outside its domain of definition\footnote{We stress again that general mid-point operator insertions {\it on the string field} might lead to singularities. In order to avoid these potential problems, one has to somehow restrict the space of string fields, such that potentially harmful mid-point insertions are not allowed. This, in turn, implies that mid-point operator insertions {\it in the action}, as we consider here, are harmless.}. Working in an unrestricted picture-number space, as we do here, potentially avoids this problem. In previous cubic formulations~\cite{Witten:1986qs}, supersymmetry was generated by the zero-momentum, integrated, $-\frac{1}{2}$-picture fermion vertex. We may consider the same generator in our theory, \begin{equation} \label{WittenSUSY} \delta_{SUSY}\Psi=\ep^\al Q_\al \Psi\equiv Q^\ep\Psi\,,\qquad Q_\al=\oint \frac{dz}{2\pi i}\, e^{-\frac{\phi}{2}}S_\al(z)\,. \end{equation} Here, $S_\al$ is the spin field, which is responsible for exchanging the NS and Ramond sectors, while $\ep^\al$ are odd parameters. Using the integrated vertex seems to be the better option, since the unintegrated vertex would have to be inserted at a given point. This point cannot lie inside the local coordinate patch, since that might lead to singularities from collisions with the state itself. It also cannot be inserted outside the local coordinate patch, as that might take the string field outside of its domain of definition, as well as introduce singularities from multiplication by other string fields.
The generator~(\ref{WittenSUSY}) would be a symmetry, provided that the following three conditions hold~\cite{Witten:1986cc,Witten:1986qs}: \begin{subequations} \label{3conds} \begin{enumerate} \item $Q^\ep$ should be a derivation of the star product, \begin{equation} \label{FirstCond} Q^\ep(AB)=Q^\ep A B+A Q^\ep B\,. \end{equation} \item $Q^\ep$ should be invariant under the kinetic operator, \begin{equation} \label{SecondCond} [\tQ,Q^\ep]=0\,. \end{equation} \item $Q^\ep$ should leave the integration measure invariant, \begin{equation} \label{ThirdCond} \oint \cO Q^\ep A=0\qquad \forall A\,. \end{equation} \end{enumerate} \end{subequations} The first condition holds, since $Q^\ep$ is an integral of a current. The second condition holds, since the vertex defining $Q^\ep$ lives in the small Hilbert space and is on-shell. The third condition, however, does not hold, since the vertex has singular OPEs with many of the $\cO_n$'s. This is hardly surprising, since these operators are essentially picture changing operators, which should act non-trivially on an on-shell vertex at any given picture. While our supersymmetry generators fail to be symmetries off-shell, they are symmetries on-shell. By that we do not mean that the action around solutions is invariant under the linearized supersymmetry transformation: the action is linearly invariant under any change of a solution by definition. We mean that this linearized transformation naturally acts on the space of solutions, i.e., it sends solutions to solutions, \begin{equation} \delta_{SUSY}(\tQ\Psi+\Psi^2)=\tQ Q^\ep \Psi + Q^\ep\Psi\Psi+\Psi Q^\ep \Psi =Q^\ep(\tQ\Psi+\Psi^2)=0\,. \end{equation} Here, use was made of the properties~(\ref{FirstCond}) and~(\ref{SecondCond}). Nonetheless, a genuine symmetry must be defined also off-shell\footnote{The situation we have should not be confused with the common one of having a symmetry algebra that ``closes only up to the use of the equations of motion''. In the case at hand, we cannot even claim that we have a symmetry.}. One possible direction towards defining supersymmetry off-shell is to consider a superposition of supersymmetry generators at various picture numbers, \begin{equation} Q_\al\stackrel{?}{=} \oint\frac{dz}{2\pi i} \sum_{p\in (\cz+\frac{1}{2})} k_p V^p_\al(z)\,, \end{equation} where $k_p$ are unknown coefficients. These coefficients are restricted by the requirement that~(\ref{ThirdCond}) holds. One might think that this restriction gives a set of recursion relations, similar to the ones that we got for the $\cO_n$ insertions. Instead, one gets an infinite set of equations, each one including infinitely many summands. Each summand can include many different operators that should all independently vanish by the choice of coefficients. It is not clear to us whether this system of equations has a solution, nor how to construct such a solution, perturbatively or otherwise. A more promising approach can be based on the fact that (when integrated) the fermion vertex is $\tQ$-closed. Our experience with string field theory from the last few years suggests that it might be useful to write it formally as if it were exact~\cite{Okawa:2006vm,Fuchs:2007yy,Fuchs:2007gw}. We propose to write the integrated fermion vertex as, \begin{equation} V_\al=-\tQ W_\al\,. \end{equation} In keeping with the democratic paradigm, the above vertex should be allowed to have an arbitrary picture, and choosing different pictures should result in gauge-equivalent configurations.
We would then write, \begin{equation} \label{QalQWTakeOne} Q^\ep \Psi=\tQ (W^\ep) \Psi\,. \end{equation} The problem is that the string field is not necessarily closed, so~(\ref{ThirdCond}) is still not obeyed, which is not surprising, since all we did was to rewrite the form of the vertex. The following modification can be proposed in order to resolve this problem, \begin{equation} \label{QalQWPsi} \delta_{SUSY} \Psi\stackrel{?}{=}\tQ (W^\ep \Psi)\,. \end{equation} Both~(\ref{QalQWTakeOne}) and~(\ref{QalQWPsi}) agree when restricted to vertex operators and reduce in this case to the standard expression. However, while~(\ref{ThirdCond}) holds for~(\ref{QalQWPsi}),~(\ref{FirstCond}) no longer holds. The natural way to resolve this problem is to notice that~(\ref{QalQWPsi}) is in fact the linearized form of an infinitesimal fermionic gauge transformation\footnote{The idea of formally representing supersymmetry as a gauge symmetry, albeit in a different way, appeared already in~\cite{Qiu:1987dv}.}. Hence, it is natural to add to the above a non-linear term and define, \begin{equation} \label{SUSYgauge} \delta_{SUSY} \Psi=\tQ (W^\ep \Psi)+[\Psi,W^\ep \Psi]\,. \end{equation} The relations~(\ref{3conds}) are not obeyed now, but there is no reason for them to be obeyed, since~(\ref{SUSYgauge}) is not a linear transformation of the string field. It is nevertheless a symmetry, as it is a formal gauge symmetry (recall that we use a formal gauge string field)\footnote{One might have considered $W_\al=-\int_{-\infty}^\infty dz V_\al(z)\ket{1}$ as the formal gauge string field, with $\ket{1}$ being the identity string field and the integration being performed in cylinder coordinates. The gauge string field is manifestly formal due to the presence of the identity string field in its definition, and the resulting transformation is exactly~(\ref{WittenSUSY}). Nonetheless, this choice is wrong, since the integration limits approach the mid-point insertion, invalidating the arguments that show that a gauge symmetry is a symmetry.}, with the (formal) gauge string field \begin{equation} \La_{SUSY}=W^\ep \Psi\,. \end{equation} The main potential problem with this proposal is to find an adequate $W^\ep$, that is, to define it in such a way that, while it is a formal string field, the resulting transformation~(\ref{SUSYgauge}) defines a genuine string field. Since $V_\al$ lives in the small Hilbert space, it is possible to write, \begin{equation} \label{WalTakeOne} W_\al\stackrel{?}{=}\xi V_\al\,. \end{equation} This, however, results in an addition to the vertex, \begin{equation} \delta V_\al=-Q W_\al\,, \end{equation} i.e., we add to the vertex an expression that is roughly minus the same vertex in a different picture. This is obviously wrong. Indeed,~(\ref{WalTakeOne}) is a genuine string field. Hence, the resulting transformation is a genuine gauge transformation rather than a supersymmetry one. A way out might be to add to $W_\al$ the $\eta$-primitives of the vertices at all higher pictures. Recall that, up to $Q$-exact (singular) terms, the recursion relation satisfied by the integrated vertex operators is, \begin{equation} \label{Vrecurs} V^{p+1}=Q(\xi V^p)+\partial(\xi \hat V^p)\,, \end{equation} where $\hat V^p$ is the unintegrated vertex operator, defined by the relation, \begin{equation} \label{QVhatV} QV^p=\partial \hat V^p\,. \end{equation} The second term in~(\ref{Vrecurs}) is needed to ensure that the resulting vertex lives in the small Hilbert space and continues to respect~(\ref{QVhatV}).
The recursion relation~(\ref{Vrecurs}) might suffer from OPE singularities. However, as these singularities are $Q$-exact, they can be safely removed from the definition. Total derivative and $Q$-exact terms could then be added to ensure that the operator is primary. Now, let us define, \begin{equation} W_\al^{p+1}=(-1)^{p-p_0} \xi V^p_\al\qquad p+1\geq p_0\,, \end{equation} and \begin{equation} W_\al=\sum_{p=p_0+1}^\infty W_\al^p\,. \end{equation} Here $p_0$ is an arbitrary starting point. With this definition we get \begin{equation} \tQ W_\al=-V_\al^{p_0}\,. \end{equation} Hence, at the linearized level, the transformation reduces to a supersymmetry transformation. Moreover, the components of $W_\al$ do not decrease in any way as a function of picture number. Thus, it is indeed a formal gauge string field, as we wanted. The difference between starting with two different pictures, on the other hand, is given by a genuine gauge transformation. The resulting supersymmetry transformation takes the form, \begin{equation} \delta_{SUSY} \Psi=Q^\ep\Psi-W^\ep(\tQ\Psi+\Psi^2)\,. \end{equation} The second term vanishes on-shell, and the transformation then reduces to the standard linear supersymmetry transformation. Off-shell, the transformation becomes non-linear in the string field. This might have been expected, since on the one hand string field theory is a non-linear extension of the world-sheet formalism, while on the other hand supersymmetry is non-linearly realised in many circumstances. There is one potential obstacle to our interpretation: it seems that the string field we obtain off-shell is no more genuine than the formal gauge string field. Indeed, our experience with formal gauge string fields suggests that ``higher order counter-terms'' should be added to the gauge string field in order to obtain a legitimate physical string field. We leave the problem of finding these terms, and the related problem of understanding the non-linear terms induced by the supersymmetry algebra in the momentum transformation of off-shell string fields, to future study. \section{Conclusions} \label{sec:conc} In this work, a new universal open RNS string field theory was presented. The theory is cubic and includes a mid-point insertion in the action. This mid-point insertion most probably has only a trivial kernel. More importantly, while we have an insertion in the action, we allow for mid-point insertions neither in the gauge transformations nor in the equations of motion. Thus, the theory does not suffer from singularities due to collisions of mid-point insertions, as was the case with the previous formulations. The new theory naturally and covariantly unifies the NS and Ramond string fields and can be used to describe open strings on arbitrary D-brane systems. Since it can be reduced to the modified theory, it supports its solutions. These observations give much credibility to the theory. Nonetheless, there is more to be done. Specifically, one should devise a gauge fixing of the theory, since this is imperative for the construction of perturbation theory, e.g., for defining propagators. While the first step towards that end, i.e., the BV construction, was completed and led to an elegant result, it is important to further study this highly non-trivial issue.
The understanding of the gauge fixing of our theory might not only lend further credibility to the theory, but also lead to new ways of evaluating RNS scattering amplitudes that avoid the problems with supermoduli spaces, at least for the scattering of open strings. It is also important to complete the formulation of the off-shell supersymmetry transformations of the theory. Our construction uses primary multi-picture changing operators. The existence of these operators was not previously known and was proven here. Moreover, we showed that, within a specific universal space spanned by total Virasoro operators plus a non-trivial piece, these operators are unique. However, when one considers more general, but still universal, spaces, these operators are no longer unique. The understanding of multi-picture changing operators achieved in this work is an important result in itself, since it might also be of use for other approaches to the RNS string. We used the multi-picture changing operators in order to construct the (primary) $\cO_n$ operators, which generalize $\xi$ and $P$ to other picture numbers. The generalization is, however, incomplete, in the sense that while $\xi$ and $P$ serve as contracting homotopy operators for the commuting cohomology operators $Q$ and $\eta_0$, we do not know of currents that we can similarly associate with the other $\cO_n$ operators. It would be interesting if we could define currents $J_p(z)$, carrying ghost number one and picture number $p$, such that the $\cO_{-p}$ would serve as the contracting homotopy operators for their mutually commuting charges, i.e., \begin{subequations} \begin{align} J_p(z)J_{p'}(w)&\sim\ldots+\frac{\partial(\ldots)}{z-w}\,,\\ T(z)J_p(w)&\sim\frac{J_p(w)}{(z-w)^2}+\frac{\partial J_p(w)}{z-w}\,,\\ J_p(z)\cO_{-p}(w)&\sim \frac{1}{z-w}\,. \end{align} \end{subequations} The familiar cases are, \begin{equation} J_0(z)=J_B(z)\,,\qquad J_{-1}(z)=\eta(z)\,. \end{equation} We managed to find a candidate $J_{-2}$, \begin{equation} J_{-2}=\frac{1}{4}\big(3 c''-13 c\phi'^2+2(bcc'+c\xi'\eta+c T_m)\big) e^{-2\phi}\,. \end{equation} In fact, we even found a multi-parameter family of such candidates. There might, however, be further restrictions on $J_{-2}$ coming from consistency with the other $J_p$'s, or otherwise. We believe that a better understanding of these currents (if they exist) might shed light on the nature of the RNS string, as well as help future string field theoretical research. Our construction is based on the use of $\tQ$, which was introduced by Berkovits in~\cite{Berkovits:2001us} for the purpose of relating the RNS formalism and the pure-spinor one~\cite{Berkovits:2000fe,Berkovits:2002zk}. Here, we used this operator for constructing an RNS string field theory. One might ask whether it is possible to rewrite our theory using pure-spinor variables. This is not straightforward, since the mapping between the two formalisms is for on-shell states only, while we should be looking for an off-shell mapping. On the other hand, a pure-spinor string field theory already exists~\cite{Berkovits:2005bt}. This theory is also cubic, and it too uses a mid-point insertion, whose kernel is trivial, for the saturation of bosonic zero modes (this is also the origin of the notion of ``picture''). However, it uses the non-minimal formulation of pure-spinor string theory~\cite{Berkovits:2005bt}. Presumably, it might be related to the present formalism if the latter were extended along the lines of~\cite{Berkovits:2009gi,Kroyter:2009zj}.
It would be very interesting to pursue this direction of research, as it might lead to a unified picture of superstring field theory, while clarifying some fundamental issues along the way. A potential obstacle is the fact that the pure-spinor string field should be allowed to be singular with respect to the pure-spinor $\la^\al$, but not too singular. This state of affairs seems to lead to a contradiction that could only be resolved by modifying the pure-spinor formalism itself~\cite{Aisaka:2008vw,Bedoya:2009np}. A modification of the pure-spinor formalism should also allow the introduction of a GSO$(-)$ sector. One approach towards the inclusion of this sector was presented in~\cite{Berkovits:2007wz}, where some non-minimal sectors were added to the theory. However, it is not clear how to unify these non-minimal sectors with the non-minimal sector of~\cite{Berkovits:2005bt}. Presumably, the comparison with the democratic theory might give us some clues regarding the resolution of these difficulties. We hope that the democratic theory will be found useful for the construction of closed RNS string field theories. Complications in generalizing this work are bound to arise, among other reasons because in closed string theory the relative cohomologies at different picture numbers are not all isomorphic~\cite{Berkovits:1997mc}. Presumably, one could devise a way around this problem by replacing the $b_0^-\Psi=0$ condition by some sort of a gauge symmetry, in a way analogous to the treatment of $b_0$ in the open string case and the treatment of picture number in this work. Another potential difficulty comes from the fact that in the closed string case the mid-point, on which we inserted the operator $\cO$, is absent. This difficulty could possibly be resolved along the lines of~\cite{Saroja:1992vw}. This construction could potentially be generalized, using some ideas of the democratic theory and of the NS heterotic string field theory~\cite{Berkovits:2004xh}, in order to construct the desired closed RNS string field theories as well as a complete RNS heterotic string field theory. This important issue remains for future work. Correctly identifying the space of string fields is still an open problem, even within the context of bosonic string field theory. In fact, some of the most important recent achievements of the field, such as proving Sen's conjectures~\cite{Sen:1999mh,Sen:1999xm} for Schnabl's solution~\cite{Schnabl:2005gv}, depend crucially on properties of this unknown space~\cite{Okawa:2006vm,Fuchs:2006hw,Ellwood:2006ba}. Thus, our proposal that some properties hold for this new theory, provided the space of string fields has some given properties, is along the lines of what is currently common practice. Finding sound definitions for the spaces of string fields of the various string field theories would be highly important, both for understanding these theories and for their proper definition. \section*{Acknowledgments} I would like to thank Ido Adam, Ofer Aharony, Nathan Berkovits, Stefan Fredenhagen, Udi Fuchs, Michael Kiermaier, Carlo Maccaferri, Yaron Oz, Stefan Theisen, Scott Yost and Barton Zwiebach for discussions. This work is supported by the U.S. Department of Energy (D.O.E.) under cooperative research agreement DE-FG0205ER41360. My research is supported by an Outgoing International Marie Curie Fellowship of the European Community. The views presented in this work are those of the author and do not necessarily reflect those of the European Community. \newpage
\section{Introduction} \label{introduction} Relativistic collisionless shocks are widely viewed as efficient sources of particle acceleration in blazars, supernova remnants and gamma-ray burst (GRB) afterglows \cite{Piran_rmp_2005}, as well as in other high-energy astrophysical objects~\cite{Medvedev_1999}. Collisionless shocks are ubiquitous in low-density astrophysical plasmas, where energy is dissipated through effective collisions provided by particles' interactions with turbulent electromagnetic fields. In the absence of binary collisions, such effective collisions enable particle acceleration to ultra-relativistic energies. Understanding the emergence of the underlying electromagnetic turbulence via collective plasma instabilities is therefore crucial for understanding the physics of particle acceleration in astrophysical contexts, where plasma densities are low and binary collisions can be mostly neglected. Moreover, the complexity of instability-mediated electromagnetic fields raises questions about the exact mechanism behind the acceleration of fast particles, as well as the factors distinguishing such minority populations from the majority of particles that never reach high energies. In the specific case of relativistic unmagnetized plasma flows interacting with each other or with interstellar medium (ISM) plasmas, the classic Weibel instability~\cite{Weibel} (WI) is widely viewed as responsible for the spontaneous generation of sub-equipartition electromagnetic fields~\cite{Chang_2008, Keshet_apj_2009, Neda_pop_2018}, mediation of collisionless shocks, and generation of superthermal particles \cite{Silva_2003,Milosavljevi__2006,Spitkovsky_aip_2005,Spitkovsky_2008,Sironi_2013,Haugbolle_apj_2011} in various astrophysical scenarios. The WI is a collective electromagnetic instability that develops in plasmas with anisotropic velocity distributions. Analytical and simulation studies show that the WI \cite{Medvedev_1999,Silva_2003} generates large magnetic fields that can reach the Alfv\'{e}nic limit during its nonlinear stage \cite{Yoon_pra_1987, Kato_pop_2005,Polomarov_prl_2008,Shvets_pop_2009,Bret_pop_2010}. Particle-in-cell (PIC) simulations have long been the primary tools for studying the effects of the WI-generated electromagnetic turbulence on the generation of superthermal particles. For example, power-law ({\it i.e.}, non-Maxwellian) particle distributions $f(\varepsilon) \sim \varepsilon^{-p}$ as a function of the particle energy $\varepsilon$ have been predicted based on PIC simulation results, with consistent power-law indices $p$ extracted by several groups, e.g., $p \simeq 2.4$ in two-dimensional (2D)~\cite{Spitkovsky_aip_2005,Spitkovsky_2008} and $p \simeq 2.1-2.3$ in three-dimensional (3D)~\cite{Haugbolle_apj_2011} geometries. Particle acceleration has been found to be governed by stochastic diffusion, where particles move back and forth across the shock front and gain energy by scattering from self-consistent magnetic turbulence through the first-order Fermi acceleration mechanism~\cite{Fermi_pr_1949}. Consistently with the phase-space diffusion mechanism of particle acceleration, the maximum energy gain of particles in Fermi acceleration is observed to scale with the acceleration time $t_{\rm acc}$ according to~\cite{Sironi_2013} $\varepsilon_{\rm max} \propto t^{1/2}_{\rm acc}$. In the aforementioned references, Fermi acceleration is identified by the existence of a non-thermal tail of the distribution function and by the kinetic energy carried by the non-thermal particles.
Nevertheless, the detailed micro-physics behind particle acceleration is still poorly understood. Because of the complex vectorial nature of the electromagnetic turbulence inside the collisionless shock itself and in the shocked plasma, systematic tracking of the particles passing through the shock is needed to answer many specific questions. These include: (i) what is the relative role of different components of the electric field in particle acceleration? (ii) what distinguishes the majority of thermalized particles in the shocked region from the minority particles gaining most of the energy? (iii) what are the telltale signs of Fermi acceleration that can be extracted from such tracking? Note that while magnetic fields are dominant inside the shock, and are primarily responsible for particle thermalization, they can neither accelerate nor decelerate charged particles; this is done by the much weaker electric fields. While particle tracking has been used in the past to track the accelerated particles~\cite{Spitkovsky_2008,Martin_2009,Plotnikov_mnras_2018}, it has not been used to investigate the highly-anisotropic nature of particle acceleration, as expressed by the relative contributions of the longitudinal (parallel to the front velocity) and transverse components of the electric field to particles' energy gains and losses. In this work, we present the details of numerical tracking of representative particles extracted from first-principles 2D PIC simulations of relativistic, unmagnetized electron-positron (pair) plasma shocks. The 2D geometry is sufficient for capturing the basic physics of particle acceleration. The rest of the manuscript is organized as follows. In Section~\ref{method}, we describe the geometry and physical parameters of the problem at hand, and the details of the PIC simulation setup. The structure of the shock and the dynamics of the bulk plasma (pre-shock and shocked) are discussed in Section~\ref{structure}. A detailed accounting of the work done on accelerated particles by different electric field components is given in Section~\ref{bifurcation}, and the acceleration process is described using particle tracking in Section~\ref{single_particle_dynamics}. We describe the work-energy bifurcation, which reveals two distinct groups of particles. The particles from the first group gain energy in equal measure from the longitudinal and transverse electric fields, while those from the second group derive most of their energy from the transverse electric field. These distinct groups of particles indicate the difference in wave-particle interaction between the bulk and superthermal populations. In Section~\ref{sec:reflected_particles}, we discuss the role of the shock-reflected particles. The conclusions are presented in Section~\ref{conclusions}. \section{Physical setup and simulation details}\label{method} The physical setup of the problem is schematically illustrated in Fig.~\ref{fig:density_fieldenergy}(a): two streams of cold electron-positron plasmas counter-propagate along the $x$-direction and come into initial contact in the plane marked by a black dashed line. In the rest of the manuscript, we assume that the two electrically-neutral streams are mirror images of each other, and that their initial Lorentz factors and laboratory-frame densities for each of the species are $\gamma_{0} = 20$ and $n_0$, respectively.
The mirror symmetry enables a standard computationally-efficient approach~\cite{Kato_2007} to modeling colliding plasmas: the stream $2$ particles are reflected off a stationary wall placed in the plane of contact ($x=0$). Perfectly-conducting boundary conditions for the electromagnetic fields are imposed at $x=0$. Under this approach, unperturbed streaming plasma is continuously injected through the right boundary. As the plasma is reflected by the wall and collides with the incoming plasma, counter-streaming plasma is formed in the overlap region. The resulting extreme anisotropy of the mixed plasma triggers the WI and eventually leads to the formation of a collisionless shock (red dashed line in Fig.~\ref{fig:density_fieldenergy}(a)) propagating in the $+x$-direction. As the incoming cold plasma encounters strong electromagnetic fields in the shock region, it gets thermalized and forms an isotropic hot plasma in the shocked region. By symmetry, the plasma in this region behind the shock (further referred to as the downstream region) has a vanishing overall drift velocity. The simulation is carried out in the laboratory reference frame of a stationary reflective wall, where the downstream plasma is, on average, at rest. The above-described problem is numerically solved using a 2D ($y$-independent) version of the PIC code VLPL \cite{Pukhov_1999, Pukhov_jcp_2020}. A novel rhombi-in-plane scheme~\cite{Pukhov_jcp_2020}, designed to suppress the numerical Cherenkov instability, is used for updating the electromagnetic fields. The non-vanishing electromagnetic field components $B_y$ (out-of-plane), $E_x$ (longitudinal), and $E_z$ (transverse) are assumed to be functions of $(x,z)$ and $t$, and the only non-vanishing components of the electron/positron momenta are $p_x$ and $p_z$. The natural time and length scales are the inverse plasma frequency $\omega_{p}^{-1}$ and the inverse plasma wave number $k^{-1}_{p} = c/\omega_{p}$, respectively. Here $\omega_{p} = \left( 4 \pi n_{0} e^{2}/\gamma_{0} m_{e} \right)^{1/2}$ is the relativistic plasma frequency, and $-e$ and $m_{e}$ are the electric charge and mass of an electron. The size of the simulation domain is chosen to be $L_{x} \, \times \, L_{z} = 2100 k^{-1}_{p} \times 67 k^{-1}_{p} $. The following spatial grid cell sizes $(\Delta x, \Delta z)$ and time step $\Delta t$ were used: $\Delta x = 0.07 k^{-1}_{p}$, $\Delta z = 3 \Delta x$, and $\Delta t = \Delta x/c$~\cite{Pukhov_jcp_2020}. For all simulations, $16$ particles per grid cell per species were used. \begin{figure}[ht!] \includegraphics[width=1\linewidth]{density_fieldenergy_shortrange1_820.pdf} \caption{(a) Two counter-propagating pair plasmas collide and produce two counter-propagating shocks. (b-e) Typical snapshots ($t = t_{\rm fin} \simeq 2040 \omega^{-1}_{p}$) of the shocked (``downstream'') and pre-shocked (``upstream'') plasmas. (b) Electron density, (c) magnetic field, (d) transversely averaged electron density, and (e) transversely averaged electromagnetic field energy densities: longitudinal electric $\epsilon_{x}^{(E)}$ (blue), transverse electric $\epsilon_{z}^{(E)}$ (magenta), and transverse magnetic $\epsilon_{B}$ (black) lines.
Inset: zoom into the $720 k_{p}^{-1} \leq x \leq 820 k_{p}^{-1}$ range.} \label{fig:density_fieldenergy} \end{figure} \section{Review of the Shock Structure}\label{structure} Below we review the well-established properties of the shocked (downstream) and pre-shocked (upstream) plasmas, as well as those of the shock created by the collision of counter-streaming plasmas \cite{Silva_2003,Milosavljevi__2006,Spitkovsky_aip_2005,Spitkovsky_2008,Keshet_apj_2009, Sironi_2013,Haugbolle_apj_2011,Lemoine_prl_2019,Lemoine_pre_2019a,Lemoine_pr_2019b,Lemoine_pre_2019c}. Unless stated otherwise, all figures are plotted at $\omega_{p}t_{\rm fin} \simeq 2040$, chosen to ensure that the shock region is well formed. A sharp density transition from $n_e=n_0$ upstream to $n_{s}/n_{0} = \Gamma_{\rm ad}/(\Gamma_{\rm ad}-1) + 1/[\gamma_{0}(\Gamma_{\rm ad}-1)] \sim 3.1$~\cite{Blandford_pof_1976,Kirk_npp_1999} downstream of the shock, shown in Fig.~\ref{fig:density_fieldenergy}(d), corresponds to a hydrodynamic shock with an adiabatic index $\Gamma_{\rm ad}=3/2$ of a 2D gas. On the $x>0$ side of the contact point $x=0$, the shock propagates with the velocity $v_{s}/c = (\Gamma_{\rm ad} - 1)\left((\gamma_{0} - 1)/(\gamma_{0} + 1)\right)^{1/2} \sim 0.475$~\cite{Blandford_pof_1976,Kirk_npp_1999} in the $+x$-direction. Effective collisions inside the shock are provided by the turbulent magnetic field plotted in Fig.~\ref{fig:density_fieldenergy}(c), where complex multi-filamentary structures with a typical transverse scale of $\sim 5 \, k_{p}^{-1}$ can be observed reaching from the shock into the upstream region. Magnetic filaments are elongated in the direction of the incoming upstream plasma. While the magnetic field is quasi-static in the downstream region, it is highly dynamic in the upstream region. This time dependence results in a finite longitudinal electric field $E_x$. The relative magnitudes of different components of the electromagnetic field can be appreciated from the respective plots of their transversely averaged energy densities shown in Fig.~\ref{fig:density_fieldenergy}(e). The largest normalized energy density $\epsilon_{B}(x) = \langle B^{2}_{y} \rangle /8 \pi \gamma_{0} n_{0} m_{e} c^{2}$ is associated with the magnetic field (black line), while the smallest one, $\epsilon_{x}^{(E)}(x) = \langle E^{2}_{x} \rangle /8 \pi \gamma_{0} n_{0} m_{e} c^{2}$, belongs to the longitudinal electric field (blue line). Here $\langle \cdot \rangle$ denotes averaging over the transverse $z$ coordinate. The intermediate energy density $\epsilon_{z}^{(E)}(x) = \langle E^{2}_{z} \rangle /8 \pi \gamma_{0} n_{0} m_{e} c^{2}$ is associated with the transverse electric field (magenta line). Note that both $\epsilon_{B}(x)$ and $\epsilon_{z}^{(E)}(x)$ reach far into the upstream region, forming an important pre-shock region~\cite{Lemoine_prl_2019,Lemoine_pre_2019a,Lemoine_pr_2019b,Lemoine_pre_2019c} discussed below in the context of Fermi acceleration. In our simulation, the magnetic field energy density peaks at $\sim 20 \%$ of the equipartition energy in the shock transition region and decays away from the shock front.
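For convenience, the quoted estimates can be reproduced with the short script below. It is an illustrative sketch only (it is not part of the VLPL simulation); it merely evaluates the Rankine-Hugoniot expressions above for $\gamma_0=20$, together with the grid bookkeeping implied by the parameters of Section~\ref{method}. The downstream temperature it returns is the Rankine-Hugoniot value used later for the spectral fits.
\begin{verbatim}
# Illustrative sketch (not the VLPL code): Rankine-Hugoniot estimates and
# grid bookkeeping for the parameters of Section 2 (gamma_0 = 20, 2D gas).
import math

gamma0   = 20.0
Gamma_ad = 1.5                   # adiabatic index of a 2D gas

n_ratio = Gamma_ad/(Gamma_ad - 1.0) + 1.0/(gamma0*(Gamma_ad - 1.0))
v_sh    = (Gamma_ad - 1.0)*math.sqrt((gamma0 - 1.0)/(gamma0 + 1.0))
kT_RH   = 0.5*(gamma0 - 1.0)     # downstream temperature in units of m_e c^2
print(n_ratio, v_sh, kT_RH)      # ~3.1, ~0.475 (in units of c), 9.5

# Grid bookkeeping: L_x = 2100/k_p, L_z = 67/k_p, dx = 0.07/k_p, dz = 3*dx
Nx = round(2100/0.07)            # 30000 cells
Nz = round(67/0.21)              # ~319 cells
macroparticles = Nx*Nz*16*2      # 16 per cell per species (e- and e+)
print(Nx, Nz, macroparticles)    # roughly 3e8 macroparticles in total
\end{verbatim}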
\subsection{The role of the longitudinal electric field in electron energy Maxwellization}\label{sec:long_field} We further note from Fig.~\ref{fig:density_fieldenergy}(e) that the longitudinal electric field energy is vanishingly small in the upstream region, whereas the transverse electric and magnetic field energies are much stronger and comparable to each other: $\epsilon_{z}^{(E)} \approx \epsilon_B$ everywhere in the upstream region~\cite{Lemoine_prl_2019,Lemoine_pre_2019a}. The latter property is due to the fact that the dominant current filaments in the upstream (including the pre-shock) region are associated with highly-directional flows of electrons and positrons that have not yet undergone any significant isotropization, as can be seen from Fig.~\ref{fig:momentum_bifurcation}(b). The presence of a small but finite longitudinal electric field in the shock transition region has been related to the oblique modes associated with the WI~\cite{Bret_PRE_2010,Lemoine_prl_2019}. The role of the transverse component of the electric field in accelerating superthermal particles has been generally recognized~\cite{Lemoine_pre_2019c}. Its importance is not surprising because of its large amplitude in the pre-shock region. On the other hand, the role of the longitudinal electric field in providing energy Maxwellization to the medium-energy particles (both downstream and upstream) has not been previously studied. The reason for neglecting the $E_x$ component is that it is considerably smaller than $E_z$ in the pre-shock region. Nevertheless, the amounts of mechanical work $W_{x}^{(j)}$ ($W_{z}^{(j)}$) done by the longitudinal (transverse) electric field components on the $j$th upstream particle interacting with the shock field could be comparable to each other. Here we define \begin{equation}\label{eq:work_done} W_{x,z}^{(j)} = q^{(j)} \int_{-\infty}^{+\infty}dt E_{x,z}(x^{(j)}(t),z^{(j)}(t)) v_{x,z}^{(j)}(t), \end{equation} where $v_{x,z}^{(j)}=p_{x,z}^{(j)}/m_{e}\gamma^{(j)}$ are the time-dependent longitudinal (transverse) velocity components of the $j$th particle. From here onwards, we assume that the particles are electrons, and $q^{(j)}=-e$. We further note from Figs.~\ref{fig:momentum_bifurcation}(a,b) that most of the counter-streaming ($p_x<0$) particles are not yet isotropized, {\it i.e.}, $|p_z^{(j)}| \ll |p_x^{(j)}|$. This creates a surprising opportunity for $|W_x^{(j)}| \simeq |W_z^{(j)}|$ despite $|E_x|\ll |E_z|$ everywhere in the pre-shock region. Moreover, we find that the two electric field energies, $\epsilon_{z}^{(E)}$ and $\epsilon_{x}^{(E)}$, are comparable in the downstream region; see the inset in Fig.~\ref{fig:density_fieldenergy}(e). The relativistic pair plasmas incident on the shock region are fully isotropized behind the shock, as can be observed by comparing Figs.~\ref{fig:momentum_bifurcation}(a) and (b). The out-of-plane magnetic field is responsible for efficient isotropization of the incident plasma: the magnetic field energy $\epsilon_B$ dominates the downstream region immediately behind the shock, where it is much larger than the electric field energy. Therefore, just as in the pre-shock region, it is plausible for the two electric field components to do comparable mechanical work on the incident particles. In the next section, we classify plasma electrons interacting with the shock into two categories defined by the relative magnitudes of $W_x$ and $W_z$.
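In practice, Eq.~(\ref{eq:work_done}) is evaluated along each stored trajectory by a simple quadrature. The sketch below illustrates this diagnostic; it is not the production analysis code, and the field and velocity histories it uses are synthetic placeholders in the normalized units of the simulation.
\begin{verbatim}
# Sketch of the diagnostic of Eq. (work_done) for one tracked electron.
# The arrays are placeholders for quantities sampled along a PIC
# trajectory; normalized units are assumed, with q = -e = -1.
import numpy as np

def work_components(t, Ex, Ez, vx, vz, q=-1.0):
    """W_x = q * int E_x v_x dt and W_z = q * int E_z v_z dt,
    evaluated with the trapezoidal rule along the stored trajectory."""
    Wx = q*np.trapz(Ex*vx, t)
    Wz = q*np.trapz(Ez*vz, t)
    return Wx, Wz

# Example with synthetic data (for illustration only):
t  = np.linspace(0.0, 2040.0, 20001)      # in units of 1/omega_p
Ex = 0.01*np.sin(0.05*t)                  # placeholder field histories
Ez = 0.10*np.sin(0.07*t + 0.3)
vx = -0.99*np.ones_like(t)                # nearly ballistic upstream motion
vz = 0.05*np.cos(0.07*t)
print(work_components(t, Ex, Ez, vx, vz))
\end{verbatim}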
\begin{figure}[ht] \includegraphics[width=1\linewidth]{momentum_bifurcation_3.pdf} \caption{Phase space densities in the downstream ($x<950 k_p^{-1}$), upstream ($x>1050 k_p^{-1}$), and shock/pre-shock ($950<x<1050 k_p^{-1}$) regions. (a) Longitudinal (all particles) and (b) transverse (counter-stream particles) color-coded momentum phase space density. (c-e) Decomposition of the kinetic energy gain/loss into the work done by longitudinal (blue line) and transverse (red line) electric fields inside selected spatial windows: (c) downstream ($50 <x< 150 k_p^{-1}$), (d) inside the shock ($1050<x<1150 k_p^{-1}$), and (e) upstream ($x>1150 k_p^{-1}$).} \label{fig:momentum_bifurcation} \end{figure} \section{Emergence of the Energy Bifurcation}\label{bifurcation} To quantify the contributions of the longitudinal and transverse electric fields to individual particles' kinetic energy increments $\Delta \varepsilon^{(j)} \equiv (\gamma^{(j)} - \gamma_0)m_{e}c^2$, where $\gamma^{(j)}$ is the final Lorentz factor of the $j$th particle, we break up all particles located within a given spatial window $L_1 < x < L_2$ into energy bins centered around their final Lorentz factors $\gamma$. The following values of $L_1$ and $L_2$, indicated by the dashed lines in Fig.~\ref{fig:momentum_bifurcation}(a,b), were chosen: (i) $L_1=50 k^{-1}_{p}$ and $L_2 = 150 k^{-1}_{p}$ for the downstream region, (ii) $L_1= 1050k^{-1}_{p}$ and $L_2 = 1150 k^{-1}_{p}$ for the shock/pre-shock region, and (iii) $L_1= 1150 k^{-1}_{p}$ and $L_2 = 2000 k^{-1}_{p}$ for the upstream region. In order to concentrate specifically on the electrons that have already completed their interaction with, and thermalization by, the shock, only those particles with $v_x^{(j)} > 0$ were counted in the upstream region. Note that a considerably larger spatial window was used for the upstream particles because of their relatively small number. Inside each spatial window, an ensemble of particles whose final energies lie in a bin centered around $\gamma$ is selected, and the average energy gains $W_{x,z}(\gamma) \equiv \langle W_{x,z}^{(j)} \rangle$ are calculated over the $\gamma$-dependent ensembles. The results for $W_{x,z}(\gamma)$ are plotted in Figs.~\ref{fig:momentum_bifurcation}(c-e) as a function of the final Lorentz factor $\gamma$ for the downstream (c), shock/pre-shock (d), and upstream (e) spatial windows. The most dramatic result corresponds to the shocked plasma downstream of the shock: the $W_{x}(\gamma)$ (black) and $W_{z}(\gamma)$ (orange) curves plotted in Fig.~\ref{fig:momentum_bifurcation}(c) exhibit a clear bifurcation at $\gamma \equiv \gamma_{\rm bf} \approx 2 \gamma_0$. Note that no such bifurcation was found for the electrons residing in the shock region, as can be seen from Fig.~\ref{fig:momentum_bifurcation}(d). We attribute this to the fact that the electrons inside the shock have not yet completed their interaction with the turbulent electromagnetic fields inside and outside of the shock. Similarly, the small population of particles reflected by the shock back into the upstream region (see Fig.~\ref{fig:momentum_bifurcation}(e)) does not exhibit the same behavior in the $W_{x,z}(\gamma)$ graphs as the downstream particles. This behavior is discussed in Section~\ref{sec:reflected_particles}. Below we concentrate on the analysis of particle energy gain/loss in the downstream region.
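The binning procedure just described can be summarized by the following sketch. It is an illustration rather than the analysis code used for Fig.~\ref{fig:momentum_bifurcation}(c-e); the per-particle arrays (final positions, final Lorentz factors, and the accumulated $W_{x}^{(j)}$, $W_{z}^{(j)}$) are placeholders for data gathered during the run.
\begin{verbatim}
# Sketch of the binning behind the W_{x,z}(gamma) curves (illustration).
import numpy as np

def work_energy_curves(x_fin, gamma_fin, Wx, Wz, L1, L2, nbins=40):
    sel = (x_fin > L1) & (x_fin < L2)          # spatial window
    g   = gamma_fin[sel]
    edges = np.logspace(np.log10(g.min()), np.log10(g.max()), nbins + 1)
    idx   = np.digitize(g, edges) - 1
    Wx_avg = np.array([Wx[sel][idx == i].mean() if np.any(idx == i)
                       else np.nan for i in range(nbins)])
    Wz_avg = np.array([Wz[sel][idx == i].mean() if np.any(idx == i)
                       else np.nan for i in range(nbins)])
    centers = np.sqrt(edges[:-1]*edges[1:])    # geometric bin centers
    return centers, Wx_avg, Wz_avg
\end{verbatim}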
\subsection{Properties of Thermalized Plasma Downstream From the Shock}\label{sec:downstream} Based on the bifurcated curves in Fig.~\ref{fig:momentum_bifurcation}(c), we identify two groups of particles in the downstream region: particles with moderate ($\gamma < \gamma_{\rm bf}$) and particles with large ($\gamma > \gamma_{\rm bf}$) kinetic energies. The first group of particles, which we refer to as the bulk population, is thermalized to a relativistic Maxwellian distribution. Remarkably, both the longitudinal and transverse electric fields perform equal work on the bulk plasma particles: $W_{x}(\gamma) \approx W_{z}(\gamma)$ for all $\gamma < \gamma_{\rm bf}$. Note that the bulk population contains both particles that have been slowed down by the electric fields of the shock ($W_{x,z}(\gamma)<0$ for $\gamma < \gamma_0$) and the ones that have nearly doubled their energy. To our knowledge, this is the first computational demonstration of the equal contributions of the longitudinal and transverse components of the electric field to the Maxwellization of the shocked pair plasma. While the importance of the longitudinal field component $E_x$ has been known in electron-ion plasmas~\cite{Spitkovsky_ions_2008,Kumar_2015}, it has not yet been appreciated for collisionless shocks in pair plasmas~\cite{Lemoine_pr_2019b}. The particles of the second group, which we refer to as superthermal particles, acquire most of their kinetic energy from the transverse electric field, {\it i.e.}, $W_{z}(\gamma) > W_{x}(\gamma)$ for all $\gamma > \gamma_{\rm bf}$, as shown in Fig.~\ref{fig:momentum_bifurcation}(c). The bifurcation point at $\gamma = \gamma_{\rm bf}$ in the work-energy graph separates the population of the bulk particles gaining energy in the downstream region of the shock from the population of superthermal particles gaining energy in the course of repetitive bouncing in the shock/pre-shock region. By carrying out simulations for different periods of time, we have observed that while the ratio of the work performed by the transverse and longitudinal electric fields increases with time for the superthermal population, the value of the bifurcation Lorentz factor $\gamma_{\rm bf}$ remains time-invariant. We have also carried out simulations with varying initial (upstream) Lorentz factors ($\gamma_{0} = 2-50$). In Fig.~\ref{fig:bifurcation} we plot the ratio $\gamma_{\rm bf}/\gamma_0$ as a function of $\gamma_0$. Remarkably, we found that for relativistic pair plasma shocks ($\gamma_{0} \geq 5$) the ratio $\gamma_{\rm bf}/\gamma_0$ remains constant ($\approx 2$). However, for mildly relativistic shocks, this ratio is found to be much higher (e.g., $\approx 3.7$ for $\gamma_0 = 2$). It is expected that, by symmetry, positrons and electrons exhibit the same bifurcating work-energy graphs. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{compare_gamma_bifurcation_multiple_gamma0_2_50.pdf} \caption{The ratio of the bifurcation to initial (upstream) Lorentz factors $\gamma_{\rm bf}/\gamma_0$ for different initial Lorentz factors $\gamma_0$.} \label{fig:bifurcation} \end{figure} Another manifestation of the emergence of the superthermal population comes from the energy spectrum of thermalized electrons in the shocked region of the plasma. A typical spectrum plotted in Fig.~\ref{fig:particle_spectrum} corresponds to thermalized electrons inside a $100 \, k_{p}^{-1}$-wide slice in the downstream region at $t_{\rm fin} \simeq 2040 \omega^{-1}_{p}$.
We have fitted the numerically simulated spectrum (black line) to the sum of a Maxwell-J\"{u}ttner (MJ)~\cite{juttner_AnnPhys1911} spectrum (red line) and a power-law spectrum (blue line). Specifically, we chose the following analytic expression for the distribution function: \begin{eqnarray} \nonumber f \left( \frac{\gamma}{\gamma_0} \right) & = & C_1 \frac{\gamma}{\gamma_0} \exp \left( - \frac{\gamma}{ \Theta} \right) + \\ \label{eq:distribution} && C_2 \left( \frac{\gamma}{\gamma_0} \right) ^{-p} \min\left\lbrace 1, \exp \left( - \frac{\gamma - \gamma_{cut}}{\Delta \gamma_{cut}} \right) \right\rbrace, \end{eqnarray} where the first and second terms on the RHS correspond to the MJ and power-law (with an exponential cutoff)~\cite{Spitkovsky_2008,Stockem_ppcf_2012} distributions, respectively. The power law is taken to vanish ($C_2 = 0$) for $\gamma < \gamma_{\rm min}$, $\Theta = k_{B}T/ m_{e}c^{2}$ is the dimensionless temperature, $k_{B}$ is the Boltzmann constant, and $C_{1,2}$ are the normalization constants. $\gamma_{\rm cut}$ and $\Delta \gamma_{\rm cut}$ denote the onset and the width of the high-energy cutoff, respectively. In Fig.~\ref{fig:particle_spectrum}(a), the theoretical MJ distribution is plotted for $k_{B}T = 9.3 m_{e}c^{2}$, which is in excellent agreement with $k_{B}T_{\rm RH} = 0.5 (\gamma_0 -1)m_{e}c^{2} = 9.5 m_{e}c^{2}$ predicted by the Rankine-Hugoniot conditions~\cite{Blandford_pof_1976,Kirk_npp_1999} for complete thermalization in the downstream region. The power law is plotted for $p = 2.5$, $\gamma_{\rm min} = 3 \gamma_{0}$, $\gamma_{\rm cut} = 10 \gamma_{0}$ and $\Delta \gamma_{\rm cut} = 6 \gamma_{0}$. Deviation from the MJ spectrum is clearly observed for $\gamma > 2\gamma_0$. Understanding the origins of the two groups of particles (bulk (group I) and superthermal (group II)) requires that we examine individual particle trajectories in greater detail: their entrance into the shock, subsequent interaction with the shock, and their eventual transition into the downstream region. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{particle_spectrum_downstream_upstream.pdf} \caption{(a) Electron energy spectra inside the $50 k_{p}^{-1} < x < 150 k_{p}^{-1}$ spatial window downstream: simulated (black line) and its fit to the sum of a Maxwell-J\"{u}ttner (red line) and a power-law $\gamma^{-2.5}$ (blue line) spectra. (b) Electron energy spectra of the upstream reflected particles inside the $1150 k_{p}^{-1} < x < 2000k_{p}^{-1}$ spatial window. Inset: transversely averaged density of upstream-reflected electrons at $t_{\rm fin} = 2040 \omega^{-1}_{p}$.} \label{fig:particle_spectrum} \end{figure} \subsection{Particle Tracking Results}\label{single_particle_dynamics} To understand field-particle interactions in different regions of the shock, we tracked electrons' normalized energies $\gamma(t)$ and positions $x(t)$ between $t \equiv t_0=0$ and $t \equiv t_{\rm fin}=2040 \omega^{-1}_{p}$, and classified them based on their final energies $\gamma_{\rm fin} \equiv \gamma(t_{\rm fin})$ and positions $x_{\rm fin} \equiv x(t_{\rm fin})$ with respect to the shock. Specifically, four classes of particles were considered: (I) two classes of the bulk particles with $\gamma_{\rm fin} < \gamma_{\rm bf}$ that ended up downstream of the shock's position $x_{\rm sh}(t) \approx v_{\rm sh}t$ (top row of Fig.~\ref{fig:four_particle_trajectory}), and (II) two classes of superthermal particles with $\gamma_{\rm fin} \gg \gamma_{\rm bf}$ (bottom row of Fig.~\ref{fig:four_particle_trajectory}).
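This classification can be summarized schematically as follows. The sketch is illustrative only: the per-particle inputs are placeholders for the tracked data, and the shock position is approximated by $x_{\rm sh}(t_{\rm fin}) \approx v_{\rm sh}t_{\rm fin}$ with $v_{\rm sh} \approx 0.475c$ and $\gamma_{\rm bf} \approx 2\gamma_0$, as quoted above.
\begin{verbatim}
# Sketch of the four-class bookkeeping used for the tracked electrons
# (placeholder inputs; normalized units with c = 1, so x_sh ~ 0.475*t_fin).
gamma0, gamma_bf = 20.0, 40.0          # gamma_bf ~ 2*gamma0 (see text)
t_fin, v_sh = 2040.0, 0.475
x_sh_fin = v_sh*t_fin                  # shock position at t_fin, in 1/k_p

def classify(gamma_fin, x_fin):
    """Return one of the four classes described in the text."""
    if gamma_fin < gamma_bf:           # group I: bulk particles
        return 'I-a (bulk, gained energy)' if gamma_fin > gamma0 \
               else 'I-b (bulk, lost energy)'
    # group II: superthermal particles
    return 'II-a (superthermal, downstream)' if x_fin < x_sh_fin \
           else 'II-b (superthermal, reflected upstream)'

print(classify(25.0,  100.0))          # bulk electron that gained energy
print(classify(300.0, 1500.0))         # superthermal, reflected upstream
\end{verbatim}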
Group I electrons comprise those that gained or lost energy, as exemplified by representative particles in Figs. \ref{fig:four_particle_trajectory}(a) and (b), respectively. Group II electrons that gained a significant amount of energy from the shock comprise those that have crossed the shock into the downstream region, as shown in Fig.~\ref{fig:four_particle_trajectory}(c), and those that have reflected from the shock into the upstream region, as shown in Fig.~\ref{fig:four_particle_trajectory}(d). The black lines in Fig.~\ref{fig:four_particle_trajectory} indicate electrons' trajectories $x(t)$. Shock's trajectory $x_{\rm sh}(t)$ separates the blue (upstream) region from the gray (downstream) region. The dotted red line indicates the edge of the pre-shock region $x_{\rm p-sh}(t)$ defined in such a way that the magnetic energy declines from its peak as $\epsilon_B(x_{\rm p-sh})/\epsilon_B(x_{\rm sh})=1/e^2$ (see Fig.~\ref{fig:density_fieldenergy}(e) for a representative profile of $\epsilon_B$ as a function of $x$). The blue lines in Figs.~\ref{fig:four_particle_trajectory}(a-d) indicate electrons' Lorentz factors $\gamma(t)$ normalized by $\gamma_0$, i.e. energy gain (loss) correspond to $\gamma_{\rm fin}/\gamma_0 > 1$ ($\gamma_{\rm fin}/\gamma_0 < 1$). By comparing particles' trajectories and energy changes, it is easy to deduce when those energy changes have occurred. \begin{figure}[ht]\label{fig:four_particle_trajectory} \includegraphics[width=1\linewidth]{four_particle_path_new2.pdf} \caption{Time-dependent trajectories and normalized energies of four representative electrons. Color-coded: transversely-averaged electron density separated by the shock line $x_{\rm sh}(t)$. Dotted red line: the boundary of the pre-shock $x_{\rm p-sh}(t)$. Black lines (left scale): horizontal trajectories $x(t)$, blue lines (right scale): energies $\gamma(t)$. (a,b) Typical bulk electrons gaining (a) and losing (b) energy. (c,d) Superthermal electrons moving into downstream (c) and upstream (d) regions.} \end{figure} Particles of the first and second classes of group I gain or lose moderate amounts of energy that are comparable to their initial energies $\varepsilon_0 = \gamma_0 m_{e}c^2$. Those particles cross the shock once, become thermalized, and turn into bulk plasma in the downstream region. The downstream plasma primarily consists of these two classes of particles, as they form the Maxwellian portion of the spectrum shown in Fig.~\ref{fig:particle_spectrum}. As the bifurcation in Fig.~\ref{fig:momentum_bifurcation}(c) indicates, these two classes of particles, on average, gain (for $W_{x,z}>0$) or lose (for $W_{x,z}<0$) approximately equal amounts of energy from both components of the electric field. This is related to the fact that the downstream region of the plasma contains almost equal amounts of electromagnetic energies $\epsilon_{x}^{(E)}$ and $\epsilon_{z}^{(E)}$ associated with the longitudinal and transverse electric field components, respectively (see inset in Fig.~\ref{fig:density_fieldenergy}(e)). Additional mixing between longitudinal and transverse momenta $p_x$ and $p_z$ is provided by the magnetic field $B_y$ which is much larger than either $E_x$ or $E_z$ components of the electric field. A small number of particles which are either reflected by the shock, or diffuse from the downstream to upstream region, do not immediately cross the high-field region between the shock and the pre-shock boundary shown by a dashed line in Figs.~\ref{fig:four_particle_trajectory}(c) and (d). 
Such group II particles can stay in the pre-shock region for a long time, gaining significant energy from the strong transverse electric field. An example of a particle belonging to the third class of bulk electrons that gain considerable energy while eventually moving through the shock is shown in Fig.~\ref{fig:four_particle_trajectory}(c). This specific particle (which we label as $j=3$) stays in the pre-shock region for almost $(\Delta t)^{(3)}_{\rm p-sh} \approx 700\omega_p^{-1}$, gains $\Delta \varepsilon^{(3)} \approx 14 \varepsilon_0$ by experiencing numerous rapid energy changes that can be characterized as first-order Fermi accelerations, and eventually crosses the shock transition region into the downstream region. In agreement with the energy bifurcation curve, $W_z^{(3)} \approx 12 W_x^{(3)}$, {\it i.e.}, superthermal electrons crossing into the downstream region gain more than an order of magnitude from the transverse component of the electric field than from the longitudinal one. The reason for this is that superthermal electrons spend a long period of time $(\Delta t)^{(3)}_{\rm p-sh}$ in the pre-shock region, where they are subjected to $E_z \gg E_x$. In combination with isotropization provided by a strong magnetic field in the pre-shock region, this results in $W_z^{(3)} \gg W_x^{(3)}$. \section{Reflected particles in the upstream region}\label{sec:reflected_particles} At the same time, a minority of electrons interacting with the pre-shock region for a long time eventually get reflected and move into the upstream region. The number of reflected electrons and positrons is much smaller than of those propagating past the shock into the downstream region. Qualitatively, this is related to the fact that the combination of the transverse magnetic and electric fields in the pre-shock region creates a stronger deflecting force for the particles traveling in the positive $x$-direction than for their counterparts with $v_x<0$. Therefore, the pre-shock creates an effective one-way barrier that makes it easier for the thermalized particles to diffuse downstream from the shock than to reflect back into the upstream region. The energy spectrum of the reflected electrons population is shown in Fig.~\ref{fig:particle_spectrum}(b). It peaks at a much higher Lorentz factor $\gamma^{({\rm up})}_{({\rm peak})} \approx 5\gamma_0$ than the $\gamma^{({\rm down})}_{({\rm peak})} \approx \gamma_0$ peak of the energy spectrum of the downstream electron population. Therefore, based on the plots of the averaged $W_x(\gamma)$ and $W_z(\gamma)$ in Fig.~\ref{fig:momentum_bifurcation}(e), we conclude that most of the reflected pairs gain most of their energy from the transverse electric field component than from the longitudinal one. A typical trajectory and energy gain plots for a representative class-four particle are shown in Fig. \ref{fig:four_particle_trajectory}(d). The particle spends roughly the same time interacting with the pre-shock as the one shown in Fig.~\ref{fig:four_particle_trajectory}(c), gains approximately the same energy, and eventually becomes a counter-streaming particle penetrating deep into the upstream region. Next, we discuss the importance of the counter-streaming particles for seeding the WI. The counter-streaming population propagating ahead of the shock, plotted in the inset of Fig. ~\ref{fig:particle_spectrum}(b) and also observed in Fig.~\ref{fig:momentum_bifurcation}(a), is essential for maintaining the shock. 
For example, the density of counter-streaming particles determines the growth rate and saturation of the secondary WI manifested magnetic field filaments in the upstream region, as shown in the Fig.~\ref{fig:density_fieldenergy}(d). Note that the density of the counter-streaming particles decreases as they move away from the shock transition region. This effect is illustrated by Fig.~\ref{fig:gamma_density}, where we plot the transversely-averaged density of electrons with Lorentz factors within the following ranges: (1) $\gamma_0 < \gamma < \gamma_{\rm bf}$ (blue line), (2) $\gamma_{\rm bf} < \gamma < \gamma^{({\rm up})}_{({\rm peak})}$ (orange line), and (3) $\gamma^{({\rm up})}_{({\rm peak})} < \gamma < 2 \gamma^{({\rm up})}_{({\rm peak})}$ (yellow line). The general trend is the same for all three energy ranges: higher density in the downstream than in the upstream region. It confirms that the particles more easily escape into the downstream than into the upstream because of the deep penetration of the transverse electric and magnetic fields into the upstream region shown in Fig.~\ref{fig:density_fieldenergy}(e). Only the highest energy highly-collimated counter-streaming particles penetrate deep into the upstream as they are less scattered by the upstream electromagnetic fields, which clearly explains rapid density fall of the counter-streaming particles in the upstream region shown in the inset of Fig. \ref{fig:particle_spectrum}(b). Another reason behind the lower density of accelerated particles in the upstream is that the reflected particles seed the secondary Weibel instability in the upstream region, thereby losing energy in the process~\cite{Lemoine_pre_2019a}. As more particles are accelerated by the shock, the resulting sub-population of fast particles catches up with the slower particle reflected at earlier times. This leads to overall density increase of counter-streaming particles with time, which is likely to be the reason why current PIC simulations do not reach a steady-state~\cite{Keshet_apj_2009}. \begin{figure}[ht]\label{fig:gamma_density} \includegraphics[width=1\linewidth]{counterstream_density.pdf} \caption{Transversely averaged electron densities for kinetic energy ranges $\gamma_0 < \gamma < \gamma_{\rm bf}$ (blue line), $\gamma_{\rm bf} < \gamma < \gamma^{({\rm up})}_{({\rm peak})}$ (orange line), and $\gamma^{({\rm up})}_{({\rm peak})} < \gamma < 2 \gamma^{({\rm up})}_{({\rm peak})}$ (yellow line). All plots correspond to $t=t_{\rm fin}$.} \end{figure} Not surprisingly, some of the most energetic electrons can be found among those reflected upstream of the shock. The trajectory of one such simulated particle shown in Fig.\ref{fig:trajectory_single_particle}(a) (blue line, left scale) demonstrates that the most energetic class-four particles ``surf'' around the shock and gain energy continuously (blue line, right scale). The temporal evolution of the decompositions of the kinetic energy change into $W_{x}^{(4)}(t)$ and $W_{z}^{(4)}(t)$ are plotted in Fig.~\ref{fig:trajectory_single_particle}(b). The jumps in $W_{z}^{(4)}(t)$ clearly coincide with multiple scatterings of the particle around the shock region. Such scattering in the shock transition region can be identified as a beginning of the first order Fermi acceleration. 
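For completeness, the binned densities of Fig.~\ref{fig:gamma_density} discussed above are obtained by histogramming the longitudinal positions of the electrons that fall in each Lorentz-factor range and normalizing by the bin volume. A minimal sketch (the particle arrays and the transverse extent \texttt{Ly} are placeholders for the actual simulation output) reads:
\begin{verbatim}
import numpy as np

def transversely_averaged_density(x, gamma, g_lo, g_hi, x_edges, Ly):
    """Transversely averaged density of electrons with g_lo <= gamma < g_hi:
    count macro-particles in longitudinal bins and divide by the slab volume
    (bin width times the transverse extent Ly)."""
    sel = (gamma >= g_lo) & (gamma < g_hi)
    counts, _ = np.histogram(x[sel], bins=x_edges)
    return counts / (np.diff(x_edges) * Ly)

# The three curves correspond to the ranges (gamma_0, gamma_bf),
# (gamma_bf, gamma_peak_up) and (gamma_peak_up, 2*gamma_peak_up).
\end{verbatim}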
\begin{figure}[ht] \label{fig:trajectory_single_particle} \centering \includegraphics[width=1\linewidth]{most_energetic_particle_energy_combine.pdf} \caption{Time evolution of the longitudinal position and normalized energy of a representative fast electron reflected by the shock. (a) Black line: particle trajectory $x(t)$, blue line: Lorentz factor $\gamma(t)$. Color-coded: transversely-averaged plasma density. (b) Mechanical work performed by the longitudinal (black line) and transverse (orange line) electric field components, and the Lorentz factor (blue line) of the particle.} \end{figure} \section{Conclusions} \label{conclusions} In conclusion, we have studied particle acceleration by an unmagnetized relativistic collisionless shock in pair (electron-positron) plasmas by means of a first-principles 2D PIC code. The vectorial nature and strong anisotropy of the electric field produced by the classic Weibel instability contribute to highly anisotropic energy gain by fast particles experiencing first order Fermi acceleration in the shock and pre-shock regions. On the other hand, the electric field is found to be fairly isotropic in the shocked plasma region downstream of the shock. The effects of the anisotropic electric fields on the particles upstream and downstream of the shock were studied by implementing a particle tracking routine that follows, as a function of time, the mechanical work done by each field component on the individual particles. One of the key findings of tracking particles' energy gains and losses is that the downstream plasma particles bifurcate into two groups based on their final energy: slow ($\gamma < \gamma_{\rm bf}$) and fast ($\gamma > \gamma_{\rm bf}$) groups of particles. For relativistic shocks, the bifurcation Lorentz factor separating the two groups is empirically found to be $\gamma_{\rm bf} \sim 2 \gamma_{0}$ for a wide range of the initial Lorentz factors $\gamma_0$. Another surprising finding of particle tracking is that the slow group of particles forming the bulk of the shocked plasma gains/loses equal amounts of energy from the longitudinal and transverse electric field components, despite the former being much smaller than the latter in and around the shock. On the other hand, the fast particles gain most of their energy from the transverse electric field component because most of the energy gain takes place inside the shock/pre-shock region, where particles' momenta are already thoroughly anisotropized while the longitudinal component of the electric field is much smaller than the transverse one. Therefore, the results of tracking particles' trajectories and energy exchanges with the two electric field components indicate that the development of a bifurcated energy-gain distribution is a telltale sign of the emergence of Fermi acceleration in the shock/pre-shock regions of the plasma. Future research directions include extending these results to 3D geometry, as well as expanding this work to mixed plasma flows containing hadrons in addition to leptons. \begin{acknowledgments} The work was supported by DOE grant DE-NA0003879. The authors thank the Texas Advanced Computing Center (TACC) for providing HPC resources. \end{acknowledgments} \vspace{5mm} \section*{Data Availability Statement} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction} \hspace*{6mm} It goes without saying that field theories play a central role in drawing a particle picture. They are especially important to explore a way to construct a theoretical view on a $curved$ space-time (of more than four dimensions). Recently-developed theories of strings\cite{1} and membranes\cite{2}, as well as those of two-dimensional gravity\cite{3}, go along this way. If one completes a picture with a general action, one may have a clear understanding about why the fundamental structure is of one dimension (a string), excluding other extended structures of two or more dimensions. The first purpose of this paper is to obtain a general action for the fields of $q$-dimensional differential forms on a general curved space-time. In such a way can we deal not only with strings and $p$-branes ( $p$-dimensional extended objects), but also with vector and tensor fields as assigned on each point of a compact Riemannian manifold (e.g., a sphere or a torus of general dimensions). Our next aim is, as a result of this treatment, to generalize the conventional Maxwell theory to that on the $curved$ space-time of {\it arbitrary dimensions} . Our method is based on the mathematical theory having been developed by de Rham and Kodaira\cite{4}. In the theory of harmonic integrals the elegant theorem, having been now crowned with the names of the two brilliant mathematicians, says that an arbitrary differential form consists of three parts: a harmonic form, a $d$-boundary and a $\delta$-boundary. With this theorem we derive a general Maxwell theory only $kinematically$ , i. e., through mathematical manipulation. We have an electromagnetic field coming from the $d$-boundary, whereas a magnetic monopole field from the $\delta$-boundary. We are thus to have a generalized Maxwell theory with an electric charge and a magnetic monopole on an arbitrary-dimensional curved space-time. In this paper we proceed by taking various concrete examples to construct a field theory. Section 2 treats an algebraic method for obtaining a general act ion. Sections 3 to 5 are devoted to concrete examples. In Sect.6 we put concluding remarks and summary. Two often-used mathematical formulas are listed in Appendix. We hope the method developed here will become one of the steps which one makes forward to construct the field theory of all extended objects \,\,\,-----\,\,\, strings and $p$\,-branes with or without {\it spin degrees of freedom} \,\,\,-----\,\,\, based on algebraic geometry. \vspace{10mm} \setcounter{equation}{0} \setcounter{section}{1} \section{A general action with $q$-forms} \hspace*{6mm} Let us start with a Riemannian manifold $M^n$, where we, the observers, live, and with a submanifold $\bar{M}^m$, where particles live ( $n,m$ : dimension o f the spaces; $n\geq m$). Both $M^n$ and $\bar{M}^m$ are supposed to be $compact$ \,\,\,-----\,\,\, compact only because mathematicians construct a beautiful theory of harmonic forms over $compact$ spaces, and de Rham-Kodaira's theorem or Hodge's theorem has not yet been proven with respect to the differential forms over {\it non-compact} spaces. We will admit the space $\bar{M}^m$ of a particle to be a submanifold of $M^n$. For instance, $\bar{M}^m$ may be a circle or a sphere within an $n$-dimensional (compact) space $M^n$. The local coordinate systems of $M^n$ and $\bar{M}^m$ shall be denoted by $(x^{\mu})$ and $(u^i)$, respectively [$\mu =1,2,...,n; i=1,2,...,m$]\cite{5}. 
A point ( $u^1$,$u^2$,...,$u^m$) of $\bar{M}^m$ is, at the same time, a point of $M^n$, so that it is also expressed by $x^\mu=x^\mu(u^i)$. In a conventional quantum field theory, point particles, scalar fields, vector or higher-rank tensor fields, or spinor fields are attributed to each point of $\bar{M}^m$. In this view we are to assign a $q$-dimensional differential form ({\it q-form}) $F^{(q)}$ to each point of $\bar{M}^m$, which is expressed , as mentioned above, by the local coordinate $(u^1,u^2,...,u^m)$ or by $x^{\mu}=x^{\mu}(u^i)$. Physical objects \,\,\,-----\,\,\, point particles, strings or electromagnetic fields \,\,\,-----\,\,\, should be identified with these $q$-forms. We then make an action with $F^{(q)}$. One of the candidates for the action $S$ is $(F^{(q)} ,F^{(q)}) \equiv \int_{\bar{M}^m}F^{(q)}*F^{(q)}$, where $*$ means Hodge's star operator transforming a $q$-form into an $(m-q)$-form. Expressed with respect to an {\it orthonormal basis} $\omega _1,\omega _2,...,\omega _m$, it becomes \begin{equation} *(\omega _{i_1} \wedge \omega _{i_2} \wedge ... \wedge \omega _{i_q}) = \frac{1}{(m-q)!} \delta\left( \begin{array}{cccccc} 1 & 2 & ... & ... & ... & m \\ i_1 & ... & i_q , & j_1 & ... & j_{m-q} \end{array} \right) \omega _{j_1} \wedge \omega _{j_2} \wedge ... \wedge \omega _{j_{m-q}}, \label{2:1} \end{equation} where $\delta (....)$ denotes the signature $(\pm)$ of the permutation and the summation convention over repeated indices is , here and hereafter, always implied. The inner product $(F^{(q)},F^{(q)})$ is a scalar and shares a property of scalarity with the action $S$. Let us , therefore, admit the action $S$ to be proportional to $(F^{(q)},F^{(q)})$ and investigate each case that we confront with in the conventional theoretical physics. Thus we put \begin{eqnarray} S &=& (F^{(q)},F^{(q)}) = \int_{\bar{M}^m}\hspace{-2mm}{\cal S} \nonumber \\ &=& \int_{\bar{M}^m}\hspace{-2mm}{\cal L}\hspace{1mm} du^1\wedge du^2\wedge ...\wedge du^m, \label{2:2} \\ {\cal S} &\equiv& F^{(q)}\!*F^{(q)} = {\cal L}\hspace{1mm} du^1 \wedge du^2 \wedge ... \wedge du^m. \nonumber \end{eqnarray} Here ${\cal S}$ is an {\it action form}, but we will sometimes call it by the same name $action$. ${\cal L}$ is interpreted as a Lagrangian density. According to the well-known de Rham-Kodaira theorem, an arbitrary $q$-form decomposes into the three mutually orthogonal $q$ -forms: \begin{eqnarray} F^{(q)}=F_{\rm I}^{(q)} + F_{\rm II}^{(q)} + F_{\rm III}^{(q)} , \label{2:3} \end{eqnarray} where $F_{\rm I}^{(q)}$ is a harmonic form, meaning\cite{6} \begin{eqnarray} dF_{\rm I}^{(q)} = \delta F_{\rm I}^{(q)} = 0, \label{2:4} \end{eqnarray} and $F_{\rm II}^{(q)}$ is a $d$-boundary, and $F_{\rm III}^{(q)}$ is a $\delta$-boundary (coboundary). Here $\delta$ is Hodge's adjoint operator, which implies $\delta = (-1)^{m(q-1)+1}*d*$ when operated to $q$-forms over the $m$-dimensional space. There exist, therefore, a $(q-1)$-form $A_{\rm II}^{(q-1)}$ and a $(q+1)$-form $A_{\rm III}^{(q+1)}$, such that \begin{eqnarray} F_{\rm II}^{(q)} = dA_{\rm II}^{(q-1)} \: ; \: F_{\rm III}^{(q)} = \delta A_{\rm III}^{(q+1)}. 
\label{2:5} \end{eqnarray} The action $S$ is (proportional to) $(F^{(q)},F^{(q)})$ ; \begin{eqnarray} {\cal S} &\equiv& (F^{(q)},F^{(q)}) \nonumber \\ &=& (F_{\rm I}^{(q)},F_{\rm I}^{(q)}) + (A_{\rm II}^{(q-1)}, \delta dA_{\rm II}^{(q-1)}) + (A_{\rm III}^{(q+1)},d\delta A_{\rm III}^{(q+1)}), \label{2:6} \\ S &\equiv& \int _{\bar{M}^m}\hspace{-2mm}{\cal S} = \int\!{\cal L} \hspace{1mm}du^1 \wedge du^2 \wedge .... \wedge du^m. \nonumber \end{eqnarray} The physical meaning of Eq.(\ref {2:6}) is whatever we want to discuss in this paper and will be described in detail from now on. \vspace{10mm} \setcounter{equation}{0} \section{Example 1.\,\,point particles, strings and $p$-branes} \hspace*{6mm} We first assign $F^{(0)}=1$ to a point $(u^1,...,u^m)$ of the submanifold $\bar{M}^m$, and we always make use of the relative (induced) metric $\bar{g}_{ij}$ for $\bar{M}^m$ (so that the intrinsic metric of the submanifold is irrelevant). \begin{eqnarray} \bar{g}_{ij} \equiv \frac{\partial x^\mu(u)}{\partial u^i} \frac{\partial x^\nu(u)}{\partial u^j} g_{\mu\nu}, \label{3:1} \end{eqnarray} where $g_{\mu\nu}$ is a metric of the Riemannian space $M^n$. Since the volume element $dV \equiv \omega_1\wedge \omega_2\wedge...\wedge \omega_m$ is expressed, with respect to the local coordinate $(u^i)$, as \begin{eqnarray} dV = \sqrt{\bar{g}}\hspace{1mm}du^1\wedge du^2\wedge...\wedge du^m = *1, \label{3:2} \end{eqnarray} we immediately find \begin{eqnarray} (F^{(0)},F^{(0)})= \int_{\bar{M}^m}\hspace{-2mm}\sqrt{\bar{g}} \hspace{1mm}du^1\wedge du^2\wedge...\wedge du^m, \label{3:3} \end{eqnarray} with $\bar{g}=\det (g_{ij})$. When $n=4$ and $m=1$, we have \begin{eqnarray} \bar{g} = g_{\mu\nu} \frac{dx^\mu}{du^1} \frac{dx^\nu}{du^1} = g_{\mu\nu} \dot {x}^\mu \dot{x}^\nu, \label{3:4} \end{eqnarray} ( $\cdot$ means $d/du^1$), hence \begin{eqnarray} (F,F) &=&\int_{\bar{M}^1}\hspace{-2mm}ds , \label{3:5} \\ ds^2 &=& g_{\mu\nu} \dot{x}^\mu \dot{x}^\nu {(du^1)}^2 = g_{\mu\nu} dx^{\mu} dx^{\nu} , \nonumber \end{eqnarray} which indicates that $(F,F)$ is an action (up to a constant) for a point particle in a curved 4-dimensional space, with $u^1$, interpreted as a proper time. On the contrary, if we take up a submanifold $\bar{M}^2$, Eq.(\ref{3:3}) becomes \begin{eqnarray} (F^{(0)}, F^{(0)}) = \int _{\bar{M}^2}\hspace{-2mm}\sqrt{\bar{g}} \hspace{1mm}du^1 \wedge du^2\,, \label{3:6} \end{eqnarray} with \begin{eqnarray} \bar{g} = \det ( \frac{\partial x^{\mu}}{\partial u^i} \frac{\partial x^{\nu}} {\partial u^j} g_{\mu\nu}), \label{3:7} \end{eqnarray} which is just the Nambu-Goto action in a $curved$ space (with $u^1=\tau$ and $ u^2=\sigma$ in a conventional notation). There, and here, the determinant $\bar{g}$ of an induced metric plays an essential role. If we confront with an arbitrary submanifold $\bar{M}^{p+1}$ ($p$ : an arbitrary integer $\leq n-1$, we are to have a $p$-brane, whose action is nothing but that given by Eq.(\ref{3:3}) with $m=p+1$. Let us discuss the transformation property of the action or Lagrangian density . The transformation of $\bar{M}^m$ into $\bar{M'}^m$ without changing $M^n$\cite{7} means reparametrization. \begin{eqnarray} u^i &\rightarrow& u'^i , \label{3:8} \\ x^{\mu}(u^i) &\rightarrow& x'^{\mu}(u'^i) = x^{\mu}(u^i). \nonumber \end{eqnarray} By this the volume element Eq.(\ref{3:2}) does not change, so that our Lagrangian (density) for the $p$ -brane is trivially invariant under the reparametrization. 
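Before turning to the remaining transformations, we illustrate Eqs.~(\ref{3:1}), (\ref{3:3}) and (\ref{3:6})--(\ref{3:7}) concretely. The following sketch, written with the computer algebra system SymPy, computes the induced metric and the Lagrangian density $\sqrt{\bar{g}}$ for our own illustrative example, a round two-sphere of radius $r$ embedded in flat ${\sf R}^3$; it is an illustration only and not part of the formalism.
\begin{verbatim}
import sympy as sp

def induced_metric(x_of_u, u_vars, g):
    """gbar_ij = (dx^mu/du^i)(dx^nu/du^j) g_{mu nu}, Eq. (3.1)."""
    J = sp.Matrix([[sp.diff(xm, ui) for ui in u_vars] for xm in x_of_u])  # n x m
    return (J.T * g * J).applyfunc(sp.simplify)

# Round two-sphere of radius r in flat R^3, (u1, u2) = (polar angle, azimuth).
u1, u2, r = sp.symbols('u1 u2 r', positive=True)
X = [r*sp.sin(u1)*sp.cos(u2), r*sp.sin(u1)*sp.sin(u2), r*sp.cos(u1)]
gbar = induced_metric(X, (u1, u2), sp.eye(3))
check = (gbar - sp.diag(r**2, r**2*sp.sin(u1)**2)).applyfunc(sp.simplify)
print(gbar)    # diag(r**2, r**2*sin(u1)**2)
print(check)   # the zero matrix
\end{verbatim}
Since $\sqrt{\bar{g}}=r^{2}\sin u^{1}$ on $(0,\pi)$, the action (\ref{3:6}) evaluates to $4\pi r^{2}$, the area of the sphere, as expected for the case $m=2$, $q=0$.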
If we convert $M^n$ into $M'^n$ without changing $\bar{M}^m$, a general coordinate transformation \begin{eqnarray} x^{\mu}(u^i) \rightarrow x'^{\mu}(u^i) \label{3:9} \end{eqnarray} is induced, under which $\bar{g}_{ij}$ does not change, because of the transformation property of the metric $g_{\mu\nu}$. Our action is trivially invariant also for this general coordinate transformation. If we transform $\bar{M}^m$ and $M^n$ simultaneously, i.e., \begin{eqnarray} u^i &\rightarrow& u'^i , \nonumber \\ x^{\mu}(u^i) &\rightarrow& x'^{\mu}(u'^i), \label{3:10} \end{eqnarray} we do not have an equality $x'^{\mu}(u'^i) = x^{\mu}(u^i)$. This type of transformations is examined, as an example, for $n=3$ and $m=2$ as follows. Let us take $M^3={\sf R}^3$ (compactified), and $\bar{M}^2={\sf S}^2$ (2-dim surface of a sphere) whose local coordinate system is $(u^1 , u^2)$. A point of ${\sf S}^2$ is expressed by $(u^1 , u^2)$, but it is at the same time a point $(x^1 , x^2, x^3)$ of ${\sf R}^3$. We give the relation between the two coordinate systems by the stereographic projection: \begin{eqnarray} x^1 &=& \frac{2r^2 u^1}{(u^1)^2 + (u^2)^2 + r^2}\,, \nonumber \\ x^2 &=& \frac{2r^2 u^2}{(u^1)^2 + (u^2)^2 + r^2}\,, \\ x^3 &=& \frac{r[r^2-(u^1)^2 - (u^2)^2]}{(u^1)^2 + (u^2)^2 + r^2}\,, \nonumber \label{3:11} \end{eqnarray} where $r$ is the radius of the sphere defining ${\sf S}^2$. The transformation $(u^1, u^2) \rightarrow (u'^1,u'^2)$ induces the transformation $(x^1, x^2, x^3) \rightarrow (x'^1, x'^2, x'^3)$, and vice versa. The definition of the metric $g_{\mu\nu}$ for $M^n$ and the induced one $\bar{g}_{ij}$ for $\bar{M}^m$ tells us \begin{eqnarray} g'_{\mu\nu}(x') = \frac{\partial x^{\rho}}{\partial x'^{\mu}} \frac{\partial x ^{\delta}}{\partial x'^{\nu}} g_{\rho\delta} (x) , \label{3:12} \end{eqnarray} and \begin{eqnarray} \bar{g}'_{ij}(u') = \frac{\partial u^k}{\partial u'^i} \frac{\partial u^l} {\partial u'^j} \bar{g}_{kl} (u) , \label{3:13} \end{eqnarray} so that we have \begin{eqnarray} \sqrt{\bar{g}'(u')}\hspace{1mm}du'^1 \wedge ... \wedge du'^m = \sqrt{\bar{g}(u )} \hspace{1mm}du^1 \wedge ... \wedge du^m , \label{3:14} \end{eqnarray} hence follows the invariance of the action. \vspace{10mm} \setcounter{equation}{0} \section{Example 2.\,\,scalar fields} \hspace*{6mm} Now we consider the case where a scalar field $\phi (x^{\mu}(u^i))$ is assigned to each point $x^{\mu}(u^i)$. From now on we regard every quantity as that given over the subspace $\bar{M}^m$, hence we will write the field simply as $\phi (u^i)$ instead of $\phi (x^{\mu}(u^i))$ , etc. An arbitrary 0-form \,\,\,-----\,\,\, a scalar field \,\,\,-----\,\,\, decomposes into two parts: \begin{eqnarray} F^{(0)} = F_{\rm I}^{(0)} + F_{\rm III}^{(0)}. \label{4:1} \end{eqnarray} $F_{\rm I}^{(0)}$ is given by \begin{eqnarray} F_{\rm I}^{(0)} = \phi (u) , \label{4:2} \end{eqnarray} with which we obtain \begin{eqnarray} (F_{\rm I}^{(0)} , F_{\rm I}^{(0)}) = \phi ^2(u) dV , \label{4:3} \end{eqnarray} meaning a mass term of a scalar field. $F_{\rm III}^{(0)}$ is composed, on the contrary , of a $\delta$ -boundary of a 1-form: \begin{eqnarray} F_{\rm III}^{(0)} &=& \delta A^{(1)} , \nonumber \\ A^{(1)} &=& A_i du^i . \label{4:4} \end{eqnarray} Hence we have \begin{eqnarray} F_{\rm III}^{(0)} = -\partial _k (\sqrt{\bar{g}} A^k) \sqrt{\bar{g}} \bar{g} ^ {11} \bar{g}^{22} ... 
\bar{g}^{mm} , \label{4:5} \end{eqnarray} where, as usual, \begin{eqnarray} A^k = \bar{g}^{kl} A_l \:\: {\rm and} \:\: \partial _k = \frac{\partial} {\partial u^k} , \label{4:6} \end{eqnarray} and $(\bar{g} ^{ij})$ is the inverse of $(\bar{g} _{ij})$. In a special case, where we work with a flat space and an orthonormal basis, i.e., \begin{eqnarray} \bar{g} ^{ij} = \delta ^{ij} \:\: {\rm and} \:\: du^i = \omega ^i , \label{4:7} \end{eqnarray} we have a simple form \begin{eqnarray} F_{\rm III} ^{(0)} = - \partial _k A^k , \label{4:8} \end{eqnarray} by which the action form ${\cal S}$ becomes \begin{eqnarray} {\cal S} = (\partial _k A^k)^2 dV . \label{4:9} \end{eqnarray} This is the `kinetic' term of the {\it k-vector field} $A^k$. The {\it gauge transformation} exists for this field: \begin{eqnarray} A^{(1)} &\rightarrow& \tilde{A}^{(1)} = A^{(1)}+\delta A^{(2)} , \nonumber \\ A^{(2)} &=& \frac{1}{2} A_ {{i_1}{i_2}}du^{i_1}\wedge du^{{i_2}} . \label{4:10} \end{eqnarray} In components, it is written as \begin{eqnarray} \tilde{A} _h\! =\! A_h\!\! +\! \frac{1}{2(m-2)!} \delta\!\! \left( \begin{array}{ccccc} h & l_1 & ... & ... & l_{m-1} \\ i_1 & i_2 & j_1 & ... & j_{m-2} \end{array} \right )\! \frac{\partial (\sqrt{\bar{g}} A^{i_1 i_2})}{\partial u^k} \sqrt{\bar{g}}\bar{g}^{k l_1}\bar{g}^{l_2 j_1}\!... \bar{g}^{l_{m-1} j_{m-2}} . \label{4:11} \end{eqnarray} One can further calculate , if one wants to , to have a beautiful form: \begin{eqnarray} \tilde{A} _i &=& A_i - \frac{1}{2} \delta \left( \begin{array}{cc} j_1 & j_2 \\ k & i \end{array} \right ) \bar{g} ^{kl} D_l A_{j_1 j_2}, \nonumber \\ D_l A_{j_1 j_2} &=& \frac{\partial A_{{j_1}{j_2}}}{\partial u^l} - A_{k j_2} \ Gamma_{{j_1}l}^k-A_{{j_1}k} \Gamma_{{j_2}l}^k , \label{4:12} \end{eqnarray} where $\Gamma_{jk}^i$ is the well-known affine connection. \begin{eqnarray} \Gamma_{jk}^i = \frac{1}{2} g^{il} (\frac{\partial \bar{g} _{jl}}{\partial u^k } + \frac{\partial \bar{g} _{lk}}{\partial u^j} - \frac{\partial \bar{g} _{jk} }{\partial u^l} ) . \label{4:13} \end{eqnarray} Note that our fundamental fields are the $A_i$, and the gauge transformation is obtained with the $A_{i_1 i_2}$ of the rank {\it higher by one} than the former. This is, of course, due to the nilpotency of $\delta$, $\delta ^2 = 0$, and typical of our new type of formulation. \vspace{10mm} \setcounter{equation}{0} \section{Example 3.\,\,vector fields} \hspace*{6mm} When a 1-form $F^{(1)}$ is assigned to each point of $\bar{M}^m$, we have \begin{eqnarray} F^{(1)} = F_{\rm I}^{(1)} + F_{\rm II}^{(1)} + F_{\rm III}^{(1)} . \label{5:1} \end{eqnarray} First we will see the contribution of $F_{\rm I}^{(1)}$ to the action, which is harmonic. Writing as \begin{eqnarray} F_{\rm I}^{(1)} = F_i du^i , \label{5:2} \end{eqnarray} we immediately have an action (form) \begin{eqnarray} {\cal S}_{\rm I} = F_{\rm I}^{(1)}*F_{\rm I}^{(1)}=F_iF^i\! \sqrt{\bar{g}}\hspace{1mm} du^1 \wedge ... \wedge du^m \label{5:3} \end{eqnarray} The contribution of the $d$-boundary is calculated in the same way. Putting \begin{eqnarray} F_{\rm II}^{(1)} = dA^{(0)} , \label{5:4} \end{eqnarray} we have the action \begin{eqnarray} {\cal S}_{\rm II} = g^{ij} \partial _i A^{(0)} \partial _j A^{(0)}\!\sqrt{\bar {g}} \hspace{1mm}du^1 \wedge ... \wedge du^m , \label{5:5} \end{eqnarray} which expresses a massless scalar particle $A^{(0)}$. Freedom of the choice of gauges does not here appear. The contribution of the $\delta$-boundary is, on the contrary, rather complicated in calculation. 
If we put \begin{eqnarray} F_{\rm III} ^{(1)} &=& \delta A^{(2)} , \nonumber \\ A^{(2)} &=& \frac{1}{2} A_{i_1 i_2} du^{i_1} \wedge du^{i_2} , \label{5:6} \\ F_{\rm III} ^{(1)} &=& F_i du^i , \nonumber \end{eqnarray} we have \begin{eqnarray} F_h &=& \delta \left( \begin{array}{ccccc} h & l_1 & l_2 & ... & l_{m-1} \\ i_1 & i_2 & j_1 & ... & j_{m-2} \end{array} \right) \frac {\partial}{\partial u^k} ( \sqrt{\bar{g}}A^{i_1 i_2})\sqrt{\bar{g}} \,\bar{g}^{kl_1} \bar{g}^{j_1 l_2} ... \bar{g}^{j_{m-2} l_{m-1}} \nonumber \\ &=& - \frac{1}{2} \delta \left( \begin{array}{cc} j_1 & j_2 \\ k & h \end{array} \right) \bar{g}^{kl} D_l A_{{j_1}{j_2}} , \label{5:7} \end{eqnarray} with $D_l$, defined in Eq.(\ref{4:13})\cite{8}. The action is \begin{eqnarray} {\cal S}_{\rm III} = F_i F^i\!\sqrt{\bar{g}}\hspace{1mm} du^1 \wedge ... \wedge du^m . \label{5:8} \end{eqnarray} The gauge transformation is given in this case by \begin{eqnarray} A^{(2)} &\rightarrow& \tilde{A}^{(2)} = A^{(2)} + \delta A^{(3)} , \nonumber \\ A^{(3)} &=& \frac{1}{3!} A_{i_1 i_2 i_3}du^{i_1} \wedge du^{i_2} \wedge du^{i_3} , \label{5:9} \end{eqnarray} which trivially leads to the relation \begin{eqnarray} F_3 ^{(1)} = \delta A^{(2)} = \delta \tilde{A}^{(2)} . \label{5:10} \end{eqnarray} When expressed in components, it is written as \begin{eqnarray} \tilde{A} _{h_1 h_2} = A_{h_1 h_2} \!\!\!&-&\!\!\! \frac{1}{3!(m-3)!} \delta \left( \begin{array}{cccccc} i_1 & i_2 & i_3 & j_1 & ... & j_{m-3} \\ h_1 & h_2 & l_1 & ... & ... & l_{m-2} \end{array} \right) \frac{\partial}{\partial u^k} ( \sqrt{\bar{g}} A^{i_1 i_2 i_3}) \nonumber \\ &\times&\!\!\!\sqrt{\bar{g}}\,\bar{g}^{kl_1}\bar{g}^{j_1 l_2}...\bar{g}^{j_{m- 3} l_{m-2}}, \label{5:11} \end{eqnarray} where, of course, the components with superscript are related to those with subscript in a conventional manner, as has been described repeatedly. \begin{eqnarray} A^{i_1 i_2 i_3} = \bar{g}^{i_1 j_1} \bar{g}^{i_2 j_2} \bar{g}^{i_3 j_3} A_{j_1 j_2 j_3} . \label{5:12} \end{eqnarray} We finally express Eq.(\ref{5:11}) in an elegant form. \begin{eqnarray} \tilde{A} _{h_1 h_2} ^{(2)} &=& A _{h_1 h_2} ^{(2)} - \frac{1}{3!} \delta \left( \begin{array}{ccc} j_1 & j_2 & j_3 \\ k & h_1 & h_2 \end{array} \right) \bar{g}^{kl} D_l A_{j_1 j_2 j_3} , \nonumber \\ D_l A_{j_1 j_2 j_3} &=& \frac{\partial A_{j_1 j_2 j_3}}{\partial u^l} - A_{kj_2 j_3} \Gamma _{j_1 l}^k - A_{j_1 k j_3} \Gamma _{j_2 l}^k - A_{j_1 j_2 k} \Gamma _{j_3 l} ^k. \label{5:13} \end{eqnarray} Especially when the space-time is flat and one takes an orthonormal reference frame, one has \begin{eqnarray} F_i = -\frac{1}{2} \delta \left( \begin{array}{cc} k & i \\ i_1 & i_2 \end{array} \right) \frac{\partial A^{i_1 i_2}}{\partial u^k} , \label{5:14} \end{eqnarray} which further reduces to a familiar form {\it for} $m=4$: \begin{eqnarray} F^i &=& \partial _k A^{ik} \nonumber \\ {\cal S} &=& \partial _k A^{ik} \partial _l A^{il} dV. \label{5:15} \end{eqnarray} The gauge transformation becomes in this case \begin{eqnarray} \tilde{A} _{i_1 i_2} = A_{i_1 i_2} - \partial _k A_{i_1 i_2 k} . \label{5:16} \end{eqnarray} Needless to say, the total action comes from adding ${\cal S}_{\rm I}$,${\cal S}_{\rm II}$ and ${\cal S}_{\rm III}$. A new type of gauge transformations Eq.(\ref{5:13}) appears, due to the coboundary property of $F_{\rm III} ^{(1)}$. 
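The flat-space statement can be checked directly: under the gauge transformation (\ref{5:16}) the field (\ref{5:15}) is unchanged, because the symmetric pair of derivatives $\partial_{k}\partial_{l}$ is contracted with an antisymmetric pair of indices of the 3-form. A short SymPy verification, with arbitrarily chosen polynomial components (our own illustrative choice), reads:
\begin{verbatim}
import sympy as sp
from itertools import permutations

u = sp.symbols('u1:5')     # flat orthonormal coordinates, m = 4
m = 4

def perm_sign(p):
    """Signature of a permutation of distinct integers (inversion count)."""
    s = 1
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

# Arbitrary antisymmetric 2-form A_{ij} and totally antisymmetric 3-form A_{ijk}.
seed2 = lambda i, j: u[i]**2 * u[j]
A2 = {(i, j): seed2(i, j) - seed2(j, i) for i in range(m) for j in range(m)}
seed3 = lambda i, j, k: u[i] * u[j]**2 * u[k]**3
A3 = {}
for i in range(m):
    for j in range(m):
        for k in range(m):
            idx = (i, j, k)
            A3[idx] = sum(perm_sign(p) * seed3(idx[p[0]], idx[p[1]], idx[p[2]])
                          for p in permutations(range(3)))

# Gauge transformation, Eq. (5.16):  A~_{ij} = A_{ij} - d A_{ijk} / d u^k
A2t = {(i, j): A2[(i, j)] - sum(sp.diff(A3[(i, j, k)], u[k]) for k in range(m))
       for i in range(m) for j in range(m)}

# Field, Eq. (5.15) in an orthonormal frame:  F^i = d A^{ik} / d u^k
F_old = [sum(sp.diff(A2[(i, k)], u[k]) for k in range(m)) for i in range(m)]
F_new = [sum(sp.diff(A2t[(i, k)], u[k]) for k in range(m)) for i in range(m)]
print([sp.expand(F_new[i] - F_old[i]) for i in range(m)])   # [0, 0, 0, 0]
\end{verbatim}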
\vspace{10mm} \setcounter{equation}{0} \section{Example 4.\,\,tensor fields} \hspace*{6mm} Now we come to the case where a 2-form is assigned to each point of $\bar{M}^m$, the case of which is most useful and attractive for future development. A 2-form decomposes, as usual, into the following three: \begin{eqnarray} F^{(2)} = F_{\rm I} ^{(2)} + F_{\rm II} ^{(2)} + F_{\rm III} ^{(2)}\,. \label{6:1} \end{eqnarray} The harmonic form $F_{\rm I} ^{(2)}$ is written with the components $A_{ij}$ as follows: \begin{eqnarray} F_{\rm I} ^{(2)} = \frac{1}{2} A_{i_1 i_2} du^{i_1} \wedge du^{i_2} , \label{6:2} \end{eqnarray} from which we have \begin{eqnarray} {\cal S}_I = F_{\rm I} ^{(2)} * F_{\rm I} ^{(2)} = \frac{1}{2} A_{i_1 i_2} A^{i_1 i_2}\!\sqrt{\bar{g}}\hspace{1mm} du^1 \wedge ... \wedge du^m . \label{6:3} \end{eqnarray} The contribution of the $d$ -boundary is expressed with our fundamental 1-form $A^{(1)}$. \begin{eqnarray} F_{\rm II} ^{(2)} = dA ^{(1)} . \label{6:4} \end{eqnarray} This further reduces, when written in components, \begin{eqnarray} F_{\rm II} ^{(2)} &=& \frac{1}{2} F_{i_1 i_2} du^{i_1} \wedge du^{i_2} , \nonumber\\ A^{(1)} &=& A_i du^i , \label{6:5} \end{eqnarray} to a familiar relation \begin{eqnarray} F_{ij} = \partial _i A_j - \partial _j A_i , \label{6:6} \end{eqnarray} $(\partial _i = \partial / \partial u^i )$, which shows that $F_{ij}$ is a field-strength. The gauge transformation here is given by \begin{eqnarray} A^{(1)} \rightarrow \tilde{A} ^{(1)} = A^{(1)} + dA^{(0)} . \label{6:7} \end{eqnarray} Namely, it is expressed in components as \begin{eqnarray} \tilde{A}_i = A_i + \partial _i A(u), \label{6:8} \end{eqnarray} with $A(u)$, an arbitrary scalar function, which is a familiar form in the conventional Maxwell electromagnetic theory. The invariance of the contribution to $F_{\rm II} ^{(2)}$ owes self-evidently, to the nilpotency $d^2=0$. If we further put \begin{eqnarray} \delta F_{\rm II} ^{(2)} = \delta dA^{(1)} = J , \label{6:9} \end{eqnarray} we have \begin{equation} - \frac{1}{2} \frac{1}{(m-2)!} \delta \left( \begin{array}{ccccc} h & l_1 & l_2 & ... & l_{m-1} \\ i_1& i_2& j_1& ...& j_{m-2} \end{array} \right) \frac{\partial}{\partial u^k} ( \sqrt{\bar{g}} F^{i_1 i_2}) \sqrt{\bar{g}} \hspace{1mm}\bar{g} ^{l_1 k} \bar{g} ^{l_2 j_1} ... \bar{g} ^{l_{m-1} j_{m-2}} = J_h . \label{6:10} \end{equation} After some lengthy calculations we finally have the following beautiful form. \begin{eqnarray} - \frac{1}{2} \delta \left( \begin{array}{cc} i_1 & i_2 \\ j & h \end{array} \right) \bar{g}^{jl} D_l F_{i_1 i_2} = J_h . \label{6:11} \end{eqnarray} The covariant derivative $D_l$ is given in Eq.(\ref{4:12}). Equation (\ref{6:10}) or (\ref{6:11}) takes a simple form for the $flat$ $m$-dimensional space, expressed in an {\it orthonormal basis}. \begin{eqnarray} F_{ij},^j = J_i \label{6:12} \end{eqnarray} This is nothing but the Maxwell equation in an $m$-dimensional space, with $J_i$, interpreted as an electromagnetic current density. One therefore finds that Eq.(\ref{6:9}) or (\ref{6:11}) is the generalized Maxwell equation in the {\it curved m-dimensional space}. From the viewpoint of {\it action-at-a-distance}\cite{9}, vector fields are composed of matter fields. Namely, vector fields can be traced by looking at the matter fields. Our standpoint is, on the contrary, such that our fundamental objects are vectors and we can trace the matter field by regarding the vector fields as such and calculating the left-hand side of Eq.(\ref{6:12}) with Eq.(\ref{6:6}). 
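In the flat orthonormal case this procedure is elementary. The following SymPy sketch, with an arbitrary illustrative choice of the components $A_{i}$, computes $F_{ij}$ from Eq.~(\ref{6:6}) and the current $J_{i}$ from Eq.~(\ref{6:12}), and checks that $\partial^{i}J_{i}=0$, as guaranteed by the antisymmetry of $F_{ij}$.
\begin{verbatim}
import sympy as sp

u = sp.symbols('u1:5')     # flat orthonormal coordinates, m = 4
m = 4

# Illustrative 1-form A_i(u): arbitrary smooth polynomial components.
A = [u[1]*u[2]**2, u[0]**3, u[0]*u[3], u[2]**2 - u[1]*u[3]]

# Field strength, Eq. (6.6):  F_{ij} = d_i A_j - d_j A_i
F = [[sp.diff(A[j], u[i]) - sp.diff(A[i], u[j]) for j in range(m)]
     for i in range(m)]

# Source, Eq. (6.12):  J_i = d^j F_{ij} = sum_j d_j F_{ij}
J = [sp.expand(sum(sp.diff(F[i][j], u[j]) for j in range(m))) for i in range(m)]
print(J)

# Continuity of the current, a consequence of the antisymmetry of F_{ij}:
print(sp.expand(sum(sp.diff(J[i], u[i]) for i in range(m))))   # 0
\end{verbatim}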
Our electromagnetic current of the matter $J_i(u)$ is determined by the vector fields $A_i(u)$. Now comes the contribution of the $\delta$-boundary : \begin{eqnarray} F_{\rm III} ^{(2)} = \delta A^{(3)}, \label{6:13} \end{eqnarray} where $A^{(3)}$ is a 3-form. Expressed, as usual, in components \begin{eqnarray} F_{\rm III} ^{(2)} &=& \frac{1}{2} F_{i_1 i_2} du^{i_1} \wedge du^{i_2} , \nonumber \\ A^{(3)} &=& \frac{1}{6} A_{i_1 i_2 i_3} du^{i_1} \wedge du^{i_2} \wedge du^{i_3} , \label{6:14} \end{eqnarray} Eq.(\ref{6:13}) leads us to \begin{eqnarray} F_{h_1 h_2} \hspace{-2mm}&=&\hspace{-2mm} -\frac{1}{6(m-3)!} \delta\!\left( \begin{array}{cccccc} h_1& h_2& l_1& ...&...& l_{m-2} \\ i_1& i_2& i_3& j_1& ...& j_{m-3} \end{array} \right) \nonumber \\ &&\hspace{20mm}\times \frac{ \partial}{\partial u^k}(\sqrt{\bar{g}} A^{i_1 i_2 i_3}) \sqrt{\bar{g}}\hspace{1mm} \bar{g}^{l_1 k} \bar{g}^{l_2 j_1}\!...\bar{g}^{l_{m-2} j_{m-3}}. \label{6:15} \end{eqnarray} Along the same line already mentioned repeatedly we further have \begin{eqnarray} F_{i_1 i_2} = - \frac{1}{6} \delta \left( \begin{array}{ccc} j_1 & j_2 & j_3 \\ k & i_1 & i_2 \end{array} \right) \bar{g}^{kl} D_l A_{j_1 j_2 j_3}, \label{6:16} \end{eqnarray} with the covariant derivative $D_l A_{j_1 j_2 j_3}$, defined in Eq.(\ref{5:13}). Putting \begin{eqnarray} dF_{\rm III} ^{(2)} &=& d \delta A^{(3)} = -*K^{(m-3)} , \nonumber \\ K^{(m-3)} &=& \frac{1}{(n-3)!} K_{i_1 i_2 ... i_{m-3}} du^{i_1} \wedge ... \wedge du^{i_{m-3}} , \label{6:17} \end{eqnarray} one has the relation between the components of $F_{\rm III} ^{(2)}$ and $K^{(m-3)}$: \begin{equation} F_{i_1 i_2 , i_3} + F_{i_2 i_3 , i_1} + F_{i_3 i_1 , i_2} = - \frac{1}{(m-3)!} \delta \left( \begin{array}{cccccc} 1& 2& ...&...&...&m \\ j_1& ...& j_{m-3}& i_1& i_2& i_3 \end{array} \right) \hspace{-1mm}\sqrt{\bar{g}} K^{j_1 ... j_{m-3}} , \label{6:18} \end{equation} where $F_{i_1 i_2 , i_3} \equiv \partial F_{i_1 i_2} / \partial\, u^{i_3}$, etc.. If our space-time $\bar{M}^m$ is {\it flat and the dimension is} $m=4$, these expressions reduce to a familiar form. \begin{eqnarray} F_{\mu \nu} &=& -\partial ^{\rho} A_{\mu \nu \rho} , \nonumber \\ \tilde{F} _{\mu \nu} ,^{\nu} &=& K_{\mu} , \label{6:19} \end{eqnarray} where \begin{eqnarray} \tilde{F} _{\mu \nu} &=& \frac{1}{2} \epsilon _{\mu \nu \rho \sigma} F^{\rho \ sigma} , \nonumber \\ K &=& K_{\mu} du^{\mu} . \label{6:20} \end{eqnarray} Equations (\ref{6:19}) and (\ref{6:20}) tell us that $K_{\mu}$ is a {\it magnetic monopole current}\cite{10}. One finds, here also, that one stands on the viewpoint of tracing a monopole by regarding the three form $A^{(3)}$ as a fundamental object. The {\it gauge transformation} is, in this case, given by \begin{eqnarray} A^{(3)} \rightarrow \tilde{A}^{(3)} = A^{(3)} + \delta A^{(4)} . \label{6:21} \end{eqnarray} In components is it written as \begin{eqnarray} \tilde{A}_{h_1 h_2 h_3} \hspace{-2mm}&=&\hspace{-2mm}A_{h_1 h_2 h_3}+ \frac{1}{4!(m-4)!}\delta\! \left( \begin{array}{ccccccc} h_1& h_2& h_3& l_1& ...&...& l_{m-3} \\ i_1& i_2& i_3& i_4& j_1& ...& j_{m-4} \end{array} \right) \nonumber \\ &&\times\frac{\partial}{\partial u^k} ( \sqrt{\bar{g}} A^{i_1 i_2 i_3 i_4}) \sqrt{\bar{g}}\hspace{1mm} \bar{g}^{l_1 k} \bar{g}^{l_2 j_1} ... \bar{g}^{l_{m-3} j_{m-4}}, \label{6:22} \end{eqnarray} which one can further rewrite in the following form. 
\[ \tilde{A}_{i_1 i_2 i_3}=A_{i_1 i_2 i_3} + \frac{1}{4!} \delta \left( \begin{array}{cccc} j_1 & j_2 & j_3 &j_4 \\ k & i_1 & i_2 & i_3 \end{array} \right) g^{kl} D_l A_{j_1 j_2 j_3 j_4}, \] \begin{equation} D_l A_{j_1 j_2 j_3 j_4}=\frac{\partial A_{j_1 j_2 j_3 j_4}}{\partial u^l} - A _{k j_2 j_3 j_4} \Gamma _{j_1 l}^k - A_{j_1 k j_3 j_4} \Gamma _{j_2 l}^k - A_{ j_1 j_2 k j_4} \Gamma _{j_3 l}^k - A_{j_1 j_2 j_3 k} \Gamma _{j_4 l}^k . \label{6:23} \end{equation} The action form ${\cal S}_{\rm III} = F_{\rm III}^{(2)} * F_{\rm III}^{(2)}$ can be, of course, calculated along the same line already mentioned. And the total action ${\cal S}$ is \begin{eqnarray} {\cal S} = {\cal S}_{\rm I} + {\cal S}_{\rm II} + {\cal S}_{\rm III} . \label{6:24} \end{eqnarray} \vspace{10mm} \setcounter{equation}{0} \section{Summary and conclusions} \hspace*{6mm} We have assigned a differential $q$-form $F^{(q)}$ to each point $x^{\mu}=x^{\mu} (u^i)$ of the submanifold $\bar{M}^m$ of the extended object's world, included in our observer's world $M^n$, thus endowing a particle {\it with an intrinsic degree of freedom}. An arbitrary $q$-form decomposes into a harmonic form, a $d$-boundary plus a $\delta$-boundary. With $F^{(q)}$ we can make a scalar $(F^{(q)}, F^{(q)})$ defined by Eq.(\ref{2:2}) and we regard this as an action for the system. Now, to say more concretely, if the assigned form is of zero, one is to have a generalized action for a point particle, a string or a $p$-brane in a curved space-time. The well-known Nambu-Goto action as well as the membrane action is thus naturally derived with this general principle. Owing to the construction itself the action is reparametrization-invariant and, at the same time, invariant under the general coordinate transformation. For $q \geq 1$ we obtain a non-trivial action with spin degrees of freedom. The case of $q=2$ is probably most attractive. The $d$-boundary $F_{\rm II} ^{(2)}$ has a fundamental 1-form $A^{(1)}$, and the world made of it is a conventional electromagnetic one, based on the Maxwell equation. The $\delta$-boundary $F_{\rm III} ^{(2)}$ has, on the contrary, a fundamental 3-form $A^{(3)}$, which is interpreted as a magnetic monopole current (at least in case of $m=4$). Our equation differs from the conventional one only in that the former is more general than the latter, if one admits the existence of monopoles. The former is formulated on a general curved space-time with an arbitrary space-time dimension. In the conventional picture the space-time is flat. So if one wants to construct the theory on curved space-time, one feels it ambiguous to decide where to replace $\det(\delta _{ij})=1$ by $\det(g_{ij}) \neq 1$. Anyway one can construct an arbitrary $q$-form field over a general Riemannian manifold through de-Rham-Kodaira's theorem. Thus , as usual, one is to assign spin degrees of freedom. This last statement is important. A $p$-brane is usually considered as a $p$-dimensional extended object $\bar{M}^{p+1}$ moving across our world $M^n$. Each point of a $p$-brane has no internal degree of freedom. On the contrary, if one takes up a $q$-form over $\bar{M}^m \subset M^n$, one is to have an internal degree of freedom based on the number of components of the $q$-form. The dimension of the local coordinate system $(u^1 , ..., u^m)$ of $\bar{M}^m$ indicates the dimension less by $1\,(p=m-1)$ of the extended object and the dimension of the $q$-form represents the internal degree of each point. 
A string is generated for $m=2$ and $q=0$, whereas a conventional $p$-brane, for $m=p+1$ and $q=0$. Lastly we comment on the newly-introduced gauge transformation. Gauge freedom comes from the nilpotency of the boundary operators $d$ and $\delta$: $d^2=0$ and $\delta ^2=0$, the latter of which induces a new type of gauge transformations. Equations (\ref{4:10}), (\ref{5:9}) and (\ref{6:21}) are such examples. In the Dirac monopole theory with $n=m=4$ and $q=2$, we have a one-component scalar $A^{(4)}$ which contributes to $A^{(3)}$, a monopole current. Detailed analysis along these lines is worth pursuing and may yield fruitful results about physical extended objects. \bigskip The author thanks Prof. M.~Wadati for the research facilities at the University of Tokyo. \newpage \addcontentsline{toc}{section}{Appendix}
\section{Introduction} The present work deals with small transverse vibrations of an infinite string moving axially with a constant speed. Two fixed supports, placed a distance $L$ apart as represented in Figure \ref{fig0}, prevent transverse displacements of the string at the supporting points while the axial motion remains unaffected. \begin{figure}[tbph] \centering \includegraphics[width=0.61\textwidth]{GhS_fig1} \caption{A string travelling to the left with a speed $v$.} \label{fig0} \end{figure} We introduce a coordinate system $(x,t)$, attached to the travelling string, where $x$ coincides with the rest-state axis of the string and $t$ denotes the time. We denote the transverse displacement of the string by $\phi (x,t)$ and we choose the position of the left support to coincide with $x=0$. Assuming that the string travels to the left with a scalar speed $v$, the positions of the left and right supports are $x=vt$ and $x=L+vt$ for $t\geq 0$, respectively. If we assume that the string travels to the right, then it suffices to replace $v$ by $-v$ in the remainder of this paper. For $T>0,$ we denote the interval \begin{equation*} \mathbf{I}_{t}:=\left( vt,L+vt\right) ,\text{ \ for\ }t\in \left( 0,T\right) . \end{equation*} A simplified model describing the free small transverse vibrations of this string is the following wave equation \begin{equation} \left\{ \begin{array}{ll} \phi _{tt}-\phi _{xx}=0,\ \smallskip & \text{for }x\in \mathbf{I}_{t}\text{ and }t\in \left( 0,T\right) , \\ \phi \left( vt,t\right) =\phi \left( L+vt,t\right) =0,\smallskip & \text{for }t\in \left( 0,T\right) , \\ \phi (x,0)=\phi ^{0}\left( x\right) \text{, \ \ }\phi _{t}\left( x,0\right) =\phi ^{1}\left( x\right) ,\text{ \ } & \text{for }x\in \mathbf{I}_{0} \end{array} \right. \tag{WP} \label{wave} \end{equation} where the subscripts $t$ and $x$ stand for the derivatives in the time and space variables respectively, $\phi ^{0}$ is the initial shape of the string and $\phi ^{1}$ is its initial transverse speed. We assume that the speed $v$ is strictly less than the speed of propagation of the wave (here normalized to $c=1$), i.e. \begin{equation} 0<v<1. \label{tlike} \end{equation} If $v\geq 1$, then the problem is ill-posed, see for instance \cite{RaCa1995}. The wave equation formulated above is a simple model that represents several mechanical systems such as plastic films, magnetic tapes, elevator cables, and textile and fibre winding, see for example \cite{Chen2005,BBJT2020,HoPh2019}. This model dates back to Skutch \cite{Skut1897}; its simplicity is only apparent, and we should mention that the method of separation of variables cannot be applied to this problem. Miranker's work \cite{Mira1960} is one of the early influential papers on the topic of axially moving media. He proposed two approaches to solve Problem (\ref{wave}). The first one is to ``freeze'' the space interval by formulating the problem in the interval $\left( 0,L\right)$. Thus, introducing the variables $\eta =x-vt$ and $\tau =t$, the first equation in (\ref{wave}) becomes \begin{equation} \phi _{\tau \tau }-2v\phi _{\eta \tau }-\left( 1-v^{2}\right) \phi _{\eta \eta }=0,\text{ \ for }\eta \in \left( 0,L\right) ,\text{ }\tau >0. \label{freez} \end{equation} The obtained problem is more familiar and the vast majority of the literature on travelling strings follows this approach.
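The change of variables can be verified symbolically: every d'Alembert solution $\phi =f(t+x)+g(t-x)$ of the first equation in (\ref{wave}) becomes, after the substitution $\eta =x-vt$, $\tau =t$, a solution of (\ref{freez}). A short SymPy check (an illustration only, not used in the analysis below) reads as follows.
\begin{verbatim}
import sympy as sp

x, t, eta, tau, v = sp.symbols('x t eta tau v')
f, g = sp.Function('f'), sp.Function('g')

# d'Alembert form of a solution of phi_tt - phi_xx = 0 (unit wave speed):
phi = f(t + x) + g(t - x)
print(sp.simplify(sp.diff(phi, t, 2) - sp.diff(phi, x, 2)))          # 0

# The same solution in the travelling coordinates eta = x - v*t, tau = t,
# i.e. psi(eta, tau) = phi(eta + v*tau, tau):
psi = phi.subs({x: eta + v*tau, t: tau})

# It satisfies the "frozen interval" equation (freez):
freez = (sp.diff(psi, tau, 2) - 2*v*sp.diff(psi, eta, tau)
         - (1 - v**2)*sp.diff(psi, eta, 2))
print(sp.simplify(freez))                                            # 0
\end{verbatim}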
Some important results in this direction are given by Wickert and Mote \cite{WiMo1990}, where the authors wrote (\ref{freez}) as a first-order differential equation with matrix differential operators (a state-space formulation) and obtained a closed-form representation of the solution for arbitrary initial conditions. There are also other methods to solve (\ref{freez}); for instance, a solution by the Laplace transform method is proposed in \cite{vaPo2005}. The solution can also be constructed using the method of characteristics, see for instance \cite{RaCa1995,CLFL2017}. The approach of Miranker \cite{Mira1960} is to solve (\ref{wave}), i.e. keep the space interval depending on time. He obtained a closed form of the solution as a series formula (see page 39 in \cite{Mira1960}). After a few rearrangements, his formula can be rewritten as \begin{equation} \phi (x,t)=\sum_{n\in \mathbb{Z}^{\ast }}c_{n}\left( e^{n\pi i\left( 1-v\right) \left( t+x\right) /L}-e^{n\pi i\left( 1+v\right) \left( t-x\right) /L}\right) ,\text{ for }x\in \mathbf{I}_{t}\text{ and }t\in \left( 0,T\right) . \label{exact0} \end{equation} Despite the utility of such a formula for numerical and asymptotic approaches, it has remained underexploited in the literature on axially moving strings. Since Miranker was not explicit on how to compute the coefficients $c_{n}$, we give in the paper at hand a method to compute each $c_{n}$ in terms of the initial data $\phi ^{0}$ and $\phi ^{1}$, see Theorem \ref{thexist1} in the next section. The idea is inspired by \cite{Seng2020}, where the second author obtained the exact solution for strings with two linearly moving endpoints at different speeds. Similar techniques were used in \cite{Bala1961,Seng2018} for a string with one moving endpoint. Each problem in \cite{Seng2020,Bala1961,Seng2018} is set in an interval expanding with time (in the inclusion sense) and the solution is presented by a series containing a type of functions different from those in (\ref{exact0}). Thus, the results of \cite{Seng2020,Seng2018} in particular do not apply to the present problem (\ref{wave}). In this work, we show that the series formula (\ref{exact0}) can be manipulated to establish the following results: \begin{itemize} \item \emph{A conserved quantity.} The functional\footnote{Here and in the sequel, the subscript $v$ is used to emphasize the dependence on the speed $v$.} \begin{equation} \mathcal{E}_{v}\left( t\right) =\frac{1}{2}\int_{vt}^{L+vt}\left( \phi _{t}+v\phi _{x}\right) ^{2}+\left( 1-v^{2}\right) \phi _{x}^{2}dx,\ \ \ \text{for }t\geq 0, \label{E} \end{equation} depending on $L,t,v$ and the solution of $\left( \ref{wave}\right)$, is conserved in time. We give two different proofs of this fact, see Theorem \ref{th1}. Note that $\phi _{t}+v\phi _{x}=\frac{d}{dt}\left( \phi \left( x+vt,t\right) \right)$ is the total (also called the material) derivative. Under the assumption (\ref{tlike}), this functional is positive-definite and we will call it the ``energy'' of the solution $\phi$. Although there are many expressions of energy for axially moving strings, see for instance \cite{RRWM1998,WiMo1989}, we could not find the definition (\ref{E}) in the literature. \end{itemize} \begin{itemize} \item \emph{Exact boundary observability.} \begin{itemize} \item The wave equation (\ref{wave}) is exactly observable at any endpoint $x=x_{b}+vt$, where $x_{b}=0$ or $x_{b}=L$.
Due to the finite speed of propagation, the time of observability is expected to be positive and depends on the initial length $L$ and the speed $v$. We show that this time is exactly \begin{equation*} T_{v}:=2L/(1-v^{2}), \end{equation*} see Theorem \ref{thobs1}. \item If we observe both endpoints, i.e. for $x=vt$ and $x=L+vt$, the time of observability is reduced to \begin{equation*} \tilde{T}_{v}:=L/(1-v), \end{equation*} see Theorem \ref{thobs2}. \end{itemize} \end{itemize} Although the problem considered here is linear and extensively studied, the application of the Fourier series method to establish the above-stated results is new to the best of our knowledge. Let us also note that, letting $v\rightarrow 0$ in the above results, we recover some known facts for the wave equation in non-travelling intervals \cite{Lion1988,KoLo2005}. In particular, $\mathcal{E}_{0}\left( t\right) =\frac{1}{2}\int_{0}^{L}\phi _{t}^{2}+\phi _{x}^{2}dx$ is known to be conserved and we get $T_{0}=2L$ and $\tilde{T}_{0}=L$ as sharp values for the boundary observability time. After the present introduction, we derive an expression for the coefficients of the series formula (\ref{exact0}). In Section 3, we show that the energy $\mathcal{E}_{v}$ is conserved in time. The boundary observability results at one endpoint and at both endpoints are addressed in the last section. \section{Computing the coefficients of the series} To simplify some formulas, we introduce the notation \begin{equation*} \gamma _{v}:=\frac{1+v}{1-v},\text{ \ \ \ \ }L_{1}:=\frac{1-v}{1+v}L\ \ \text{and \ }L_{2}:=\frac{2}{1-v}L \end{equation*} since these constants will appear frequently in the sequel. Note that \begin{equation*} 1<\gamma _{v}<+\infty \text{ \ \ and \ \ }0<L_{1}<L<L_{2}/2,\text{ \ for }0<v<1. \end{equation*} For every initial data \begin{equation} \phi ^{0}\in H_{0}^{1}\left( \mathbf{I}_{0}\right) ,\text{ \ }\phi ^{1}\in L^{2}\left( \mathbf{I}_{0}\right) , \label{ic} \end{equation} we already know that if (\ref{tlike}) holds, the solution of Problem (\ref{wave}) exists and satisfies \begin{equation} \phi \in C\left( [0,T];H_{0}^{1}\left( \mathbf{I}_{t}\right) \right) \text{\ \ \ and \ \ }\phi _{t}\in C\left( [0,T];L^{2}\left( \mathbf{I}_{t}\right) \right) , \label{solreg} \end{equation} see for instance \cite{DaZo1990,BaCh1981}. Moreover, an easy computation shows that the solution $\phi $ given by (\ref{exact0}) satisfies the periodicity relation \begin{equation} \phi (x+vT_{v},t+T_{v})=\phi (x,t), \label{T-period} \end{equation} i.e., after a time $T_{v}=2L/\left( 1-v^{2}\right)$ the string travels a distance $vT_{v}$ and returns to its original form at time $t$.
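These properties are easy to check numerically on a truncated version of the series (\ref{exact0}). The sketch below is an illustration only: the coefficients are arbitrary, and choosing $c_{-n}=\overline{c_{n}}$ makes the truncated sum real. It verifies that the sum vanishes at both moving endpoints, satisfies the periodicity relation (\ref{T-period}) and keeps the energy (\ref{E}) constant in time, up to quadrature error.
\begin{verbatim}
import numpy as np

L, v = 1.0, 0.4
Tv = 2.0 * L / (1.0 - v**2)
N = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)    # c_1 ... c_N

def phi_dphi(x, t):
    """Truncated series (exact0) with c_{-n} = conj(c_n), i.e. twice the real
    part of the n > 0 sum, together with its t- and x-derivatives."""
    phi = dpt = dpx = 0.0
    for n in range(1, N + 1):
        a = 1j * n * np.pi * (1 - v) / L
        b = 1j * n * np.pi * (1 + v) / L
        e1, e2 = np.exp(a * (t + x)), np.exp(b * (t - x))
        phi += 2 * np.real(c[n-1] * (e1 - e2))
        dpt += 2 * np.real(c[n-1] * (a * e1 - b * e2))
        dpx += 2 * np.real(c[n-1] * (a * e1 + b * e2))
    return phi, dpt, dpx

def energy(t, nq=4000):
    """E_v(t) of (E), computed by the trapezoidal rule on (vt, L+vt)."""
    xs, dx = np.linspace(v * t, L + v * t, nq, retstep=True)
    _, pt, px = phi_dphi(xs, t)
    dens = (pt + v * px)**2 + (1 - v**2) * px**2
    return 0.5 * dx * (np.sum(dens) - 0.5 * (dens[0] + dens[-1]))

for t in (0.0, 0.7, 1.9):
    left = phi_dphi(v * t, t)[0]
    right = phi_dphi(L + v * t, t)[0]
    x0 = v * t + 0.3 * L
    gap = phi_dphi(x0 + v * Tv, t + Tv)[0] - phi_dphi(x0, t)[0]
    print(f"t={t:3.1f}  phi(left)={left:+.1e}  phi(right)={right:+.1e}  "
          f"period gap={gap:+.1e}  E_v={energy(t):.6g}")
\end{verbatim}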
\subsection{Coefficients expressions} \begin{theorem} \label{thexist1}Under the assumptions (\ref{tlike}) and (\ref{ic}), the solution of\ Problem (\ref{wave}) is\emph{\ }given by the series (\re {exact0}) where the coefficients $c_{n}\in \mathbb{C}$ are given by any of the two following formulas \begin{align} c_{n}& =\frac{1}{4n\pi i}\int_{0}^{L_{2}}\left( \tilde{\phi}_{x}^{0}+\tilde \phi}^{1}\right) e^{-n\pi i\left( 1-v\right) x/L}dx, \label{cn+} \\ & =\frac{1}{4n\pi i}\int_{-L_{1}}^{L}\left( \tilde{\phi}_{x}^{0}-\tilde{\phi ^{1}\right) e^{n\pi i\left( 1+v\right) x/L}dx\text{, \ \ for }n\in \mathbb{\ \mathbb{Z}}^{\ast }, \label{cn-} \end{align where\emph{\ }$\tilde{\phi}_{x}^{0}$ and $\tilde{\phi}^{1}$ are extensions of the initial data $\phi ^{0}$and\ $\phi ^{1}$ on the interval $\left( -L_{1},L_{2}\right) $ given below by (\ref{phi0x+}) and (\ref{phi1+})\emph{\ }respectively. \end{theorem} Before proceeding to the proof, let us describe how to extend the function \phi ,\ $defined only on $\mathbf{I}_{t}=\left( vt,L+vt\right) ,$ to the intervals $\left( -L_{1}+vt,vt\right) $ and $\left( L+vt,L_{2}+vt\right) $. On one hand, we se \begin{equation} \tilde{\phi}(x,t)=\left\{ \begin{array}{ll} -\phi \left( \gamma _{v}\left( vt-x\right) +vt,t\right) , & \text{if }x\in \left( -L_{1}+vt,vt\right) ,\smallskip \\ \phi \left( x,t\right) , & \text{if }x\in \left( vt,L+vt\right) ,\smallskip \\ -\phi \left( \frac{1}{\gamma _{v}}\left( vt-x\right) +\frac{2L}{1+v +vt,t\right) ,\text{ \ \ } & \text{if }x\in \left( L+vt,L_{2}+vt\right) \end{array \right. \label{phi0+} \end{equation The obtained function is well defined since the first variable of $\phi $ remains in the interval $\left( vt,L+vt\right) $. In particular, $\tilde{\ph }(vt,t)=\tilde{\phi}(L+vt,t)=0,\ $hence the homogeneous boundary conditions at $x=vt$ and $x=L+vt$ remain satisfied, for every $t\geq 0$. \begin{figure}[tbph] \centering \includegraphics[width=0.71\textwidth]{GhS_fig2} \caption{Example of the extension of an initial data $\protect\phi ^{0}$.} \label{fig1a} \end{figure} \begin{remark} If $v=0$, then $L_{1}=L$ and $L_{2}=2L$. In this case, the functions $\tilde \phi}$ and $\tilde{\phi}_{t}$ are odd on the intervals $\left( -L,L\right) $ and $\left( 0,2L\right) $ with respect to the middle of each interval. The extension $\tilde{\phi}_{x}$ is an even function on these intervals. \end{remark} Taking the derivative of (\ref{phi0+}) with respect to $x$, we obtain$, \begin{equation} \tilde{\phi}_{x}(x,t)=\left\{ \begin{array}{ll} \gamma _{v}\phi _{x}\left( \gamma _{v}\left( vt-x\right) +vt,t\right) , & \text{if }x\in \left( -L_{1}+vt,vt\right) ,\smallskip \\ \phi _{x}\left( x,t\right) , & \text{if }x\in \left( vt,L+vt\right) ,\smallskip \\ \frac{1}{\gamma _{v}}\phi _{x}\left( \frac{1}{\gamma _{v}}\left( vt-x\right) +\frac{2L}{1+v}+vt,t\right) ,\text{ \ \ \ \ } & \text{if }x\in \left( L+vt,L_{2}+vt\right) \end{array \right. \label{phi0x+} \end{equation On the other hand, $\tilde{\phi}_{t}(x,t)$ is extended as follow \begin{equation} \tilde{\phi}_{t}(x,t)=\left\{ \begin{array}{ll} -\gamma _{v}\phi _{t}\left( \gamma _{v}\left( vt-x\right) +vt,t\right) , & \text{if }x\in \left( -L_{1}+vt,vt\right) ,\smallskip \\ \phi _{t}\left( x,t\right) , & \text{if }x\in \left( vt,L+vt\right) ,\smallskip \\ \frac{-1}{\gamma _{v}}\phi _{t}\left( \frac{1}{\gamma _{v}}\left( vt-x\right) +\frac{2L}{1+v}+vt,t\right) ,\text{ \ \ \ \ } & \text{if }x\in \left( L+vt,L_{2}+vt\right) \end{array \right. 
\label{phi1+} \end{equation} \begin{remark} In Figure \ref{fig3}, let $(x_{1},t_{1})$ be the intersection of the two characteristics starting from the initial endpoints $x=0$ and $x=L$, after one reflection on the boundaries. We can check that the two backward characteristic lines from $(x_{1},t_{1})$ intersect the $x-$axis precisely at $x=-L_{1}$ and $x=L_{2}$. \end{remark} \begin{figure}[bth] \centering \includegraphics[width=0.67\textwidth,height=51mm]{GhS_fig3} \caption{Relation between $L_1,L_2$ and some characteristics of wave propagation.} \label{fig3} \end{figure} Now we are ready to prove the formulas for the coefficients. \begin{proof}[Proof of Theorem \protect\ref{thexist1}] Thanks to (\ref{solreg}), we can differentiate the series (\ref{exact0}) term by term, which gives \begin{align} \phi _{x}(x,t)& =\frac{\pi i}{L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}\left( \left( 1-v\right) e^{n\pi i\left( 1-v\right) \left( t+x\right) /L}+\left( 1+v\right) e^{n\pi i\left( 1+v\right) \left( t-x\right) /L}\right) ,\smallskip \label{ph x} \\ \phi _{t}(x,t)& =\frac{\pi i}{L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}\left( \left( 1-v\right) e^{n\pi i\left( 1-v\right) \left( t+x\right) /L}-\left( 1+v\right) e^{n\pi i\left( 1+v\right) \left( t-x\right) /L}\right) , \label{ph t} \end{align} where $t\geq 0$, $x\in \left( vt,L+vt\right) $. Combining this with (\ref{phi0x+}) and (\ref{phi1+}), the extensions $\tilde{\phi}_{x}$ and $\tilde{\phi}_{t}$ on the interval $\left( vt,L_{2}+vt\right) $ are given by \begin{equation} \tilde{\phi}_{x}(x,t)=\left\{ \begin{array}{ll} \displaystyle\frac{\pi i}{L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}\left( \left( 1-v\right) e^{n\pi i\left( 1-v\right) \left( t+x\right) /L}+\left( 1+v\right) e^{n\pi i\left( 1+v\right) \left( t-x\right) /L}\right) , & \text{ if\ }x\in \left( vt,L+vt\right) \smallskip , \\ \displaystyle\frac{\pi i}{\gamma _{v}L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}\left\{ \left( 1-v\right) e^{\frac{n\pi i\left( 1-v\right) }{L}\left( \left( 1+v\right) t+\frac{vt-x}{\gamma _{v}}+\frac{2L}{1+v}\right) }\right. & \\ \multicolumn{1}{r}{\left. +\left( 1+v\right) e^{\frac{n\pi i\left( 1+v\right) }{L}\left( \left( 1-v\right) t-\frac{vt-x}{\gamma _{v}}-\frac{2L}{1+v}\right) }\right\} ,} & \multicolumn{1}{r}{\text{ if\ }x\in \left( L+vt,L_{2}+vt\right) ,} \end{array}\right. \label{phx} \end{equation} \begin{equation} \tilde{\phi}_{t}(x,t)=\left\{ \begin{array}{ll} \displaystyle\frac{\pi i}{L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}\left( \left( 1-v\right) e^{n\pi i\left( 1-v\right) \left( t+x\right) /L}-\left( 1+v\right) e^{n\pi i\left( 1+v\right) \left( t-x\right) /L}\right) , & \text{ if }x\in \left( vt,L+vt\right) \smallskip , \\ \displaystyle\frac{-\pi i}{\gamma _{v}L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}\left\{ \left( 1-v\right) e^{\frac{n\pi i\left( 1-v\right) }{L}\left( \left( 1+v\right) t+\frac{vt-x}{\gamma _{v}}+\frac{2L}{1+v}\right) }\right. & \\ \multicolumn{1}{r}{\left. -\left( 1+v\right) e^{\frac{n\pi i\left( 1+v\right) }{L}\left( \left( 1-v\right) t-\frac{vt-x}{\gamma _{v}}-\frac{2L}{1+v}\right) }\right\} ,} & \multicolumn{1}{r}{\text{\ if }x\in \left( L+vt,L_{2}+vt\right) ,} \end{array}\right.
\label{pht} \end{equation} Taking the sum of (\ref{phx}) and (\ref{pht}) on the interval $\left( vt,L_{2}+vt\right) $, we get \begin{equation*} \tilde{\phi}_{x}+\tilde{\phi}_{t}=\left\{ \begin{array}{ll} \displaystyle\frac{2\pi i}{L}\left( 1-v\right) \sum_{n\in \mathbb{Z}^{\ast }}nc_{n}e^{n\pi i\left( 1-v\right) \left( t+x\right) /L}, & x\in \left( vt,L+vt\right) ,\medskip \\ \displaystyle\frac{2\pi i}{\gamma _{v}L}\left( 1+v\right) \sum_{n\in \mathbb{Z}^{\ast }}nc_{n}e^{\frac{n\pi i\left( 1+v\right) }{L}\left( \left( 1-v\right) t-\frac{vt-x}{\gamma _{v}}-\frac{2L}{1+v}\right) }, & x\in \left( L+vt,L_{2}+vt\right) \end{array}\right. \end{equation*} Since $e^{\frac{n\pi i\left( 1+v\right) }{L}\left( \left( 1-v\right) t-\frac{vt-x}{\gamma _{v}}-\frac{2L}{1+v}\right) }=e^{n\pi i\left( 1-v\right) \left( t+x\right) /L}$, we get the same expression on the two sub-intervals, i.e. \begin{equation} \tilde{\phi}_{x}+\tilde{\phi}_{t}=\frac{2\pi i}{L}\left( 1-v\right) \sum_{n\in \mathbb{Z}^{\ast }}nc_{n}e^{n\pi i\left( 1-v\right) \left( t+x\right) /L},\text{ \ \ for}\ \ x\in \left( vt,L_{2}+vt\right) . \label{29} \end{equation} Taking into account that $\left\{ \sqrt{\frac{1-v}{2L}}e^{n\pi i\left( 1-v\right) \left( t+x\right) /L}\right\} _{n\in \mathbb{Z}}$ is an orthonormal basis for $L^{2}\left( vt,L_{2}+vt\right) $, for every $t\geq 0$, we rewrite (\ref{29}) as \begin{equation} \frac{1}{4\pi i}\sqrt{\frac{2L}{1-v}}\left( \tilde{\phi}_{x}+\tilde{\phi}_{t}\right) =\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}\sqrt{\frac{1-v}{2L}}e^{n\pi i\left( 1-v\right) \left( t+x\right) /L}, \label{30} \end{equation} for $x\in \left( vt,L_{2}+vt\right) $. This means that $nc_{n}$ is the $n^{th}$ coefficient of the function \begin{equation} \frac{1}{4\pi i}\sqrt{\frac{2L}{1-v}}\left( \tilde{\phi}_{x}+\tilde{\phi}_{t}\right) \in L^{2}\left( vt,L_{2}+vt\right) . \label{parsev+} \end{equation} By consequence, \begin{equation} nc_{n}=\frac{1}{4\pi i}\int_{vt}^{L_{2}+vt}\left( \tilde{\phi}_{x}+\tilde{\phi}_{t}\right) e^{-n\pi i\left( 1-v\right) \left( t+x\right) /L}dx\text{, \ \ \ for\ }n\in \mathbb{Z}^{\ast } \label{31-} \end{equation} and (\ref{cn+}) holds as claimed for $t=0$. The same argument can be carried out on the interval $\left( -L_{1}+vt,L+vt\right) $ by taking this time the difference between (\ref{phx}) and (\ref{pht}); we obtain \begin{equation*} \tilde{\phi}_{x}-\tilde{\phi}_{t}=\left\{ \begin{array}{ll} \frac{2\pi i}{L}\gamma _{v}\left( 1-v\right) \displaystyle\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}e^{n\pi i\left( 1-v\right) \left( \left( 1+v\right) t+\gamma _{v}\left( vt-x\right) \right) /L}, & x\in \left( -L_{1}+vt,vt\right) ,\smallskip \\ \frac{2\pi i}{L}\left( 1+v\right) \displaystyle\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}e^{n\pi i\left( 1+v\right) \left( t-x\right) /L}, & x\in \left( vt,L+vt\right) \end{array}\right. \end{equation*} After a few rearrangements, it follows that \begin{equation} \tilde{\phi}_{x}-\tilde{\phi}_{t}=\frac{2\pi i}{L}\left( 1+v\right) \sum_{n\in \mathbb{Z}^{\ast }}nc_{n}e^{n\pi i\left( 1+v\right) \left( t-x\right) /L},\text{ \ \ for\ }x\in \left( -L_{1}+vt,L+vt\right) .
\label{29b} \end{equation} Since $\left\{ \sqrt{\frac{1+v}{2L}}e^{n\pi i\left( 1+v\right) \left( t-x\right) /L}\right\} _{n\in \mathbb{Z}}$ is an orthonormal basis for $L^{2}\left( -L_{1}+vt,L+vt\right) $, we deduce that \begin{equation} nc_{n}=\frac{1}{4\pi i}\int_{-L_{1}+vt}^{L+vt}\left( \tilde{\phi}_{x}-\tilde{\phi}_{t}\right) e^{-n\pi i\left( 1+v\right) \left( t-x\right) /L}dx\text{, \ \ \ for\ }n\in \mathbb{Z}^{\ast }. \label{31} \end{equation} For $t=0$, we obtain (\ref{cn-}) and the theorem follows. \end{proof} As a byproduct of the above proof, we have the following. \begin{corollary} \label{coroSll}Under the assumptions (\ref{tlike}) and (\ref{ic}), the sum $\sum_{n\in \mathbb{Z}^{\ast }}\left\vert nc_{n}\right\vert ^{2}$ is finite and is given by any of the two formulas, for $t\geq 0$, \begin{align} \sum_{n\in \mathbb{Z}^{\ast }}\left\vert nc_{n}\right\vert ^{2}& =\frac{L}{8\pi ^{2}\left( 1-v\right) }\int_{vt}^{L_{2}+vt}\left( \tilde{\phi}_{x}+\tilde{\phi}_{t}\right) ^{2}dx \label{ncn+} \\ & =\frac{L}{8\pi ^{2}\left( 1+v\right) }\int_{-L_{1}+vt}^{L+vt}\left( \tilde{\phi}_{x}-\tilde{\phi}_{t}\right) ^{2}dx. \label{ncn-} \end{align} \end{corollary} \begin{proof} Parseval's equality applied to the function given in (\ref{30}) yields \begin{equation*} \sum_{n\in \mathbb{Z}^{\ast }}\left\vert nc_{n}\right\vert ^{2}=\left\vert \frac{1}{4\pi i}\sqrt{\frac{2L}{1-v}}\right\vert ^{2}\int_{vt}^{L_{2}+vt}\left( \tilde{\phi}_{x}+\tilde{\phi}_{t}\right) ^{2}dx\text{, \ \ \ for\ }t\geq 0. \end{equation*} Thus (\ref{ncn+}) holds as claimed. The identity (\ref{ncn-}) follows from (\ref{31}) in a similar manner. \end{proof} \subsection{A numerical example} To illustrate the above results, we compute the solution of (\ref{wave}) for two values of the speed, $v=0.3$ and $v=0.7$, with \begin{equation*} L=\pi \ ,\ \phi ^{0}\left( x\right) =\sin \left( x\right) /10,\ \ \phi ^{1}\left( x\right) =0, \end{equation*} and use (\ref{cn+}) for the first 40 frequencies, i.e. $\left\vert n\right\vert \leq 40$ in the series sum (\ref{exact0}). See Figures 4 and 5. \begin{figure}[h] \centering \begin{minipage}[h]{0.48\textwidth} \includegraphics[width=\textwidth]{GhS_fig_v03bw} \caption{The solution $\protect\phi$ for $v=0.3$ in the interval $(vt,\protect\pi+vt)$ over one period $T_v \simeq 6.91$.} \end{minipage} \hfill \begin{minipage}[h]{0.48\textwidth} \includegraphics[width=\textwidth]{GhS_fig_v07bw} \caption{The solution $\protect\phi$ for $v=0.7$ in the interval $(vt,\protect\pi+vt)$ over one period $T_v \simeq 12.32$.} \end{minipage} \end{figure} \section{Energy expressions and estimates} In this section, we show that the energy $\mathcal{E}_{v}\left( t\right) $ of the solution of Problem (\ref{wave}) is conserved in time. \begin{theorem} \label{th1}Under the assumptions (\ref{tlike}) and (\ref{ic}), the solution of Problem (\ref{wave}) satisfies \begin{equation} \mathcal{E}_{v}\left( t\right) =\frac{2\pi ^{2}\left( 1-v^{2}\right) }{L}\sum_{n\in \mathbb{Z}^{\ast }}\left\vert nc_{n}\right\vert ^{2},\ \ \ \ \ \text{for}\ t\geq 0. \label{est0} \end{equation} $($the left-hand side is independent of $t).$ \end{theorem} \begin{proof} The two identities (\ref{ncn+}) and (\ref{ncn-}) imply that \begin{equation} \frac{1}{1+v}\int_{-L_{1}+vt}^{L+vt}\left( \tilde{\phi}_{x}-\tilde{\phi}_{t}\right) ^{2}dx=\frac{1}{1-v}\int_{vt}^{L_{2}+vt}\left( \tilde{\phi}_{x}+\tilde{\phi}_{t}\right) ^{2}dx=\frac{8\pi ^{2}}{L}\sum_{n\in \mathbb{Z}^{\ast }}\left\vert nc_{n}\right\vert ^{2}.
\label{32} \end{equation} Using the extensions (\ref{phi0x+}), (\ref{phi1+}) and considering the change of variable $x=\frac{1}{\gamma _{v}}\left( vt-\xi \right) +vt$, in $\left( -L_{1}+vt,vt\right) ,$ we obtain \begin{multline*} \frac{1}{1+v}\int_{-L_{1}+vt}^{vt}\left( \tilde{\phi}_{x}\left( x,t\right) -\tilde{\phi}_{t}\left( x,t\right) \right) ^{2}dx=-\frac{1}{1+v}\int_{L+vt}^{vt}\gamma _{v}\left( \phi _{x}\left( \xi ,t\right) +\phi _{t}\left( \xi ,t\right) \right) ^{2}d\xi \\ =\frac{1}{1-v}\int_{vt}^{L+vt}\left( \phi _{x}\left( \xi ,t\right) +\phi _{t}\left( \xi ,t\right) \right) ^{2}d\xi . \end{multline*} Taking $x=\gamma _{v}\left( vt-\xi \right) +\frac{2L}{1-v}+vt$, in $\left( L+vt,L_{2}+vt\right) ,$ we obtain \begin{equation*} \frac{1}{1-v}\int_{L+vt}^{L_{2}+vt}\left( \tilde{\phi}_{x}\left( x,t\right) +\tilde{\phi}_{t}\left( x,t\right) \right) ^{2}dx=\frac{1}{1+v}\int_{vt}^{L+vt}\left( \phi _{x}\left( \xi ,t\right) -\phi _{t}\left( \xi ,t\right) \right) ^{2}d\xi . \end{equation*} Then, taking (\ref{32}) into account,\ it comes that \begin{multline*} \frac{1}{1+v}\int_{-L_{1}+vt}^{L+vt}\left( \tilde{\phi}_{x}-\tilde{\phi}_{t}\right) ^{2}dx+\frac{1}{1-v}\int_{vt}^{L_{2}+vt}\left( \tilde{\phi}_{x}+\tilde{\phi}_{t}\right) ^{2}dx \\ =\frac{2}{1+v}\int_{vt}^{L+vt}\left( \phi _{x}-\phi _{t}\right) ^{2}dx+\frac{2}{1-v}\int_{vt}^{L+vt}\left( \phi _{t}+\phi _{x}\right) ^{2}dx=\frac{16\pi ^{2}}{L}\sum_{n\in \mathbb{Z}^{\ast }}\left\vert nc_{n}\right\vert ^{2}. \end{multline*} Expanding $\left( \phi _{x}\pm \phi _{t}\right) ^{2}$ and collecting similar terms, we get \begin{equation} \frac{1}{1-v^{2}}\int_{vt}^{L+vt}\left( 2\phi _{x}^{2}+2\phi _{t}^{2}+4v\phi _{x}\phi _{t}\right) dx=\frac{8\pi ^{2}}{L}\sum_{n\in \mathbb{Z}^{\ast }}\left\vert nc_{n}\right\vert ^{2},\text{ \ \ for}\ t\geq 0. \label{est1} \end{equation} Recalling that $\mathcal{E}_{v}\left( t\right) $ is given by (\ref{E}), this identity can be rewritten as in (\ref{est0}). This ends the proof. \end{proof} The fact that $\mathcal{E}_{v}\left( t\right) $ is constant in time can be established by using only the identities $\phi _{tt}=\phi _{xx}$ and $\phi \left( vt,t\right) =\phi \left( L+vt,t\right) =0$ from (\ref{wave}). \begin{proof}[A second proof for the conservation of $\mathcal{E}_{v}\left( t\right) $] It suffices to show that $\frac{d}{dt}\mathcal{E}_{v}\left( t\right) =0.$ First, the boundary conditions $\phi \left( vt,t\right) =\phi \left( L+vt,t\right) =0$ mean that $\frac{d}{dt}\phi \left( vt,t\right) =\frac{d}{dt}\phi \left( L+vt,t\right) =0,$ hence \begin{equation} \phi _{t}\left( vt,t\right) +v\phi _{x}\left( vt,t\right) =\phi _{t}\left( L+vt,t\right) +v\phi _{x}\left( L+vt,t\right) =0. \label{dtotal=0} \end{equation} Since the limits of the integral in the expression of $\mathcal{E}_{v}\left( t\right) $ are time-dependent, Leibniz's rule implies that \begin{multline} \frac{d}{dt}\mathcal{E}_{v}\left( t\right) =\frac{v}{2}\left( 1-v^{2}\right) \left( \phi _{x}^{2}\left( L+vt,t\right) -\phi _{x}^{2}\left( vt,t\right) \right) \label{DE} \\ +\frac{1}{2}\int_{vt}^{L+vt}\frac{\partial }{\partial t}\left( \phi _{t}+v\phi _{x}\right) ^{2}+\left( 1-v^{2}\right) \frac{\partial }{\partial t}\left( \phi _{x}^{2}\right) dx.
\end{multline} The remaining integral equals, after using $\phi _{tt}=\phi _{xx}$ and then integrating by parts, \begin{multline*} \int_{vt}^{L+vt}\left( \phi _{t}+v\phi _{x}\right) \phi _{xx}+\left( v\phi _{t}+\phi _{x}\right) \phi _{xt}dx=\int_{vt}^{L+vt}-\left( \phi _{xt}+v\phi _{xx}\right) \phi _{x}+\left( v\phi _{t}+\phi _{x}\right) \phi _{xt}dx \\ =v\int_{vt}^{L+vt}-\phi _{xx}\phi _{x}+\phi _{t}\phi _{xt}dx, \end{multline*} which is nothing but \begin{equation*} \frac{v}{2}\int_{vt}^{L+vt}\frac{\partial }{\partial x}\left( \phi _{t}^{2}-\phi _{x}^{2}\right) dx=-\frac{v}{2}\left( 1-v^{2}\right) \left( \phi _{x}^{2}\left( L+vt,t\right) -\phi _{x}^{2}\left( vt,t\right) \right) \end{equation*} due to (\ref{dtotal=0}). Going back to (\ref{DE}), we infer that $\frac{d}{dt}\mathcal{E}_{v}\left( t\right) =0$ as claimed. \end{proof} Let us now compare $\mathcal{E}_{v}\left( t\right) $ to the usual expression of the energy for the wave equation \begin{equation*} E_{v}\left( t\right) :=\frac{1}{2}\int_{vt}^{L+vt}\phi _{t}^{2}+\phi _{x}^{2}dx,\text{ \ for}\ t\geq 0. \end{equation*} In contrast with $\mathcal{E}_{v}\left( t\right) ,$ the expression $E_{v}\left( t\right) $ is not conserved in general. Due to the periodicity relation (\ref{T-period}), we know at least that $E_{v}$ is $T_{v}-$periodic in time. Moreover, we have \begin{corollary} \label{coro3.1}Under the assumptions (\ref{tlike}) and (\ref{ic})\emph{,} the energy $E_{v}\left( t\right) $ of the solution of Problem (\ref{wave}) satisfies \begin{equation} \frac{\mathcal{E}_{v}\left( t\right) }{1+v}\leq E_{v}\left( t\right) \leq \frac{\mathcal{E}_{v}\left( t\right) }{1-v},\ \ \ \ \ \text{for}\ t\geq 0 \label{ES0} \end{equation} and \begin{equation} \frac{1}{\gamma _{v}}E_{v}\left( 0\right) \leq E_{v}\left( t\right) \leq \gamma _{v}E_{v}\left( 0\right) ,\ \ \ \ \ \text{\ for }t\geq 0. \label{stab} \end{equation} \end{corollary} \begin{proof} We can write (\ref{est1}) as \begin{equation} E_{v}\left( t\right) +v\int_{vt}^{L+vt}\phi _{x}\phi _{t}\ dx=\mathcal{E}_{v}\left( t\right) ,\text{ \ \ for}\ t\geq 0. \label{En3.8} \end{equation} Thanks to the algebraic inequality $\pm ab\leq \left( a^{2}+b^{2}\right) /2$, we know that \begin{equation*} \pm \int_{vt}^{L+vt}\phi _{x}\phi _{t}\ dx\leq E_{v}\left( t\right) \text{, \ \ \ for }t\geq 0. \end{equation*} Then, it follows that \begin{equation} \mathcal{E}_{v}\left( t\right) \leq \left( 1+v\right) E_{v}\left( t\right) \text{ \ \ and \ \ }\left( 1-v\right) E_{v}\left( t\right) \leq \mathcal{E}_{v}\left( t\right) ,\text{ \ for }t\geq 0. \label{ESES} \end{equation} This implies (\ref{ES0}). Since (\ref{ESES}) also holds for $t=0$, (\ref{stab}) follows by combining the two inequalities \begin{align*} \left( 1-v\right) E_{v}\left( t\right) & \leq \mathcal{E}_{v}\left( t\right) =\mathcal{E}_{v}\left( 0\right) \leq \left( 1+v\right) E_{v}\left( 0\right) , \\ \left( 1-v\right) E_{v}\left( 0\right) & \leq \mathcal{E}_{v}\left( 0\right) =\mathcal{E}_{v}\left( t\right) \leq \left( 1+v\right) E_{v}\left( t\right) , \end{align*} for\ $t\geq 0$. \end{proof} \begin{remark} Equality in the estimate (\ref{ES0}) may hold for some $t\geq 0$. This is the case whenever $\phi_{t}\left( x,t\right)=\pm \phi _{x}\left(x, t\right) ,$ for $x\in \mathbf{I}_t$ and some $t\geq 0$. For instance, if the initial data satisfy $\phi ^{1}=\pm \phi _{x}^{0}$, we obtain from (\ref{En3.8}) that \begin{equation*} \left( 1\pm v\right) E_{v}\left( 0\right) =E_{v}\left( 0\right) +v\int_{0}^{L}\phi _{x}^{0}\phi ^{1}\ dx=\mathcal{E}_{v}\left( 0\right) , \end{equation*} i.e.
$E_{v}\left( 0\right) =\mathcal{E}_{v}\left( 0\right) /\left( 1\pm v\right) $. By periodicity, we have also $E_{v}\left( nT_{v}\right) =\mathcal{E}_{v}\left( 0\right) /\left( 1\pm v\right) ,$ for $n\in \mathbb{Z} $, with the $+$ and $-$ signs taken respectively. \end{remark} \begin{remark} As $v\rightarrow 1^{-},$ we have $\mathcal{E}_{v}\left( 0\right) \rightarrow \left\Vert \phi ^{1}+\phi _{x}^{0}\right\Vert _{L^{2}\left( 0,L\right) }^{2}/2.$ If the initial data satisfy $\phi ^{1}+\phi _{x}^{0}\neq 0,$ it follows from (\ref{ES0}) that \begin{equation*} E_{v}\left( t\right) \leq \frac{\mathcal{E}_{v}\left( t\right) }{1-v}=\frac{\mathcal{E}_{v}\left( 0\right) }{1-v}\rightarrow +\infty \text{, \ as }v\rightarrow 1^{-}. \end{equation*} Taking the preceding remark into account, $E_{v}\left( t\right)$ may take large values as $v$ becomes close to the speed of propagation $c=1$, even for a small initial value $\mathcal{E}_{v}\left( 0\right)$. To see what happens to the string in this case, let us take $v=0.9$ in the preceding numerical example, see Figure \ref{fig09}. We observe a layer effect (i.e. a subregion in $\mathbf{I}_t$ where $\phi _{x}$ becomes very large) that travels from the left endpoint to the right one over one period $T_{v}$. This phenomenon becomes more marked as $v$ gets closer to $1$. \end{remark} \begin{figure}[tbph] \centering \includegraphics[width=0.57\textwidth,height=77mm]{GhS_fig_v09bw} \caption{The solution $\protect\phi$ for $v=0.9$ in the interval $(vt,\protect\pi+vt)$ over one period $T_v \simeq 33.07$.} \label{fig09} \end{figure} \section{Boundary observability} In many applications, it is preferred that the sensors do not interfere with the vibrations of the string, so they are placed at the extremities. In addition, interior pointwise sensors are difficult to design and the system may become unobservable depending on the location of the sensors. This fact was shown by Yang and Mote in \cite{YaMo1991}, where they cast $\left( \ref{freez}\right) $ in state-space form and use semigroup theory. \subsection{Observability at one endpoint} First, we show the observability of (\ref{wave}) at each endpoint $x_{b}+vt$ where \begin{equation*} x_{b}=0\ \ \text{or \ }x_{b}=L. \end{equation*} The problem of observability considered here can be stated as follows: to give sufficient conditions on the length $T$ of the time interval such that there exists a constant $C(T)>0$ for which the observability inequality\footnote{One can replace $\mathcal{E}_{v}\left( 0\right) $ by $E_{v}\left( 0\right) $ in the left-hand side, but this does not matter since (\ref{ES0}) holds under the assumption (\ref{tlike}).} \begin{equation} \mathcal{E}_{v}\left( 0\right) \leq C(T)\int_{0}^{T}\phi _{x}^{2}(x_{b}+vt,t)dt, \label{ct} \end{equation} holds for all the solutions of (\ref{wave}). This inequality is also called the inverse inequality. The next theorem shows in particular that the boundary observability holds for $T\geq T_{v}=2L/\left( 1-v^{2}\right) $. \begin{theorem} \label{thobs1}Under the assumptions (\ref{tlike}) and (\ref{ic}), we have, for every $M\in \mathbb{N}^{\ast }$, \begin{equation} \int_{0}^{MT_{v}}\phi _{x}^{2}(x_{b}+vt,t)dt=\frac{4M}{\left( 1-v^{2}\right) ^{2}}\mathcal{E}_{v}\left( 0\right) . \label{33} \end{equation} By consequence, the solution of (\ref{wave}) satisfies the direct inequality \begin{equation} \int_{0}^{T}\phi _{x}^{2}(x_{b}+vt,t)dt\leq K_{1}(v,T)\mathcal{E}_{v}\left( 0\right) \text{, for every }T\geq 0, \label{D1} \end{equation} with a constant $K_{1}(v,T)$ depending only on $v$ and $T$.
If $T\geq T_{v}$, Problem (\ref{wave}) is observable at $\xi \left( t\right) =x_{b}+vt$ and it holds that \begin{equation} \mathcal{E}_{v}\left( 0\right) \leq \frac{\left( 1-v^{2}\right) ^{2}}{4}\int_{0}^{T}\phi _{x}^{2}(x_{b}+vt,t)dt. \label{obs1} \end{equation} \end{theorem} \begin{proof} Thanks to (\ref{ph x}), we can evaluate $\phi _{x}$ at the endpoint $x=x_{b}+vt$. We obtain \begin{align*} \phi _{x}(x_{b}+vt,t)& =\frac{\pi i}{L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}\left( \left( 1-v\right) e^{\frac{n\pi i\left( 1-v\right) }{L}\left( \left( 1+v\right) t+x_{b}\right) }+\left( 1+v\right) e^{\frac{n\pi i\left( 1+v\right) }{L}\left( \left( 1-v\right) t-x_{b}\right) }\right) \\ & =\frac{\pi i}{L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}\left( \left( 1-v\right) e^{\frac{n\pi i\left( 1-v\right) }{L}x_{b}}+\left( 1+v\right) e^{-\frac{n\pi i\left( 1+v\right) }{L}x_{b}}\right) e^{n\pi i\left( 1-v^{2}\right) t/L}, \end{align*} which can be rewritten as \begin{equation} \phi _{x}(x_{b}+vt,t)=\left\{ \begin{array}{ll} \displaystyle\frac{2\pi i}{L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}e^{2n\pi it/T_{v}}\smallskip , & \text{if }x_{b}=0, \\ \displaystyle\frac{2\pi i}{L}\sum_{n\in \mathbb{Z}^{\ast }}nc_{n}e^{-n\pi i\left( 1+v\right) }e^{2n\pi it/T_{v}}, & \text{if }x_{b}=L. \end{array}\right. \label{phix0} \end{equation} Let $M\in \mathbb{N}^{\ast }$. Since the set of functions $\left\{ e^{2n\pi it/T_{v}}/\sqrt{T_{v}}\right\} _{n\in \mathbb{Z}}$ is complete and orthonormal in the space $L^{2}(mT_{v},\left( m+1\right) T_{v})$ for $m=0,\ldots ,M-1$, Parseval's equality applied to the function \begin{equation*} \phi _{x}(x_{b}+vt,t)\in L^{2}(mT_{v},\left( m+1\right) T_{v}),\text{ \ for }m=0,\ldots ,M-1, \end{equation*} yields, after summing up the integrals over all the subintervals of $\left( 0,MT_{v}\right) $, \begin{equation*} \frac{1}{T_{v}}\int_{0}^{MT_{v}}\phi _{x}^{2}(x_{b}+vt,t)dt=\frac{4M\pi ^{2}}{L^{2}}\sum_{n\in \mathbb{Z}^{\ast }}\left\vert nc_{n}\right\vert ^{2} \end{equation*} and (\ref{33}) follows. For every $T\geq 0$, we can take the integer $M$ large enough to satisfy $MT_{v}=M\frac{2L}{1-v^{2}}\geq T$. Then, the identity (\ref{33}) yields \begin{equation*} \int_{0}^{T}\phi _{x}^{2}(x_{b}+vt,t)dt\leq \int_{0}^{MT_{v}}\phi _{x}^{2}(x_{b}+vt,t)dt=\frac{4M}{\left( 1-v^{2}\right)^2}\mathcal{E}_{v}\left( 0\right), \end{equation*} i.e., (\ref{D1}) holds for $K_{1}(v,T):=4M/\left(1-v^{2}\right)^2$. The inequality (\ref{obs1}) follows from (\ref{33}) with $M=1.$ \end{proof} \begin{remark} \label{rmk dt}Taking (\ref{dtotal=0}) into account, we have \begin{equation*} \phi _{t}^{2}(x_{b}+vt,t)=v^{2}\phi _{x}^{2}(x_{b}+vt,t),\text{ \ \ \ for }x_{b}=0\ \text{ or }x_{b}=L,\ \forall t\geq 0. \end{equation*} Then, the results of Theorem \ref{thobs1} hold if we replace $\phi _{x}^{2}(x_{b}+vt,t)$ by $\phi _{t}^{2}(x_{b}+vt,t)/v^{2}$, with the same constants in the inequalities. \end{remark} \begin{remark} The time of boundary observability $T_{v}$ can be predicted by a simple argument, see Figure $\ref{fig4}$. An initial disturbance concentrated near $x=L+vt$ may propagate to the left as $t$ increases. It reaches the left boundary when $t$ is close to $\frac{L}{1+v}$. Then it travels back to reach the right boundary when $t$ is close to $\frac{2L}{1-v^{2}}=T_{v}$, see Figure \ref{fig4} (left). We need the same time $T_{v}$ for an initial disturbance concentrated near $x=vt$, see Figure \ref{fig4} (right).
\end{remark} \begin{figure}[tbph] \centering\includegraphics[width=0.71\textwidth,height=57mm]{GhS_fig4} \caption{Propagation of small disturbances with support near an endpoint.} \label{fig4} \end{figure} \subsection{Observability at both endpoints} Placing two sensors at both endpoints $x=vt$ and $x=L+vt$ of the interval $\mathbf{I}_{t}$, one expects a shorter time of observability. The next theorem shows that the observability, in this case, holds for $T\geq \tilde{T}_{v}=L/\left( 1-v\right) $. \begin{theorem} \label{thobs2}Under the assumptions (\ref{tlike}) and (\ref{ic}),\emph{\ }we have: \begin{equation} \int_{0}^{\frac{L}{1+v}}\phi _{x}^{2}(vt,t)dt+\int_{0}^{\frac{L}{1-v}}\phi _{x}^{2}(L+vt,t)dt=\frac{4\mathcal{E}_{v}\left( 0\right)}{\left( 1-v^{2}\right)^{2}} . \label{slm2} \end{equation} By consequence, the solution of (\ref{wave}) satisfies the direct inequality \begin{equation} \int_{0}^{T}\phi _{x}^{2}(vt,t)+\phi _{x}^{2}(L+vt,t)dt\leq K_{2}(v,T)\mathcal{E}_{v}\left( 0\right) \text{, for every }T\geq 0, \label{D2} \end{equation} with a constant $K_{2}(v,T)$ depending only on $v$ and $T$. If $T\geq \tilde{T}_{v}$, Problem (\ref{wave}) is observable at both endpoints $x=vt,x=L+vt$ and it holds that \begin{equation} \mathcal{E}_{v}\left( 0\right) \leq \frac{\left( 1-v^{2}\right) ^{2}}{4}\int_{0}^{T}\phi _{x}^{2}(vt,t)+\phi _{x}^{2}(L+vt,t)dt. \label{obs3} \end{equation} \end{theorem} \begin{proof} Arguing by density as in \cite{Seng2020}, it suffices to establish (\ref{slm2}) for smooth initial data. Thus, assuming that $\phi _{x}^{0}$ and $\phi ^{1}$ are continuous functions ensures in particular that their Fourier series converge absolutely. This allows us to interchange summation and integration in the infinite series considered in the remainder of the proof. Let $m\in \mathbb{Z}^{\ast }$. On one hand, taking $x_{b}=0$ in (\ref{phix0}), multiplying by $\overline{imc_{m}e^{2m\pi it/T_{v}}}$ and then integrating on $\left( 0,L/\left( 1+v\right) \right) ,$ we obtain \begin{equation*} \int_{0}^{\frac{L}{1+v}}\phi _{x}(vt,t)\text{ }\overline{imc_{m}e^{2m\pi it/T_{v}}}dt=\frac{2\pi }{L}m\bar{c}_{m}\int_{0}^{\frac{L}{1+v}}\left( \sum_{n\in \mathbb{Z}^{\ast }}nc_{n}e^{2\left( n-m\right) \pi it/T_{v}}\right) dt. \end{equation*} Integrating term-by-term, we obtain \begin{multline} \int_{0}^{\frac{L}{1+v}}\phi _{x}(vt,t)\text{ }\overline{imc_{m}e^{2m\pi it/T_{v}}}dt=\frac{2\pi }{L}\sum_{n\in \mathbb{Z}^{\ast }}nmc_{n}\bar{c}_{m}\int_{0}^{\frac{L}{1+v}}e^{2\left( n-m\right) \pi it/T_{v}}dt \label{Anm} \\ =\sum_{n\in \mathbb{Z}^{\ast }}A_{nm}, \end{multline} where \begin{equation*} A_{nm}=\left\{ \begin{array}{ll} \displaystyle\frac{2\pi }{1+v}\left\vert mc_{m}\right\vert ^{2},\medskip & \text{ \ if }n=m, \\ \displaystyle\frac{2nmc_{n}\bar{c}_{m}}{i\left( n-m\right) \left( 1-v^{2}\right) }\left( e^{\pi i\left( n-m\right) \left( 1-v\right) }-1\right) , & \text{ \ if }n\neq m. \end{array}\right.
\end{equation*} On the other hand, taking $x_{b}=L$ in the identity (\ref{phix0}), multiplying by $\overline{imc_{m}e^{-m\pi i\left( 1+v\right) }e^{2m\pi it/T_{v}}}$, then integrating term-by-term on $\left( 0,L/\left( 1-v\right) \right) $, we end up with \begin{equation} \int_{0}^{\frac{L}{1-v}}\phi _{x}(L+vt,t)\text{ }\overline{imc_{m}e^{-m\pi i\left( 1+v\right) }e^{2m\pi it/T_{v}}}dt=\sum_{n\in \mathbb{Z}^{\ast }}B_{nm}, \label{Bnm} \end{equation} where \begin{equation*} B_{nm}=\left\{ \begin{array}{ll} \displaystyle\frac{2\pi }{1-v}\left\vert mc_{m}\right\vert ^{2},\medskip & \text{ \ if }n=m, \\ \displaystyle\frac{2nmc_{n}\bar{c}_{m}}{i\left( n-m\right) \left( 1-v^{2}\right) }\left( 1-e^{-\left( n-m\right) \pi i\left( 1+v\right) }\right) , & \text{ \ if }n\neq m. \end{array}\right. \end{equation*} Computing $A_{nm}+B_{nm}$, we obtain: \begin{itemize} \item If $n=m,$ then \begin{equation*} A_{mm}+B_{mm}=2\pi \left\vert mc_{m}\right\vert ^{2}\left( \frac{1}{1+v}+\frac{1}{1-v}\right) =\frac{4\pi }{1-v^{2}}\left\vert mc_{m}\right\vert ^{2}. \end{equation*} \item If $n\neq m,$ then \begin{align*} A_{nm}+B_{nm}& =\frac{2nmc_{n}\bar{c}_{m}}{i\left( n-m\right) \left( 1-v^{2}\right) }\left( e^{\pi i\left( n-m\right) \left( 1-v\right) }-e^{-\left( n-m\right) \pi i\left( 1+v\right) }\right) \\ & =\frac{2nmc_{n}\bar{c}_{m}}{i\left( n-m\right) \left( 1-v^{2}\right) }e^{-\pi i\left( n-m\right) \left( 1-v\right) }\left( e^{\left( n-m\right) \pi i\left( 1-v+1+v\right) }-1\right) , \end{align*} i.e., $A_{nm}+B_{nm}=0\text{ if }n\neq m.$ \end{itemize} By consequence, the sum of (\ref{Anm}) and (\ref{Bnm}) is simply given by \begin{multline} \int_{0}^{\frac{L}{1+v}}\phi _{x}(vt,t)\overline{imc_{m}e^{2m\pi it/T_{v}}}dt \label{A+B} \\ +\int_{0}^{\frac{L}{1-v}}\phi _{x}(L+vt,t)\text{ }\overline{imc_{m}e^{-m\pi i\left( 1+v\right) }e^{2m\pi it/T_{v}}}dt=\frac{4\pi }{1-v^{2}}\left\vert mc_{m}\right\vert ^{2}, \end{multline} for every $m\in \mathbb{Z}^{\ast }$. Taking the sum over $m\in \mathbb{Z}^{\ast }$, and interchanging summation and integration, it follows that \begin{multline*} \int_{0}^{\frac{L}{1+v}}\phi _{x}(vt,t)\left( \sum\limits_{m=-\infty }^{+\infty }\overline{imc_{m}e^{2m\pi it/T_{v}}}\right) dt \\ +\int_{0}^{\frac{L}{1-v}}\phi _{x}(L+vt,t)\left( \sum\limits_{m=-\infty }^{+\infty }\overline{imc_{m}e^{-m\pi i\left( 1+v\right) }e^{2m\pi it/T_{v}}}\right) dt=\frac{4\pi }{1-v^{2}}\sum\limits_{m=-\infty }^{+\infty }\left\vert mc_{m}\right\vert ^{2}. \end{multline*} Thanks to (\ref{phix0}), we obtain \begin{equation*} \frac{L}{2\pi }\left( \int_{0}^{\frac{L}{1+v}}\phi _{x}^{2}(vt,t)dt+\int_{0}^{\frac{L}{1-v}}\phi _{x}^{2}(L+vt,t)dt\right) =\frac{4\pi }{1-v^{2}}\sum\limits_{m=-\infty }^{+\infty }\left\vert mc_{m}\right\vert ^{2}. \end{equation*} This shows (\ref{slm2}). Inequality (\ref{D2}) is a consequence of Theorem \ref{thobs1}: it suffices to choose $x_{b}=0$ and then $x_{b}=L$ in the direct inequality (\ref{D1}) and take the sum. The inequality (\ref{obs3}) holds for $T=\max \left\{ \frac{L}{1-v},\frac{L}{1+v}\right\} =\tilde{T}_{v}$ and therefore for every $T\geq \tilde{T}_{v}$ as well. \end{proof} \begin{remark} If $T<\frac{L}{1-v}$, then the observability does not hold. Indeed, an initial disturbance with sufficiently small support and close to $x=0$ will hit the boundary $x=L+vt$ only after the time $T$, see Figure $\ref{fig4}$ (right). \end{remark} \begin{remark} Thanks to the Hilbert uniqueness method (HUM), due to J.-L.
Lions \cite{Lion1988}, we can easily derive exact boundary controllability results at one or at both endpoints from the above observability results. The proof is not much different from that in \cite{Seng2020}. \end{remark} \begin{remark} The techniques used in this paper can be adapted to deal with more complicated boundary conditions for travelling strings. The results will appear in a forthcoming paper. \end{remark} \subsection*{Acknowledgements} The authors have been supported by the General Direction of Scientific Research and Technological Development (Algerian Ministry of Higher Education and Scientific Research), PRFU project \# C00L03UN280120220010. They are very grateful to this institution. \subsection*{ORCID} Abdelmouhcene Sengouga \href{https://orcid.org/0000-0003-3183-7973}{https://orcid.org/0000-0003-3183-7973}
\section{Introduction} Phase-insensitive amplifiers coherently and uniformly amplify every quadrature amplitude of an input electromagnetic field. The prototypical example of such an amplifier is a laser gain medium with population inversion between the active levels. Besides being a key component of lasers, phase-insensitive amplifiers (e.g., erbium-doped fiber amplifiers) are widely deployed in today's optical communication networks for restoring signal amplitudes and to offset detection noise \cite{RSS09optical}. Many physical mechanisms leading to phase-insensitive amplification are known in diverse platforms (see, e.g., Refs.~\cite{CDG+10,CCJ+12,LVH+14,CHN+20}), but they are all constrained by the unitarity of quantum dynamics to add a gain-dependent excess noise \cite{HM62,Cav82,CDG+10} that is minimized when the effective population in the active levels of the gain medium is completely inverted \cite{CCJ+12,Aga12quantum}. Such minimum-noise phase-insensitive amplifiers -- hereafter called \emph{quantum-limited amplifiers (QLAs)} -- are also of fundamental importance in continuous-variable quantum information. This is because the quantum channels defined by QLAs, together with pure-loss channels, are building blocks for constructing all other phase-covariant Gaussian channels by concatenation \cite{CGH06,Ser17qcv}. Due to the ubiquity of loss channels in nature, there is a vast literature on their sensing (see, e.g., \cite{Nai18loss,PBG+18,BAB+18,PVS+20} and references therein). In contrast, previous work on sensing gain of a QLA is limited to the context of detecting Unruh-Hawking radiation using single-mode probes \cite{AAF10} or assumes access to the internal degrees of freedom of the amplifier \cite{GP09}. In this Letter, we fill this gap by optimizing the gain sensing precision over all multimode ancilla-entangled probes and all joint quantum measurements, constraining only the energy and number of input modes of the probe. We also propose concrete probes, measurements and estimators enabling laboratory demonstration of a quantum advantage using present-day technology limited by nonunity-efficiency photodetection. Beyond gain sensing itself, owing to the above-mentioned concatenation theorem, our results combined with those for pure-loss channels \cite{Nai18loss} are expected to yield fundamental performance limits for a vast suite of detection and estimation problems involving Gaussian channels with excess noise -- see, e.g., Refs. \cite{TEG+08,NG20,BCT+21,GMT+20,Pir11,OLP+21,MI11,SWA+18,SSW20arxiv,WDA20,JDC22,ZP20,HBP21,Zhu21,BGD+17,TBG+21}. \begin{figure}[btp] \centering\includegraphics[trim=55mm 82mm 35mm 78mm, clip=true,width=\columnwidth]{ampgainsensing.pdf} \caption{A general ancilla-assisted parallel strategy for sensing the gain $G$ of a QLA $\cl{A}_G$ (dashed box): Each of $M$ signal ($S$) modes (one of which is shown) of a probe $\ket{\psi}_{AS}$ possibly entangled with an ancilla system $A$ is subject to a two-mode squeezing interaction $\hat{H}_I = i\hbar\kappa\pars{\hat{a} \hat{e} -\hat{a}^\dag \hat{e}^\dag}$ between the $S$ mode (annihilation operator $\hat{a}$) and an environment ($E$) mode (annihilation operator $\hat{e}$) initially in the vacuum state. 
An estimate $\check{G}$ of $G = \cosh^2\kappa t$ is obtained using the optimal joint measurement on the output of $AS$.}\label{fig:ampgainsensing} \end{figure} \indent \emph{Quantum-limited amplifiers--} A canonical realization of a QLA involves an optical parametric amplifier (or \emph{paramp}) effecting a two-mode squeezing interaction between the amplified or \emph{signal} ($S$) mode (annihilation operator $\hat{a}$) and an \emph{environment} mode ($E$) (annihilation operator $\hat{e}$), after which the $E$ mode is discarded \cite{Aga12quantum,CCJ+12,Ser17qcv} (Fig.~\ref{fig:ampgainsensing}, dashed box). In the interaction picture, the paramp Hamiltonian $\hat{H}_I = i\hbar \kappa \pars{\hat{a} \hat{e} -\hat{a}^\dag \hat{e}^\dag }$, where $\kappa$ is an effective coupling strength. Quantum-limited operation obtains when the environment is initially in the vacuum state. Evolution for a time $t$ results in the Bogoliubov transformations $\hat{a}_\tsf{out} = \sqrt{G} \,\hat{a}_\tsf{in} - \sqrt{G-1}\,\hat{e}^\dag_\tsf{in}; \hat{e}_\tsf{out} = \sqrt{G} \,\hat{e}_\tsf{in} - \sqrt{G -1}\,\hat{a}^\dag_\tsf{in}, where $\hat{a}_\tsf{in} = \hat{a}(0), \hat{e}_\tsf{in} =\hat{e}(0)$ are the input (time zero) and $\hat{a}_\tsf{out} = \hat{a}(t), \hat{e}_\tsf{out} =\hat{e}(t)$ are the output (time $t$) annihilation operators and $\sqrt{G} = \cosh \kappa t \equiv \cosh\tau$. The average output energy $\mean{\hat{a}^\dag_\tsf{out} \hat{a}_\tsf{out}} = G\mean{\hat{a}^\dag_\tsf{in} \hat{a}_\tsf{in}}+ (G-1)$, where the last term represents the added noise of a QLA of gain $G \geq 1$ \cite{Cav82}. The state transformation (quantum channel) on the signal mode corresponding to a QLA of gain $G$ is denoted $\cl{A}_G$. \indent \emph{Gain sensing setup and background ---} Figure~\ref{fig:ampgainsensing} also shows a general ancilla-assisted parallel estimation strategy for gain sensing. A pure state $\ket{\psi}_{AS}$ (called the \emph{probe}) of $M$ signal modes entangled with an arbitrary ancilla system $A$ is prepared. Each of the signal modes passes through the QLA, following which the joint $AS$ system is measured using an optimal (possibly probe-dependent) measurement for estimating $G$. The probe has the general form \begin{align} \label{probe} \ket{\psi}_{AS} = \sum_{\mb{n} \geq \mb{0}} \sqrt{p}_{\mb{n}}\ket{\chi_{\mb n}}_A \ket{\mb n}_S, \end{align} where $\ket{\mb n}_S = \ket{n_1}_{S_1}\ket{n_2}_{S_2}\cdots \ket{n_M}_{S_M}$ is an $M$-mode number state of $S$, $\{\ket{\chi_{\mb n}}_A\}$ are normalized (not necessarily orthogonal) states of $A$, and $\left\{p_{\mb n} \geq 0 \right\}$ is the probability distribution of $\mb{n}$. The number $M$ of available signal modes depends on operational constraints such as measurement time and bandwidth, and will turn out to be fundamental in determining the sensing precision. Additionally, we impose the standard constraint on the average photon number in the signal modes: $\bra{\psi} \hat{I}_A \otimes \pars{\sum_{m=1}^M \hat{N}_m} \ket{\psi} = N$, where $\hat{N}_m = \hat{a}_m^\dag \hat{a}_m$ is the number operator of the $m$-th signal mode and $\hat{I}_A$ is the identity on the ancilla system. This constraint can be simplified as $\sum_{n=0}^{\infty} n\,p_n = N,\;\;{\rm where}\;\; p_n = \sum_{\mb{n}\, :\, n_1 +\ldots + n_M = n} p_{\mb n} $ is the probability mass function of the \emph{total} photon number in the signal modes. 
A mixed-state probe can be purified using an additional ancilla with the resulting purification being again of the form ~\eqref{probe} with the same $N$ and $M$. Thus, optimization over probes of the form of Eq.~\eqref{probe} suffices. We are interested in comparing the performance of the optimal quantum probes of the form of Eq.~\eqref{probe} to the best performance achievable using \emph{classical} probes under the same resource constraints, i.e., probes that consist of mixtures of $M$-mode coherent states, possibly correlated with an arbitrary number $M'$ of ancilla modes. Such probes can be prepared using laser sources, and have the form \begin{align}\label{classicalprobe} \rho_{AS} = \iint \mathop{}\!\mathrm{d}^{2M'}\balpha\mathop{}\!\mathrm{d}^{2M} \bbeta \,P(\balpha,\bbeta) \ket{\balpha}\bra{\balpha}_A\otimes\ket{\bbeta} \bra{\bbeta}_S, \end{align} where $\balpha = \pars{\alpha^{(1)}, \ldots, \alpha^{(M')}} \in \mathbb{C}^{M'}$ and $\bbeta = \pars{\beta^{(1)}, \ldots, \beta^{(M)}}\in \mathbb{C}^{M}$ index $M'$- and $M$-mode coherent states of $A$ and $S$ respectively, and $P\pars{\balpha,\bbeta} \geqslant 0$ is a probability distribution. The signal energy constraint takes the form $ \int_{\mathbb{C}^{M'}} \mathop{}\!\mathrm{d}^{2M'} \balpha \int_{\mathbb{C}^M} \mathop{}\!\mathrm{d}^{2M} \bbeta\, P(\balpha,\bbeta) \pars{\sum_{m=1}^M \abs{\bbeta^{(m)}}^2} = {N}. $ Given a probe $\ket{\psi}_{AS}$, we have the output state $\rho_G := {\mr{id}_A \otimes \cl{A}_G^{\otimes M}}\pars{\ket{\psi}\bra{\psi}_{AS}}$ ($\rho_G = \mr{id}_A \otimes \cl{A}_G^{\otimes M}\pars{\rho_{AS}}$ for a classical probe \eqref{classicalprobe}), where $\mr{id}_A$ is the identity channel on $A$. Estimation of $G$ from the state family $\{\rho_{G}\}$ is subject to the \emph{quantum Cram\'er-Rao bound} (QCRB) \cite{Hel76,*Hol11,PBG+18,PVS+20}, a brief description of which follows. A measurement on the $AS$ system is described by a collection of positive operators $\left\{\hat{\Pi}_y\right\}_{\cl{Y}}$ indexed by the measurement result $y \in \cl{Y}$ and summing to the identity. The probability distribution of the result $P(y;G) = \Tr \rho_G \hat{\Pi}_y$, and an estimator $\check{G}(y)$ based on this measurement is called \emph{unbiased} for $G$ if $\int_{\cl{Y}} \mathop{}\!\mathrm{d} y\, \check{G}(y) P(y;G) = G$ for all $G$ in the interval of interest. The (classical) \emph{Cram\'er-Rao bound} (CRB) bounds the mean squared error (MSE) $\mathbb{E}\bracs{\check{G} - G}^2$ of any unbiased estimator as $ \mathbb{E}\bracs{\check{G} - G}^2 \geq 1/{\cl{J}_G \bracs{Y}}$, where $\cl{J}_G \bracs{Y} := \mathbb{E} \bracs{{\partial_{G} \ln P(Y;G)}}^2 = - \mathbb{E} \bracs{{\partial_{G}^2 \ln P(Y;G)}}, $ is the (classical) \emph{Fisher information} (FI) on $G$ of the measurement $Y$ \cite{Kay93ssp1}. Different measurements $\left\{\hat{\Pi}_y\right\}_{\cl{Y}}$ result in different CRBs. On the other hand, there exists a Hermitian operator $\hat{L}_G$ called the symmetric logarithmic derivative (SLD) satisfying $\partial_G \rho_{G} \equiv \partial \rho_{G}/ \partial G = \pars{\rho_{G} \hat{L}_{G} + \hat{L}_{G} \rho_{G}}/2.$ The \emph{quantum Fisher information} (QFI) is defined as $\cl{K}_G = \Tr \rho_G \hat{L}_{G}^2,$ and the QCRB $ \mathbb{E}\bracs{\check{G} - G}^2 \geq \cl{K}_G^{-1} $ minimizes the right-hand side of the CRB over all unbiased measurements and defines the quantum-optimal sensing performance. 
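As a concrete illustration of these notions, consider photon counting at the output of a single QLA of gain $G$ fed with a vacuum mode ($N=0$, $M=1$). The output photon number is then geometric with mean $G-1$ (this follows from the paramp action on number states recalled below), $P(y;G)=G^{-1}\pars{1-G^{-1}}^{y}$ for $y=0,1,\ldots$, and a direct computation gives \begin{equation*} \cl{J}_{G}\bracs{Y}=-\mathbb{E} \bracs{{\partial_{G}^2 \ln P(Y;G)}}=-\frac{1}{G^{2}}+\frac{\mathbb{E}[Y]\,(2G-1)}{G^{2}(G-1)^{2}}=\frac{1}{G(G-1)}, \end{equation*} where we used $\mathbb{E}[Y]=G-1$. This simple case already reproduces the $M$-proportional (modal) term of the bounds derived below and illustrates that, unlike a pure-loss channel, a QLA imprints information about its gain even on a vacuum probe.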
The QFI $\cl{K}_\theta$ on $\theta$ of an arbitrary state family $\{\rho_{\theta}\}$ is related to the fidelity $F\pars{\rho_\theta, \rho_{\theta'}} = \Tr \sqrt{\sqrt{\rho_{\theta}} \rho_{\theta'} \sqrt{\rho_{\theta}}}$ between the states of the family via \cite{Hay06,*BC94} \begin{align} \label{QFIfromfidelity} \cl{K}_{\theta} = -4\, \partial^2_{\theta'} F\pars{\rho_\theta,\rho_{\theta'}}\vert_{\theta'=\theta}. \end{align} It is expedient for us to work with the QFI on the parameter $\tau = \kappa t =\mr{arccosh} \sqrt{G}$. Since the SLDs with respect to $\tau$ and $G$ satisfy $\hat{L}_G = \frac{\partial \tau}{\partial G}\, \hat{L}_\tau$, $\cl{K}_G = \pars{\frac{\partial \tau}{\partial G}}^2 \cl{K}_\tau$ so that maximizing either QFI suffices. \indent \emph{Optimal gain sensing ---} We first obtain an upper bound $\widetilde{\cl{K}}_\tau \geq {\cl{K}}_\tau$ on the QFI in the hypothetical situation where the output of $ASE$ is available for measurement. Given a probe $\ket{\psi}_{AS}$, we then hold the state family $\left\{ \Psi_{\tau} = \ket{\psi_\tau} \bra{\psi_\tau}\right\}$ defined by $ \ket{\psi_{\tau}}_{ASE} = \hat{I}_A \otimes \pars{\otimes_{m=1}^{M} \hat{U}_m(\tau)} \ket{\psi}_{AS}\ket{\mb{0}}_E, $ where $\hat{U}_m(\tau)=\exp\pars{-i\hat{H}_I t/ \hbar}$ is the paramp unitary acting on the $m$-th signal and environment mode pair parametrized by $\tau = \kappa t$. The QFI $\widetilde{\cl{K}}_{\tau}$ of $\left\{\Psi_{\tau}\right\}$ upper bounds $\cl{K}_\tau$ due to the monotonicity of the QFI with respect to partial trace over $E$ \cite{Hay06}. We can show (see Appendix A) that the paramp takes the input $\ket{n}_S\ket{0}_E$ to $\hat{U}(\tau) \kets{n}\kete{0}= \sech^{(n+1)} \tau\sum_{a=0}^\infty \sqrt{{n +a \choose a}} \tanh^a \tau \kets{n+a}\kete{a}.$ Thus, the paramp coherently adds a random number $a$ of photons to both $S$ and $E$ according to the negative binomial distribution $\mr{NB}(n+1, \sech^2 \tau)$ \cite{RS15probstats}. For any probe \eqref{probe}, we have \begin{align} \label{ASEstate2} \ket{\psi_\tau}_{ASE} &= \sum_{\mb{a} \geq \mb{0}} \varket{\psi_{\mb{a};\tau}}_{AS} \ket{\mb{a}}_E, \end{align} where $\varket{\psi_{\mb{a};\tau}}_{AS} = \sum_{\mb{n} \geq\mb{0}} \sqrt{p_\mb{n} A_\tau(\mb{n},\mb{a})} \ket{\chi_{\mb{n}}}_A\ket{\mb{n} +\mb{a}}_S $ are non-normalized states of $AS$ and $A_\tau(\mb{n},\mb{a}) = \prod_{m=1}^M {n_m + a_m \choose a_m} \sech^{2(n_m+1)}\tau \tanh ^{2 a_m}\tau$ is a product of negative binomial probabilities. For $\ket{\psi_{\tau'}}_{ASE}$ the output state on $ASE$ obtained by passing $\ket{\psi}_{AS}$ through a QLA of gain $G' = \cosh^2 \tau'$, the fidelity between the outputs can be shown after some computation (see Appendix B) to be \begin{align} \label{ASEoverlap} F\pars{\Psi_\tau, \Psi_{\tau'}} =\braket{{\psi_\tau}}{{\psi_{\tau'}}} &= \sum_{n=0}^\infty p_n \nu^{n+M}, \end{align} where $\nu =\sech\,(\tau'-\tau) = \pars{\sqrt{GG'} - \sqrt{(G-1)(G'-1)}}^{-1}\in (0,1]$. Using Eq.~\eqref{QFIfromfidelity}, we obtain the sought upper bounds $\cl{\widetilde{K}}_\tau = 4(N+M)$ and $ \widetilde{\cl{K}}_G = \frac{N+M}{G(G-1)}$ on the true QFI with respect to $\tau$ and $G$. Returning to the original problem in which only $\rho_\tau = \Tr_E \Psi_\tau$ is accessible, we have from Eq.~\eqref{ASEstate2} that $ \rho_\tau = \sum_{\mb{a}\geq \mb{0}} \varket{\psi_{\mb{a};\tau}} \varbra{\psi_{\mb{a};\tau}}_{AS}. $ For given $\left\{p_{\mb{n}}\right\}$ in Eq.~\eqref{probe}, consider probes for which $\{\ket{\chi_{\mb n}}_A\}$ is an orthonormal set. 
Such probes, called \emph{number-diagonal signal (NDS)} probes, are known to be optimal probes for diverse sensing problems \cite{NY11,SWA+18,Nai18loss}. Orthonormality of the $\{\ket{\chi_{\mb n}}_A\}$ implies that $\varbraket{\psi_{\mb{a};\tau}}{\psi_{\mb{a'};\tau'}} = \varbraket{\psi_{\mb{a};\tau}}{\psi_{\mb{a};\tau'}} \delta_{\mb{a},\mb{a'}}$, so the output fidelity $ F\pars{\rho_\tau,\rho_{\tau'}} = \sum_{\mb{a}\geq\mb{0}} \varbraket{\psi_{\mb{a};\tau}}{\psi_{\mb{a};\tau'}} = F\pars{\Psi_\tau, \Psi_{\tau'}}$ of Eq.~\eqref{ASEoverlap}. Thus, the QFIs on $\tau$ and $G$ \begin{align} \label{ndsqfi} \cl{{K}}_\tau = 4(N+M);\;\; {\cl{K}}_G = \frac{N+M}{G(G-1)} \end{align} of NDS probes saturate the upper bounds calculated above. This result exhibits several remarkable features. First, {any} NDS probe with the given $N$ and $M$ is quantum-optimal regardless of its exact signal photon number distribution $\left\{p_{\mb{n}}\right\}$. This generalizes the single-mode Fock-state optimality result \cite{AAF10} not just to multimode Fock states but to the infinite class of ancilla-entangled multimode NDS probes including the workhorse of optical quantum information -- the two-mode squeezed vacuum (TMSV) state. Second, gain sensing performance explicitly depends on the number $M$ of signal modes. This contrasts sharply with loss sensing, for which the optimal QFI is $M$-independent \cite{Nai18loss}. Physically, this difference stems from the gain-dependent quantum noise introduced by a QLA that makes the output states of two QLAs with distinct gains distinguishable even for a vacuum input. Increasing the number of signal modes further improves their distinguishability. In contrast, vacuum probes of any $M$ are invariant states of loss channels and are therefore useless for sensing them. Finally, the roles of $N$ and $M$ in Eq.~\eqref{ndsqfi} are seen to be equivalent so that one resource can be exchanged for the other, providing additional flexibility in the choice of optimal probes. For an $M$-mode signal-only coherent-state probe $\ket{\sqrt{N_1}}_{S_1}\cdots\ket{\sqrt{N_M}}_{S_M}$ with $\sum_{m=1}^M N_m = N$, the output state $\rho_\tau$ is a product of single-mode Gaussian states. The QFI on $G$ follows from the results of \cite{PJT+13} after some algebra: \begin{equation} \begin{aligned} \label{csQFI} \cl{K}^{\tsf{coh}}_G &= {\frac{N}{G(2G-1)} + \frac{M}{G(G-1)}}. \end{aligned} \end{equation} The convexity of QFI in the state \cite{Fuj01} and the linear dependence on $N$ of the first term in the above expressions imply that no classical probe [Eq.~\eqref{classicalprobe}] with $M$ signal modes can beat the QFI of Eq.~\eqref{csQFI}. Both Eqs.~\eqref{ndsqfi} and \eqref{csQFI} contain a term proportional to $N$ (the \emph{photon contribution}) and another proportional to $M$ (the \emph{modal contribution}). The modal contribution in the optimal quantum and classical QFI is identical, but the quantum-optimal photon contribution is at least twice the classical photon contribution and far exceeds it in the $G \sim 1$ regime (see Fig.~\ref{fig:totalfi}). \begin{figure}[t] \centering \includegraphics[trim=12.5mm 60mm 22mm 72mm, clip=true, scale=0.44]{clvsqttotalfiN6M9.pdf} \caption{The optimal quantum (blue) (Eq.~\eqref{ndsqfi}) and classical QFI (red) (Eq.~\eqref{csQFI}) for $N=6$ and $M=9$. 
Also shown are the FI of homodyne (purple dashed-dotted), heterodyne detection (green dotted), and the inverse MSE of the photodetection-based estimator (Eq.~\eqref{Gcheck}) (yellow dashed) for a coherent-state probe.} \label{fig:totalfi} \end{figure} \emph{Performance of standard measurements---} Suppose that an arbitrary NDS probe \eqref{probe} is input to a QLA of unknown gain and that we measure the basis $\left\{\keta{\chi_{\mb n}}\right\}$ and also the photon number in each of the $M$ output signal modes. Denote the measurement result $(\mb{X},\mb{Y})$, where $\mb{X} = (X_1, \ldots, X_M)$ if $\ket{\chi_{\mb{X}}}_A$ is the measurement result on $A$ and $\mb{Y} = (Y_1,\ldots, Y_M)$ if $Y_m$ photons are observed in the $m$-th output signal mode. We can then show (Appendix C) that the FI $\cl{J}_\tau \bracs{\mb{X},\mb{Y}} = 4(N+M)$ for any NDS probe, so that this measurement achieves the quantum-optimal QFI \eqref{ndsqfi}. While this implies that the maximum likelihood estimator based on $(\mb{X},\mb{Y})$ achieves the quantum limit for a large number of copies \cite{RS15probstats,Kay93ssp1}, a quantum-optimal estimator may not exist for a finite sample \cite{Kay93ssp1}. For a multimode number-state probe $\otimes_{m=1}^M \ket{n_m}_{S_m}$ with $\sum_{m=1}^M n_m = N$, consider the estimator \begin{align} \label{Gcheck} \check{G} := \pars{Y+ M}/\pars{N+M}, \end{align} where $Y = \sum_{m=1}^M Y_m$ is the total photon number measured in the signal modes. Using the fact that $Y - N \thicksim \mr{NB}(N+M, \sech^2 \tau)$, we can show that $\check{G}$ is unbiased and that $\mr{Var}[\check{G}] = \frac{G(G-1)}{N+M}$ so the QCRB \eqref{ndsqfi} is achieved even on a finite sample for any multimode number-state probe. On the other hand, a $G$-independent measurement that achieves the coherent-state QFI \eqref{csQFI} is unknown. The estimator $\check{G}$ above remains unbiased but has the suboptimal variance $\mathrm{MSE}^{\textsf{coh}}[\check{G}] = \frac{G(G-1)}{N+M} + \frac{G^2 N}{(N+M)^2}$ (Appendix D.~III). Homodyne and heterodyne detection in each output mode have the respective (suboptimal) FIs $\cl{J}^{\tsf{coh+hom}}_G = \frac{N}{G(2G-1)} + \frac{2M}{(2G-1)^2}$ and $\cl{J}^{\tsf{coh+het}}_G = \frac{N/2 + M}{G^2}.$ These Fisher information quantities are compared in Fig.~\ref{fig:totalfi}. \emph{Practical quantum advantage---} To examine whether a quantum advantage can be demonstrated in the laboratory, we study the estimation of $G$ using single-photon probes and photodetectors of efficiency $\eta_d <1$. (See Fig.~\ref{fig:ineffpd}). For any multimode number-state probe $\otimes_{m=1}^M\ket{n_m}$, photon counting in each output mode remains the QFI-achieving measurement and the QFI can be obtained numerically (Appendix~D.II). We also calculate the QFI of a coherent-state probe $\otimes_{m=1}^M \ket{\sqrt{N_m}}$ of the same $N$ and $M$ (Appendix D.I), and also the MSE of the unbiased estimator \begin{align} \label{Gchecklossy} \check{G} &= \pars{\eta_d^{-1}Y +M}/\pars{N+M} \end{align}generalizing that of Eq.~\eqref{Gcheck} (Appendix D.III). Since single-photon states are more readily prepared than multiphoton Fock states \cite{M-SSM20}, we compare their performance relative to coherent states in Fig.~\ref{fig:ineffdetplots}. The MSE $\mathrm{MSE}^{\tsf{1-photon}}\bracs{\check{G}}$ of $\check{G}$ for single-photon probes (for which $M=N$) is always less than that for coherent states (Appendix~D.III), and Fig.~\ref{fig:ineffdetplots}, left and center). 
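The claims above are easy to check numerically. The following minimal Monte Carlo sketch (Python with NumPy; the function and variable names are illustrative only) uses the fact noted above that, for ideal detection, $Y - N \thicksim \mr{NB}(N+M, \sech^2 \tau)$ with $\sech^{2}\tau = 1/G$, models detection efficiency $\eta_d$ as binomial thinning of the arriving photons, and estimates the bias and MSE of the estimator $\check{G}=\pars{\eta_d^{-1}Y+M}/\pars{N+M}$ of Eq.~\eqref{Gchecklossy}; for $\eta_d=1$ the empirical MSE should approach $G(G-1)/(N+M)$, i.e., the QCRB of Eq.~\eqref{ndsqfi}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_gain_estimator(G, N, M, eta_d, trials=200000):
    # Photon counting on an M-mode number-state probe with N photons in total.
    # Ideal-detection statistics: Y - N ~ NB(N + M, sech^2 tau), sech^2 tau = 1/G.
    added = rng.negative_binomial(N + M, 1.0 / G, size=trials)
    arriving = N + added                        # photons reaching the detectors
    detected = rng.binomial(arriving, eta_d)    # efficiency eta_d as binomial thinning
    g_check = (detected / eta_d + M) / (N + M)  # estimator (Y/eta_d + M)/(N + M)
    return g_check.mean(), np.mean((g_check - G) ** 2)

G, N, M = 2.0, 20, 20                           # single-photon probe: M = N
for eta_d in (1.0, 0.9, 0.7):
    mean, mse = simulate_gain_estimator(G, N, M, eta_d)
    print(eta_d, mean, mse)
print("Ideal-detection QCRB G(G-1)/(N+M) =", G * (G - 1) / (N + M))
\end{verbatim}
The empirical mean stays at $G$ for all $\eta_d$, consistent with the unbiasedness of $\check{G}$, while the MSE grows as $\eta_d$ decreases.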
Moreover, for each value of $\eta_d$, there is a threshold value of the gain (which is independent of $M$) beyond which $\mathrm{MSE}^{\tsf{1-photon}}\bracs{\check{G}}$ falls below the QCRB for coherent states (Fig.~\ref{fig:ineffdetplots}, right), so that a quantum advantage is guaranteed for sensing gain values known to lie beyond the threshold. \begin{figure}[t] \centering \includegraphics[trim=54mm 98mm 106mm 92mm, clip=true, scale=0.4]{Ineffdetection.pdf} \caption{Gain estimation under inefficient detection: Each mode of a product signal-only probe $\otimes_{m=1}^M\ket{\psi_m}$ passes through a QLA $\mathcal{A}_G$. Detection with quantum efficiency $\eta_d$ is modeled by a beam splitter with mode $\hat{f}$ in vacuum and output mode $\hat{b}$ that is measured using an ideal photodetector D, resulting in photon count $Y_m$.} \label{fig:ineffpd} \end{figure} \emph{Energy-constrained Bures distance---} As our final result, we derive the energy-constrained Bures distance \cite{Shi19uniform} between the amplifier channels $\cl{A}_G^{\otimes M}$ and $\cl{A}_{G'}^{\otimes M}$. This distance is one of several energy-constrained channel divergence measures between bosonic channels, with many applications in quantum information and sensing \cite{PLO+17,PL17,Win17arxiv,Shi18ecdn,Shi19uniform,SWA+18, SH19,BD20,BDL+21,SSW20arxiv,TW16arxiv}. Its calculation is equivalent to minimizing the output fidelity $F\pars{\rho_G,\rho_{G'}}$ over all $M$-signal-mode probes \eqref{probe} with average signal energy $N$. We show (Appendix E) that this minimum equals $F_{\rm min}\pars{\rho_G,\rho_{G'}} = \nu^M\bracs{\pars{1-\{N\}}\nu^{\lfloor N \rfloor} + \{N\}\nu^{\lfloor N \rfloor + 1}},$ where $\lfloor N \rfloor$ and $\{N\}$ are respectively the integer and fractional parts of $N$. This results adds QLAs to the short list of channels for which exact values of energy-constrained channel divergences are known and also gives bounds on other divergences between QLAs \cite{Aud14}. \begin{figure*}[t] \begin{subfigure}[b]{0.345\textwidth}\label{fig:4a} \centering {\includegraphics[trim=8mm 62mm 23mm 67mm, clip=true, scale=0.34]{eta07.pdf}} \end{subfigure} \begin{subfigure}[b]{0.322\textwidth} \centering {\includegraphics[trim=17.5mm 62mm 21mm 67mm, clip=true, scale=0.34]{eta09.pdf}} \end{subfigure} \begin{subfigure}[b]{0.322\textwidth} \centering {\includegraphics[trim=7mm 62mm 23mm 67mm, clip=true, scale=0.34]{thresholdgain.pdf}} \end{subfigure} \caption{Performance of single-photon probes with inefficient detection: (Left) and (Center) -- QCRBs of multimode single-photon (blue solid) and coherent-state (red solid) probes along with the MSE of $\check{G}$ of Eq.\eqref{Gchecklossy} for single-photon (blue dashed) and coherent-state (red dashed) probes for $\eta_d = 0.7$ (left) and $\eta_d = 0.9$ (center) with $M=N=20$. (Right) -- The threshold gain beyond which single-photon probes and photon counting beat the coherent-state QCRB.}\label{fig:ineffdetplots} \end{figure*} \emph{Discussion---} We have delineated the optimal precision of sensing the gain of QLAs regardless of their implementation platform and explicit physical realization. Our problem formulation constrained the average signal energy to equal $N$ but since the optimal QFI increases with $N$, NDS states of average energy $N$ are optimal over all probes with average energy less than or equal to $N$. 
For multimode number-state probes, we identified a concrete quantum-optimal estimator and showed the in-principle feasibility of quantum-enhanced gain sensing using standard single-photon sources \cite{M-SSM20} and photon counting even under inefficient detection. Additional loss in the signal path upstream of the QLA can also be accounted for by our calculation techniques. The use of brighter TMSV sources is expected to harness the photon contribution to the QFI of Eq.~\eqref{ndsqfi} even better, and finding good measurements and estimators for TMSV probes with imperfect detection is of great interest for future work. Our study can be generalized to the estimation of multiple \cite{LYL+19} and distributed \cite{GBB+20,*ZZ21} gain parameters. The implications of our results for relativistic metrology problems \cite{AAF10,M-MFM11,*ABF14,*ABS+14} also remain to be explored. Noisy attenuator channels (relevant to quantum illumination, noisy imaging, and quantum reading \cite{TEG+08,GMT+20,Pir11,OLP+21} among other applications), noisy amplifier channels (which model laser amplifiers with incomplete inversion \cite{Aga12quantum,Ser17qcv}), and additive noise channels (relevant to noisy continuous-variable teleportation \cite{BK98,*FSB+98}) are compositions of pure-loss channels with QLA channels. Our work here, together with complementary results in loss sensing \cite{Nai18loss}, is expected to be basic to a general theory of fundamental limits for sensing such noisy phase-covariant Gaussian channels, while highlighting the role of $M$ as an important resource therein. \begin{acknowledgments} This work is supported by the Singapore Ministry of Education Tier 1 Grants RG162/19 (S) and RG146/20, the NRF-ANR joint program (NRF2017-NRF-ANR004 VanQuTe), the National Research Foundation (NRF) Singapore under its NRFF Fellow program (Award No. NRF-NRFF2016-02), and the FQXi R-710-000-146-720 Grant ``Are quantum agents more energetically efficient at making predictions?'' from the Foundational Questions Institute and Fetzer Franklin Fund (a donor-advised fund of Silicon Valley Community Foundation). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation or the Ministry of Education, Singapore. \end{acknowledgments} \onecolumngrid \section*{APPENDICES} \section{Appendix A: Action of the paramp unitary} \noindent In this appendix, we compute the action of the paramp unitary $\hat{U}(\tau) := \exp\pars {-\iota t \hat{H}_I /\hbar} = \exp \bracs{\tau \pars{\hat{a}\hat{e} - \hat{a}^\dag \hat{e}^\dag}}$ on the states $\kets{n}\kete{0}$, which will enable us to compute $\Psi_\tau$ for arbitrary probes $\ket{\psi}_{AS}$. Recognizing that $\hat{U}(\tau)$ is a two-mode squeezing operation, we rewrite it in the alternative form (Cf. Eq.~(3.7.50) of \cite{BR02methods}) \begin{equation} \begin{aligned} \label{disentU} \hat{U}(\tau) =& \exp\bracs{\pars{\tanh \tau} \hat{a}^\dag \hat{e}^\dag} \, \exp\bracs{\pars{\ln \sech \tau}\pars{\hat{a}^\dag \hat{a} + \hat{e}\hat{e}^\dag}} \,\exp\bracs{-\pars{\tanh \tau} \hat{a} \hat{e}}. 
\end{aligned} \end{equation} Upon expanding the exponentials appearing above into power series, we find that $\hat{U}(\tau)$ acts on the product state $\kets{n}\kete{0}$ as \begin{align} \label{actiononn0} \hat{U}(\tau) \kets{n}\kete{0}&= \sech^{(n+1)} \tau\sum_{a=0}^\infty \sqrt{{n +a \choose a}} \tanh^a \tau \kets{n+a}\kete{a}, \end{align} which appears above Eq.~(4) of the main text. We recall the definition of a negative binomial distribution. Consider independent and identically distributed trials of flipping a coin with probability $p$ of landing heads. For integer $r \geq 1$, let $X$ be the number of tails obtained before obtaining $r$ heads. The probability distribution of $X$ is \begin{align} P_X(x) = {x+r-1 \choose x} (1-p)^x\,p^r; \;\;x=0,1,\ldots \end{align} This distribution is called a negative binomial distribution and denoted $\mr{NB}(r,p)$ \cite{RS15probstats}. Comparing with Eq.~\eqref{actiononn0}, we see that the number $a$ of photons added by the amplifier to the $S$ and $E$ modes is distributed according to the $\mr{NB}(n+1,\sech^2\tau)$ distribution when the input is $\kets{n}\kete{0}$. \section{Appendix B: Fidelity between output states of $ASE$ and between the output states of $AS$ for NDS probes} \label{sec:fidelity} \noindent In this appendix, we prove Eq.~(5) of the main text. We first need the following combinatorial lemma. \begin{lemma} \label{lem:comb} Given an $M$-vector $\mb{n} = \pars{n_1, \ldots, n_M}$ of nonnegative integers and an integer $a \geq 0$, we have \begin{align} \label{lemma} \sum_{\mb{a} \geq \mb{0}: \tr \mb{a} =a} \bracs{\prod_{m=1}^M {n_m + a_m \choose a_m}} &= {\tr \mb{n}+ M-1 + a \choose a}, \end{align} where $\mb{a} = \pars{a_1, \ldots, a_M} \geq \mb{0}$ is another $M$-vector, and the trace of a vector $\mb{x}$ is defined to be $\tr \mb{x} := \sum_{m=1}^M x_m$. \begin{proof} For integer $n\geq 0$, we have the Taylor series expansion (sometimes called the ``negative binomial theorem''): \begin{align} \label{taylor} (1-x)^{-n} = \sum_{k=0}^\infty {n +k - 1 \choose k}\,x^k, \end{align} which is valid for $\abs{x} <1$. For any $\mb{n} \geq \mb{0}$, we can write \begin{align} \prod_{m=1}^M (1-x)^{-(n_m +1)} = (1-x)^{- \pars{\tr \mb{n} + M}}. \end{align} For integer $a \geq 0$, consider the coefficient of $x^a$ on each side of this identity. Applying \eqref{taylor} to each of the factors on the left-hand side and separately to the right-hand side, we see that this coefficient equals in turn the expressions on either side of Eq.~\eqref{lemma}. \end{proof} \end{lemma} The overlap $\braket{{\Psi_\tau}}{{\Psi_{\tau'}}}$ between output states of the $ASE$ system equals \begin{align} \braket{{\Psi_\tau}}{{\Psi_{\tau'}}} &= \sum_{\mb{a}\geq\mb{0}} \varbraket{\psi_{\tau;\mb{a}}}{\psi_{\tau';\mb{a}}} \label{app:fidelity}\\ &= \sum_{\mb{a} \geq \mb{0}} \sum_{\mb{n} \geq \mb{0}} p_{\mb{n}} \, \sqrt{A_\tau (\mb{n},\mb{a} ) A_{\tau'}(\mb{n},\mb{a})} \\ &= \sum_{\mb{n} \geq \mb{0}} p_{\mb{n}} \pars{\sech \tau \sech \tau'}^{n+M} \sum_{\mb{a} \geq \mb{0}} \bracs{\prod_{m=1}^M {n_m + a_m \choose a_m} \pars{\tanh \tau \, \tanh \tau'}^{a_m}}, \end{align} where, as before, $n := \tr \mb{n}$. 
We now rearrange the sum over $\mb{a}$ in terms of $a := \tr \mb{a}\geq 0$ and use the lemma to get \begin{align} &\braket{{\Psi_\tau}}{{\Psi_{\tau'}}} \\ &= \sum_{\mb{n} \geq \mb{0}} p_{\mb{n}} \pars{\sech \tau \sech \tau'}^{n+M} \sum_{a=0}^\infty \sum_{\mb{a} \geq \mb{0}: \tr \mb{a} = a} \bracs{\prod_{m=1}^M {n_m + a_m \choose a_m} } \pars{\tanh \tau \, \tanh \tau'}^a\\ &= \sum_{\mb{n} \geq \mb{0}} p_{\mb{n}} \pars{\sech \tau \sech \tau'}^{n+M} \sum_{a=0}^\infty {n+M -1 +a \choose a}\pars{\tanh \tau \, \tanh \tau'}^a \\ &= \sum_{\mb{n} \geq \mb{0}} p_{\mb{n}} \pars{\cosh \tau \,\cosh \tau' - \sinh \tau \, \sinh \tau'}^{-(n+M)} \\ &= \sum_{n=0}^\infty p_n \bracs{\sech(\tau'-\tau)}^{n+M} \\ &\equiv \sum_{n=0}^\infty p_n \nu^{n+M}, \label{app:ndsfidelity} \end{align} which establishes Eq.~(5) of the main text. As argued there, the right-hand side of Eq.~\eqref{app:fidelity} is also the fidelity $F\pars{\rho_\tau, \rho_{\tau'}}$ between the output states of the $AS$ system alone when the probe $\ket{\psi}_{AS}$ is NDS. \section{Appendix C: Fisher information of the Schmidt-bases measurement for NDS probes} \noindent Suppose that an NDS probe $\ket{\psi} = \sum_{\mb{n} \geq \mb{0}} \sqrt{p_{\mb{n}}}\ket{\chi_{\mb n}}_A \ket{\mb n}_S$ is used and that, at the output of the amplifier, we measure in its Schmidt bases, i.e., the basis $\left\{\keta{\chi_{\mb n}}\right\}$ in the ancilla system together with the photon number in each of the $M$ output signal modes. The measurement result is denoted $(\mb{X},\mb{Y})$, where $\mb{X}=(X_1, \ldots, X_M)$ is the index of the measurement result on $A$ ($\mb{X} = \mb{x}$ if the result $\keta{\chi_{\mb{x}}}$ is obtained) and $\mb{Y} = (Y_1,\ldots, Y_M)$ denotes the outcome of the photon number measurement in the $M$ signal modes. From Eq.~(4) of the main text, the joint distribution of this observation is \begin{align} \label{app:measpmf} P(\mb{x}, \mb{y}; \tau) &= p_{\mb{x}} A_\tau(\mb{x}, \mb{y} - \mb{x}) = p_{\mb{x}} \pars{\prod_{m=1}^M {y_m \choose x_m} \sech^{2(x_m+1)}\tau \tanh ^{2 (y_m - x_m)}\tau}. \end{align} Using the general expression \begin{align} \label{FI} \cl{J}_{\theta} \bracs{Z} = - \mathbb{E} \bracs{{\partial_{\theta}^2 \ln P(Z;\theta)}} \end{align} for the Fisher information on a parameter $\theta$ of a measurement $Z$ \cite{Kay93ssp1}, we have \begin{align} \cl{J}_\tau \bracs{\mb{X},\mb{Y}} &= 2 \sum_{m=1}^M \bracs{\pars{\mathbb{E}[X_m]+1} \sech^2\tau + {\mathbb{E}[Y_m -X_m]}\pars{\sech^2 \tau + \csch^2 \tau}} \\ &= 4 \sum_{m=1}^M \bracs{ \mathbb{E}[X_m] + 1} = 4(N+M). \label{countingFI} \end{align} Here, we have used the fact that, conditioned on $X_m =x_m$, $Y_m \thicksim \mr{NB}(x_m+1, \sech^2 \tau)$ so that $\mathbb{E}[Y_m -X_m] = \pars{\mathbb{E}[X_m] +1}\sinh^2 \tau$, and the energy constraint is used in the last step. Since Eq.~\eqref{countingFI} coincides with the NDS-probe QFI of Eq.~(6) of the main text, we see that this measurement is quantum-optimal for any such probe. \section{Appendix D: Effect of nonunity detection efficiency} \noindent Consider the measurement configuration depicted in Fig.~3 of the main text. The annihilation operator of the measured mode equals \begin{equation} \begin{aligned} \label{app:detmode} \hat{b} &= \sqrt{\eta_d}\, \hat{a}_{\tsf{out}} + \sqrt{1-\eta_d}\, \hat{f} \\ &= \sqrt{\eta_d G} \,\hat{a}_{\tsf{in}} + \sqrt{\eta_d(G-1)} \,\hat{e}_{\tsf{in}}^\dag + \sqrt{1-\eta_d} \,\hat{f}.
\end{aligned} \end{equation} The photon number operator of $\hat{b}$ becomes \begin{align} \label{app:numbop} \hat{N}_b &= \eta_d\,\hat{N}_{\tsf{out}} + \sqrt{\eta_d (1-\eta_d)}\pars{\hat{a}_{\tsf{out}}^\dag \hat{f} + \hat{a}_{\tsf{out}} \hat{f}^\dag} + (1-\eta_d) \hat{f}^\dag\hat{f}, \end{align} where $\hat{N}_{\tsf{out}} = \hat{a}_{\tsf{out}}^\dag\hat{a}_{\tsf{out}}$. \subsection{I.~Classical baseline} \noindent Suppose the coherent-state probe $\kets{\sqrt{N}}$ is used in Fig.~3 of the main text, but arbitrary measurements are allowed on the mode $\hat{b}$ downstream of the beam splitter. This corresponds to the case where the overall transmittance of the system, including the detection efficiency, equals $\eta_d$. The smallest MSE (optimized over all quantum measurements) in estimating $G$ using classical probes is set by the QCRB of the state in the mode $\hat{b}$. Using Eq.~\eqref{app:detmode}, and noting that the modes $\hat{e}_{\tsf{in}}$ and $\hat{f}$ are in the vacuum state, we find that the mean vector and Wigner covariance matrix of the quadratures $\hat{q} = \pars{\hat{b} + \hat{b}^\dag}/\sqrt{2}$ and $\hat{p} = \pars{\hat{b} - \hat{b}^\dag}/\pars{\sqrt{2}\,i}$ of the $\hat{b}$ mode are given respectively by \begin{equation}\label{app:csmeancov} \begin{aligned} \boldsymbol{\mu}_G &= \pars{\sqrt{2\eta_d G N}, 0}^{\trans}, \\ \boldsymbol{\Sigma}_G &= \bracs{\eta_d(G-1) + 1/2} \mathbb{1}_2, \end{aligned} \end{equation} where $\mathbb{1}_2$ is the $2 \times 2$ identity matrix. Since $\hat{b}$ is in a Gaussian state (in fact, a displaced thermal state), we can invoke the results of Ref.~\cite{PJT+13} on computing the QFI of parameters of Gaussian states. Using Eqs.~\eqref{app:csmeancov}, we obtain after some computation the QFI \begin{align} \label{app:csqfiineffdet} \cl{K}_G^{\tsf{coh}} &= \frac{\eta_d N}{G\bracs{2\eta_d (G-1) + 1}} + \frac{\eta_d}{(G-1)\bracs{\eta_d(G-1)+1}} \end{align} on $G$ in the lossy regime, which is displayed in Fig.~4 (left and center plots) of the main text. As in the lossless case, it is unknown if this QFI can be achieved on a finite sample using a $G$-independent measurement. \subsection{II.~Quantum Fisher information for number-state probes} \noindent Consider the scenario depicted in Fig.~3 of the main text for the number-state probe $\ket{\psi}_S = \otimes_{m=1}^M \ket{n_m}_{S_m}$. The ultimate limit on the mean squared error of estimating $G$ is set by the QCRB of the state family in the set of modes $\left\{\hat{b}_m\right\}_{m=1}^M$. Since the state of those modes is no longer Gaussian, we cannot appeal to the literature on parameter estimation for Gaussian-state families. From the action of the QLA on number-state inputs written above Eq.~(4) of the main text, however, the output state of the amplifier when $\ket{N}_S$ is input can be written as: \begin{align} \label{app:rhotau} \rho_\tau &= \sech^{2(N+1)} \tau \sum_{a=0}^\infty {N +a \choose a} \tanh^{2a} \tau \kets{N+a}\bra{N+a}. \end{align} Since the beam splitter acts on number states according to $\kets{n}\bra{n} \mapsto \sum_{k=0}^n {n \choose k} \eta_d^k (1 -\eta_d)^{n-k} \kets{k}\bra{k}$, the state of the $\hat{b}$ mode becomes \begin{equation}\label{app:rhotau'} \begin{aligned} \rho_{\tau}'&= \sum_{k=0}^\infty \pars{\sech^{2(N+1)} \tau \sum_{a=\mr{max}(k-N,0)}^\infty {N+a \choose a} {N+a \choose k} \eta_d^k (1-\eta_d)^{N+a-k} \tanh^{2a} \tau} \kets{k}\bra{k} \\ &\equiv \sum_{k=0}^\infty P_{\tau}(k) \kets{k}\bra{k}.
\end{aligned} \end{equation} Since the state family $\left\{\rho_{\tau}'\right\}$ is diagonal in the number basis, the QFI on $\tau$ equals the FI of the family of photon number distributions $\left\{P_\tau(k); k = 0,1,2,\ldots\right\}$ and is achieved by photodetection. Using Eq.~\eqref{FI}, this quantity is \begin{align} \cl{K}_\tau &= \cl{J}_\tau[Y] \\ &= -\sum_{k=0}^\infty P_\tau(k) \partial^2_{\tau} \bracs{\ln P_\tau(k)} \\ &= \sum_{k=0}^\infty \bracs{ P_\tau^{-1}(k) \bracs{\partial_{\tau} P_\tau (k) }^2 - \partial_\tau^2 P_\tau(k)}. \end{align} Evaluating the derivatives results in the somewhat unwieldy expressions \begin{align} \partial_{\tau} P_\tau (k) = 2\,\eta_d^k \sech^{2(N+2)} \tau &\sum_{a=\mr{max}(k-N,0)}^\infty {N +a \choose a} {N +a \choose k} (1-\eta_d)^{N+a-k}\bracs{a - (N+1) \sinh^2 \tau} \tanh^{2a-1}\tau, \\ \partial^2_{\tau} P_\tau (k) = 2\,\eta_d^k \sech^{2(N+3)} \tau &\sum_{a=\mr{max}(k-N,0)}^\infty {N +a \choose a} {N +a \choose k} (1-\eta_d)^{N+a-k}\;\nonumber\\ \times & \bracs{2 (N+1)^2 \sinh^4 \tau - \pars{4aN + N + 6a+1}\sinh^2 \tau + 2a^2 - a} \tanh^{2(a-1)}\tau, \end{align} using which the QFI $\cl{K}_\tau$ can be evaluated numerically. The QFI $\cl{K}_G$ on $G$ follows as $\cl{K}_G = \cl{K}_\tau/ [4G(G-1)].$ The QFI for a multimode number-state probe is the sum of terms of the above form for each mode. The result for multimode single-photon probes is displayed in Fig.~4 (left and center plots) of the main text. \subsection{III.~Mean squared error of the estimator $\check{G}$ of Eq.~(9) of the main text} \noindent In this section, we compute the MSE achieved by the estimator \begin{align} \label{app:Gcheck} \check{G} = \frac{\pars{\sum_{m=1}^M Y_m}/{\eta_d} + M}{N+M} \end{align} on $M$-mode number-state and coherent-state probes with average total energy $N$, where the $\left\{Y_m\right\}$ denote the observed photocounts in the modes $\left\{\hat{b}_m\right\}$. Using Eq.~\eqref{app:numbop} and the fact that the modes $\hat{e}$ and $\hat{f}$ are in the vacuum state, we have after some algebra that the first and second moments of the photocount in each mode satisfy (we omit mode subscripts to reduce clutter): \begin{align} \mean{Y} &= \mean{\hat{N}_{b}} = \eta_d \mean{\hat{N}_{\tsf{out}}}, \label{app:meanY} \\ \mean{Y^2} &= \mean{\hat{N}_b^2} = \eta_d^2 \mean{\hat{N}^2_{\tsf{out}}} + \eta_d(1-\eta_d)\mean{\hat{N}_{\tsf{out}}}, \label{app:2ndmomY} \end{align} where $\hat{N}_{\tsf{out}} = \hat{a}^\dag_\tsf{out} \hat{a}_{\tsf{out}}$. The mean and second moment of $\hat{N}_{\tsf{out}}$ are obtained from the Heisenberg-evolution equations for a QLA given in the main text: \begin{align} \mean{\hat{N}_{\tsf{out}}} &= G\mean{\hat{N}_{\tsf{in}}} + G-1, \label{app:meanNout}\\ \mean{\hat{N}_{\tsf{out}}^2} &= G^2\mean{\hat{N}^2_{\tsf{in}}} + 3G(G-1)\mean{\hat{N}_{\tsf{in}}} + (G-1)(2G-1), \label{app:2ndmomNout} \end{align} where $\hat{N}_{\tsf{in}} = \hat{a}_{\tsf{in}}^\dag \hat{a}_{\tsf{in}}$. Using Eqs.~\eqref{app:meanY} and \eqref{app:meanNout}, we see that the estimator of Eq.~\eqref{app:Gcheck} is unbiased for any probe state. If the probe is of product form, the variance of $\check{G}$ can be obtained by applying Eqs.~\eqref{app:2ndmomY} and \eqref{app:2ndmomNout}.
In particular, for a number-state probe $\otimes_{m=1}^M \ket{n_m}$ with $N = \sum_{m=1}^M n_m$, we find that \begin{align} \label{MSEGchecknum} \mr{MSE}^{\tsf{num}}\bracs{\check{G}} = \frac{G(G-1)}{N+M} + \frac{1-\eta_d}{\eta_d(N+M)}\bracs{G - \frac{M}{N+M}}, \end{align} where the first term corresponds to the QCRB under ideal photodetection and the second term is the excess error introduced due to the inefficiency. For a coherent-state probe $\otimes_{m=1}^M \ket{\sqrt{N_m}}$ with $N = \sum_{m=1}^M N_m$, we have \begin{align} \label{MSEGcheckcoh} \mr{MSE}^{\tsf{coh}}\bracs{\check{G}} = \frac{G(G-1)}{N+M} + \frac{G^2 N}{(N+M)^2} +\frac{1-\eta_d}{\eta_d(N+M)}\bracs{G - \frac{M}{N+M}}. \end{align} Again, the first term corresponds to the quantum-optimal error while the middle term represents an additional error owing to the suboptimality of the coherent-state probe. In fact, the sum of the first two terms is strictly greater than the QCRB ${\cl{K}_G^{\tsf{coh}}}^{-1}$ (cf. Eq.~(7) of the main text), corresponding to the fact that photon counting is not a QFI-achieving measurement for coherent-state probes. Finally, the last term is the excess error owing to inefficient detection, which is identical to the corresponding term in Eq.~\eqref{MSEGchecknum}. These results on the MSE of $\check{G}$ are plotted in Fig.~4 of the main text (left and center plots) for coherent states and multimode single-photon probes, for which $M=N$. For each $\eta_d$, Eq.~\eqref{MSEGchecknum} drops below the coherent-state QCRB (inverse of Eq.~\eqref{app:csqfiineffdet}) whenever $G$ is greater than a threshold value -- this value is plotted in Fig.~4 (right plot) of the main text. \section{Appendix E: Energy-constrained Bures distance between amplifier channels} In this appendix, we obtain the energy-constrained Bures distance between two given product amplifier channels $\cl{A}_G^{\otimes M}$ and $\cl{A}_{G'}^{\otimes M}$ acting on $M$ modes. To define this quantity, consider the ancilla-assisted channel discrimination problem in which each signal mode of a probe of $M$ signal modes and total energy $N$, entangled with an arbitrary ancilla $A$, queries a black box containing one of the quantum-limited amplifiers $\cl{A}_G$ or $\cl{A}_{G'}$. Since the total signal energy is constrained, we can ask what probe state maximizes a chosen state distinguishability measure at the output. Several such channel distinguishability measures under an energy constraint, known as energy-constrained channel divergences, have been proposed recently, e.g., the energy-constrained diamond distance and the \emph{energy-constrained Bures (ECB) distance}, with several applications in bosonic quantum information \cite{PLO+17,PL17,Win17arxiv,Shi18ecdn,Shi19uniform,SH19,SSW20arxiv,BDL+21,SWA+18,TW16arxiv}.
We focus here on the ECB distance \cite{Shi19uniform}, defined for the problem at hand as follows: The ECB distance between $\cl{A}_G^{\otimes M}$ and $\cl{A}_{G'}^{\otimes M}$ is given by the expression: \begin{align} \label{ecbdistance} B_N\pars{\cl{A}_\tau^{\otimes M},\cl{A}_{\tau'}^{\otimes M}} := \sup_{\rho_{AS} : \Tr \rho_{AS}\hat{I}_A\otimes \hat{N}_S =N} \sqrt{1 - F\pars{{\rm id}\otimes\cl{A}_\tau^{\otimes M}(\rho_{AS}),{\rm id}\otimes\cl{A}_{\tau'}^{\otimes M}(\rho_{AS})}}, \end{align} where $F$ is the fidelity, $A$ is an arbitrary ancilla system, ${\rm id}$ is the identity channel on $A$, $\hat{N}_S$ is the total photon number operator on $S$, and the optimization is over all states $\rho_{AS}$ of $AS$ with signal energy $N$. The formulation of Eq.~\eqref{ecbdistance} differs slightly from that in \cite{Shi19uniform} in using an equality energy constraint, and is normalized to lie between 0 and 1 rather than 0 and $\sqrt{2}$. As we show, the two definitions give the same ECB distance up to normalization for the problem at hand. First, note that an arbitrary $\rho_{AS}$ satisfying the constraint can be purified using an additional ancilla to give a probe of the form of Eq.~(1) of the main text, so that the optimization over probe states of $AS$ can be restricted to pure states. Maximizing the output Bures distance \eqref{ecbdistance} is equivalent to minimizing the output fidelity. We claim that an optimal probe $\ket{\psi}_{AS}$ must be of the NDS form, i.e., the $\{\ket{\chi_{\mb n}}_A\}$ appearing in Eq.~(1) of the main text must be orthonormal. To see this, recall from Sec.~\ref{sec:fidelity} that if the environment modes are accessible, the output fidelity of the purified states takes the value given in Eq.~\eqref{app:ndsfidelity}. Since the fidelity between the accessible outputs $\rho_{\tau}$ and $\rho_{\tau'}$ on the $AS$ system cannot be less than between their purifications, we have \begin{align} \label{fidbound} F\pars{\rho_{\tau},\rho_{\tau'}} \geq \nu^M \sum_{n=0}^\infty p_n \nu^{n}. \end{align} On the other hand, we argued in the main text that the right-hand side is achieved by any NDS probe with the same photon number distribution $\left\{p_n\right\}$, so the optimum fidelity is achieved on an NDS probe. Alternatively, this conclusion follows directly from the argument of Section 12 of \cite{SWA+18} that NDS probes are optimal for discriminating between any pair of phase-covariant channels, and amplifier channels are phase covariant. Since $M$ is fixed, we need to minimize $\sum_n p_n \nu^n$ under the energy constraint $\sum_n n p_n =N$. Since the function $x \mapsto \nu^x$ is convex, we have $\sum_n p_n \nu^n \geq \nu^{\sum_n n p_n} = \nu^N$ for any probe. If $N$ is an integer, this value is achieved iff the probe satisfies $p_N =1$, e.g., the probe could be a multimode number state of total photon number $N$. For general $N$, we can reprise an argument from \cite{Nai18loss} as follows: For any $\{ p_n\}$ satisfying the energy constraint, let ${A}_{\downarrow} = \sum_{n \leq \lfloor N \rfloor} p_n$, and $A_{\uparrow} = 1 - {A}_{\downarrow}$. For ${N}_{\downarrow} = A_{\downarrow}^{-1} \sum_{n \leq \lfloor N \rfloor} n\,p_n \leq \lfloor N \rfloor$ and ${N}_{\uparrow} = A_{\uparrow}^{-1}\sum_{n \geq \lceil N \rceil} n\,p_n \geq \lceil N \rceil$, we have $A_\downarrow\,N_\downarrow + A_\uparrow\,N_\uparrow = N$. Convexity of $x \mapsto \nu^x$ implies that $\sum_n p_n \nu^n \geq {A}_{\downarrow} \,\nu^{N_{\downarrow}} + {A}_{\uparrow} \,\nu^{N_{\uparrow}}$.
Convexity also implies that the chord joining $({N}_{\downarrow}, \nu^{N_{\downarrow}})$ and $({N}_{\uparrow}, \nu^{N_\uparrow})$ lies above that joining $(\lfloor N \rfloor, \nu^{\lfloor N \rfloor})$ and $(\lceil N \rceil , \nu^{\lceil N \rceil})$ in the interval $\lfloor N \rfloor \leq x \leq \lceil N \rceil$ -- this follows from the fact that the intersection of the epigraph of $x \mapsto \nu^x$ and the region above the line joining $(\lfloor N \rfloor, \nu^{\lfloor N \rfloor})$ and $(\lceil N \rceil , \nu^{\lceil N \rceil})$ is the intersection of two convex sets and hence is also convex. Denote the fractional part of $N$ by $\left\{N\right\} = N - \lfloor N\rfloor$. Since the energy constraint can be satisfied by taking ${N}_\downarrow = \lfloor N \rfloor, {N}_\uparrow = \lceil N \rceil, p_{\lfloor N \rfloor} = 1 -\{N\}$, and $p_{\lceil N \rceil} = \{N\}$, the energy-constrained minimum fidelity equals: \begin{align} \label{ecminf} F^{\rm min}\pars{\rho_{\tau},\rho_{\tau'}} = \nu^M \bracs{\pars{1- \{N\}}\,\nu^{\lfloor N \rfloor} + \{N\}\,\nu^{\lceil N \rceil}}. \end{align} The ECB distance $B_N\pars{\cl{A}_\tau^{\otimes M},\cl{A}_{\tau'}^{\otimes M}} $ then follows from Eq.~\eqref{ecbdistance}. Since the ECB distance is an increasing function of $N$, it equals (up to normalization) the ECB distance defined using an inequality constraint in \cite{Shi19uniform}. For quantifying the advantage of using quantum probes, we can define a \emph{classical} energy-constrained Bures distance (CECB distance) analogous to \eqref{ecbdistance} except that the probes $\rho_{AS}$ are restricted to be in the set of classical states of the form \begin{align} \label{classical probe} \rho_{AS} = \int_{\mathbb{C}^{M'}} \mathop{}\!\mathrm{d}^{2M'} \balpha \int_{\mathbb{C}^{M}} \mathop{}\!\mathrm{d}^{2M} \bbeta \,P(\balpha,\bbeta) \ket{\balpha}\bra{\balpha}_A\otimes\ket{\bbeta} \bra{\bbeta}_S, \end{align} where $\bbeta = \pars{\alpha_S^{(1)}, \ldots, \alpha_S^{(M)}} \in \mathbb{C}^M$ indexes $M$-mode coherent states $\kets{\bbeta}$ of $S$, $\balpha = \pars{\alpha_A^{(1)}, \ldots, \alpha_A^{(M')}}\in \mathbb{C}^{M'}$ indexes $M'$-mode coherent states $\kets{\balpha}$ of $A$, and $P\pars{\balpha,\bbeta} \geqslant 0$ is a probability distribution. Additionally, the signal energy constraint implies that $P\pars{\balpha,\bbeta}$ should satisfy \begin{align} \label{CEC} \int_{\mathbb{C}^{M'}} \mathop{}\!\mathrm{d}^{2M'} \balpha \int_{\mathbb{C}^M} \mathop{}\!\mathrm{d}^{2M} \bbeta\, P(\balpha,\bbeta) \pars{\sum_{m=1}^M \abs{\alpha_S^{(m)}}^2} = {N}. \end{align} Denote by $\cl{S}^{\tsf{cl}}_N$ the set of classical states satisfying the energy constraint \eqref{CEC}. The CECB distance is then defined as \begin{equation} \begin{aligned} \label{cecbdistance} B_N^{\tsf{cl}}\pars{\cl{A}_\tau^{\otimes M},\cl{A}_{\tau'}^{\otimes M}} := \sup_{\rho_{AS} \in \cl{S}_N^{\tsf{cl}}} \sqrt{1 - F\pars{{\rm id}\otimes\cl{A}_\tau^{\otimes M}(\rho_{AS}),{\rm id}\otimes\cl{A}_{\tau'}^{\otimes M}(\rho_{AS})}}. \end{aligned} \end{equation} Consider first a coherent-state probe $\kets{\sqrt{N}}$ of a single signal mode. 
Since the corresponding output state $\rho_\tau = \Tr_E \hat{U}(\tau)\, \ket{\sqrt{N}}\bra{\sqrt{N}}_S \otimes\ket{0}\bra{0}_E\, \hat{U}^\dag(\tau)$ is a displaced thermal state, we can use results on the fidelity between Gaussian states \cite{MM12} to compute the fidelity $F\pars{\rho_\tau, \rho_{\tau'}}$: \begin{align} \label{CSfidelity} F\pars{\rho_\tau, \rho_{\tau'}} &= \sech\pars{\tau' - \tau}\, \exp\bracs{ - \frac{\pars{\cosh \tau' - \cosh \tau}^2}{2\pars{\sinh^2 \tau' + \sinh^2 \tau + 1}} {N}}. \end{align} For a coherent-state probe $\ket{\sqrt{N_1}} \otimes \cdots \otimes \ket{\sqrt{N_M}} \in \cl{S}^{\tsf{cl}}_N$, it follows from Eq.~\eqref{CSfidelity} that the output fidelity is \begin{align} \label{cecminf} F^{\tsf{coh}} = \nu^M\, \exp\bracs{ - \frac{\pars{\cosh \tau' - \cosh \tau}^2}{2\pars{\sinh^2 \tau' + \sinh^2 \tau + 1}} {N}}. \end{align} The strong concavity of the fidelity \cite{NC00} over mixtures and the convexity with respect to $N$ of the exponential appearing in \eqref{cecminf} imply that the above expression is the minimum fidelity over all probes in $\cl{S}^{\tsf{cl}}_N$. The minimum output Bures distance over classical probes then follows from Eq.~\eqref{cecbdistance}. Note that the dependence on $M$ of the minimum fidelity appears in both the quantum \eqref{ecminf} and classical \eqref{cecminf} expressions as the factor $\nu^M$.
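The closed-form results of Appendices A and D lend themselves to quick numerical checks. The short Python sketch below is an illustrative aid rather than the code used to produce Fig.~4; the function names, the choice $M = N = 20$, and the root-bracketing interval are our own assumptions. It confirms that the squared amplitudes in Eq.~\eqref{actiononn0} form the $\mr{NB}(n+1,\sech^2\tau)$ distribution of Appendix A and locates the gain at which the number-state MSE of Eq.~\eqref{MSEGchecknum} crosses the coherent-state QCRB obtained by inverting Eq.~\eqref{app:csqfiineffdet}.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import comb
from scipy.stats import nbinom

# Appendix A check: for input |n>|0>, the number a of added photons is NB(n+1, sech^2 tau).
tau, n = 0.8, 3
p = 1.0 / np.cosh(tau) ** 2                      # sech^2(tau)
a = np.arange(200)
amp_sq = np.cosh(tau) ** (-2 * (n + 1)) * comb(n + a, a) * np.tanh(tau) ** (2 * a)
assert np.allclose(amp_sq, nbinom.pmf(a, n + 1, p))

def mse_number(G, N, M, eta):
    # Eq. (MSEGchecknum): MSE of G-check for a number-state probe, detection efficiency eta.
    return G * (G - 1) / (N + M) + (1 - eta) / (eta * (N + M)) * (G - M / (N + M))

def qcrb_coherent(G, N, eta):
    # Inverse of the coherent-state QFI of Eq. (app:csqfiineffdet).
    K = eta * N / (G * (2 * eta * (G - 1) + 1)) + eta / ((G - 1) * (eta * (G - 1) + 1))
    return 1.0 / K

def crossing_gain(N, M, eta, G_max=1.0e3):
    # Gain at which the two error expressions cross (a root of their difference).
    f = lambda G: mse_number(G, N, M, eta) - qcrb_coherent(G, N, eta)
    return brentq(f, 1.0 + 1e-9, G_max)

if __name__ == "__main__":
    N = M = 20  # multimode single-photon probe, as in the left and center panels of Fig. 4
    for eta in (0.7, 0.9):
        print(f"eta_d = {eta}: crossing gain ~ {crossing_gain(N, M, eta):.4f}")
\end{verbatim}
The bracketing step assumes a single sign change of the difference on the chosen interval; plotting both expressions over the gain range of interest is a quick way to confirm this before trusting the reported root.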
\section{Introduction} In \cite{G3}, Gabai introduced \emph{width} as an extremely useful knot invariant. Width is a certain function (whose exact definition is not needed for our purposes) from the set of height functions on a knot in $S^3$ to the natural numbers. It becomes an invariant after minimizing over all possible height functions. A particular height function of a knot is \emph{thin} if it realizes the minimum width. Thin embeddings produce very useful topological information about the knot (see, for example, Gabai's proof of Property R in \cite{G3}; Gordon and Luecke's solution to the knot complement problem \cite{GL}; and Thompson's proof \cite{Thompson} that small knots have thin position equal to bridge position.) Scharlemann and Thompson extended Gabai's width for knots to a width for graphs in $S^3$ \cite{ST-graphthin} and gave a new proof of Waldhausen's classification of Heegaard splittings of $S^3$ \cite{Wald}. They also applied a similar idea to handle structures of 3-manifolds, producing an invariant of 3-manifolds also called width \cite{ST-thin}. A handle decomposition which attains the width is said to be \emph{thin}. Thin handle decompositions for 3-manifolds have been very useful for understanding the structure of Heegaard splittings of 3-manifolds. There have been a number of attempts (e.g. \cite{Bachman, HRS, HS01, Johnson, TT2}) to define the width of knots (and later of tangles and graphs) in a 3-manifold by using various generalizations of the Scharlemann-Thompson constructions. These definitions have been used in various ways; however, they have never been as useful as Scharlemann and Thompson's thin position for 3-manifolds. For instance, although Scharlemann and Thompson's thin position has the property that all thin surfaces in a thin handle decomposition for a closed 3-manifold are essential surfaces, the same is not true for the thin positions applied to knot and graph complements. (The papers \cite{Bachman} and \cite{TT2} are exceptions. The former, however, applies only to links in closed 3-manifolds, and in the latter there are a number of technical requirements which limit its utility.) In this paper, we define an \emph{oriented multiple v.p.-bridge surface} as a certain type of surface $\mc{H}$ in a 3-manifold $M$ transverse to a graph $T \subset M$. The components of $\mc{H}$ are partitioned into \emph{thick surfaces} $\mc{H}^+$ and \emph{thin surfaces} $\mc{H}^-$. We exhibit a collection of thinning moves which give rise to a partial order, denoted $\to$, on the set of \emph{oriented multiple v.p.-bridge surfaces} (terms to be defined later) for a (3-manifold, graph) pair $(M,T)$. These thinning moves include the usual kinds of destabilization and untelescoping moves, known to experts, but we also include several new ones, corresponding to the situation when portions of the graph $T$ are cores of compressionbodies in a generalized Heegaard splitting of $M$ (in the sense of \cite{ST-thin}). More significantly, we also allow untelescoping using various generalizations of compressing discs. Throughout the paper, we show how these generalized compressing discs arise naturally when considering bridge surfaces for (3-manifold, graph) pairs. If $\mc{H}$ and $\mc{K}$ are oriented bridge surfaces, we say that $\mc{H} \to \mc{K}$ if certain kinds of carefully constructed sequences of thinning moves produce $\mc{K}$ from $\mc{H}$. If no such sequence can be applied to $\mc{H}$, then we say that $\mc{H}$ is \emph{locally thin}.
If the reader allows us to defer some more definitions until later, we can state our results as: \begin{main} Let $M$ be a compact, orientable 3-manifold and $T \subset M$ a properly embedded graph such that no vertex has valence two and no component of $\partial M$ is a sphere intersecting $T$ two or fewer times. Assume also that no sphere in $M$ intersects $T$ exactly once transversally. Then $\to$ is a partial order on $\overrightarrow{\vpH}(M,T)$. Furthermore, if $\mc{H} \in \overrightarrow{\vpH}(M,T)$, then there is a locally thin $\mc{K} \in \overrightarrow{\vpH}(M,T)$ such that $\mc{H} \to \mc{K}$. Additionally, if $\mc{H}$ is locally thin then the following hold: \begin{enumerate} \item Each component of $\mc{H}^+$ is sc-strongly irreducible in the complement of the thin surfaces. \item No component of $(M, T)\setminus \mc{H}$ is a trivial product compressionbody between a thin surface and a thick surface. \item Every component of $\mc{H}^-$ is c-essential in the exterior of $T$. \item If there is a 2-sphere in $M$ intersecting $T$ three or fewer times and which is essential in the exterior of $T$, then some component of $\mc{H}^-$ is such a sphere. \end{enumerate} \end{main} The properties of locally thin surfaces are proved in part by using sweep-out arguments. (See Theorems \ref{Properties Locally Thin} and \ref{thm:They are thin levels}.) The existence of a locally thin $\mc{K}$ with $\mc{H} \to \mc{K}$ is proved using a new complexity which decreases under thinning sequences. (See Theorem \ref{partial order}.) Although this complexity behaves much as Gabai's or Scharlemann-Thompson's widths do, we view it as being more like the complexities used to guarantee that hierarchies of 3-manifolds terminate. In the sequel \cite{TT4} we will show how powerful these locally thin positions for (3-manifold, graph) pairs are. In that paper, we construct two families of non-negative half-integer invariants of (3-manifold, graph) pairs. The invariants of one family are similar to the bridge number and tunnel number of a knot. The invariants of the other family are very similar to Gabai's width for knots in $S^3$. We prove that these invariants (under minor hypotheses) are additive for both connect sum and trivalent vertex sum and detect the unknot. In Section \ref{sec:def} and Section \ref{sec:compbodies} we establish our notation and important definitions including the definition of a multiple v.p.-bridge surface. We describe our simplifying moves in Sections \ref{Reducing Complexity} and \ref{sec: elementary thinning}. In Section \ref{sec:complexity}, we define a complexity for oriented multiple v.p.-bridge surfaces and show it decreases under our simplifying moves. Section \ref{sec:complexity} also uses the simplifying moves to define a partial order $\to$ on the set $\overrightarrow{\vpH}(M,T)$ of oriented multiple v.p.-bridge surfaces for $(M,T)$. The main theorem, Theorem \ref{partial order}, shows that given $\mc{H} \in \overrightarrow{\vpH}(M,T)$ there is a least element $\mc{K} \in \overrightarrow{\vpH}(M,T)$ with respect to the partial order $\to$ such that $\mc{H} \to \mc{K}$. The least elements are called ``locally thin.'' In Section \ref{sec: sweepouts}, we study the important properties of locally thin multiple v.p.-bridge surfaces. Theorem \ref{Properties Locally Thin} lists a number of these properties, one of which is that each component of $\mc{H}^-$ is essential in the exterior of $T$. 
Section \ref{thin decomp spheres} sets us up for working with connected sums in \cite{TT4} by showing that if there is a sphere in $M$, transversely intersecting $T$ in three or fewer points, and which is essential in the exterior of $T$, then there is such a sphere that is a thin level for any locally thin multiple v.p.-bridge surface. \subsection*{Acknowledgements} Some of this paper is similar in spirit to \cite{TT2}, but here we operate under much weaker hypotheses and obtain much stronger results. We have been heavily influenced by Gabai's work in \cite{G3}, Scharlemann and Thompson's work in \cite{ST-thin}, and Hayashi and Shimokawa's work in \cite{HS01}. Throughout we assume some familiarity with the theory of Heegaard splittings, as in \cite{Scharlemann-Survey}. We thank Ryan Blair, Marion Campisi, Jesse Johnson, Alex Zupan, and the attendees at the 2014 ``Thin Manifold'' conference for helpful conversations. Thanks also to the referee for many helpful comments and, in particular, finding a subtle but serious error in the original version of Section \ref{sec:complexity}. The resolution of this error led to stronger results and simplified proofs. The second named author is supported by an NSF CAREER grant and the first author by grants from the Colby College Division of Natural Sciences. \section{Definitions and Notation} \label{sec:def} We let $I = [-1,1] \subset \mathbb{R}$, $D^2$ be the closed unit disc in $\mathbb{R}^2$, and $B^3$ be the closed unit ball in $\mathbb{R}^3$. For a topological space $X$, we let $|X|$ denote the number of components of $X$. All surfaces and 3-manifolds we consider will be orientable, smooth or PL, and (most of the time) compact. If $S$ is a surface, then $\chi(S)$ is its euler characteristic. A \defn{(3-manifold, graph) pair} $(M,T)$ (or simply just a \defn{pair}) consists of a compact, orientable 3-manifold (possibly with boundary) $M$ and a properly embedded graph $T \subset M$. We do not require $T$ to have vertices so $T$ can be empty or a knot or link. Since $T$ is properly embedded in $M$ all valence 1 vertices lie on $\partial M$. We call the valence 1-vertices of $T$ the \defn{boundary vertices} or \defn{leaves} of $T$ and all other vertices the \defn{interior vertices} of $T$. We require that no vertex of $T$ have valence 0 or 2, but we allow a graph to be empty. For any subset $X$ of $M$, let $\eta(X)$ be an open regular neighborhood of $X$ in $M$ and $\overline{\eta(X)}$ be its closure. If $S$ is a (orientable, by convention) surface properly embedded in $M$ and transverse to $T$, we write $S \subset (M,T)$. If $S \subset (M,T)$, we abuse notation slightly and write \[ (M,T) \setminus S = (M \setminus S, T \setminus S) = (M \setminus {\eta}(S), T \setminus \eta(S)). \] We also write $S \setminus T$ for $S \setminus {\eta}(T)$. Observe that $\partial (M \setminus T)$ is the union of $(\partial M)\setminus T$ with $\partial \overline{\eta(T)}$. A surface $S \subset (M,T)$ is \defn{$\partial$-parallel} if $S \setminus T$ is isotopic relative to its boundary into $\partial (M\setminus T)$. We say that $S \subset (M,T)$ is \defn{essential} if $S\setminus T$ is incompressible in $M\setminus T$, not $\partial$-parallel, and not a 2-sphere bounding a 3-ball in $M\setminus T$. We say that the graph $T \subset M$ is \defn{irreducible} if whenever $S \subset (M,T)$ is a 2-sphere we have $|S \cap T| \neq 1$. The pair $(M,T)$ is \defn{irreducible} if $T$ is irreducible and if the 3-manifold $M \setminus T$ is irreducible (i.e. 
does not contain an essential sphere.) We will need notation for a few especially simple (3-manifold, graph) pairs. The pair $(B^3, \text{arc})$ will refer to any pair homeomorphic to the pair $(B^3, T)$ with $T$ an arc properly isotopic into $\partial B^3$. The pair $(S^1 \times D^2, \text{core loop})$ will refer to any pair homeomorphic to the pair $(S^1 \times D^2, T)$ where $T$ is the product of $S^1$ with the center of $D^2$. Finally, we will often convert vertices of $T$ into boundary components of $M$ and vice versa. More precisely, if $V$ is the union of all the interior vertices of $T$, we say that $(\mathring{M}, \mathring{T}) = (M \setminus \eta(V), T \setminus \eta(V))$ is obtained by \defn{drilling out the vertices of $T$}. Similarly, we will sometimes refer to \defn{drilling out} certain edges of $T$; i.e. removing an open regular neighborhood of those edges and incident vertices from both $M$ and $T$. \subsection{Compressing discs of various kinds} We will be concerned with several types of discs which generalize the classical definition of a compressing disc for a surface in a 3-manifold. \begin{definition} Suppose that $S \subset (M,T)$ is a surface. Suppose that $D$ is an embedded disc in $M$ such that the following hold: \begin{enumerate} \item $\partial D \subset (S \setminus T)$, the interior of $D$ is disjoint from $S$, and $D$ is transverse to $T$. \item $|D \cap T| \leq 1$ \item $D$ is not properly isotopic into $S \setminus T$ in $M \setminus T$ via an isotopy which keeps the interior of $D$ disjoint from $S$ until the final moment. Equivalently, there is no disc $E \subset S$ such that $\partial E = \partial D$ and $E \cup D$ bounds either a 3-ball in $M$ disjoint from $T$ or a 3-ball in $M$ whose intersection with $T$ consists entirely of a single unknotted arc with one endpoint in $E$ and one endpoint in $D$. \end{enumerate} Then $D$ is an \defn{sc-disc}. More specifically, if $|D \cap T| = 0$ and $\partial D$ does not bound a disc in $S\setminus T$, then $D$ is a \defn{compressing disc}. If $|D \cap T| = 0$ and $\partial D$ does bound a disc in $S\setminus T$, then $D$ is a \defn{semi-compressing disc}. If $|D \cap T| = 1$ and $\partial D$ does not bound an unpunctured disc or a once-punctured disc in $S\setminus T$, then $D$ is a \defn{cut disc}. If $|D \cap T| = 1$ and $\partial D$ does bound an unpunctured disc or a once-punctured disc in $S\setminus T$, then $D$ is a \defn{semi-cut disc}. A \defn{c-disc} is a compressing disc or cut disc. The surface $S \subset (M,T)$ is \defn{c-incompressible} if $S$ does not have a c-disc; it is \defn{c-essential} if it is essential and c-incompressible. \end{definition} \begin{remark} Semi-cut discs arise naturally when $T$ has an edge containing a local knot, as in Figure \ref{Fig:semicut}. Semi-compressing discs occur in part because even though a 3-manifold $M$ may be irreducible, there is no guarantee that a given 3-dimensional submanifold is also irreducible. \end{remark} \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.5]{FiguresV6/semicutdisc} \caption{Except in very particular situations, the black disc is a semicut disc for the green surface.} \label{Fig:semicut} \end{figure} \end{center} \section{Compressionbodies and multiple v.p.-bridge surfaces} \label{sec:compbodies} \subsection{Compressionbodies} In this section we generalize the idea of a compressionbody to our context. \begin{definition} Suppose that $H$ is a closed, connected, orientable surface. 
We say that $(H \times I, T)$ is a \defn{trivial product compressionbody} or a \defn{product region} if $T$ is isotopic to the union of vertical arcs, and we let $\partial_\pm (H \times I) = H \times \{\pm 1\}$. If $B$ is a 3--ball and if $T \subset B$ is a (possibly empty) connected, properly embedded, $\partial$-parallel tree, having at most one interior vertex, then we say that $(B, T)$ is a \defn{trivial ball compressionbody}. We let $\partial_+ B = \partial B$ and $\partial_- B = \varnothing$. A \defn{trivial compressionbody} is either a trivial product compressionbody or a trivial ball compressionbody. Figure \ref{Fig:TrivialCompBody} shows both types of trivial compressionbodies. \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.5]{FiguresV6/TrivialCompBodies} \caption{On the left is a trivial product compressionbody; in the center is a trivial ball compressionbody with $T$ an arc; on the right is a trivial ball compressionbody with $T$ a tree having a single interior vertex.} \label{Fig:TrivialCompBody} \end{figure} \end{center} A pair $(C,T)$ is a \defn{v.p.-compressionbody} if there is some component denoted $\boundary_+C$ of $\boundary C$ and a collection of pairwise disjoint sc-discs $\mc{D} \subset (C,T)$ for $\partial_+ C$ such that the result of $\partial$-reducing $(C, T)$ using $\mc{D}$ is a union of trivial compressionbodies. Observe that trivial compressionbodies are v.p.-compressionbodies as we may take $\mc{D} = \varnothing$. Figure \ref{Fig: vpcompressionbody} shows two different v.p.-compressionbodies. We will usually represent v.p.-compressionbodies more schematically as in Figure \ref{Fig: arc types}. The set $\boundary C \setminus \boundary_+C$ is denoted by $\boundary_-C$. If no two discs of $\mc{D}$ are parallel in $C \setminus T$, then $\mc{D}$ is a \defn{complete collection of discs} for $(C,T)$. An edge of $T$ which is disjoint from $\partial_+ C$ (and so has its endpoints on $\partial_- C$ or at vertices of $T$) is a \defn{ghost arc}. An edge of $T$ with one endpoint in $\partial_+ C$ and one endpoint in $\partial_- C$ is a \defn{vertical arc}. A component of $T$ which is an arc having both endpoints on $\partial_+ C$ is a \defn{bridge arc}. A component of $T$ which is homeomorphic to a circle and is disjoint from $\partial C$ is called a \defn{core loop}. $C$ is a \defn{compressionbody} if $(C,\varnothing)$ is a v.p.-compressionbody. A compressionbody $C$ is a \defn{handlebody} if $\partial_- C = \varnothing$. A \defn{bridge disc} for $\partial_+ C$ in $C$ is an embedded disc in $C$ with boundary the union of two arcs $\alpha$ and $\beta$ such that $\alpha \subset \partial_+ C$ joins distinct points of $\boundary_+C \cap T$ and $\beta$ is a bridge arc of $T$. \end{definition} \begin{remark} Suppose that $(C,T)$ is a v.p.-compressionbody and that $(\punct{C}, \punct{T})$ is the result of drilling out the vertices of $T$. Considering the components of $\partial \punct{C} \setminus \partial C$ as components of $\partial_- \punct{C}$, we see that $(\punct{C}, \punct{T})$ is also a v.p.-compressionbody having the same complete collection of discs as $(C,T)$. Furthermore, every component of $\punct{T}$ is a vertical arc, ghost arc, bridge arc, or core loop. The ``v.p.'' stands for ``vertex-punctured'' as this notion of compressionbody is a generalization of the compressionbodies used in \cite{TT2}: the v.p.-compressionbody $(\punct{C}, \punct{T})$ satisfies \cite[Definition 2.1]{TT2} with $\Gamma = \punct{T}$ (in the notation of that paper.)
The notation ``v.p.'' will also be helpful as a reminder that the first step in calculating many of the various quantities we consider is to drill out the vertices of $T$ and treat them as boundary components of $M$. \end{remark} \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.4]{FiguresV6/vpcompressionbody} \includegraphics[scale=0.4]{FiguresV6/CompBdyBasic} \caption{On the left is an example of a v.p.-compressionbody $(C,T)$ with $\partial_- C$ the union of spheres. On the right, is an example of a v.p.-compressionbody $(C,T)$ with $\partial_- C$ the union of two connected surfaces, one of which is a sphere twice-punctured by $T$. } \label{Fig: vpcompressionbody} \end{figure} \end{center} \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.4]{FiguresV6/arctypes} \caption{A v.p.-compressionbody $(C,T)$. From left to right we have three vertical arcs, one ghost arc, one bridge arc, and one core loop in $T$.} \label{Fig: arc types} \end{figure} \end{center} The next two lemmas establish basic properties of v.p.-compressionbodies. \begin{lemma}\label{spheres in v.p.-compressionbodies} Suppose that $(C,T)$ is a v.p.-compressionbody such that no spherical component of $\partial_- C$ intersects $T$ exactly once. Suppose $P \subset (C,T)$ is a closed surface transverse to $T$. If $D$ is an sc-disc for $P$, then if $|D \cap T| = 1$ either $\partial D$ is essential on $P \setminus T$ or $\partial D$ bounds a disc in $P$ intersecting $T$ exactly once. Furthermore, if $P$ is a sphere, then after some sc-compressions it becomes the union of spheres, each either bounding a trivial ball compressionbody in $(C,T)$ or parallel to a component of $\partial_- C$. \end{lemma} \begin{proof} Suppose first that there is an sc-disc $D$ for $P$ such that $\partial D$ bounds an unpunctured disc $E \subset P$ but $|D \cap T| = 1$. Then $E \cup D$ is a sphere in $(C,T)$ intersecting $T$ exactly once. Every sphere in $C$ must separate $C$. Let $W \subset C \setminus (E \cup D)$ be the component disjoint from $\partial_+ C$. The fundamental group of every component of $\partial_- C$ injects into the fundamental group of $C$ and every curve on every component of $\partial_- C$ is homotopic into $\partial_+ C$. Thus, any component of $\partial_- C$ contained in $W$ must be a sphere. Drilling out the vertices of $T$ along with all edges of $T$ disjoint from $\partial_+ C$ creates a new v.p.-compressionbody $(C', T')$. As before, any essential curve in $\partial_- C'$ is non-null-homotopic in $C'$ and is homotopic into $\partial_+ C = \partial_+ C'$. Thus, $\partial_- C' \cap W$ can contain no essential curves. There is an edge $e \subset (T \cap W)$ with an endpoint in $D$. Beginning with $e$, traverse a path across edges of $T \cap W$ and components of $\partial_- C \cap W$ (each necessarily a sphere) so that no edge of $T \cap W$ is traversed twice. The path terminates when it reaches a component of $\partial_- C \cap W$ which is a once-punctured sphere, contrary to our hypotheses. Thus, no such sc-disc $D$ can exist. Suppose now that $P$ is a sphere. Use c-discs to compress $P$ as much as possible. By the previous paragraph, we end up with the union $P'$ of spheres, each intersecting $T$ no more times than does $P$. Let $\Delta$ be a complete collection of discs for $(C, T)$ chosen so as to minimize $|\Delta \cap P'|$ up to isotopy of $\Delta$. 
If some component $P_0$ of $P'$ is disjoint from $\Delta$, then it is contained in the union of trivial v.p.-compressionbodies obtained by $\partial$-reducing $(C, T)$ using $\Delta$. Standard results from 3-manifold topology show that $P_0$ is either parallel to a component of $\partial_- C$ or is the boundary of a trivial ball compressionbody in $(C, T)$. If $\Delta \cap P' \neq \varnothing$, then it consists of circles. Since we have minimized $|\Delta \cap P'|$ up to isotopy, each circle of $\Delta \cap P'$ which is innermost on $\Delta$ bounds a semi-compressing or semi-cut disc $D \subset \Delta$ for $P'$. By the previous paragraph, compressing $P'$ using $D$ creates an additional component of $P'$ and preserves the property that each component of $P'$ intersects $T$ no more times than does $P$. The result follows by repeatedly performing such compressions until $P'$ becomes disjoint from $\Delta$. \end{proof} \begin{lemma}\label{filling and puncturing} Suppose that $(C,T)$ is a v.p.-compressionbody such that no component of $\partial_- C$ is a sphere intersecting $T$ exactly once. Then the following hold: \begin{enumerate} \item If $P \subset \partial_- C$ is an unpunctured sphere or a twice-punctured sphere, the result $(\wihat{C}, \wihat{T})$ of capping off $P$ with a trivial ball compressionbody is still a v.p.-compressionbody. \item If $p$ is a point in the interior of $C$ (possibly in $T$), then the result of removing an open regular neighborhood of $p$ from $C$ (and $T$ if $p \in T$) is a v.p.-compressionbody. \end{enumerate} \end{lemma} \begin{proof} First, suppose that $P \subset \partial_- C$ is a zero or twice-punctured sphere. Let $\Delta$ be a complete collection of sc-discs for $(C, T)$. Let $(C', T')$ be the result of $\partial$-reducing $(C, T)$ using $\Delta$. Then $(C', T')$ is the union of trivial v.p.-compressionbodies, one of which is a trivial product compressionbody containing $P$. Capping off $P$ with a trivial ball compressionbody converts this product v.p.-compressionbody into a trivial ball compressionbody. Thus, the result of $\partial$-reducing $(\wihat{C}, \wihat{T})$ using $\Delta$ is the union of trivial v.p.-compressionbodies. Thus, $(\wihat{C}, \wihat{T})$ is a v.p.-compressionbody. Now suppose that $p$ is a point in the interior of $C$. Let $\Delta$ be a complete collection of sc-discs for $(C,T)$. By general position, we may isotope $\Delta$ to be disjoint from $p$. Let $(C', T')$ be the result of $\partial$-reducing $(C,T)$ using $\Delta$. Each component of $(C', T')$ is a trivial v.p.-compressionbody, one of which, say $(W, T_W)$, contains $p$. If $(W, T_W)$ is a trivial ball compressionbody either with $T_W$ an arc containing $p$ or with $p$ an interior vertex of $T_W$, then the result of removing $\eta(p)$ from $(W, T_W)$ is again a trivial compressionbody, and the result follows. Suppose, therefore, that if $p \in T_W$, then either $(W, T_W) \neq (B^3, \text{ arc})$ or $p$ is not a vertex of $T_W$. If $p \in T$, there is a sub-arc of an edge of $T_W$ joining $\partial_+ W$ to $p$. Let $E$ be the frontier of a regular neighborhood of that edge. Then $E$ is a semi-cut disc for $\partial_+ W$ cutting off a v.p.-compressionbody from $(W, T_W)$ which is $(S^2 \times I, \text{two vertical arcs})$. (The fact that $E$ is a semi-cut disc follows from the considerations of the previous paragraph.) We may isotope $E$ so that $\partial E$ is disjoint from the remnants of $\Delta$ in $\partial_+ W$.
The disc $E$ is then a cut disc or semi-cut disc for $(C, T)$, and $\Delta \cup E$ is a collection of sc-discs such that $\partial$-reducing $(C\setminus \eta(p), T\setminus \eta(p))$ along it results in the union of trivial compressionbodies. Hence, $(C\setminus \eta(p), T\setminus \eta(p))$ is a v.p.-compressionbody. If $p \not\in T$, the proof is similar except we can pick any (tame) arc joining $\partial_+ C$ to $p$ which is disjoint from $T$. \end{proof} \begin{lemma}\label{Lem: Invariance} Suppose that $(C,T)$ is a v.p.-compressionbody such that no component of $\partial_- C$ is a 2-sphere intersecting $T$ exactly once. The following are true: \begin{enumerate} \item\label{it: no sc} $(C,T)$ is a trivial compressionbody if and only if there are no sc-discs for $\partial_+ C$. \item\label{it: no sc for neg boundary} There are no c-discs for $\partial_- C$. \item\label{it: reducing} If $D$ is an sc-disc for $\partial_+ C$, then the result of $\partial$-reducing $(C,T)$ using $D$ is the union of v.p.-compressionbodies. Furthermore, there is a complete collection of discs for $(C,T)$ containing $D$. \end{enumerate} \end{lemma} \begin{proof} Proof of (1): From the definition of v.p.-compressionbody, if there is no sc-disc for $\partial_+ C$, then $(C, T)$ is a trivial compressionbody. The converse requires a little more work, but follows easily from standard results in 3-dimensional topology. Proof of (2): This is similar to the proof of Lemma \ref{spheres in v.p.-compressionbodies}. Suppose that $\partial_- C$ has a c-disc $P$. As in the previous lemma, there is no sc-disc $D$ for $P$ such that $\partial D$ bounds an unpunctured disc on $P$ but $|D \cap T| = 1$. Consequently, compressing $P$ using any sc-disc $D$ creates a new disc $P'$ intersecting $T$ no more often than did $P$ and with $\partial P' = \partial P$. Since $\partial P$ is essential on $\partial_- C \setminus T$, the disc $P'$ is a c-disc for $\partial_- C$. Thus, we may assume that $P$ is disjoint from a complete collection of discs for $(C, T)$. It follows easily that $P$ is $\partial$-parallel in $(C, T)$ and so is not a c-disc for $\partial_- C$, contrary to our assumption. Proof of (3): Let $(C,T)$ be a v.p.-compressionbody and suppose that $D$ is an sc-disc for $\partial_+ C$. Let $(\wihat{C}, \wihat{T})$ be the result of capping off all zero and twice-punctured sphere components of $\partial_- C$ with trivial ball compressionbodies. By Lemma \ref{filling and puncturing}, $(\wihat{C}, \wihat{T})$ is a v.p.-compressionbody. If $D$ is not an sc-disc for $(\wihat{C}, \wihat{T})$, it is $\partial$-parallel. Boundary-reducing $(\wihat{C}, \wihat{T})$ with $D$ results in two v.p.-compressionbodies: one a trivial ball compressionbody and the other equivalent to $(\wihat{C}, \wihat{T})$. Removing regular neighborhoods of certain points in the interior of $\wihat{C}$ converts $(\wihat{C}, \wihat{T})$ back into $(C, T)$. By Lemma \ref{filling and puncturing}, $\partial$-reducing $(C,T)$ using $D$ results in v.p.-compressionbodies. A collection of sc-discs for those compressionbodies, together with $D$, gives a collection $\Delta$ of sc-discs such that $\partial$-reducing $(C,T)$ using $\Delta$ results in trivial v.p.-compressionbodies. Thus, the lemma holds if $D$ is $\partial$-parallel in $(\wihat{C}, \wihat{T})$. Now suppose that $D$ is not $\partial$-parallel in $(\wihat{C}, \wihat{T})$.
Choose a complete collection $\Delta$ of sc-discs such that $\partial$-reducing $(\wihat{C}, \wihat{T})$ using $\Delta$ results in the union $(C', T')$ of trivial v.p.-compressionbodies. Out of all possible choices, choose $\Delta$ to intersect $D$ minimally. We prove the lemma by induction on $|D \cap \Delta|$. If $|D \cap \Delta| = 0$, then $D$ is $\partial$-parallel in $(C', T')$. In this case, the result follows easily. Suppose, therefore, that $|D \cap \Delta| \geq 1$. The intersection $D\cap \Delta$ is the union of circles and arcs. Suppose, first, that there is a circle of intersection. Let $\zeta \subset D \cap \Delta$ be innermost on $\Delta$. Compressing $D$ using the innermost disc $E \subset \Delta$ results in a disc $D'$ and a sphere $P$. By Lemma \ref{spheres in v.p.-compressionbodies}, if $|E \cap T| = 1$, then $|D' \cap T| = 1$ and $|P \cap T| = 2$. On the other hand, if $|E \cap T|=0$, then both $D'$ and $P$ are disjoint from $T$. By Lemma \ref{spheres in v.p.-compressionbodies}, there is a sequence of sc-compressions of $P$ which result in zero and twice-punctured spheres. (If $|E \cap T| = 0$, there are no twice-punctured spheres.) These spheres either bound trivial ball compressionbodies in $(\wihat{C}, \wihat{T})$ or are parallel to components of $\partial_- \wihat{C}$. But since $\partial_- \wihat{C}$ contains no zero or twice-punctured sphere components, the latter situation is impossible. The sphere $P$ is thus obtained by tubing together inessential spheres. It follows that $P$ is also inessential: it bounds a 3-ball either disjoint from $T$ or intersecting $T$ in an unknotted arc. Since $D$ is a zero or once-punctured disc, this ball gives an isotopy of $\Delta$ reducing the intersection with $D$, a contradiction. Thus, there are no circles of intersection in $D \cap \Delta$. Let $\zeta$ be an arc of intersection in $D \cap \Delta$ which is outermost in $\Delta$. Let $E \subset \Delta$ be the outermost disc. We may choose $\zeta$ and $E$ so that $E$ is disjoint from $T$. Boundary-reducing $D$ using $E$ results in two discs $D'$ and $D''$, at most one of which is once-punctured. Observe that a small isotopy makes both disjoint from $D$. By our inductive hypothesis applied to $D'$ and $D''$, the result of $\partial$-reducing $(C,T)$ along both of them (either individually or together) is the union $(W, T_W)$ of v.p.-compressionbodies. Remove the regular neighborhoods of points corresponding to the zero and twice-punctured sphere components of $\partial_- C$. By Lemma \ref{filling and puncturing}, we still have v.p.-compressionbodies, which we continue to call $(W, T_W)$. Let $\mc{D}$ be the union of sc-discs for $\partial_+ W$, including $D'$ and $D''$, such that the result of $\partial$-reducing $(W, T_W)$ using $\mc{D}$ is the union of trivial compressionbodies. The disc $D$ is contained in one of these trivial compressionbodies and, therefore, must be $\partial$-parallel. The result of $\partial$-reducing $(C,T)$ along $D \cup \mc{D}$ is then the union of trivial compressionbodies, as desired. \end{proof} \begin{remark} \label{rmk:semicut} It is not necessarily the case that if $(C,T)$ is an irreducible v.p.-compressionbody then there is no sc-disc for $\partial_- C$. To see this, let $(C,T)$ be the result of removing the interior of a regular neighborhood of a point on a vertical arc in an irreducible v.p.-compressionbody $(\tild{C},\tild{T})$.
Then there is an sc-disc for $\partial_- C$ which is boundary-parallel in $\tild{C} \setminus \tild{T}$ and which cuts off from $(C,T)$ a compressionbody which is $S^2 \times I$ intersecting $T$ in two vertical arcs. See the top diagram in Figure \ref{Fig: CutPrimeNegBdy}. Similarly, if $\partial_- C$ contains 2-spheres disjoint from $T$, there will be a semi-compressing disc for any component of $\partial_- C$ which is not a 2-sphere disjoint from $T$. We also cannot drop the hypothesis that no component of $\partial_- C$ is a sphere intersecting $T$ exactly once. The bottom diagram in Figure \ref{Fig: CutPrimeNegBdy} shows an sc-disc with the property that boundary-reducing the v.p.-compressionbody along that disc does not result in the union of v.p.-compressionbodies. \end{remark} \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.5]{FiguresV6/naivecounterexamples} \caption{Above is an example of a v.p.-compressionbody $(C,T)$ where $\partial_- C$ has a semi-cut disc (shown in blue). Below is an example of an sc-disc with the property that boundary-reducing the v.p.-compressionbody along that disc does not result in the union of v.p.-compressionbodies. } \label{Fig: CutPrimeNegBdy} \end{figure} \end{center} In what follows, we will often use Lemma \ref{Lem: Invariance} without comment. \subsection{Multiple v.p.-bridge surfaces}\label{sec: multiple} The definition of a multiple v.p.-bridge surface for the pair $(M,T)$ which we are about to present is a version of Scharlemann and Thompson's ``generalized Heegaard splittings'' \cite{ST-thin} in the style of \cite{HS01}, but using v.p.-compressionbodies. We will also make use of orientations in a similar way to what shows up in Gabai's definition of thin position \cite{G3} and the definition of Johnson's ``complex of surfaces'' \cite{Johnson}. \begin{definition}\label{Def: multiple bridge surfaces} A connected closed surface $H \subset (M,T)$ is a \defn{v.p.-bridge surface} for the pair $(M,T)$ if $(M,T)\setminus H$ is the union of two distinct v.p.-compressionbodies $(H_\uparrow, T_\uparrow)$ and $(H_\downarrow, T_\downarrow)$ with $H = \partial_+ H_\uparrow = \partial_+ H_\downarrow$. If $T = \varnothing$, then we also call $H$ a \defn{Heegaard surface} for $M$. A \defn{multiple v.p.-bridge surface} for $(M,T)$ is a closed (possibly disconnected) surface $\mc{H} \subset (M,T)$ such that: \begin{itemize} \item $\mc{H}$ is the disjoint union of $\mc{H}^-$ and $\mc{H}^+$, each of which is the union of components of $\mc{H}$; \item $(M,T)\setminus \mc{H}$ is the union of embedded v.p.-compressionbodies $(C_i, T_i)$ with $\mc{H}^- \cup \partial M= \bigcup \partial_- C_i$ and $\mc{H}^+ = \bigcup \partial_+ C_i$; \item Each component of $\mc{H}$ is adjacent to two distinct compressionbodies. \end{itemize} If $T = \varnothing$, then $\mc{H}$ is also called a \defn{multiple Heegaard surface} for $M$. The components of $\mc{H}^-$ are called \defn{thin surfaces} and the components of $\mc{H}^+$ are called \defn{thick surfaces}. We denote the set of multiple v.p.-bridge surfaces for $(M,T)$ by $vp\mathbb{H}(M,T)$. \end{definition} Note that each component of $\mc{H}^+$ is a v.p.-bridge surface for the component of $(M,T)\setminus \mc{H}^-$ containing it. In particular, if $H \in vp\mathbb{H}(M,T)$ is connected, then it is a v.p.-bridge surface and $H^- = \varnothing$. Also, observe that the components of $\partial M$ are not considered to be thin surfaces; the surfaces $\partial M$ and $\mc{H}^-$ play different roles in what follows. 
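As a concrete example, if $T \subset S^3$ is a knot in bridge position with respect to a standardly embedded 2-sphere $H$, then each of the two 3-balls bounded by $H$ meets $T$ in bridge arcs and is therefore a v.p.-compressionbody, so $H$ is a v.p.-bridge surface for $(S^3, T)$; viewed as a connected multiple v.p.-bridge surface it has $H^+ = H$ and $H^- = \varnothing$.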
We now introduce orientations and flow lines. \begin{definition} Suppose that $\mc{H}$ is a multiple v.p.-bridge surface for $(M,T)$. Suppose that each component of $\mc{H}$ is given a transverse orientation so that the following hold: \begin{itemize} \item If $(C,T_C)$ is a component of $(M,T) \setminus \mc{H}$, then the transverse orientations of the components of $\partial_- C \cap \mc{H}^-$ either all point into or all point out of $C$. \item If $(C,T_C)$ is a component of $(M,T) \setminus \mc{H}$, then if the transverse orientation of $\partial_+ C$ points into (respectively, out of) $C$, then the transverse orientations of the components of $\partial_- C \cap \mc{H}^-$ point out of (respectively, into) $C$. \end{itemize} A \defn{flow line} is a non-constant oriented path in $M$ transverse to $\mc{H}$, not disjoint from $\mc{H}$, and always intersecting $\mc{H}$ in the direction of the transverse orientation. If $S_1$ and $S_2$ are components of $\mc{H}$, then a flow line from $S_1$ to $S_2$ is a flow line which starts at $S_1$ and ends at $S_2$. The multiple v.p.-bridge surface $\mc{H}$ is an \defn{oriented} multiple v.p.-bridge surface if each component of $\mc{H}$ has a transverse orientation as above and there are no closed flow lines. If there is a flow line from a thick surface $H\subset \mc{H}^+$ to a thick surface $J \subset \mc{H}^+$, then we may consider $J$ to be \defn{above} $H$ and $H$ to be \defn{below} $J$. Reversing the transverse orientation on $\mc{H}$ interchanges the notions of above and below. \end{definition} The set of oriented multiple v.p.-bridge surfaces for $(M,T)$ is denoted $\overrightarrow{\vpH}(M,T)$. Note that there is a forgetful map from $\overrightarrow{\vpH}(M,T)$ to $vp\mathbb{H}(M,T)$. Any of our results for $vp\mathbb{H}(M,T)$ can be turned into results for $\overrightarrow{\vpH}(M,T)$, though the converse is not true. Given thick surfaces $H$ and $J$, it is not necessarily the case that $H$ is above $J$ or vice versa, even if they are in the same component of $M$. See Figure \ref{Fig: orientedbridgesurface} for a depiction of an oriented multiple v.p.-bridge surface. Not all multiple v.p.-bridge surfaces can be oriented. For example, in circular thin position (defined in \cite{Fabiola}), although we can define ``above'' and ``below'', the set of thick surfaces below a given thick surface $H$ will equal the set of thick surfaces above $H$. Notice, however, that every connected multiple v.p.-bridge surface, once it is given a transverse orientation, is an oriented multiple v.p.-bridge surface since it separates $M$. \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.4]{FiguresV6/examplembs} \caption{An example of an oriented multiple v.p.-bridge surface. Blue horizontal lines represent thin surfaces or boundary components. Black horizontal lines represent thick surfaces.} \label{Fig: orientedbridgesurface} \end{figure} \end{center} Finally, for this section, we observe that cutting open along a thin surface induces oriented multiple bridge surfaces of the components. \begin{lemma}\label{cutting} Suppose that $\mc{H} \in \overrightarrow{\vpH}(M,T)$ and that $F \subset \mc{H}^-$ is a component. Let $(M', T')$ be a component of $(M,T) \setminus F$ and let $\mc{K} = (\mc{H} \setminus F) \cap M'$. Then $\mc{K} \in \overrightarrow{\vpH}(M',T')$.
\end{lemma} The proof of Lemma \ref{cutting} follows immediately from the definitions, as the orientation on $\mc{H}$ restricts to an orientation on $\mc{K}$ and the flow lines for $\mc{K}$ form a subset of the flow lines for $\mc{H}$. \section{Simplifying Bridge Surfaces}\label{Reducing Complexity} This section presents a host of ways of replacing certain types of multiple v.p.-bridge surfaces by new ones that are closely related but are ``simpler'' (we will make this concept precise in Section \ref{sec:complexity}). These simplifications are similar to the notion of ``destabilization'' and ``weak reduction'' for Heegaard splittings. Versions of many of these have appeared in other papers (e.g., \cites{HS01, STo, TT1, TT2, RS}). The operations are: (generalized) destabilization, unperturbing, undoing a removable arc, untelescoping, and consolidation. \subsection{Destabilizing} Given a Heegaard splitting, one can always obtain a Heegaard splitting of higher genus by adding a cancelling pair of a one-handle and a two-handle, or (if the manifold has boundary) by tubing the Heegaard surface to the frontier of a collar neighborhood of a component of the boundary of the manifold. In the case where the manifold contains a graph, the core of the 1-handle, the co-core of the 2-handle, or the core of the tube might be part of the graph. (Though in this paper, we do not need to consider the case when \emph{both} the 1-handle and the 2-handle contain portions of the graph.) In the realm of Heegaard splittings, the higher genus Heegaard splitting is said to be either a stabilization or a boundary-stabilization of the lower genus one. Observe that drilling out edges of $T$ disjoint from $\mc{H}\in vp\mathbb{H}(M,T)$ preserves the fact that $\mc{H}$ is a multiple v.p.-bridge surface. This suggests we also need to consider boundary-stabilization along portions of the graph $T$. Without further ado, here are our versions of destabilization: \begin{definition}\label{Def: gen stab} Suppose that $\mc{H} \in vp\mathbb{H}(M,T)$ and let $H$ be a component of $\mc{H}^+$. There are six situations in which we can replace $H$ by a new thick surface $H'$ that is obtained from $H$ by compressing along an sc-disc $D$. If $H$ satisfies any of these conditions, we say that $H$ and $\mc{H}$ contain a \defn{generalized stabilization}. See Figure \ref{Fig: Stabs} for examples. \begin{itemize} \item There is a pair of compressing discs for $H$ which intersect transversally in a single point and are contained on opposite sides of $H$ and in the complement of all other surfaces of $\mc{H}$. In this case we say that $H$ and $\mc{H}$ are \defn{stabilized}. The pair of compressing discs is called a \defn{stabilizing pair}. The surface $H'$ is obtained from $H$ by compressing along either of the discs. \item There is a pair of a compressing disc and a cut disc for $H$ which intersect transversally in a single point and are contained on opposite sides of $H$ and in the complement of all other surfaces of $\mc{H}$. In this case we say that $H$ and $\mc{H}$ are \defn{meridionally stabilized}. The pair of compressing disc and cut disc is called a \defn{meridional stabilizing pair}. The surface $H'$ is obtained by compressing $H$ along the cut disc. \item There is a separating compressing disc $D$ for $H$ contained in the complement of all other surfaces of $\mc{H}$ such that the following hold. Let $W$ be the component of $M\setminus \mc{H}^-$ containing $H$.
Compressing $H$ along $D$ produces two connected surfaces, $H'$ and $H''$, where $H'$ is a v.p.-bridge surface for $W$ and $H''$ bounds a trivial product compressionbody disjoint from $H'$ with a component $S$ of $\partial M$. In this case we say that $H$ and $\mc{H}$ are \defn{boundary-stabilized} along $S$. \item There is a separating cut disc $D$ for $H$ contained in the complement of all other surfaces of $\mc{H}$ such that the following hold. Let $W$ be the component of $M\setminus \mc{H}^-$ containing $H$. Compressing $H$ along $D$ produces two connected surfaces, $H'$ and $H''$, where $H'$ is a v.p.-bridge surface for $W$ and $H''$ bounds a trivial product compressionbody disjoint from $H'$ with a component $S$ of $\partial M$. In this case we say that $H$ and $\mc{H}$ are \defn{meridionally boundary-stabilized} along $S$. \item Let $G$ be a non-empty collection of vertices and edges of $T$ disjoint from $\mc{H}$. Let $\widetilde M = M \setminus G$. If $H$ and $\mc{H}$ as a multiple v.p.-bridge surface of $\widetilde M$ are (meridionally) boundary stabilized along a component of $\boundary\widetilde{M}$ which is not a component of $\boundary M$, then $H$ and $\mc{H}$ are \defn{(meridionally) ghost boundary-stabilized} along $G$. \end{itemize} \end{definition} \begin{center} \begin{figure}[tbh] \labellist \small\hair 2pt \pinlabel{$H'$} [l] at 239 296 \pinlabel{$H''$} [l] at 213 243 \pinlabel{$H'$} [l] at 649 296 \pinlabel{$H''$} [l] at 622 243 \pinlabel{$H'$} [l] at 518 90 \pinlabel{$H''$} [l] at 518 55 \pinlabel{$G$} [l] at 301 26 \endlabellist \includegraphics[scale=0.4]{FiguresV6/Stabs} \caption{Depictions of stabilization, meridional stabilization, $\partial$-stabilization, meridional $\partial$-stabilization, and meridional ghost $\partial$-stabilization. The disc $D$ in the final picture, corresponds to the core of the left-most tube. In the last three cases, portions of the surfaces $H'$ and $H''$, which appear after compressing along the disc $D$, have been labelled. In the final case, we have shaded the product region between $H''$ and $S$.} \label{Fig: Stabs} \end{figure} \end{center} \begin{remark} In the definitions of (meridional) (ghost) $\partial$-stabilization, it's important to note that the statement that $H'$ is a v.p.-bridge surface for the component of $M \setminus \mc{H}^-$ containing it is a precondition of being able to destabilize. Not every sc-compression of a thick surface resulting in a $\partial$-parallel surface is a destabilization. Performing a (meridional) (ghost) $\partial$-destabilization moves one or more components of $\partial M$ from one side of $H$ to the other side of $H'$. This is the reason we don't place transverse orientations on the components of $\partial M$. \end{remark} \begin{remark}\label{Destab Rem} Suppose that $H \subset \mc{H}^+$ has a generalized stabilization and let $H'$ be the surface obtained from $H$ by sc-compressing as in the definition above. It is easy to check (as in the classical settings) that $\mc{K} = (\mc{H} \setminus H) \cup H'$ is a multiple v.p.-bridge surface for $(M,T)$. If $\mc{H} \in \overrightarrow{\vpH}(M,T)$, the transverse orientation on $\mc{H}$ induces a transverse orientation on $\mc{K}$. Clearly, no new non-constant closed flow lines are created. In particular, if $\mc{H} \in \overrightarrow{\vpH}(M,T)$, there is a natural way of thinking of $\mc{K}$ as an element of $\overrightarrow{\vpH}(M,T)$. 
We say that the (oriented) multiple v.p.-bridge surface $\mc{K}$ is obtained by \defn{destabilizing} $\mc{H}$ (and that the thick surface $H'$ is obtained by \defn{destabilizing} the thick surface $H$). \end{remark} \subsection{Perturbed and Removable Bridge Surfaces} We can sometimes push a bridge surface across a bridge disc and obtain another bridge surface. This operation is called unperturbing. \begin{definition} Let $\mc{H} \in vp\mathbb{H}(M,T)$ and let $H \subset \mc{H}^+$ be a component. Suppose that there are bridge discs $D_1$ and $D_2$ for $H$ in $M \setminus \mc{H}^-$, on opposite sides of $H$, disjoint from the vertices of $T$, and which have the property that the arcs $\alpha_1 = \partial D_1 \cap H$ and $\alpha_2 = \partial D_2 \cap H$ share exactly one endpoint and have disjoint interiors. Then $H$ and $\mc{H}$ have a \defn{perturbation}. The discs $D_1$ and $D_2$ are called a \defn{perturbing pair} of discs for $H$ and $\mc{H}$. \end{definition} \begin{remark} The type of perturbation we have defined here might better be called an ``arc-arc''-perturbation. There are also perturbations where the bridge discs are allowed to contain vertices of $T$, but we will not need them in this paper. \end{remark} \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.5]{FiguresV6/Perturbing} \caption{Unperturbing $H$.} \label{Fig: Perturbed} \end{figure} \end{center} \begin{lemma}\label{Lem: Unperturb} Let $\mc{H}$ be an (oriented) multiple v.p.-bridge surface for $(M,T)$. Suppose that $H\subset \mc{H}^+$ is a perturbed component with perturbing discs $D_1$ and $D_2$. Let $E$ be the frontier of a regular neighborhood of $D_1$. Then compressing $H$ along $E$ and discarding the resulting twice-punctured sphere component results in a new surface $H'$ so that $\mc{K}=(\mc{H}-H)\cup H'$ is an (oriented) multiple v.p.-bridge surface for $(M,T)$. \end{lemma} \begin{proof} This is nearly identical to Lemma 3.1 of \cite{STo}. We can alternatively think of $H'$ as obtained from $H$ by an isotopy along $D_1$. On the side of $H$ containing $D_1$, this isotopy removes a bridge arc, and so $H'$ is still the positive boundary of a v.p.-compressionbody on that side. Let $(C, T_C)$ be the v.p.-compressionbody containing $D_2$. Let $D$ be the frontier of a regular neighborhood of $D_2$ in $C$, so that $D$ cuts off a $(B^3, \text{ arc})$ containing $D_2$ from $(C,T_C)$. Note that $D$ is an sc-disc for $(C,T_C)$. Let $\Delta$ be a complete set of sc-discs for $(C,T_C)$ containing $D$ and chosen so as to minimize $|\partial \Delta \cap \partial D_1|$. Observe that no component of $\Delta\setminus D$ is inside the $(B^3, \text{ arc})$ cut off by $D$. Suppose $E_0 \subset \Delta\setminus D$ is a disc with boundary intersecting $\partial D_1$, and which contains the intersection point of $\partial \Delta \cap \partial D_1$ closest to the point $\partial D \cap \partial D_1$. Let $E_0'$ be the disc obtained by tubing $E_0$ to a parallel copy of $D$, along a subarc of $\partial D_1$. It is not difficult to confirm that $(\Delta \setminus E_0) \cup E_0'$ is still a complete collection of sc-discs for $(C, T_C)$. However, it intersects $\partial D_1$ fewer times than $\Delta$, a contradiction. Thus, $\partial D_1$ is disjoint from $\Delta \setminus D$. Boundary-reduce $(C, T_C)$ using $\Delta \setminus D$. We arrive at the union of v.p.-compressionbodies, one of which contains $\partial D_1$.
We can now see that the isotopy of $H$ across $D_1$ either combines two bridge arcs into another bridge arc or combines a vertical arc and a bridge arc into a vertical arc. Thus, the result of unperturbing is still a multiple v.p.-bridge surface. If $\mc{H}$ is oriented, we make $\mc{K}$ oriented by using the transverse orientations induced from $\mc{H}$. Clearly, no new closed flow lines are created. \end{proof} We say that the (oriented) multiple v.p.-bridge surface $\mc{K}$ constructed in the proof is obtained by \defn{unperturbing} $\mc{H}$. See Figure \ref{Fig: Perturbed} for a schematic depiction of the unperturbing operation. \subsection{Removable Pairs} Suppose that $\mc{H}$ is an (oriented) multiple v.p.-bridge surface for $(M,T)$ such that no component of $\mc{H}^- \cup \partial M$ is a sphere intersecting $T$ exactly once. Let $H$ be a component of $\mc{H}^+$, with $\mc{D}_\uparrow$ and $\mc{D}_\downarrow$ complete sets of discs for $(H_\uparrow, T_\uparrow)$ and $(H_\downarrow, T_\downarrow)$ respectively. If there exists a bridge disc $D$ for $H$ in $H_\uparrow$ (or $H_\downarrow$) with the following properties: \begin{itemize} \item it is disjoint from the vertices of $T$; \item it is disjoint from $\mc{D}_\uparrow$ (resp. $\mc{D}_\downarrow$); \item the arc $\partial D \cap H$ intersects a single component $D^*$ of $\mc{D}_\downarrow$ (resp. $\mc{D}_\uparrow$), and this component $D^*$ is a disc with $|D \cap D^*|=1$; \end{itemize} then $\mc{H}$ and $H$ are \defn{removable}. The discs $D$ and $D^*$ are called a \defn{removing pair}. See the left side of Figure \ref{Fig: Removable}. \begin{example}\label{Ex: 2-handle} Suppose that $H \in vp\mathbb{H}(M,T)$ is connected and that $M'$ is obtained from $M$ by attaching a 2-handle to $\partial M$ or Dehn-filling a torus component of $\partial M$. Let $\alpha$ be either a co-core of the 2-handle or a core of the filling torus. Using an unknotted path in $M - H$, isotope $\alpha$ so that it intersects $H$ exactly twice. Then $H \in vp\mathbb{H}(M,T \cup \alpha)$ is removable. The component $\alpha$ is called the \defn{removable component of $T \cup \alpha$}. \end{example} \begin{lemma}\label{Removing Decr Compl.} Suppose that $\mc{H} \in vp\mathbb{H}(M,T)$ is removable. Then there is an isotopy of $\mc{H}$ in $M$ to $\mc{K} \in vp\mathbb{H}(M,T)$ supported in the neighborhood of the removing pair so that $\mc{K}$ intersects $T$ two fewer times than $\mc{H}$ does. Furthermore, if $\mc{H}$ is oriented, so is $\mc{K}$. \end{lemma} \begin{proof} Let $H$ be the thick surface which is removable. We will construct an isotopy from $H$ to a surface $H'$ supported in a regular neighborhood of the removing pair and let $\mc{K} = (\mc{H} - H) \cup H'$. We will show that $\mc{K}$ is a multiple v.p.-bridge surface. Assuming it is, if $\mc{H} \in \overrightarrow{\vpH}(M,T)$, we give $H'$ the normal orientation induced by $H$. It is then easy to show that $\mc{K} \in \overrightarrow{\vpH}(M,T)$. Without much loss of generality, we may assume that $\mc{H} = H$ is connected. Let $D \subset H_\uparrow$ and $D^*\subset H_\downarrow$ be the removing pair and let $\mc{D}_\uparrow$ and $\mc{D}_\downarrow$ be the corresponding complete set of discs from the definition of ``removable''. Isotope $T$ across $D$ so that $T \cap D$ lies in $H_\downarrow$. Let $T'$ be the resulting graph and let $D^*_c$ be the cut disc into which $D^*$ is converted. Equivalently, we may isotope $H$ across $D$ and let $H'$ be the resulting surface. See Figure \ref{Fig: Removable}.
The graph $T'_\uparrow = T' \cap H_\uparrow$ is obtained from $T_\uparrow = T \cap H_\uparrow$ by removing a component of $T_\uparrow$. After creating $T'$ from $T$, the collection $\mc{D}_\uparrow$ remains a set of discs that decompose $(H_\uparrow, T'_\uparrow)$ into trivial compressionbodies, although there may now be discs in $\mc{D}_\uparrow$ which are parallel or which are boundary-parallel in $H_\uparrow \setminus T'_\uparrow$. Thus, $(H_\uparrow, T'_\uparrow)$ is a v.p.-compressionbody. To show that $(H_\downarrow, T'_\downarrow)$ is a v.p.-compressionbody, note that cut-compressing $(H_\downarrow, T')$ along $D^*_c$ results in the same collection of compressionbodies as compressing $(H_\downarrow, T)$ along $D^*$. Therefore $\mc{D}_\downarrow$ with $D^*$ replaced by the induced cut disc $D^*_c$ is a complete collection of sc-discs for $(H_\downarrow, T'_\downarrow)$ and so $(H_\downarrow, T'_\downarrow)$ is a v.p.-compressionbody. We conclude that $\mc{K}$ is an (oriented) multiple v.p.-bridge surface. \end{proof} The surface $\mc{K}$ in the preceding lemma is said to be obtained by \defn{undoing a removable arc} of $\mc{H}$. \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.4]{FiguresV6/Removable} \caption{Undoing a removable arc} \label{Fig: Removable} \end{figure} \end{center} \section{Untelescoping and Consolidation}\label{sec: elementary thinning} If we let $T$ be empty in everything discussed so far and if we ignore the transverse orientations, then we are in Scharlemann-Thompson's set-up for thin position. We need a way to recognize when the multiple bridge surface can be ``thinned" and a way to show that this thinning process eventually terminates. Scharlemann and Thompson thin by switching the order in which some pair (1-handle, 2-handle) are added and they use Casson-Gordon's criterion \cite{CG} to recognize that this is possible by finding disjoint compressing discs on opposite sides of a thick surface. In this section, we use compressions along sc-weak reducing pairs of discs in place of handle exchanges. \subsection{Untelescoping} Suppose that $\mc{H} \in vp\mathbb{H}(M,T)$. If $\mc{H}$ has the property that there is a component $H \subset \mc{H}^+$, and disjoint sc-discs $D_-$ and $D_+$ for $H$ on opposite sides so that $D_-$ and $D_+$ are disjoint from $\mc{H}^-$, we say that $\mc{H}$ is \defn{sc-weakly reducible}, that $H$ is the \defn{sc-weakly reducible component} and that $\{D_-, D_+\}$ is a \defn{sc-weakly reducing pair}. If $\mc{H}$ is not sc-weakly reducible, we say it is \defn{sc-strongly irreducible}. If $D_-$ and $D_+$ are c-discs, we also say that $\mc{H}$ is \defn{c-weakly reducible}, etc. Suppose that no component of $\mc{H}^- \cup \partial M$ is a sphere intersecting $T$ exactly once. Then, given an sc-weakly reducible $\mc{H} \in vp\mathbb{H}(M,T)$, we can create a new $\mc{K} \in vp\mathbb{H}(M,T)$ by \defn{untelescoping} $\mc{H}$ as follows: \begin{definition}\label{Def: untel for unorient} Let $\{D_-, D_+\}$ be an sc-weakly reducing pair for an sc-weakly reducible component $H$ of $\mc{H}^+$. Let $N$ be the component of $M \setminus \mc{H}^-$ containing $H$. Let $F$ be the result of compressing $H$ using both $D_-$ and $D_+$. Let $H_\pm$ be the result of compressing $H$ using only $D_\pm$ and isotope each of $H_\pm$ slightly into the compressionbody containing $D_\pm$, respectively. Let $\mc{K}^- = \mc{H}^- \cup F$ and $\mc{K}^+ = (\mc{H}^+ \setminus H) \cup (H_- \cup H_+)$. See Figure \ref{Fig: untelescope} for a schematic picture. 
The component of $F$ adjacent to copies of both $D_-$ and $D_+$ is called the \defn{doubly spotted} component. (The terminology is taken from \cite{Scharlemann-Survey}.) \end{definition} \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.4]{FiguresV6/untelescope} \caption{Untelescoping $H$. The red curves are portions of $T$. The blue lines on the left are sc-discs for $H$. Note that if a semi-cut or cut disc is used then a ghost arc is created.} \label{Fig: untelescope} \end{figure} \end{center} \begin{lemma}\label{lem:untelescope} If $\mc{H} \in vp\mathbb{H}(M,T)$ and if $\mc{K}$ is obtained by untelescoping $\mc{H}$, then $\mc{K} \in vp\mathbb{H}(M,T)$. \end{lemma} \begin{proof} Let $H \subset \mc{H}^+$ be the component which is untelescoped using discs $\{D_-, D_+\}$. Let $W_+$ and $W_-$ be the two compressionbody components of $M\setminus \mc{H}$ that have copies of $H$ as their positive boundaries. Let $\mc{D}_\pm$ be a complete collections of discs for the compressionbodies $W_\pm$ containing $D_\pm$. The discs $\mc{D}_\pm \setminus D_\pm$ after an isotopy are a complete collection of discs for the components of $(M,T)\setminus \mc{K}$ adjacent to $H_\pm$ and not adjacent to $F$. An isotopy of the disc $D_\pm$ makes it into an sc-disc for $H_\mp$. Boundary-reducing the submanifold bounded by $H_\pm$ and $F$ using $D_\mp$ creates the union of product compressionbodies. Thus, $\mc{K} \in vp\mathbb{H}(M,T)$. \end{proof} To extend this operation to oriented multiple v.p.-bridge surfaces we simply give $H_-$ and $H_+$ the transverse orientations induced from $H$. We defer until Lemma \ref{Lem: niceness persists} the proof that if $\mc{H}$ is oriented, then so is $\mc{K}$. \subsection{Consolidation} Untelescoping usually creates product compressionbodies which need to be removed as in Scharlemann-Thompson thin position. In our situation though this process is complicated by the presence of the graph $T$. We call the operation ``consolidation.'' \begin{definition}\label{Consolidation} Suppose that $\mc{H}$ is an (oriented) multiple v.p.-bridge surface for $(M,T)$ and that $(P, T_P)$ is a product compressionbody component of $(M,T) \setminus \mc{H}$ which is adjacent to a component of $\mc{H}^-$ (and, therefore, not adjacent to a component of $\partial M$.) Let $\mc{K} = \mc{H} \setminus (\partial_- P \cup \partial_+ P)$. If $\mc{H}$ is oriented, give each component of $\mc{K}$ the induced orientation from $\mc{H}$. We say that $\mc{K}$ is obtained from $\mc{H}$ by \defn{consolidation} or by \defn{consolidating} $(P, T_P)$. (These terms were introduced in \cite{TT2}.) \end{definition} The next two lemmas verify that consolidation is a valid operation in $vp\mathbb{H}(M,T)$. See Figure \ref{Fig: combining indices} for a schematic depiction of the v.p.-compressionbodies in the following lemma. 
\begin{figure}[ht] \labellist \small\hair 2pt \pinlabel {$A$} at 198 194 \pinlabel {$P$} at 279 98 \pinlabel {$B$} at 279 36 \pinlabel {$C$} at 652 194 \pinlabel {$\partial_+ A$} [r] at 1 237 \pinlabel {$\partial_- A$} [r] at 42 131 \pinlabel {$\partial_+P = \partial_+ B$} [r] at 213 67 \endlabellist \includegraphics[scale=0.3]{FiguresV6/CombiningIndices} \caption{The v.p.-compressionbodies $A$, $B$, $P$, and $C$ in Lemma \ref{Lem: Consolidation preserves v.p.-comprbdy}.} \label{Fig: combining indices} \end{figure} \begin{lemma}\label{Lem: Consolidation preserves v.p.-comprbdy} Suppose that $(P,T_P)$ is a trivial product compression body and that $(A,T_A)$ and $(B, T_B)$ are v.p.-compressionbodies with interiors disjoint from each other and from the interior of $P$. Assume also that $\partial _- P \subset \partial_- A$ and $\partial_+B = \partial_+ P$. Let $(C,T) = (A, T_A) \cup (P, T_P) \cup (B, T_B)$ and assume that $T$ is properly embedded in $C$. Then $(C,T)$ is a v.p.-compressionbody. \end{lemma} \begin{proof} We can dually define a v.p.-compressionbody to be a 3-manifold containing a properly embedded 1-manifold obtained by taking a collection of trivial v.p.-compressionbodies and adding to their positive boundary some 1-handles and 1-handles containing a single piece of tangle as their core. With this dual definition, the lemma is obvious. \end{proof} \begin{lemma}\label{lem:consolidation} Suppose that $\mc{H}$ is an (oriented) multiple v.p.-bridge surface for $(M,T)$ and that $\mc{K}$ is obtained by consolidating a product region $(P,T_P)$ of $\mc{H}$. Then $\mc{K}$ is an (oriented) multiple v.p.-bridge surface for $(M,T)$. \end{lemma} \begin{proof} This follows immediately from Lemma \ref{Lem: Consolidation preserves v.p.-comprbdy} and the observation that any closed flow line for $\mc{K}$ could be isotoped to be a closed flow line for $\mc{H}$. \end{proof} \subsection{Elementary Thinning Sequences}\label{sec:elem thinning sequences} As mentioned before, untelescoping often produces product regions. These product regions, in general, are of two types -- they can be between a thin and thick surface neither of which existed before the untelescoping or they can be between a newly created thick surface and a thin surface (or a boundary component) that existed before the untelescoping operation. In fact, consolidating product regions of the first type can create additional product regions of the second type. The next definition specifies the order in which we will consolidate, before untelescoping further. \begin{definition}\label{def:oriented elem. thinning} Suppose that $\mc{H}$ is an sc-weakly reducible oriented multiple v.p.-bridge surface for $(M,T)$. Let $\mc{H}_1$ be obtained by untelescoping $\mc{H}$ using an sc-weak reducing pair. Let $\mc{H}_2$ be obtained by consolidating all trivial product compressionbodies of $\mc{H}_1\setminus\mc{H}$. There may now be trivial product compressionbodies in $M \setminus \mc{H}_2$. Let $\mc{H}_3$ be obtained by consolidating all those products. We say that $\mc{H}_3$ is obtained from $\mc{H}$ by an \defn{elementary thinning sequence}. \end{definition} See Figure \ref{Fig: thinning sequence} for a depiction of the creation of $\mc{H}_2$ from $\mc{H}$. 
\begin{center} \begin{figure}[tbh] \labellist \small\hair 2pt \pinlabel {$\mc{H}$} [bl] at 4 333 \pinlabel {$\mc{H}_1$} [bl] at 246 333 \pinlabel {$\mc{H}_2$} [bl] at 4 3 \pinlabel {$\mc{H}_3$} [bl] at 349 3 \endlabellist \includegraphics[scale=0.4]{FiguresV6/ThinningSequence} \caption{The surface $\mc{H}_2$ is created by untelescoping and consolidation. One or both of the compressionbodies $M \setminus \mc{H}_2$ shown in the figure may be product regions adjacent to $\mc{H}^-$. We consolidate those product regions to obtain $\mc{H}_3$.} \label{Fig: thinning sequence} \end{figure} \end{center} To understand the effect of an elementary thinning sequence, we examine the untelescoping operation a little more carefully. \begin{lemma}\label{lem:Sep weak reduction} Suppose that $H$ is a connected (oriented) v.p.-bridge surface and that $D_\uparrow$ and $D_\downarrow$ are an sc-weak reducing pair. Let $H_- \subset H_\downarrow$ and $H_+ \subset H_\uparrow$ be the new thick surfaces created by untelescoping $H$. Let $F$ be the thin surfaces. Then the following are equivalent for a component $\Phi$ of $F$: \begin{enumerate} \item $\Phi$ is not doubly spotted and is adjacent to a remnant of $D_\uparrow$ (or $D_\downarrow$, respectively). \item The disc $D_\uparrow$ (or $D_\downarrow$, respectively) is separating and $\Phi$ bounds a product region in $H_\uparrow$ (or $H_\downarrow$, respectively) with a component of $H_+$ (or $H_-$, respectively.) \end{enumerate} \end{lemma} \begin{proof} Suppose $\Phi$ is only adjacent to $D_\uparrow$. In this case $D_\uparrow$ must be separating as otherwise $\Phi$ would have two spots from $D_\uparrow$, and as $H$ is connected, it would also have to have a spot coming from $D_\downarrow$. Compressing $H$ along $D_\uparrow$ then results in two components. Let $H'$ be the component that doesn't contain $\boundary D_\downarrow$. Then $H'$ is not affected by compressing along $D_\downarrow$ to obtain $F$. Thus $H'$ is parallel to $\Phi$. Conversely if $\Phi$ is parallel to some component $H'$ of $H_+$ say, then $\Phi$ must be disjoint from the compressing disc $D_\downarrow$ and is therefore not double spotted. \end{proof} Using the notation from Definition \ref{def:oriented elem. thinning}, we have: \begin{lemma}\label{lem: Controlling Product Regions} Suppose that $\mc{H} \in vp\mathbb{H}(M,T)$ and that $(M,T) \setminus \mc{H}$ has no trivial product compressionbodies adjacent to $\mc{H}^-$. Let $\mc{H}_1$, $\mc{H}_2$, and $\mc{H}_3$ be the surfaces in an elementary thinning sequence beginning with the untelescoping of a component $H \subset \mc{H}^+$. Then the doubly spotted component of $\mc{H}_1$ persists into $\mc{H}_3$ and no component of $(M,T) \setminus \mc{H}_3$ is a trivial product compressionbody adjacent to $\mc{H}_3^-$. \end{lemma} \begin{proof} Let $H_-$ and $H_+$ be the thick surfaces resulting from untelescoping the thick surface $H \subset \mc{H}^+$ and let $F$ be the thin surface, with $F_0$ the doubly spotted component. Since $F$ is obtained by compressing using an sc-disc, $F$ is not parallel to either of $H_-$ or $H_+$. In creating $\mc{H}_2$ we remove all components of $F$ which are not doubly spotted (Lemma \ref{lem:Sep weak reduction}). The doubly spotted surface is not parallel to the remaining components of $H_-$ or $H_+$ since we can obtain it by an sc-compression of each of them. Thus, the doubly spotted component persists into $\mc{H}_2$. Let $H'_-$ and $H'_+$ be the components of $H_-$ and $H_+$ remaining in $\mc{H}_2$. 
If either of $H_-$ or $H_+$ bounds a trivial product compressionbody with $\mc{H}^-$, we create $\mc{H}_3$ by consolidating those trivial product compressionbodies. Suppose that a component $(W, T_W) \subset (M, T) \setminus \mc{H}_3$ contains $F_0 \subset \partial_- W$. Since $H_-$ and $H_+$ each had an sc-compression producing the doubly spotted component $F_0$, $(W, T_W)$ must contain an sc-disc for $\partial_+ W$. Consequently, $(W, T_W)$ is not a trivial product compressionbody. The result then follows from the assumption that no component of $(M,T) \setminus \mc{H}$ was a trivial product compressionbody adjacent to $\mc{H}^-$. \end{proof} \begin{corollary}\label{Lem: niceness persists} Suppose that $\mc{H}, \mc{K}$ are multiple v.p.-bridge surfaces for $(M,T)$ such that $M \setminus \mc{H}$ has no trivial product compressionbodies adjacent to $\mc{H}^-$. Assume that $\mc{K}$ is obtained from $\mc{H}$ using an elementary thinning sequence. Then the following are true: \begin{enumerate} \item $\mc{K}^- \neq \varnothing$; \item $\mc{K}$ has no trivial product compressionbodies disjoint from $\partial M$; \item If $\mc{H}$ is oriented, so is $\mc{K}$. \end{enumerate} \end{corollary} \begin{proof} By Lemma \ref{lem: Controlling Product Regions}, the doubly spotted component of $\mc{K}^- \setminus \mc{H}^-$ does not get consolidated during the elementary thinning sequence and $\mc{K}$ has no trivial product compressionbodies adjacent to $\mc{K}^-$. Suppose that $\mc{H}$ is oriented. We wish to show that $\mc{K}$ is oriented. We have described how to give transverse orientations to the components of $\mc{H}_1$ and these induce transverse orientations on $\mc{H}_2$ and $\mc{K}$. It follows immediately from the construction that the transverse orientations are coherent on the v.p.-compressionbodies. We need only show that we cannot create closed flow lines. Since consolidation does not create closed flow lines, it suffices to show that $\mc{H}_1$ does not have any closed flow lines. Suppose that $\alpha$ is a closed flow line for $\mc{H}_1$. It must intersect $H_\pm$. As we have noted before, the (possibly disconnected) surface $H_\pm$ is obtained from $H$ by compressing along an sc-disc $D_\pm$. We can recover $H$ from $H_\pm$ by tubing (possibly along an arc component of $T\setminus \mc{H}_1$). We can isotope $\alpha$ to be disjoint from the tube, at which point it becomes a closed flow line for $\mc{H}$, a contradiction. Thus, $\mc{H}_1$, $\mc{H}_2$, and $\mc{H}_3$ are all oriented. \end{proof} \section{Complexity}\label{sec:complexity} The theory of 3-manifolds is rife with various complexity functions on surfaces which guarantee certain processes terminate. In \cite{ST-thin}, Scharlemann and Thompson used a version of Euler characteristic as their measure of complexity to ensure that untelescoping (and consolidation) of Heegaard surfaces will eventually terminate. Since that foundational paper, similar complexities have been used by many authors, e.g., \cites{HS01, Johnson}. The requirement for a complexity is that it decreases under all possible types of compressions and any other moves that ``should'' simplify the decomposition. In our context, we need a complexity that decreases under destabilizing a generalized stabilization, unperturbing, undoing a removable arc, and applying an elementary thinning sequence. The next example demonstrates some of the difficulties that arise in our context.
\subsection{An example}\label{Ex: Spherical Untelescoping} Traditionally, thin position in the style of Scharlemann-Thompson \cite{ST-thin} is done only for irreducible 3-manifolds. However, the following example (see Figure \ref{Fig: UnteleSpheres}) shows that, at an informal level, it should be possible to define a thin position for reducible 3-manifolds. Let $P$ be the result of removing a regular neighborhood of two points from a 3-ball. Choose one component of $\partial P$ as $\partial_+ P$ and the other two as $\partial_- P$. Let $M$ be the result of gluing two copies of $P$ along $H = \partial_+ P$. Then there is a certain sense in which the splitting of $M$ can be untelescoped to a simpler splitting, but the new thick surfaces appear more complicated. Figure \ref{Fig: UnteleSpheres} shows the original Heegaard surface and another, ostensibly thinner, multiple Heegaard surface. The surface on the right can be obtained from the one on the left by thinning using semi-compressing discs. \begin{figure}[tbh] \centering \includegraphics[scale=0.4]{FiguresV6/UnteleSpheres} \caption{On the left in black is the original Heegaard surface and on the right is a multiple Heegaard surface which ``should'', by all rights, be thinner. The thick surfaces are in solid black and the thin surface is in dashed black. Below each figure is a schematic representation with the boundary components in blue, the thick surfaces in long black lines, and the thin surface in a short black line.} \label{Fig: UnteleSpheres} \end{figure} Although this example concerns a reducible manifold, we will run into similar problems when we have thin surfaces which are spheres twice-punctured by the graph $T$. If, in the example on the left, we add in a single ghost arc on each side of the Heegaard surface and four vertical arcs, one adjacent to each boundary component, we obtain an irreducible pair $(M,T)$ with a connected v.p.-bridge surface that can be thinned to the surface on the right using semi-cut discs. Observe that neither of the v.p.-compressionbodies in the example on the left contains a compressing disc or a cut disc and that neither is a trivial v.p.-compressionbody. \subsection{Index of v.p.-compressionbodies} We introduce the index of a v.p.-compressionbody (see below) as a first step in developing a useful complexity for oriented multiple v.p.-bridge surfaces. This index is a proxy for counting handles. The index of a compressionbody without an embedded graph was first defined by Scharlemann and Schultens \cite{Scharlemann-Schultens-JSJ}. \begin{definition} For a v.p.-compressionbody $(C, T_C)$ such that $T_C$ does not have interior vertices, define \[ \mu(C, T_C) = 3(-\chi(\partial_+ C) + \chi(\partial_- C)) + 2(|\partial_+ C \cap T_C| - |\partial_- C \cap T_C|)+6. \] If $T_C$ does have interior vertices, drill them out and then calculate $\mu$. For convenience, define $\mu(\varnothing) = 0$. \end{definition} \begin{remark} The +6 isn't strictly needed, but allows us to work with non-negative integers. \end{remark} Observe that $\mu(B^3, \varnothing) = 0$; $\mu(B^3, \text{ arc}) = 4$; and the index of any other trivial v.p.-compressionbody is 6. Since the Euler characteristic of a closed surface is even, the index is always even. The next lemma is proved by considering the effect of a $\partial$-reduction on $\mu$. \begin{lemma}\label{lem:compressing} Suppose that $(C, T_C)$ is a v.p.-compressionbody such that no component of $\partial_- C$ is a sphere intersecting $T_C$ exactly once.
If $D \subset (C, T_C)$ is an sc-disc for $\partial_+ C$ and if $(C_1, T_1)$ and $(C_2, T_2)$ are the result of $\partial$-reducing $(C, T_C)$ using $D$ (we allow $(C_2, T_2)$ to be empty) then \begin{equation}\label{eq: boundary red} \mu(C_1, T_1)+ \mu(C_2, T_2)= \mu(C, T_C) - 6 + 4|D \cap T_C|+6\delta \end{equation} where $\delta=1$ if $D$ is separating and 0 otherwise. Consequently, $\mu(C_1, T_1) < \mu(C, T_C)$. Furthermore, for any v.p.-compressionbody, $\mu(C, T_C) \geq 0$, with $\mu(C, T_C) = 0$ if and only if $(C, T_C)=(B^3, \varnothing)$; $\mu(C, T_C)=4$ if and only if $(C, T_C)=(B^3, \text{ arc})$; and in all other cases $\mu(C, T_C)\geq 6$. \end{lemma} \begin{proof} If $T_C$ has interior vertices, drill them out. Since $D$ is an sc-disc for $\partial_+ C$, we have $|D \cap T_C| \in \{0,1\}$. Let $\Delta$ be a complete collection of sc-discs containing $D$ and let $(C', T') = (C_1, T_1) \cup (C_2, T_2)$ be the result of $\partial$-reducing $(C, T_C)$ using $D$. We prove the lemma by induction on $|\Delta|$. Considering the effect of $\partial$-reduction on Euler characteristic and the number of punctures produces Equation \eqref{eq: boundary red}. Notice that $\Delta \setminus D$ is a complete collection of sc-discs for $(C', T')$. If $|\Delta| = 1$, then $(C', T')$ is the union of trivial v.p.-compressionbodies. If $\delta = 0$, then $(C, T_C)$ is either $(S^1 \times D^2, \varnothing)$ or $(S^1 \times D^2, \text{ core loop})$. In either case, $\mu(C, T_C) = 6$ and $\mu(C_1, T_1) = \mu(C', T')$ is either 0 or 4. Suppose $\delta = 1$. Since $D$ is an sc-disc, neither $(C_1, T_1)$ nor $(C_2, T_2)$ is $(B^3, \varnothing)$. Similarly, if $|D \cap T_C| = 1$, then neither can be $(B^3, \text{ arc})$. In particular, \[ \mu(C_1, T_1) = \mu(C, T_C) + 4|D \cap T_C| - \mu(C_2, T_2) < \mu(C, T_C). \] The proof of the inductive step is similar; we apply the inductive hypothesis to $(C_2, T_2)$ to conclude that $\mu(C_2, T_2) \geq 6$ when it is non-trivial. \end{proof} The next lemma considers the effect of consolidation on index. See Figure \ref{Fig: combining indices} for a diagram. \begin{lemma}\label{lem:trivialconsolidation} Suppose that $\mc{H}\in vp\mathbb{H}(M,T)$ is a multiple v.p.-bridge surface. Suppose $(A, T_A)$, $(P, T_P)$ and $(B, T_B)$ are v.p.-compressionbodies with $(P, T_P)$ a product v.p.-compressionbody such that $\boundary_-P \subset \boundary_- A$ and $\boundary_+B = \boundary_+P$. Let $C=A \cup P \cup B$ and $T = T_A \cup T_P \cup T_B$. Then $\mu(C, T)=\mu(A, T_A)+\mu(B, T_B)-6$. \end{lemma} \begin{proof} By the definition of product v.p.-compressionbody, $-\chi(\partial_+ P) = -\chi(\partial_- P)$ and $|\partial_+ P \cap T| = |\partial_- P \cap T|$. Let $\alpha = \partial_- A \setminus \partial_- P$. Recall that $\partial_+ C = \partial_+ A$ and $\partial_- C = \alpha \cup \partial_- B$. We have: \[\begin{array}{rcl} \mu(A, T_A) + \mu(B, T_B) &=& 3(-\chi(\partial_+ A) + \chi(\alpha)) + 2(|\partial_+ A \cap T| - |\alpha \cap T|) + 6 \\ && + 3\chi(\partial_- B) - 2|\partial_- B \cap T| + 6 \\ && + 3\chi(\partial_- P) - 2|\partial_- P \cap T| - 3\chi(\partial_+ B) + 2|\partial_+ B \cap T|\\ &=& \mu(C, T) + 6. \end{array} \] \end{proof} For a thick surface $H \subset \mc{H}^+$, let $\mu_\downarrow(H)=\mu(H_\downarrow)$ and $\mu_\uparrow(H)=\mu(H_\uparrow)$. We now define the oriented indices $I_\uparrow(H)$ and $I_\downarrow(H)$ of the thick surfaces $H \subset \mc{H}^+$. These will contribute to a complexity which decreases under all relevant moves.
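As a quick illustration of the formula for $\mu$ (this is only a verification of the values recorded after the definition), consider the trivial v.p.-compressionbodies. For $(B^3, \varnothing)$ we have $\partial_+ C = S^2$ and $\partial_- C = \varnothing$, so \[ \mu(B^3, \varnothing) = 3(-\chi(S^2)) + 2(0) + 6 = -6 + 0 + 6 = 0. \] For $(B^3, \text{ arc})$, the sphere $\partial_+ C$ meets the arc in its two endpoints, so \[ \mu(B^3, \text{ arc}) = 3(-\chi(S^2)) + 2(2) + 6 = -6 + 4 + 6 = 4. \] For a trivial product compressionbody, $\chi(\partial_+ C) = \chi(\partial_- C)$ and $|\partial_+ C \cap T_C| = |\partial_- C \cap T_C|$, so the first two terms cancel and $\mu = 6$.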
Informally, for each thick surface we calculate the sum of the number of ``handles'' which are immediately above some thick surface which is either $H$ or above $H$ and the number of ``handles'' which are immediately below some thick surface which is either equal to $H$ or below $H$. We place these numbers into a non-increasing sequence and compare the results lexicographically. Instead of working with ``handles'', however, we use the indices of v.p.-compressionbodies. \begin{definition} Let $\mc{H} \in \overrightarrow{\vpH}(M,T)$. Each v.p.-compressionbody $(C, T_C) \subset (M,T) \setminus \mc{H}$ is adjacent to a single thick surface $H \subset \mc{H}^+$. The transverse orientation on $H$ either points into or out of $C$. If it points into $C$, then $(C, T_C) = H_\uparrow$ and if it points out of $C$, then $(C, T_C) = H_\downarrow$. In the former case we say that $(C, T_C)$ is an \defn{upper} v.p.-compressionbody for $\mc{H}$ and we say it is a \defn{lower} v.p.-compressionbody in the latter case. Consider the set of flow lines beginning at $H$. A v.p.-compressionbody component (other than $H_\downarrow$) of $(M,T)\setminus \mc{H}$ intersecting one of these flow lines is said to be \defn{above} $H$. We say that a v.p.-compressionbody is \defn{below} $H$ if reversing the transverse orientation of $\mc{H}$ makes it above $H$. Define ${\mc{H}^H_\uparrow}$ to be the set of all upper compression bodies $J_\uparrow$ above $H$. Define ${\mc{H}^H_\downarrow}$ to be the set of all lower compression bodies $J_\downarrow$ below $H$. Since there are no closed flow lines, the sets $\mc{H}^H_\uparrow$ and $\mc{H}^H_\downarrow$ are disjoint. Define the \defn{upper index} and \defn{lower index} of $H$ to be (respectively): \[\begin{array}{rcl} I_{\uparrow}(H)&=& 6 - 6|\mc{H}^H_\uparrow| + \sum\limits_{J_\uparrow \in \mc{H}^H_\uparrow} \mu_\uparrow(J) \\ I_{\downarrow}(H)&=& 6 - 6| \mc{H}^H_\downarrow|+ \sum\limits_{K_\downarrow \in \mc{H}^H_\downarrow}\mu_\downarrow(K). \end{array} \] \end{definition} In Lemma \ref{lem:bounded below} below, we verify that both $I_\uparrow(H)$ and $I_\downarrow(H)$ are non-negative. To package the indices for thick surfaces into an invariant for $\mc{H}$, let $\overrightarrow{\mathbf{c}}(\mc{H})$, the \defn{oriented complexity} of $\mc{H}$, be the \emph{non-increasing} sequence whose terms are the quantities $I(H)=I_\uparrow(H) + I_\downarrow(H)$ for each thick surface $H \subset \mc{H}^+$. \subsubsection{Oriented complexity decreases under generalized destabilization, unperturbing, and undoing a removable arc} \begin{lemma}\label{lem:I decreases under gen destab} Assume that no component of $\partial M$ is a sphere intersecting $T$ in two or fewer points. Suppose that $\mc{K}$ is obtained from $\mc{H}$ by a generalized destabilization, unperturbing, or undoing a removable arc. Then $\overrightarrow{\mathbf{c}}(\mc{K}) < \overrightarrow{\mathbf{c}}(\mc{H})$. \end{lemma} \begin{proof} Let $H \subset \mc{H}^+$ be the thick surface to which we apply the generalized destabilization, unperturbing, or undoing a removable arc. Let $H'$ be the new thick surface, so that $\mc{K} = (\mc{H} \setminus H) \cup H'$. Suppose first that we are performing a destabilization or meridional destabilization. In this case, $-\chi(H') = -\chi(H) - 2$ and $|H'\cap T|$ is either $|H \cap T|$ or $|H \cap T| + 2$. Thus, $\mu_\downarrow(H') < \mu_\downarrow(H)$ and $\mu_\uparrow(H') < \mu_\uparrow(H)$. 
It follows that $I(H') < I(H)$ and for every thick surface $J \subset \mc{H}^+\setminus H$, the index $I(J)$ does not increase under the destabilization or meridional destabilization. The cases when we unperturb or undo a removable arc are very similar: we simply use the fact that $|H' \cap T| = |H \cap T| - 2$. Now suppose that we perform a $\partial$-destabilization, meridional $\partial$-destabilization, ghost $\partial$-destabilization, or meridional ghost $\partial$-destabilization. Since indices are calculated by drilling out the interior vertices of $T$, we may assume that there are none. In all these cases, there is a (possibly disconnected) closed subsurface $S \subset \partial M$ and a (possibly empty) subset $\Gamma$ of edges of $T$ each disjoint from $\mc{H}$. These are such that $H'$ is the result of compressing $H$ along a separating sc-disc $D$ and then discarding a component $H''$ which is the frontier of a regular neighborhood of $S \cup \Gamma$. In particular, the Euler characteristic of the discarded component is $\chi(S) - 2|\Gamma|$. We claim that $\mu_\downarrow(H') < \mu_\downarrow(H)$ and $\mu_\uparrow(H') < \mu_\uparrow(H)$. Let $p = |D \cap T|$. Observe that $|H' \cap T| = |H \cap T| - |S \cap T| + p$. (If $p = 1$, this follows from the fact that $D$ is separating.) Also note that $-\chi(H') = -\chi(H) + \chi(S) - 2|\Gamma| - 2$. Hence, \[ -3\chi(H') + 2|H' \cap T| = -3\chi(H) + 2|H \cap T| + 3 \chi(S) - 6|\Gamma| - 2|S \cap T| + 2p - 6. \] Without loss of generality, we may assume that $S \cup \Gamma \subset H_\uparrow$. The (meridional) (ghost) $\partial$-destabilization then moves $S \cup \Gamma$ to the lower compressionbody $H'_\downarrow$. That is, $S \cup \Gamma \subset H'_\downarrow$. In particular, $\partial_- H'_\uparrow = \partial_- H_\uparrow \setminus S$ and $\partial_- H'_\downarrow = \partial_- H_\downarrow \cup S$. Thus, we have: \[ \begin{array}{rcl} \mu_\uparrow(H') &=& \mu_\uparrow(H) + (3\chi(S) - 6|\Gamma| - 2|S \cap T| + 2p - 6) + (- 3\chi(S) + 2|S \cap T|) \\ \mu_\downarrow(H') &=& \mu_\downarrow(H) + (3\chi(S) - 6|\Gamma| - 2|S \cap T| + 2p - 6) + (3\chi(S) - 2|S \cap T|)\\ \end{array} \] The first term in parentheses in each equation comes from the change of $H$ to $H'$ and the second term comes from the movement of $S \cup \Gamma$ from $\partial_- H_\uparrow$ to $H'_\downarrow$. Simplifying, and using the fact that $2p \in \{0,2\}$, we obtain: \[ \begin{array}{rcl} \mu_\uparrow(H') &\leq & \mu_\uparrow(H) - 6|\Gamma| - 4 \\ \mu_\downarrow(H') &\leq& \mu_\downarrow(H) - 6|\Gamma| + 6 \chi(S) - 4|S \cap T| - 4 \\ \end{array} \] In particular, $\mu_\uparrow(H') < \mu_\uparrow(H)$. The situation for $\mu_\downarrow$ requires more analysis. Let $S_0 \subset S$ be the subset which is the union of all spherical components of $S$ and let $S_1 = S\setminus S_0$. We have \[ \mu_\downarrow(H') \leq \mu_\downarrow(H) - 6|\Gamma| + 12|S_0| - 4|S_0 \cap T| - 4 \] By assumption, each component of $S_0$ intersects $T$ at least three times, so $4|S_0 \cap T| \geq 12|S_0|$. Thus, $\mu_\downarrow(H') < \mu_\downarrow(H)$. Since $S \subset \partial M$ and is not part of $\mc{K}$, we can conclude that for each thick surface $J \subset \mc{H}\setminus H$, the indices $I_\uparrow(J)$ and $I_\downarrow(J)$ do not increase under the (meridional) (ghost) $\partial$-destabilization.
Furthermore, since $I_\uparrow(H') + I_\downarrow(H') < I_\uparrow(H) + I_\downarrow(H)$, we have \[ \overrightarrow{\mathbf{c}}(\mc{K}) < \overrightarrow{\mathbf{c}}(\mc{H}), \] as desired. \end{proof} \subsubsection{Oriented complexity decreases under consolidation} \begin{lemma}\label{lem:consolidation leaves I} Suppose that $\mc{K} \in \overrightarrow{\vpH}(M,T)$ is obtained from $\mc{H}\in \overrightarrow{\vpH}(M,T)$ by consolidating a thick surface $H \subset \mc{H}^+$ with a thin surface $Q \subset \mc{H}^-$. Then $\overrightarrow{\mathbf{c}}(\mc{K}) < \overrightarrow{\mathbf{c}}(\mc{H})$. \end{lemma} \begin{proof} Without loss of generality, we may suppose that $Q \subset \partial_- (H_\downarrow)$. (If not, reverse orientations so that above and below are interchanged.) That is, $H_\downarrow$ is the product compressionbody bounded by $H$ and $Q$. Let $C \neq H_\downarrow$ be the other v.p.-compressionbody such that $Q \subset \partial_- C$. Let $J \subset \mc{H}^+\setminus H$ be another thick surface. We will show that $I_\uparrow(J)$ and $I_\downarrow(J)$ are unchanged by the consolidation. The v.p.-compressionbodies of $(M,T) \setminus \mc{K}$ are obtained from those of $(M,T) \setminus \mc{H}$ by replacing $C$, $H_\downarrow$, and $H_\uparrow$ with their union. By Lemma \ref{lem:trivialconsolidation}, we have \begin{equation}\label{muconsolthin} \mu(C \cup H_\downarrow \cup H_\uparrow) = \mu(C) + \mu(H_\uparrow) - 6. \end{equation} The consolidation does not affect flow lines, and so if there is no flow line from $J$ to $H$ or from $H$ to $J$, then $I_\uparrow(J)$ and $I_\downarrow(J)$ are clearly unaffected. If there is a flow line from $J$ to $H$, then Equation \eqref{muconsolthin} implies that $I_\uparrow(J)$ decreases by 6. But we also have $|\mc{K}^J_\uparrow| = |\mc{H}^J_\uparrow| - 1$, and so $I_\uparrow(J)$ also increases by 6. Thus, $I_\uparrow(J)$ is unchanged by the consolidation. Clearly, $I_\downarrow(J)$ is also unchanged by the consolidation, because the consolidation happens above $J$. If there is a flow line from $H$ to $J$, then clearly $I_\uparrow(J)$ is unchanged by the consolidation. On the other hand, $I_\downarrow(J)$ decreases by 6 because we have removed $\mu(H_\downarrow)$ from the sum. However, $I_\downarrow(J)$ also increases by 6 since $|\mc{K}^J_\downarrow| = |\mc{H}^J_\downarrow| - 1$. Thus, $I_\downarrow(J)$ is also unchanged by the consolidation. Thus, $\overrightarrow{\mathbf{c}}(\mc{K})$ is simply obtained from $\overrightarrow{\mathbf{c}}(\mc{H})$ by removing the term $I_\uparrow(H) + I_\downarrow(H)$. Since the sequence was non-increasing, we have $\overrightarrow{\mathbf{c}}(\mc{K}) < \overrightarrow{\mathbf{c}}(\mc{H})$. \end{proof} \subsubsection{Oriented complexity decreases under an elementary thinning sequence} Suppose that $\mc{H}$, $\mc{H}_1$, $\mc{H}_2$, $\mc{H}_3 = \mc{K}$ are the multiple v.p.-bridge surfaces in an elementary thinning sequence obtained by untelescoping a thick surface $H \subset \mc{H}^+$ using sc-discs $D_-$ and $D_+$. As we've done before, let $H_-$ and $H_+$ be the new thick surfaces and $F$ the new thin surface. We will generally work with $\mc{H}_2$ and $\mc{H}_3$ (rather than $\mc{H}_1$) so $H_\pm$ is obtained from $H$ by compressing along $D_\pm$ and discarding a component if $\delta_\pm = 1$. The surface $F$ is then obtained from $H_\pm$ by compressing along $D_\mp$ and possibly discarding a component. \begin{lemma} \label{lem:mu goes down} The following hold for $H_-, H_+ \subset \mc{H}^+_2$. 
\begin{enumerate} \item $\mu_\downarrow(H_-) < \mu_\downarrow(H)$ \item $\mu_\uparrow(H_+)< \mu_\uparrow(H)$ \item $\mu_\downarrow(H_-) + \mu_\downarrow(H_+) = \mu_\downarrow(H)+6$ \item $\mu_\uparrow(H_-)+\mu_\uparrow(H_+) = \mu_\uparrow(H)+6$. \end{enumerate} \end{lemma} \begin{proof} Claims (1) and (2) follow immediately from Lemma \ref{lem:compressing}. Claim (4) can be obtained from the proof of Claim (3) by interchanging $+$ and $-$ and $\uparrow$ and $\downarrow$. We prove Claim (3). Let $p_- = |D_- \cap T|$. Let $\delta_- = 1$ if $\partial D_-$ separates $H$ and $\delta_- = 0$ otherwise. If $\delta_- = 1$, let $R_\downarrow$ be the v.p.-compressionbody such that $(H_-)_\downarrow \cup R_\downarrow$ is the result of $\partial$-reducing $H_\downarrow$ using $D_-$. See Figure \ref{fig:Rregion}. If $\delta_- = 0$, then let $R_\downarrow = \varnothing$ and recall that $\mu(R_\downarrow) = 0$, by convention. \begin{figure}[ht] \labellist \small\hair 2pt \pinlabel {$D_+$} [bl] at 52 111 \pinlabel {$D_-$} [tr] at 154 40 \pinlabel {$(H_-)_\downarrow$} at 429 28 \pinlabel {$R_\downarrow$} at 618 28 \pinlabel {$(H_+)_\downarrow$} at 618 96 \endlabellist \includegraphics[scale = 0.5]{FiguresV6/Rregion} \caption{The region between the thin surface and thick surface, both indicated with dashed lines, is consolidated in the passage from $\mc{H}_1$ to $\mc{H}_2$ when $\partial D_-$ separates $H$. The v.p.-compressionbody $R_\downarrow$ is then a subset of $(H_+)_\downarrow$. } \label{fig:Rregion} \end{figure} From the definition of index (see Lemma \ref{lem:compressing}), we have \[ \mu_\downarrow(H_-) + \mu(R_\downarrow) = \mu_\downarrow(H) - 6 + 4p_- + 6\delta_-. \] If $\delta_- = 0$, notice that $\mu_\downarrow(H_+) = 12 - 4p_-$, since a single compression creates $F$ from $H_+$. If $\delta_- = 1$, then before the consolidation that creates $\mc{H}_2$ from $\mc{H}_1$, the index of $(H_+)_\downarrow$ is again $12 - 4p_-$. The consolidation removes a surface parallel to $\partial_+ R_\downarrow$ from the negative boundary of the v.p.-compressionbody and replaces it with $\partial_- R_\downarrow$. Recalling the additional $(+6)$ term in the definition of $\mu$, we have, in either case, \[ \mu_\downarrow(H_+) = 12 - 4p_- + \mu(R_\downarrow) - 6\delta_-. \] Thus, \[ \mu_\downarrow(H_-) + \mu_\downarrow(H_+) = \mu_\downarrow(H) + 6. \] \end{proof} \begin{corollary}\label{corollary:I decreases} The following hold for $H_-, H_+ \subset \mc{H}^+_2$: \begin{enumerate} \item $I_\downarrow(H_-)<I_\downarrow(H)$ \item $I_\uparrow(H_-) = I_\uparrow(H)$ \item $I_\downarrow(H_+) = I_\downarrow(H)$ \item $I_\uparrow(H_+) < I_\uparrow(H)$ \end{enumerate} \end{corollary} \begin{proof} We prove (2) and (4). Conclusions (1) and (3) follow by reversing the orientation on $\mc{H}$. Observe that each flow line beginning at $H$ extends to a flow line beginning at $H_-$ and can be restricted to a flow line beginning at $H_+$. Thus, the set $(\mc{H}_2)^{H_-}_\uparrow$ is obtained from the set $\mc{H}^H_\uparrow$ by removing the v.p.-compressionbody $H_\uparrow$ and replacing it with the v.p.-compressionbodies $(H_-)_\uparrow$ and $(H_+)_\uparrow$. Observe that $|(\mc{H}_2)^{H_-}_\uparrow| = |\mc{H}^H_\uparrow| + 1$. Hence, using Lemma \ref{lem:mu goes down}, part (4), \[\begin{array}{rcl} I_\uparrow(H_-) &=& I_\uparrow(H) - \mu_\uparrow(H) + \mu_\uparrow(H_-) + \mu_\uparrow(H_+) - 6 \\ &=& I_\uparrow(H) + 6 - 6 \\ &=& I_\uparrow(H). \end{array} \] This proves Conclusion (2).
On the other hand, $(\mc{H}_2)^{H_+}_\uparrow$ is obtained from $\mc{H}^H_\uparrow$ by removing $H_\uparrow$ and replacing it with $(H_+)_\uparrow$. From Lemma \ref{lem:mu goes down}, part (2), we have $\mu_\uparrow(H_+) < \mu_\uparrow(H)$. Hence, $I_\uparrow(H_+) < I_\uparrow(H)$, proving Conclusion (4). \end{proof} \begin{corollary}\label{cor:elem thinning decreases I} If $\mc{K}$ is obtained from $\mc{H}$ by an elementary thinning sequence, then $\overrightarrow{\mathbf{c}}(\mc{K}) < \overrightarrow{\mathbf{c}}(\mc{H})$. \end{corollary} \begin{proof} Let $\mc{H}_1$, $\mc{H}_2$, and $\mc{H}_3 = \mc{K}$ be the multiple v.p.-bridge surfaces created during the elementary thinning sequence. Corollary \ref{corollary:I decreases} shows that $I(H_\pm) < I(H)$. If $J \subset \mc{H}^+ \setminus H$, then both $I_\uparrow(J)$ and $I_\downarrow(J)$ are unchanged in passing from $\mc{H}$ to $\mc{H}_2$. To see this, consider the possible locations of $J$. If $J$ is neither above nor below $H$, then $J$ is neither above nor below either of $H_-$ or $H_+$ and so $I(J)$ is unchanged. If $J$ is below $H$, then $J$ is below both $H_-$ and $H_+$, as any flow line from $J$ to $H$ extends to a flow line from $J$ to $H_+$ passing through $H_-$. In this case, passing from $\mc{H}$ to $\mc{H}_2$ increases the number of v.p.-compressionbodies above $J$ by 1 and does not change the number of v.p.-compressionbodies below $J$. In the calculation of $I_\uparrow(J)$, we replace $\mu_\uparrow(H)$ with $\mu_\uparrow(H_-) + \mu_\uparrow(H_+)$. By Lemma \ref{lem:mu goes down}, this increases the sum of the indices of the upper v.p.-compressionbodies above $J$ by 6. It does not change the sum of the indices of the lower v.p.-compressionbodies below $J$. Thus, $I(J)$ does not increase when passing from $\mc{H}$ to $\mc{H}_2$. The analysis when $J$ is above $H$ is nearly identical. We may conclude, therefore, that $\overrightarrow{\mathbf{c}}(\mc{H}_2) < \overrightarrow{\mathbf{c}}(\mc{H})$. Finally, either $\mc{H}_3 = \mc{H}_2$ or $\mc{H}_3$ is obtained from $\mc{H}_2$ by one or two consolidations. Thus, by Lemma \ref{lem:consolidation leaves I}: \[ \overrightarrow{\mathbf{c}}(\mc{H}_3) \leq \overrightarrow{\mathbf{c}}(\mc{H}_2) < \overrightarrow{\mathbf{c}}(\mc{H}), \] as desired. \end{proof} \subsection{Index is non-negative} The next lemma will help ensure that our oriented complexity guarantees that we cannot perform an infinite sequence of simplifying moves on an oriented multiple v.p.-bridge surface. \begin{lemma}\label{lem:bounded below} Suppose that no component of $\mc{H}^-$ is a sphere intersecting $T$ exactly once. Then for any thick surface $H \subset \mc{H}^+$, both $I_\uparrow(H)$ and $I_\downarrow(H)$ are non-negative. \end{lemma} \begin{proof} We prove the statement for $I_\uparrow(H)$; the proof of the statement for $I_\downarrow(H)$ is nearly identical. We may assume that $T$ has no interior vertices (drill them out if necessary). We will also work under the assumption that no v.p.-compressionbody of $(M,T)\setminus \mc{H}$ is a product adjacent to a component of $\mc{H}^-$. To see that we may do this, recall from the proof of Lemma \ref{lem:consolidation leaves I} that consolidation leaves $I_\uparrow(H)$ unchanged if $H$ is not consolidated. If $H$ is consolidated, and $H_\downarrow$ is the product region, then it is easy to see that either $I_\uparrow(H) = \mu(H_\uparrow) \geq 0$ or $I_\uparrow(H)$ is equal to $\mu(H_\uparrow) + I_\uparrow(J) \geq I_\uparrow(J)$ for some thick surface $J$ above $H$.
Finally, if $H_\uparrow$ is the product region, then there exists a thick surface $J$ above $H$ such that $\partial_- H_\uparrow \subset \partial_- J_\downarrow$. We may calculate $I_\uparrow(H)$ from $I_\uparrow(J)$ by subtracting 6 since $\mc{H}^H_\uparrow = \mc{H}^J_\uparrow \cup H_\uparrow$ and also adding 6 since $\mu(H_\uparrow) = 6$. Thus, if $I_\uparrow(J)$ is non-negative, so is $I_\uparrow(H)$. Henceforth, we assume that $(M,T)\setminus \mc{H}$ has no product regions adjacent to a component of $\mc{H}^-$. We can express the definition of $I_\uparrow(H)$ as: \[ I_{\uparrow}(H) = 6 + \sum_{J_\uparrow \in \mc{H}^H_\uparrow} (\mu(J_{\uparrow})- 6). \] By Lemma \ref{lem:compressing}, $\mu(J_{\uparrow}) \geq 6$ unless $J_{\uparrow }$ is $(B^3, \emptyset)$ or $(B^3, \text{ arc})$ in which cases $\mu(J_{\uparrow })=0$ and $\mu(J_{\uparrow}) =4$, respectively. Thus, if no element of $\mc{H}^H_\uparrow$ is a trivial ball compressionbody, then $I_\uparrow(H) \geq 0$. Assume, therefore, that at least one element of $\mc{H}^H_\uparrow$ is a trivial ball compressionbody. We induct on the number $N(H, \mc{H})$ of trivial ball compressionbodies in $\mc{H}^H_\uparrow$. If $H_\uparrow$ is $(B^3, \emptyset)$ or $(B^3, \text{ arc})$, then $|\mc{H}^H_\uparrow| = 1$ and $I_\uparrow(H) = \mu(H_\uparrow) \in \{0,4\}$, as desired. We may assume, therefore, that $|\mc{H}^H_\uparrow| \geq 2$. \begin{figure}[ht] \labellist \small\hair 2pt \pinlabel{$C_1$} at 142 195 \pinlabel{$C_2$} at 237 73 \pinlabel{$V$} at 147 147 \endlabellist \includegraphics[scale=0.5]{FiguresV6/UpperAdjacent} \caption{The v.p.-compressionbodies $C_1$ and $C_2$ are adjacent in $\mc{H}^H_\uparrow$ for $H = \partial_+ C_2$ or any thick surface $H$ below $\partial_+ C_2$. In this example, there is another possible choice for $C_2$.} \label{fig:upperadjacent} \end{figure} We will call v.p.-compressionbodies $C_1, C_2 \in \mc{H}^H_\uparrow$ \defn{adjacent in $\mc{H}^H_\uparrow$} if there is a v.p.-compressionbody $V$ such that $\partial_+C_1=\partial_+V$ and $\partial_-C_2\cap \partial_-V \neq \emptyset$ or vice versa. See Figure \ref{fig:upperadjacent} for an example. If $C_1 \in \mc{H}^H_\uparrow$ is a trivial ball compressionbody, then there must be a v.p.-compressionbody $C_2 \in \mc{H}^H_\uparrow$ adjacent in $\mc{H}^H_\uparrow$ to $C_1$, as there is a flow line from $H$ to $\partial_+ C_1 \neq H$. Observe that in such a situation, $C_2$ is not a trivial ball compressionbody (since $\partial_- C_2 \neq \varnothing$). Furthermore, if $C_1$ is a trivial ball compressionbody adjacent in $\mc{H}^H_\uparrow$ to $C_2$, with $V$ the lower compressionbody incident to both, then $\partial_+ V$ is a zero or twice-punctured sphere. Consequently, $\partial_- V$ is the union of spheres. Let $\Gamma$ be the graph with vertices the components of $\partial_- V$ and edges corresponding to the ghost arcs in $V$. Since $\partial_+ V$ is a sphere, $\Gamma$ is the union of isolated vertices and trees. Since no component of $\partial_- V$ is a once-punctured sphere, each component $P$ of $\partial_- V$ is a zero or twice-punctured sphere. In particular, if $C_1 \cap T = \varnothing$, then $P$ is unpunctured. Suppose now that $A \in \mc{H}^H_\uparrow$ is adjacent in $\mc{H}^H_\uparrow$ to a trivial ball compressionbody $C \in \mc{H}^H_\uparrow$. Choose a single component $P$ of $\partial_- A$ such that a flow line from $H$ to $\partial_+ C$ passes through $P$.
This implies that $P \subset \partial_- V$ where $V$ is the lower v.p.-compressionbody incident to both $A$ and $C$. By the remarks of the previous paragraph (with $C_1 = C$ and $C_2 = A$), $P$ is a zero or twice punctured sphere (as are all components of $\partial_- V$). Cut $(M, T)$ open along all components of $\partial_- V \setminus P $, turning those components into components of $\partial M$ which are zero or twice-punctured spheres. Let $\mc{H}'$ be the components of $\mc{H}$ which are not now components of $\partial M$. Observe that $\mc{H}'^+ = \mc{H}^+$ and that now every flow line from $H$ to $\partial_+ C$ must pass through $P$. We have not, however, changed $I_\uparrow(H)$ since any compressionbody of $(M,T)\setminus \mc{H}$ which was above $H$ is still a compressionbody of $(M,T)\setminus \mc{H}'$ above $H$ and we have not created any new v.p.-compressionbodies above $H$. We may have disconnected $M$; however, any v.p.-compressionbodies not in the component of $M$ containing $H$ were not above $H$ before the cut and we can ignore them for the purposes of the calculation. For convenience of notation, use $\mc{H}$ instead of $\mc{H}'$ and assume that every flow line from $H$ to $\partial_+ C$ must pass through $P$. Now cut open $M$ along $P$. This cuts $M$ into two components $M_1$ and $M_2$ with $M_1$ containing $H$ and $M_2$ containing $C$. Cap off the components of $\partial M_1$ and $\partial M_2$ corresponding to $P$ with $(B^3, \varnothing)$ or $(B^3, \text{ arc})$ corresponding to whether or not $P$ is a zero or twice punctured sphere. Let $(\wihat{M}_i, \wihat{T}_i)$ for $i = 1,2$ be these new (3-manifold, graph) pairs. Observe that $\wihat{\mc{H}} = \mc{H}\setminus (P \cup \partial_+ C)$ is a multiple v.p.-bridge surface for $(\wihat{M}_1, \wihat{T}_1)$. The only upper v.p.-compressionbody affected by this is $A$; we obtain a new v.p.-compressionbody $\wihat{A}$. If $\wihat{A}$ is a trivial ball compressionbody, then $P = \partial_- A$; $A$ contains no bridge arcs; and $\partial_+ A$ is a sphere. This is enough to guarantee that $A$ is a product compressionbody, contrary to hypothesis. Thus, with respect to $\wihat{\mc{H}}$ there is one fewer trivial ball compressionbody above $H$ than with respect to $\mc{H}$. Let $\wihat{I}$ be $I_\uparrow(H)$ with respect to $\wihat{\mc{H}}$ and let $I$ be $I_\uparrow(H)$ with respect to $\mc{H}$. By our inductive hypothesis, we have $\wihat{I} \geq 0$. We have \[ \mu(A) = \mu(\wihat{A}) + 6 - 2|P \cap T|. \] Thus, \[ I = \wihat{I} + 6 - 2|P \cap T| + (\mu(C) - 6) \geq \mu(C) - 2|P \cap T| \] Recalling that $\mu(C) \in \{0,4\}$ and $|P \cap T| \in \{0,2\}$, we need only realize that if $\mu(C) = 0$, then $|P \cap T| = 0$ to conclude that \[ I \geq 0. \] \end{proof} \begin{remark}\label{rmk:terminates} By Lemma \ref{lem:bounded below}, each term of $\overrightarrow{\mathbf{c}}(\mc{H})$ is non-negative. Thus, any sequence of multiple v.p.-bridge surfaces $\mc{H}$ with $\overrightarrow{\mathbf{c}}(\mc{H})$ strictly decreasing must terminate. \end{remark} \subsection{Extended thinning moves} In this section, we formalize the fact that oriented complexity forbids an infinite sequence of simplifying moves to an oriented multiple v.p.-compressionbody. 
\begin{definition} An oriented multiple v.p.-bridge surface $\mc{H}$ is \defn{reduced} if it does not contain a generalized stabilization, a perturbation, or a removable arc and if no component of $(M,T)\setminus \mc{H}$ is a trivial product compressionbody adjacent to a component of $\mc{H}^-$. \end{definition} \begin{definition} Suppose that $\mc{H}\in \overrightarrow{\vpH}(M,T)$ is reduced and that $T$ is irreducible. An \defn{extended thinning move} applied to $\mc{H}$ consists of the following steps in the following order: \begin{enumerate} \item Perform an elementary thinning sequence \item Destabilize, unperturb, and undo removable arcs until no generalized stabilizations, perturbations, or removable arcs remain \item Consolidate all components of $\mc{H}^-$ and $\mc{H}^+$ cobounding a trivial product compressionbody in $(M,T) \setminus \mc{H}$ \item Repeat (2) and (3) as much as necessary until $\mc{H}$ does not have a generalized stabilization, perturbation, or removable arc or product region adjacent to $\mc{H}^-$. \end{enumerate} \end{definition} \begin{remark}\label{Step 2 before Step 3} Corollary \ref{cor:elem thinning decreases I}, Lemma \ref{lem:I decreases under gen destab}, and Lemma \ref{lem:consolidation leaves I} show that each of the steps (1), (2), (3), if applied non-vacuously, strictly decrease oriented complexity. Thus, by Remark \ref{rmk:terminates} they can occur only finitely many times, until either we cannot (non-vacuously) perform any of the steps of an extended thinning move or until we have a multiple v.p.-bridge surface having a thin level which is a sphere intersecting $T$ exactly once. We have phrased the steps as we have in order to guarantee that if $\mc{H}$ is reduced, then an extended thinning move applied to $\mc{H}$ results in a reduced multiple v.p.-bridge surface. If $\mc{H} \in \overrightarrow{\vpH}(M,T)$ is not reduced, we may perform a sequence of consolidations, generalized destabilizations, unperturbings, and undoings of removable arcs to make it reduced. (Such a sequence is guaranteed to terminate because each of those operations strictly decreases oriented complexity.) \end{remark} \begin{definition} If $\mc{H}, \mc{K} \in \overrightarrow{\vpH}(M,T)$ then we write $\mc{H} \to \mc{K}$ if either of the following holds: \begin{itemize} \item $\mc{H}$ is reduced and $\mc{K}$ is obtained from $\mc{H}$ by an extended thinning move, or \item $\mc{H}$ is not reduced, $\mc{K}$ is reduced and $\mc{K}$ is obtained from $\mc{H}$ by a sequence of consolidations, generalized destabilizations, unperturbings, and undoing of removable arcs. \end{itemize} We then extend the definition of $\to$ so that it is a partial order on $\overrightarrow{\vpH}(M,T)$. In particular, if $\mc{H}$ is reduced, then $\mc{H} \to \mc{K}$ means that $\mc{K}$ is obtained from $\mc{H}$ by a (possibly empty) sequence of extended thinning moves. \end{definition} Recall that in a poset, a ``least element'' is an element $x$ with the property that no element is strictly less than $x$. In our context, we say that an element $\mc{K} \in \overrightarrow{\vpH}(M,T)$ is a \defn{least element} or \defn{locally thin} if it is reduced and if $\mc{K} \to \mc{K}'$ implies that $\mc{K} = \mc{K}'$. The following result follows immediately from our work above. The hypothesis that $T$ is irreducible guarantees that in a sequence of extended thinning moves we never have a thin surface which is a sphere intersecting $T$ exactly once. 
\begin{theorem}\label{partial order} Let $(M,T)$ be a (3-manifold, graph) pair with $T$ irreducible. Suppose that no component of $\partial M$ is a sphere intersecting $T$ two or fewer times. Then, for all $\mc{H} \in \overrightarrow{\vpH}(M, T)$ there is a least element (i.e. locally thin) $\mc{K} \in \overrightarrow{\vpH}(M,T)$ such that $\mc{H} \to \mc{K}$. \end{theorem} \section{Sweepouts}\label{sec: sweepouts} Sweepouts, as in most applications of thin position, are the key tool for finding disjoint compressing discs on two sides of a thick surface. In this section, we will use $X - Y$ to denote the set-theoretic complement of $Y$ in $X$, as opposed to $X\setminus Y$ which indicates the complement of an open regular neighborhood of $Y$ in $X$. \begin{definition} Suppose that $(C,T)$ is a v.p.-compressionbody and that $\Sigma \subset C$ is a trivalent graph embedded in $C$ such that the following hold: \begin{itemize} \item $(C,T)\setminus\Sigma$ is homeomorphic to $(\partial_+ C \times I, \text{vertical arcs})$ \item $\Sigma$ contains the ghost arcs of $T$ and no interior vertex of $\Sigma$ lies on a ghost arc \item Each boundary vertex of $\Sigma$ lies on $T$ or on $\partial_- C$. \item Any edge of $T$ which is not a ghost arc and which intersects $\Sigma$ is a bridge arc intersecting $\Sigma$ in a boundary vertex. \end{itemize} Then $\Sigma$ is a \defn{spine} for $(C,T)$. See Figure \ref{Fig: vpSpine} for an example. \end{definition} \begin{center} \begin{figure}[tbh] \includegraphics[scale=0.4]{FiguresV6/vpSpine} \caption{A spine for the v.p.-compressionbody from Figure \ref{Fig: vpcompressionbody} consists of the dashed blue graph together with the edges of $T$ that are disjoint from $\partial_+ C$.} \label{Fig: vpSpine} \end{figure} \end{center} Suppose that $H \in vp\mathbb{H}(M,T)$ is connected and that $\Sigma_\uparrow$ and $\Sigma_\downarrow$ are spines for $(H_\uparrow, T \cap H_\uparrow)$ and $(H_\downarrow, T \cap H_\downarrow)$. The manifold $M - (\Sigma_\uparrow \cup \Sigma_\downarrow)$ is homeomorphic to $H \times (-1,1)$ by a map taking $T - (\Sigma_\uparrow \cup \Sigma_\downarrow)$ to vertical edges. We may extend the homeomorphism to a map $h\co M \to [-1,1]$ taking $\Sigma_\downarrow$ to $-1$ and $\Sigma_\uparrow$ to $+1$. The map $h$ is called a \defn{sweepout} of $M$ by $H$. Note that for each $t \in (-1,1)$, $H_t = h^{-1}(t)$ is properly isotopic in $M \setminus T$ to $H \setminus T$, that $h^{-1}(-1) = \partial_- H_\downarrow \cup \Sigma_\downarrow$, and $h^{-1}(1) = \partial_- H_\uparrow \cup \Sigma_\uparrow$. If we perturb $h$ by a small isotopy, we also refer to the resulting map as a sweepout. \begin{theorem}\label{Thm: Sweepout} Let $(M,T)$ be a (3-manifold, graph) pair. Suppose that $F \subset (M,T)$ is an embedded surface and assume that $H \in vp\mathbb{H}(M,T)$ is connected and does not bound a trivial v.p.-compressionbody on either side. Then, $H$ can be isotoped transversally to $T$ such that after the isotopy $H$ and $F$ are transverse and one of the following holds: \begin{enumerate} \item\label{it: disjoint} $H \cap F = \varnothing$ \item\label{it: essential} $H \cap F \neq \varnothing$, every component of $H \cap F$ is essential in $F$ and no component of $H \cap F$ bounds an sc-disc for $H$. \item\label{it: sc-weakly red} $H$ is sc-weakly reducible. \end{enumerate} \end{theorem} \begin{remark} The essence of this argument can be found in many places.
It originates with Gabai's original thin position argument \cite{G3} and is adapted to the context of Heegaard splittings by Rubinstein and Scharlemann \cite{RubSch}. A version for graphs in $S^3$ plays a central role in \cite{GST}. \end{remark} \begin{proof} Let $h$ be a sweepout corresponding to $H$, as above. Perturb the map $h$ slightly so that $h|_F$ is Morse with critical points at distinct heights. Let \[ -1 = v_0 < v_1 < v_2 < \cdots < v_n = 1 \] be the critical values of $h|_F$. Let $I_i = (v_{i-1}, v_i)$. Label $I_i$ with $\downarrow$ (resp. $\uparrow$) if some component of $F \cap H_t$ bounds an sc-disc below (resp. above) $H_t$ for some $t \in I_i$. Observe that, by standard Morse theory, the label(s) on $I_i$ are independent of the choice of $t \in I_i$. \textbf{Case 1:} Some interval $I_i$ is without a label. Let $t \in I_i$. If $H_t \cap F = \varnothing$, then we are done, so suppose that $H_t \cap F \neq \varnothing$. Suppose that some component $\zeta \subset H_t \cap F$ is inessential in $F$. This means that $\zeta$ bounds an unpunctured or once-punctured disc in $F$. Without loss of generality, we may assume that $\zeta$ is innermost in $F$. Let $D \subset F$ be the disc or once-punctured disc it bounds. Since $\zeta$ does not bound an sc-disc for $H_t$, the disc $D$ is properly isotopic in $M\setminus T$, relative to $\partial D$, into $H_t$. Let $B \subset M$ be the 3-ball bounded by $D$ and the disc in $H_t$. By an isotopy supported in a regular neighborhood of $B$, we may isotope $H_t$ to eliminate $\zeta$ (and possibly some other inessential curves of $H_t \cap F$). Repeating this type of isotopy as many times as necessary, we may assume that no curve of $H_t \cap F$ is inessential in $F$. If $H_t \cap F = \varnothing$, we have the first conclusion. If $H_t \cap F \neq \varnothing$, then we have Conclusion \eqref{it: essential}. Suppose, therefore, that each $I_i$ has a label. \textbf{Case 2:} Some $I_i$ is labelled both $\uparrow$ and $\downarrow$. Since for each $t \in I_i$, $H_t$ is transverse to $F$, we have Conclusion \eqref{it: sc-weakly red}. \textbf{Case 3:} There is an $i$ so that $I_i$ is labelled $\downarrow$ and $I_{i+1}$ is labelled $\uparrow$, or vice versa. The labels cannot change from $I_i$ to $I_{i+1}$ at any tangency other than a saddle tangency. Let $\epsilon > 0$ be smaller than the lengths of the intervals $I_i$ and $I_{i+1}$. Since $H_t$ is orientable, under the projections of $H_{v_i - \epsilon}$ and $H_{v_i + \epsilon}$ to $H$, the 1-manifold $H_{v_i - \epsilon} \cap F$ can be isotoped to be disjoint from $H_{v_i + \epsilon} \cap F$. Since some component of the former set bounds an sc-disc on one side of $H$ and some component of the latter set bounds an sc-disc on the other side of $H$, we have Conclusion \eqref{it: sc-weakly red} again. \textbf{Case 4:} For every $i$, $I_i$ is labelled $\downarrow$ and not $\uparrow$ or for every $i$, $I_i$ is labelled $\uparrow$ and not $\downarrow$. Without loss of generality, assume that each $I_i$ is labelled $\uparrow$ and not $\downarrow$. In particular, $I_1$ is labelled $\uparrow$ and not $\downarrow$. Fix $t \in I_1$ and consider $H_t$. Since $H$ does not bound a trivial v.p.-compressionbody to either side, the spine for $(H_\downarrow, T \cap H_\downarrow)$ has an edge $e$. Since $I_1$ is below the lowest critical point for $h|_F$, the components of $F \cap (H_t)_\downarrow$ intersecting $e$ are a regular neighborhood in $F$ of $F \cap e$.
Let $D_\downarrow$ be a meridian disc for $e$ with boundary in $H_t$ and which is disjoint from $F \cap (H_t)_\downarrow$. Since $I_1$ is labelled $\uparrow$, there is a component $\zeta \subset H_t \cap F$ such that $\zeta$ bounds an sc-disc $D_\uparrow$ for $H_t$ in $(H_t)_\uparrow$. The pair $\{D_\uparrow, D_\downarrow\}$ is then a weak reducing pair for $H_t$, giving Conclusion \eqref{it: sc-weakly red}. \end{proof} \begin{remark} Observe that in Conclusion (3), we can only conclude that $H$ is sc-weakly reducible -- not that $H$ is c-weakly reducible. This arises in Case 4 of the proof, when we use an edge of the spine to produce an sc-disc. This is one reason for allowing semi-compressing and semi-cut discs in weak reducing pairs. \end{remark} \begin{corollary}\label{Vertical discs} Suppose that $H \in vp\mathbb{H}(M,T)$ is connected and sc-strongly irreducible. If a component $S$ of $\partial M$ is c-compressible, then the component of $(M,T) \setminus H$ containing $S$ is a trivial product compressionbody. \end{corollary} \begin{proof} Let $F \subset (M,T)$ be a c-disc for $S$. Let $(C, T_C)$ and $(E, T_E)$ be the components of $(M,T)\setminus H$, with $S \subset \partial_- C$. If $(E, T_E)$ is a trivial compressionbody, we may isotope $F$ out of $E$ to be contained in $C$. This contradicts the fact that $\partial_- C$ is c-incompressible in $C$. Hence, $(E, T_E)$ is not a trivial product compressionbody. Since $S \subset \partial_- C$ is c-compressible, it either has positive genus or intersects $T$ at least 3 times. In particular, $(C, T_C)$ is not a trivial ball compressionbody. Suppose, for a contradiction, that $(C, T_C)$ is not a trivial product compressionbody. Then by Theorem \ref{Thm: Sweepout} $H$ can be isotoped transversally to $T$ such that after the isotopy one of the following holds: \begin{enumerate} \item $H \cap F = \varnothing$ \item $H \cap F \neq \varnothing$, every component of $H \cap F$ is essential in $F$ and no component of $H \cap F$ bounds an sc-disc for $H$. \end{enumerate} Since $\partial_- C$ is c-incompressible in $C$, by Lemma \ref{Lem: Invariance}, the first conclusion cannot hold. Since no curve in a disc or once-punctured disc is essential, the second conclusion is also impossible. Thus, $(C, T_C)$ is a trivial product compressionbody. \end{proof} \begin{theorem}[Properties of locally thin surfaces]\label{Properties Locally Thin} Suppose that $(M,T)$ is a (3-manifold, graph) pair, with $T$ irreducible. Let $\mc{H} \in \overrightarrow{\vpH}(M,T)$ be locally thin. Then the following hold: \begin{enumerate} \item $\mc{H}$ is reduced \item Each component of $\mc{H}^+$ is sc-strongly irreducible in the complement of $\mc{H}^-$. \item No component of $(M,T) \setminus \mc{H}$ is a trivial product compressionbody between $\mc{H}^-$ and $\mc{H}^+$. \item Every component of $\mc{H}^-$ is c-essential in $(M,T)$. \item If $(M,T)$ is irreducible and if $\mc{H}$ contains a 2-sphere disjoint from $T$, then $T = \varnothing$ and $M = S^3$ or $M = B^3$. \end{enumerate} \end{theorem} \begin{proof} Without loss of generality, we may assume that $T$ has no vertices (drilling them out to turn them into components of $\partial M$ if necessary). Conclusions (1) and (3) are immediate from the definition of locally thin. If some component of $\mc{H}^+$ is sc-weakly reducible in $(M,T)\setminus \mc{H}^-$, then, since $T$ is irreducible, we could perform an elementary thinning sequence, contradicting the definition of locally thin. Thus, (2) also holds. 
Next we show that each component of $\mc{H}^-$ is c-incompressible. Suppose, therefore, that $S \subset \mc{H}^-$ is a thin surface. We first show that $S$ is c-incompressible and then that it is not $\partial$-parallel. Suppose that $S$ is c-compressible by a c-disc $D$. By an innermost disc argument, we may assume that no curve of $D \cap (\mc{H}^-\setminus S)$ is an essential curve in $\mc{H}^-$. By passing to an innermost disc, we may also assume that $D \cap (\mc{H}^-\setminus S) = \varnothing$. Let $(M_0, T_0)$ be the component of $(M,T) \setminus \mc{H}^-$ containing $D$. Let $H = \mc{H}^+ \cap M_0$ and recall that $H$ is connected. By Corollary \ref{Vertical discs} applied to $H$ in $(M_0, T_0)$, the v.p.-compressionbody between $S$ and $H$ is a trivial product compressionbody. This contradicts property (3) of locally thin multiple v.p.-bridge surfaces. Thus, each component of $\mc{H}^-$ is c-incompressible. We now show that no sphere component of $\mc{H}^-$ bounds a 3-ball in $M \setminus T$. Suppose that $S \subset \mc{H}^-$ is such a sphere and let $B \subset M\setminus T$ be the 3-ball it bounds. By passing to an innermost such sphere, we may assume that no component of $\mc{H}^-$ in the interior of $B$ is a 2-sphere. If there is a component of $\mc{H}^-$ in the interior of $B$, that component would be compressible, a contradiction. Thus the intersection $H$ of $\mc{H}$ with the interior of $B$ is a component of $\mc{H}^+$. The surface $H$ is a Heegaard splitting of $B$. If $H$ is a sphere, it is parallel to $S$, contradicting (3). If $H$ is not a sphere, then by \cite{Wald} it is stabilized, contradicting (1). Thus, each component of $\mc{H}^-$ is c-incompressible in $(M,T)$ and not a sphere bounding a 3-ball in $M\setminus T$. In particular, if $(M,T)$ is irreducible, no component of $\mc{H}^-$ is a sphere disjoint from $T$. We now show that no component of $\mc{H}^-$ is $\partial$-parallel. Since $T$ may be a graph and not simply a link, this does not follow immediately from our previous work. Suppose, to obtain a contradiction, that a component $F$ of $\mc{H}^-$ is boundary-parallel in the exterior of $T$. An analysis (which we provide momentarily) of the proof of \cite[Theorem 9.3]{TT2} shows that $\mc{H}$ either has a perturbation or a generalized stabilization or is removable. We elaborate on this: As in \cite[Lemma 3.3]{TT2}, since all components of $\mc{H}^-$ are c-incompressible, we may assume that the product region $W$ between $F$ and $\partial (M\setminus T)$ has interior disjoint from $\mc{H}^-$. (That is, $F$ is innermost.) Observe that $W$ is a compressionbody with $F = \partial_+ W$ and the component $H = \mc{H}^+ \cap W$ is a v.p.-bridge surface for $(W, T \cap W)$. If there is a component of $T \cap W$ with both endpoints on $F$, then $F$ must be a 2-sphere and $W$ is a 3-ball with $T \cap W$ a $\partial$-parallel arc. In this case, if $T \cap W$ is disjoint from $H$, then $H$ is a Heegaard surface for the solid torus obtained by drilling out the arc $T \cap W$. Since $H$ is not stabilized, it must then be a torus. In particular, since $W$ is a 3-ball, this implies that $H$ is meridionally stabilized, a contradiction. Thus, in particular, if $T \cap W$ has a component with both endpoints on $F$, then $H$ intersects each component of $T \cap W$. We now perform a trick to guarantee that this is also the case when $T \cap W$ does not have a component with both endpoints on $F$.
Let $(\ob{W}, \ob{T})$ be the result of removing from $(W, T \cap W)$ an open regular neighborhood of all edges of $T \cap W$ which are disjoint from $H$. (By our previous remark, all such edges have both endpoints on $\partial W \setminus F$.) Then $\ob{T}$ is a 1--manifold properly embedded in $\ob{W}$ with no edge disjoint from $H$. If $\ob{T}$ has at least one edge, then by \cite[Theorem 3.5]{TT2} (which is a strengthening of \cite[Theorem 3.1]{TT1}) one of the following occurs: \begin{enumerate} \item[(i)] $H \in vp\mathbb{H}(\ob{W}, \ob{T})$ is stabilized, boundary-stabilized along $\partial \ob{W} \setminus F$, perturbed, or removable. \item[(ii)] $H$ is parallel to $F$ by an isotopy transverse to $T$. \end{enumerate} Consider possibility (i). If $H \in vp\mathbb{H}(\ob{W}, \ob{T})$ is stabilized, boundary-stabilized along $\partial \ob{W}$, or removable, then $H \in vp\mathbb{H}(W, T)$ would have a generalized stabilization or be removable with removing discs disjoint from the vertices of $T$, an impossibility. If $H \in vp\mathbb{H}(\ob{W}, \ob{T})$ is perturbed, $H \in vp\mathbb{H}(W, T)$ has a perturbation (since $\ob{T}$ is a 1-manifold), also an impossibility. Thus, (i) does not occur. Possibility (ii) does not occur since none of the v.p.-compressionbodies of $(M,T) \setminus \mc{H}$ are trivial product compressionbodies adjacent to $\mc{H}^-$. We may assume, therefore, that $\ob{T}$ has no edges. Then by \cite{ST-class products}, $H$ is either parallel to $F$ by an isotopy transverse to $T$ or is boundary-stabilized along $\partial \ob{W}$. The former situation contradicts the assumption that $(M,T)\setminus \mc{H}$ contains no trivial product compressionbody adjacent to $F \subset \mc{H}^-$. In the latter situation, since $H$ is boundary-stabilized in $\ob{W}$, it has a generalized stabilization as a surface in $\overrightarrow{\vpH}(W,T)$, contradicting the assumption that $\mc{H}$ is reduced. Thus, once again, $F$ is not boundary-parallel in $M \setminus T$. We have shown, therefore, that no component of $\mc{H}^-$ is $\partial$-parallel and, thus, that each component of $\mc{H}^-$ is c-essential in $(M,T)$. It remains to show that if $(M,T)$ is irreducible and if some component of $\mc{H}$ is a sphere disjoint from $T$, then $T = \varnothing$ and $M = B^3$ or $M = S^3$. Assume that $(M,T)$ is irreducible. We have already remarked that since each component of $\mc{H}^-$ is c-essential, no component of $\mc{H}^-$ is a sphere disjoint from $T$. We now show that no component $H$ of $\mc{H}^+$ is a sphere disjoint from $T$, unless $T = \varnothing$ and $M$ is $S^3$ or $B^3$. Suppose that there is such a component $H \subset \mc{H}^+$. Let $(C, T_C)$ and $(D, T_D)$ be the v.p.-compressionbodies on either side of $H$. By the definition of v.p.-compressionbody, the surfaces $\partial_- C$ and $\partial_- D$ are unions of spheres and $T_C$ and $T_D$ are unions of ghost arcs. Consider the graphs $\Gamma_C = \partial_- C \cup T_C$ and $\Gamma_D = \partial_- D \cup T_D$ (thinking of the components of $\partial_- C$ and $\partial_- D$ as vertices of the graphs). By the definition of v.p.-compressionbody, since $H$ is a sphere disjoint from $T$, the graphs $\Gamma_C$ and $\Gamma_D$ are unions of trees. If either $\Gamma_C$ or $\Gamma_D$ has an edge, then a leaf of $\Gamma_C$ or $\Gamma_D$ is a sphere intersecting $T$ exactly once. This contradicts the irreducibility of $(M,T)$. Consequently, both $T_C$ and $T_D$ are empty.
Since no spherical component of $\mc{H}^-$ is disjoint from $T$, this implies that $\partial_- C \cup \partial_- D$ is a subset of $\partial M$. Since $M\setminus T$ is irreducible, this implies that $\partial_- C \cup \partial_- D$ is either empty or a single sphere. Consequently, $M$ is either $S^3$ or $B^3$ and $T = \varnothing$. \end{proof} \section{Decomposing spheres}\label{thin decomp spheres} The goal of this section is to show that if we have a bridge surface for a composite knot or graph, we can untelescope it so that a summing sphere shows up as a thin level. We start with a simple observation (likely well-known) that compressing essential twice- and thrice-punctured spheres results in a component which is still essential. The proof is straightforward and similar to that of Lemma \ref{spheres in v.p.-compressionbodies}, so we leave it to the reader. \begin{lemma}\label{lem:Always essential} Assume that $(M,T)$ is a (3-manifold, graph) pair with $T$ irreducible. Suppose that $P \subset (M,T)$ is an essential sphere with $|P \cap T| \leq 3$. Let $P'$ be the result of compressing $P$ along an sc-disc $D$. Then at least one component of $P' \subset (M,T)$ is an essential sphere intersecting $T$ at most 3 times. \end{lemma} \begin{theorem}\label{thm:They are thin levels} Suppose that $(M,T)$ is a (3-manifold, graph) pair with $T$ irreducible. Suppose that there is an essential sphere $P \subset (M,T)$ such that $|P \cap T| \leq 3$. If $\mc{H} \in \overrightarrow{\vpH}(M,T)$ is locally thin, then some component $F$ of $\mc{H}^-$ is an essential sphere with $|F \cap T| \leq 3$. Furthermore, $|F \cap T| \leq |P \cap T|$ and $F$ can be obtained from $P$ by a sequence of compressions using sc-discs. \end{theorem} \begin{proof} Let $P \subset (M,T)$ be an essential sphere such that $|P \cap T| \leq 3$. As in the proof of Lemmas \ref{spheres in v.p.-compressionbodies} and \ref{lem:Always essential}, since $T$ is irreducible, if $P_0$ is a sphere resulting from an sc-compression of $P$, then $|P_0 \cap T| \leq |P \cap T|$. Without loss of generality, we may assume that the given $P$ was chosen so that no sequence of isotopies and sc-compressions reduces $|P \cap \mc{H}^-|$. The intersection $P \cap \mc{H}^-$ consists of a (possibly empty) collection of circles. We show it is, in fact, empty. Suppose, for a contradiction, that $\gamma$ is a component of $P \cap \mc{H}^-$. Without loss of generality, we may suppose it is innermost on $P$. Let $D \subset P$ be the unpunctured disc or once-punctured disc which it bounds. Since $\mc{H}^-$ is c-incompressible, $D$ cannot be a c-disc for $\mc{H}^-$, so $\gamma$ must bound a zero or once-punctured disc in $\mc{H}^-$. Thus, if $|P \cap \mc{H}^-| \neq 0$, then there is a component of the intersection which is inessential in $\mc{H}^-$. Let $\zeta \subset P \cap \mc{H}^-$ be a component which is inessential in $\mc{H}^-$ and which, out of all such curves, is innermost in $\mc{H}^-$. Let $E \subset \mc{H}^-$ be the unpunctured or once-punctured disc it bounds. Observe that $\zeta$ also bounds a zero or once-punctured disc on $P$. If $E$ is not an sc-disc for $P$, then we can isotope $P$ to reduce $|P \cap \mc{H}^-|$, contradicting our choice of $P$. Thus, $E$ is an sc-disc. By Lemma \ref{lem:Always essential}, compressing $P$ along $E$ creates two spheres, at least one of which intersects $T$ no more than 3 times and is essential in the exterior of $T$. Since this component intersects $\mc{H}^-$ fewer times than does $P$, we have contradicted our choice of $P$.
Hence $P \cap \mc{H}^- = \varnothing$. We now consider intersections between $P$ and $\mc{H}^+$. Since $P$ is disjoint from $\mc{H}^-$, we may apply Theorem \ref{Thm: Sweepout} to the component $(W, T_W)$ of $(M,T)\setminus \mc{H}^-$ containing $P$. We apply the theorem with $H = \mc{H}^+ \cap W$ and $F = P$. If some component of $\mc{H}^-$ is a once-punctured sphere, we are done, so assume that no component of $\mc{H}^-$ is a once-punctured sphere. By cutting open along $\mc{H}^-$ and replacing $(M,T)$ with the component containing $P$, we may assume that $\mc{H}^- = \varnothing$ and that $H = \mc{H}$ is connected. Apply Theorem \ref{Thm: Sweepout} to $P$ (in place of $F$) to see that we can isotope $H$ transversally to $T$ in $M \setminus \mc{H}^-$ so that one of the following occurs: \begin{enumerate} \item $H \cap P = \varnothing$. \item $H \cap P$ is a non-empty collection of curves, each of which is essential in $P$. \item $H$ is sc-weakly reducible. \end{enumerate} Since $\mc{H}$ is locally thin in $\overrightarrow{\vpH}(M,T)$, (3) does not occur. Since $P$ contains no essential curves, (2) does not occur. Thus, $H \cap P = \varnothing$. Let $(C, T_C)$ be the component of $M \setminus \mc{H}$ containing $P$. By Lemma \ref{spheres in v.p.-compressionbodies}, after some sc-compressions, $P$ is parallel to a component of $\mc{H}^- \cup \partial M$. By Lemma \ref{lem:Always essential}, $P$ is parallel to a component of $\mc{H}^-$ and we are done. \end{proof} \begin{bibdiv} \begin{biblist} \bibselect{AdditiveInvariantsBib2} \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} The rise of two-dimensional materials and a subsequent avalanche of studies \cite{goldberg2010, katsnelson2007, geim2007} have led to significant advances in theory and experiments. With this, the Dirac equation has found happy applications in electronic transport \cite{dassarma2011}, photonic structures \cite{haldane2008, raghu2008} and, recently, ultracold matter in optical lattices \cite{uehlinger2013}. The crossover between crystalline structures and relativistic quantum mechanics compels us to analyze these systems from different angles. In this paper we are interested in discrete symmetries, whose implications in elementary particle physics have been clearly established and -- at the frontiers of our knowledge -- occasionally tested \cite{ellis1994, maiani1995, abouzaid2011, beringer2012}. Our tasks imply a revision of dimensionality and its consequences. The $2+1$ dimensional Dirac equation shares many features with the usual $3+1$ dimensional case, but there are also differences that manifest themselves in discrete transformations and the nature of chiral symmetries. In a more general framework, we should point out that nearest-neighbor tight-binding models allow exact solutions, and that their formulation goes beyond the Dirac approximation. Therefore, this is an excellent opportunity to discuss discrete symmetries in a more general setting. As a bonus, we shall see that a symmetry breaking analogous to CPT violation may occur beyond effective Dirac theories. We present our discussion in the following order: In section II we provide the concepts that explain the appearance of discrete transformations as members of the Lorentz group. We also review the origin of the Dirac equation and show how its spinorial dimensionality is related to space-time dimensionality. In section III we focus on parity, analyzing both $3+1$ and $2+1$ dimensional cases. Section IV is devoted to effective Dirac theories; in this section we study the effects of parity on hexagonal lattices and suggest a symmetry breaking of full tight-binding models. We conclude in section V. \section{Preliminary concepts} \subsection{The sheets of the Lorentz group} \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=4cm]{hyperboloid.jpg} \end{tabular} \caption{\label{fig:-1} Disconnected sheets of the time-like hyperboloid $V_{\mu}V^{\mu}=$ constant in $\v M_{2+1}$.} \end{figure} It was Einstein's discovery \cite{einstein1905} that the invariance of Maxwell's equations found by Lorentz should be imposed also on field sources and particles, giving rise to a structure of space-time sustained by a metric $g=\mbox{diag}\left\{ +1,-1,-1,-1 \right\}$. This is the Minkowski space denoted by $\v M_{3+1}$. Elementary textbooks on particle physics postulate the invariance of four-vector norms under Lorentz transformations in any physical theory, and we proceed in the same manner. We denote a vector that transforms linearly under the Lorentz group as $V_{\mu}$, $\mu=0,1,2,3$ and its contravariant vector as $V^{\mu}=g^{\mu \nu}V_{\nu}$ (summation over repeated indices) such that \begin{eqnarray} V_{\mu}V^{\mu} = V_0^2 -V_1^2-V_2^2-V_3^2 \label{0.1} \end{eqnarray} is an invariant. $V_0$ is the component along the axis of time, and the sign of (\ref{0.1}) determines whether the invariant is time-like ($>0$), space-like ($<0$) or light-like ($=0$).
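For instance, taking the hypothetical four-vectors $V^{\mu}=(2,1,0,0)$, $(1,2,0,0)$ and $(1,1,0,0)$, the invariant (\ref{0.1}) takes the respective values
\begin{eqnarray}
4-1 = 3 > 0, \qquad 1-4 = -3 < 0, \qquad 1-1 = 0, \nonumber
\end{eqnarray}
i.e. these three vectors are time-like, space-like and light-like.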
The Lorentz transformations are $4\times 4$ matrices $\Lambda$ with the property \begin{eqnarray} \Lambda_{\mu \sigma}\Lambda_{\nu \tau} g^{\sigma \tau} = g_{\mu \nu}, \quad V_{\mu}V^{\mu} = V_{\sigma} V^{\tau} \Lambda^{\sigma \mu} \Lambda_{\tau \mu}. \label{0.2} \end{eqnarray} The set of all such matrices forms a six-dimensional abstract surface that has four disconnected components. It is traditionally denoted by O$(1,3)$ (orthogonal group with signature $\left\{+,-,-,- \right\}$). The most familiar subset of this group consists of the matrices with positive determinant and is denoted by SO$(1,3)$ (special). Using the continuity of the determinant as a function of matrices, we conclude that the components of the group SO$(1,3)$ and O$(1,3)\backslash$SO$(1,3)$ must be disconnected. Each of these two classes also contains two disconnected components, if we recognize that the invariant relation (\ref{0.1}) represents separate sheets of a hyperboloid in space-time, see fig. \ref{fig:-1}. From here it follows that Lorentz transformations cannot map events continuously from one sheet of the hyperboloid (positive time) to the other (negative time). The transformations that preserve the arrow of time are called {\it orthochronous,\ }denoted by SO$^{+}(1,3)$, which is a continuous group by itself. SO$^{+}(1,3)$ contains the identity matrix, together with all the transformations of the form \begin{eqnarray} \Lambda = \exp \left( i J_{\mu\nu} \theta^{\mu\nu} \right), \label{0.3} \end{eqnarray} where $J_{\mu\nu}$ are the infinitesimal generators of rotations and boosts. The generators $J_{ij}=-J_{ji}$ are true rotations in the plane $x_i$-$x_j$ if $i,j=1,2,3$ while $J_{0i}=-J_{i0}\neq J_{0i}^{\dagger}$ generate the boosts. The six parameters of a transformation are given by the antisymmetric tensor of 'angles' $\theta^{\mu\nu}$. The reader may consult \cite{greiner1994, barut1986, georgi1999} for a discussion of the Lie bracket related to this group and others. \begingroup \begin{table}[b] \caption{\label{tab:table0}% The disconnected components of O$(1,3)$ and O$(1,2)$ } \begin{ruledtabular} \begin{tabular}{l|c|c} \textrm{ -- } & \textrm{Det $=+1$}& \textrm{Det $=-1$} \\ \colrule \textrm{Orthochronous} & SO$^+$ & \textrm{P $\cdot$ SO$^+$} \\ \textrm{Non-orthochronous} & PT $\cdot$ SO$^+$ & \textrm{T $\cdot$ SO$^+$} \\ \end{tabular} \end{ruledtabular} \end{table} \endgroup In this paper we shall be interested in those transformations that take us (by composition of transformations) from one sheet of the Lorentz group to the others. They are disconnected from the identity and have either negative determinant or time inversion. We shall refer to them as the discrete symmetries of (\ref{0.1}). We have the nomenclature \begin{eqnarray} P&=& \left( \begin{array}{cccc} +1& & & \\ &-1 & & \\ &&-1& \\ &&& -1 \end{array} \right), \quad T= \left( \begin{array}{cccc} -1& & & \\ &+1 & & \\ &&+1& \\ &&& +1 \end{array} \right), \nonumber \\ PT &=& \left( \begin{array}{cccc} -1& & & \\ &-1 & & \\ &&-1& \\ &&& -1 \end{array} \right). \label{0.4} \end{eqnarray} See table \ref{tab:table0}. It is important to note that all the elements in one sheet of O$(1,3)$ can be identified with one of the operators in the set $\left\{ \v I_4 , P,T ,PT \right\}$. This set is in fact an abelian group isomorphic to the quotient O$(1,3)/$SO$^{+}(1,3) \cong \v Z_2 \otimes \v Z_2$, which is also known as the Klein group.
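As a quick consistency check, using only the explicit matrices in (\ref{0.4}), one verifies
\begin{eqnarray}
P^2 = T^2 = (PT)^2 = \v I_4, \qquad P\,T = T\,P = PT, \nonumber
\end{eqnarray}
so the set $\left\{ \v I_4, P, T, PT \right\}$ closes under composition and every element is its own inverse, which is precisely the multiplication table of $\v Z_2 \otimes \v Z_2$.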
In $2+1$ dimensions, we also have four disconnected regions of the group O$(1,2)$ containing the disjoint transformations \begin{eqnarray} P&=& \left( \begin{array}{ccc} +1& & \\ &-1 & \\ &&+1 \end{array} \right), \quad T= \left( \begin{array}{ccc} -1& & \\ &+1 & \\ &&+1 \end{array} \right), \nonumber \\ PT &=& \left( \begin{array}{ccc} -1& & \\ &-1 & \\ &&+1 \end{array} \right). \label{0.5} \end{eqnarray} Note that the parity operator $P$ must have negative determinant and in the $2+1$ dimensional case it reverses the sign of one and only one space component. \subsection{On the dimensionality of Dirac equations} Relativistic electrons are described by the Dirac equation \cite{dirac1930}, which contains spin as well as positive and negative energy projections. There are two ways of looking at the origin of this equation. First consider the Lorentz invariant (Klein-Gordon) wave equation \begin{eqnarray} \left\{ \square + \frac{m^2 c^2}{\hbar^2} \right\} \phi = 0, \quad \square = \frac{ \partial}{\partial x_{\mu}} \frac{\partial }{\partial x^{\mu}} = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2, \nonumber \\ \label{0.6} \end{eqnarray} which merely expresses the energy momentum relation $E^2 = c^2 p^2 + m^2 c^4 $. This equation is of second order in time, and requires the specification of two initial conditions for determining the evolution of waves. Dirac took the 'square root' of (\ref{0.6}) with the purpose of finding a proper relativistic hamiltonian, but such an operation only exists in the space of matrices; they form a Clifford algebra. In simpler units $c=\hbar = 1$ we have the factorization \begin{eqnarray} \square + m^2 =\left\{ \gamma_{\mu}\frac{\partial}{\partial x_{\mu}} + i m\right\}\left\{\gamma_{\nu}\frac{\partial}{\partial x_{\nu}} - i m \right\} \label{0.7} \end{eqnarray} if and only if the Clifford condition holds \begin{eqnarray} \{ \gamma_{\mu}, \gamma_{\nu} \} = 2 \v I g_{\mu\nu}, \label{0.8} \end{eqnarray} but then a spinorial wave equation should be satisfied: \begin{eqnarray} \left\{ i \gamma_{\mu}\frac{\partial}{\partial x_{\mu}} - m\right\}\psi = 0. \label{0.9} \end{eqnarray} It is important to recognize here that $\gamma_{\mu}$ is a four-vector of matrices, and that each matrix must be of dimension $4 \times 4$. In fact, a popular representation in terms of Pauli matrices is \begin{eqnarray} \gamma_0 = \left( \begin{array}{cc} \v 1 & 0 \\ 0 & -\v 1 \end{array} \right), \mbox{\boldmath$\gamma$\unboldmath} = \left( \begin{array}{cc} 0 & \mbox{\boldmath$\sigma$\unboldmath} \\ -\mbox{\boldmath$\sigma$\unboldmath} & 0 \end{array} \right). \label{1} \end{eqnarray} In $2+1$ dimensions the situation is different, since we need only three anticommuting matrices. This time we need only $2\times2$ matrices and they can be represented again in terms of Pauli's $\mbox{\boldmath$\sigma$\unboldmath}$ \begin{eqnarray} \gamma_0 = \sigma_3, \quad\gamma_1 = i \sigma_2, \quad \gamma_2 = -i \sigma_1. \label{0.11} \end{eqnarray} The implications of dimensionality here are profound, since the spin of the particle in $\v M_{3+1}$ emerges naturally as $\v S = {\textstyle{1\over2}} \mbox{\boldmath$\sigma$\unboldmath}$. However, in $\v M_{2+1}$ the spin has only one possible direction, i.e. $S_3 = {\textstyle{1\over2}} \sigma_3$. 
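One may check directly that the representation (\ref{0.11}) satisfies the Clifford condition (\ref{0.8}) with $g=\mbox{diag}\left\{ +1,-1,-1 \right\}$:
\begin{eqnarray}
\gamma_0^2 = \sigma_3^2 = \v I, \quad \gamma_1^2 = (i\sigma_2)^2 = -\v I, \quad \gamma_2^2 = (-i\sigma_1)^2 = -\v I, \nonumber \\
\left\{ \gamma_{\mu}, \gamma_{\nu} \right\} = 0 \quad \mbox{for } \mu \neq \nu, \nonumber
\end{eqnarray}
where the vanishing anticommutators follow from the anticommutation of distinct Pauli matrices.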
In a similar guise, the $4 \times 4$ structure of the Dirac equation in $\v M_{3+1}$ contains information about positive and negative energies or large and small components in the sense of Pauli \cite{bjorken1964}, whereas in $\v M_{2+1}$, $\sigma_1$ and $\sigma_2$ may play such a role without being related to the usual spin. We must warn the reader that effective theories of electrons in two dimensions work with an {\it effective\ }spin generated by lattices, while the true spin of the electron remains as the three-dimensional $\v S$. See section \ref{sec:graphene}. Yet another way to understand the differences due to dimensionality comes from the representation theory of the groups SO$(1,3)$ and SO$(1,2)$. The Dirac equation is a relation that expresses the invariance of rest mass in the spin $s={\textstyle{1\over2}}$ representation -- to be precise, the multiplet $({\textstyle{1\over2}},0) \oplus (0,{\textstyle{1\over2}})$. We recall here that there is a local isomorphism of our six-dimensional, semi-simple group \cite{greiner1994} \begin{eqnarray} \mbox{SO}(1,3) \begin{array}{c} _{\cong} \\ \mbox{\scriptsize local}\end{array} \mbox{SU}(2) \otimes \mbox{SU}^*(2). \label{0.12} \end{eqnarray} The lowest irreducible representation of the r.h.s. is a direct product of two sets of Pauli matrices, corresponding to SU$(2)$ and SU$^*(2)$ (the star indicates complex conjugation of the group parameters). Hence the use of $4 \times 4$ $\gamma$ matrices. In contrast, SO$(1,2)$ is a {\it simple\ }and three-dimensional group, requiring only one set of Pauli matrices for the $s={\textstyle{1\over2}}$ representation. \begin{figure}[b] \begin{tabular}{c} \includegraphics[width=8cm]{mirror3d.jpg} \end{tabular} \caption{\label{fig:0} Schematic view of parity in $3+1$ dimensions. The wavefunctions corresponding to electrons on opposite sides can be related by a spinorial transformation and an inversion of momenta. The spin is invariant.} \end{figure} \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=8cm]{mirror2d2.jpg} \end{tabular} \caption{\label{fig:1} (Color online) Parity in $2+1$ dimensions. The dark blue (dark gray) objects represent electrons that can be transformed into each other, whereas the light blue (light gray) object has the same energy spectrum, but obeys a transformed Dirac equation. } \end{figure} \section{Parity in low dimensional Dirac equations} We investigate the difference between $3+1$ and $2+1$ dimensional Dirac equations with regard to discrete transformations. We shall see that the spinorial representations of such objects have important differences due to dimensionality. Among discrete transformations, it is of particular interest to understand parity, as it has been the subject of many discussions in connection with the chiral properties of electrons in two-dimensional materials such as graphene and boron nitride. In our study, the energy-momentum relations must be invariant, although the corresponding equations may vary under discrete transformations. Two diagrams are shown in figures \ref{fig:0} and \ref{fig:1}. \subsection{A review of parity in 3+1 dimensions} In order to establish a point of comparison, let us review the transformation properties of the $3+1$ dimensional Dirac equation under parity. This is most easily discussed at the level of first quantization; let $\mu=0,1,2,3$, $i=1,2,3$, and let $\gamma_{\mu}$ be the covariant Dirac matrices in the representation (\ref{1}).
In natural units, we write the Dirac equation with momentum $p^{\mu} = i \partial / \partial x_{\mu}=(i\partial/\partial t, -i\nabla)^{ \mbox{\scriptsize T}}$ as \begin{eqnarray} \left\{ \gamma_{\mu} p^{\mu} - m \right\} \psi(x_{\lambda}) = 0 \label{2} \end{eqnarray} or \begin{eqnarray} \left\{ \gamma_0 p_0-\mbox{\boldmath$\gamma$\unboldmath} \cdot \v p - m \right\} \psi(t, \v x) = 0. \label{3} \end{eqnarray} Now we perform the transformation $\v x \mapsto -\v x$, $x_0 \mapsto x_0$ and consequently $\v p \mapsto -\v p$, $p_0 \mapsto p_0$. This results in \begin{eqnarray} \left\{ \gamma_0 p_0 +\mbox{\boldmath$\gamma$\unboldmath} \cdot \v p- m \right\} \psi(t, -\v x) = 0. \label{4} \end{eqnarray} We would like to know if there exists a spinorial transformation of $\psi$ such that (\ref{4}) can be transformed back to its original form (\ref{3}), i.e. whether the original wave function and its transformation are described by the same physics. Noting that $\gamma_0 \gamma_i \gamma_0 = - \gamma_i$ and $\gamma_0^2=1$, one has \begin{eqnarray} \gamma_0\left\{ \gamma_0 p_0 +\mbox{\boldmath$\gamma$\unboldmath} \cdot \v p - m \right\}\gamma_0 \gamma_0 \psi(t, -\v x) = 0, \label{5} \end{eqnarray} or \begin{eqnarray} \left\{ \gamma_0 p_0-\mbox{\boldmath$\gamma$\unboldmath} \cdot \v p - m \right\}\gamma_0 \psi(t, -\v x) = 0. \label{6} \end{eqnarray} This equation is identical to (\ref{3}), and its solutions $\tilde \psi(t,\v x)$ are such that \begin{eqnarray} \tilde \psi(t, \v x) = \eta \gamma_0 \psi(t, - \v x), \label{7} \end{eqnarray} where $\eta$ is a global phase factor. This is in fact a transformation law for wavefunctions, and it can be explored further at the level of space-time independent bi-spinors. To this end, let us consider plane waves and spinors in the solution of (\ref{3}) and (\ref{6}). We introduce wave vectors such that $k_{\mu}k^{\mu}= \kappa_{\mu}\kappa^{\mu} = m$ and the normalized bi-spinors $u(k_{\mu}), \tilde u(\kappa_{\mu})$. The wavefunctions read \begin{eqnarray} \psi(t, \v x) = u(k_{\mu}) \mbox{e}^{-i k^{\nu} x_{\nu}}, \nonumber \\ \tilde \psi(t, \v x) = \tilde u(\kappa_{\mu}) \mbox{e}^{-i \kappa^{\nu} x_{\nu}}, \label{8} \end{eqnarray} but in the light of (\ref{7}), we must have the relations \begin{eqnarray} \mbox{\boldmath$\kappa$\unboldmath} = - \v k, \quad \kappa_0 = k_0 \label{9} \end{eqnarray} and \begin{eqnarray} \tilde u(k_0, \v k) = \eta \gamma_0 u(k_0, -\v k). \label{10} \end{eqnarray} This result is in fact quite general, as it can be applied to any superposition of plane waves fulfilling $k_{\mu}k^{\mu}=m$, for which the transformation properties of $u(k_{\mu})$ still hold. In fact, it is customary to use plane wave superpositions with positive ($k_0>0$) and negative ($k_0<0$) energy components of $\psi(t,\v x)$ or their second quantized version \cite{ryder1996}; for the moment we do not need such an expansion. It is fairly easy to show that other parity transformations (negative determinant) produce similar transformations in spinors. For example, if $x_1 \mapsto - x_1$ with the rest of the components invariant, we obtain \begin{eqnarray} \kappa_{\mu} = k_{\mu}, \quad \mu \neq 1 \nonumber \\ \kappa_1 = - k_1, \label{11} \end{eqnarray} and \begin{eqnarray} \tilde u(k_{0},k_{1},k_{2}, k_3) = \eta \gamma_2 \gamma_3 \gamma_0 u(k_{0},-k_{1},k_{2},k_3).
\label{12} \end{eqnarray} The spinor transformations (\ref{10}) and (\ref{12}) are mediated by unitary matrices which anticommute with all $\gamma$'s except for one, and such matrices are built from $\gamma$'s themselves or their products. Is it possible to find similar matrices for problems of different dimensionality? In $2+1$ dimensions, the answer is negative. We shall see this in section \ref{p21}. \subsubsection{Remarks on PT in $3+1$ dimensions} Full space-time inversions in $\v M_{3+1}$ are represented by the negative identity matrix. Using the procedures described above, it is easy to show that the PT transformed Dirac equation can be brought back to its original form, and that the wave functions must be related by \begin{eqnarray} \tilde \psi(x_{\lambda}) = \eta \gamma_5 \psi(-x_{\lambda}), \label{12.1} \end{eqnarray} where $\gamma_5 \equiv i \gamma_0 \gamma_1 \gamma_2 \gamma_3$. It is also worthwhile to recall that the presence of interactions, to the best of our knowledge, respects the CPT symmetry, which includes inversion of charge. In a simplified manner, we may establish this in a Dirac equation with minimal coupling to a gauge field $A_{\mu}$: \begin{eqnarray} \left\{ \gamma_{\mu} p^{\mu} + e \gamma_{\mu}A^{\mu} - m \right\} \psi(x_{\lambda}) = 0. \label{12.2} \end{eqnarray} If $A^{\mu}$ is a vector, the PT transformation maps $ A^{\mu} \mapsto - A^{\mu}$ and the full equation (\ref{12.2}) is invariant upon the application of $\gamma_5$. On the other hand, if $A^{\mu}$ is a pseudovector, then $ A^{\mu} \mapsto + A^{\mu}$ and the theory is invariant after the application of $\gamma_5$ and the reversal of $e \mapsto -e$. It is also important to remember that charge inversion can be achieved by the successive application of complex conjugation and multiplication by $\gamma_0 \gamma_1 \gamma_3$ (the matrix $\gamma_2$ is complex in the representation we have chosen). \subsection{Parity in 2+1 dimensions \label{p21}} Let $\mu=0,1,2$ and $\v x = (x_1,x_2)$. The Dirac equation in $2+1$ dimensions is now given by a $2\times2$ linear differential operator acting on a two-dimensional spinor: \begin{eqnarray} \left\{ \sigma_3 p_0 -i\sigma_2 p_1 + i\sigma_1 p_2 - m \right\} \psi(t, \v x) = 0. \label{13} \end{eqnarray} Here, the Dirac matrices are represented by \begin{eqnarray} \gamma_1 = i \sigma_2, \quad \gamma_2 = -i \sigma_1, \quad \gamma_0 = \sigma_3. \label{14} \end{eqnarray} Now we apply a discrete transformation to (\ref{13}); the space inversion $\v x \mapsto - \v x$ has unit determinant and is irrelevant to our discussion. Let us consider instead $x_1 \mapsto -x_1$ and $x_2 \mapsto x_2$. Our equation (\ref{13}) transforms into \begin{eqnarray} \left\{ \sigma_3 p_0 + i\sigma_2 p_1 + i\sigma_1 p_2 - m \right\} \psi(t, -x_1, x_2) = 0, \label{15} \end{eqnarray} but this equation cannot be brought to its original form (\ref{13}) by the mere application of unitary operators! Hypothetically, a unitary operator $\Pi$ made of $\gamma$'s that restores the signs in (\ref{15}) must have the properties $\left[ \Pi, \gamma_2 \right]=\left[ \Pi, \gamma_0 \right]=0$ and $\left\{ \Pi, \gamma_1 \right\}=0$. These requirements are impossible to meet in the algebra spanned by all $\gamma$'s and their products, since we have \begin{eqnarray} \gamma_0 \gamma_1 \gamma_2 = -i \v I, \, \gamma_0 \gamma_1 = i \gamma_2, \, \gamma_2 \gamma_1 = i \gamma_0, \, \gamma_2 \gamma_0 = i \gamma_1.
\nonumber \\ \label{16} \end{eqnarray} The first operator commutes with everything, while the other operators in (\ref{16}) applied to (\ref{15}) would produce two sign flips (positive determinant). A similar situation occurs when we try to introduce complex conjugation as a possible transformation; we have \begin{eqnarray} (\gamma_0 p_0)^{*} = - \gamma_0 p_0, \quad (\gamma_1 p_1)^{*} = - \gamma_1 p_1, \quad (\gamma_2 p_2)^{*} = + \gamma_2 p_2,\nonumber \\ \label{17} \end{eqnarray} and two sign flips would occur again in (\ref{15}). With this, we conclude that the wavefunctions $\psi(t,-x_1,x_2)$ and $\psi(t,x_1,x_2)$ cannot be transformed into each other, although they may satisfy the same energy-momentum relation $k_{\mu}k^{\mu}=m$ when expanded in plane waves. In a theory of many fermions (for example, the second quantization of the theory above) it seems necessary to introduce at least two flavors that account for all possible solutions of the energy-momentum relation but whose equations are inequivalent. We shall see in section \ref{sec:graphene} that this is exactly the case for some two-dimensional systems in condensed matter. Returning to first quantization and the Dirac equation, we point out that a happy accident occurs in the absence of mass. The Dirac operator becomes $\gamma_{\mu}p^{\mu}$; although this operator is not invariant under $x_1 \mapsto - x_1$, it turns out that this transformation can be continuously related with a full space-time inversion: the relation \begin{eqnarray} \left\{ \gamma_0 p_0 + \gamma_1 p_1 - \gamma_2 p_2 \right\} \psi(t,-x_1,x_2)=0 \label{18} \end{eqnarray} can be transformed by applying $-\gamma_1$ from the left \begin{eqnarray} \left\{ \gamma_0 p_0 - \gamma_1 p_1 - \gamma_2 p_2\right\} \gamma_1 \psi(t,-x_1,x_2)=0 \label{19} \end{eqnarray} which is the sought result. This shows that the massless Dirac equation {\it is invariant\ }under $x_1 \mapsto - x_1$ and the solutions are related by \begin{eqnarray} \tilde \psi(t,x_1,x_2) = \eta \gamma_1 \psi(t,-x_1, x_2) \label{20} \end{eqnarray} where $\eta$ is again a phase factor, including signs. We note that the transformation is now mediated by $\gamma_1$, whereas in the $3+1$ dimensional case the matrix was $\gamma_0$. \subsubsection{Hamiltonian formulation in $2+1$ dimensions} The previous results are not too different when we bring the Dirac equation to a hamiltonian form. Here of course, time reversal transformations without energy sign reversal require antilinear operators. It also happens that parity-transformed hamiltonians may have the same spectrum, and indeed $E=\pm \sqrt{p^2 + m^2}$ is invariant under parity. With the traditional notation $\alpha_1=\sigma_1, \alpha_2=\sigma_2, \beta= \sigma_3$ we have the Schr\"odinger equation \begin{eqnarray} \left\{ \mbox{\boldmath$\alpha$\unboldmath} \cdot \v p + m \beta \right\} \psi(t, \v x) = i \frac{\partial \psi(t, \v x)}{\partial t}. \label{21} \end{eqnarray} Although the operator \begin{eqnarray} H^{2+1} = \mbox{\boldmath$\alpha$\unboldmath} \cdot \v p + m \beta \label{22} \end{eqnarray} is not a parity invariant, the spectrum is invariant. This implies that the eigenfunctions are divided at least in two classes (as we saw previously), producing degeneracy when both the original and the transformed hamiltonian belong to the same theory. We examine again the parity transformations at the level of (\ref{21}) and its stationary version. 
Take $\psi(t,\v x) = \mbox{e}^{-i E t} \phi(\v x)$ and perform the transformation $x_1 \mapsto -x_1$, $x_2 \mapsto x_2$ to find \begin{eqnarray} \left\{-\sigma_1 p_1+ \sigma_2 p_2 + m\sigma_3 \right\} \phi(-x_1,x_2) = E \phi(-x_1,x_2).\nonumber \\ \label{23} \end{eqnarray} Here complex conjugation pays off (but not in the full time-dependent solution!), as it leads to \begin{eqnarray} \left\{ \sigma_1 p_1+ \sigma_2 p_2 + m\sigma_3 \right\} \phi^{*}(-x_1,x_2) = E \phi^{*}(-x_1,x_2). \nonumber \\ \label{24} \end{eqnarray} For this reason $\phi^{*}(-x_1,x_2)$ and $\phi(x_1,x_2)$ have the same energy, but it remains to be seen whether these solutions are independent or not with respect to their spinorial part. Once again we use a single plane wave to see that if \begin{eqnarray} \phi(\v x) = u(\v k) \mbox{e}^{i \v k \cdot \v x} \label{25} \end{eqnarray} then \begin{eqnarray} \left\{ \mbox{\boldmath$\sigma$\unboldmath} \cdot \v k + m \sigma_3 \right\} u(\v k) = E u(\v k), \label{26} \end{eqnarray} with its complex conjugate given by \begin{eqnarray} \left\{ \sigma_1 k_1 -\sigma_2 k_2 + m \sigma_3 \right\} u^{*}(\v k) = E u^{*}(\v k). \label{27} \end{eqnarray} Now we must have that $u$ and $u^{*}$ are independent, for the proportionality $u \propto u^{*}$ leads to the contradictory relation $k_2 \sigma_2 u = 0$ by the combination of (\ref{26}) and (\ref{27}). So $u$ is necessarily complex, and the spinors corresponding to opposite parities and equal energies are independent ($k_2=0$ is possible, but effectively reduces the problem to one dimension, and is not of interest). In conclusion, in $2+1$ dimensions only the {\it stationary\ }solutions of opposite parity can be related by a transformation, which turns out to be a complex conjugation, thus involving antiunitary operators. The complex character of the wavefunction and its spinorial part makes $\phi(x_1, x_2)$ and $\phi^*(-x_1,x_2)$ independent. \subsubsection{Remarks on PT in $2+1$ dimensions} Full space-time inversion produces three sign flips in (\ref{13}) and is therefore continuously connected to the $x_1 \mapsto - x_1$ transformation. For this reason, the functions $\psi(-x_{\mu})$ and $\psi(x_{\mu})$ cannot be transformed into each other. What about the functions $\psi(-t,-x_1,x_2)$ and $\psi(t,x_1,x_2)$? This PT transformation can be reproduced by the application of the matrix $\gamma_2$ or by complex conjugation. With this we can show that the functions $\psi(t,x_1,x_2)$, $\psi^{*}(-t,-x_1,x_2)$ and $\gamma_2\psi(-t,-x_1,x_2)$ can be transformed into each other, fulfilling the glorified CPT invariance. At the hamiltonian level we can easily show that the transformation involves energy inversion; the reversed parity equation \begin{eqnarray} \left\{ -\sigma_1 p_1 + \sigma_2 p_2 + m \sigma_3 \right\} \phi(-x_1,x_2) = E \phi(-x_1,x_2) \nonumber \\ \label{28} \end{eqnarray} is now transformed to \begin{eqnarray} \left\{ \sigma_1 p_1 + \sigma_2 p_2 + m \sigma_3 \right\} \sigma_1 \phi(-x_1,x_2) = -E\left[ \sigma_1 \phi(-x_1,x_2) \right] \nonumber \\ \label{29} \end{eqnarray} after multiplying by $-\sigma_1$ from the left. This can be summarized as follows: a function $\tilde \phi$ of positive energy $E$ can be expressed in terms of negative energy solutions in the form \begin{eqnarray} \tilde \phi_E (x_1,x_2) = \eta \sigma_1 \phi_{-E}(-x_1,x_2) \label{30} \end{eqnarray} where $\eta$ is again a global phase factor. With this we show that the symmetric spectrum of this theory (about the point $E=0$) is related to transformations under P alone.
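For completeness, the single matrix identity behind the passage from (\ref{28}) to (\ref{29}) can be displayed explicitly. Using $\sigma_1 \sigma_2 \sigma_1 = -\sigma_2$ and $\sigma_1 \sigma_3 \sigma_1 = -\sigma_3$ one finds
\begin{eqnarray}
\sigma_1 \left\{ -\sigma_1 p_1 + \sigma_2 p_2 + m \sigma_3 \right\} \sigma_1 = -\left\{ \sigma_1 p_1 + \sigma_2 p_2 + m \sigma_3 \right\}, \nonumber
\end{eqnarray}
so conjugation by $\sigma_1$ reverses the sign of the hamiltonian and therefore the sign of the energy, in agreement with (\ref{30}).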
\section{Graphene and Boron Nitride: effective theories in flat sheets \label{sec:graphene}} \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=5cm]{hexagonallattice.jpg} \end{tabular} \caption{\label{fig:2} (Color online) An hexagonal lattice formed by two interpenetrating triangular sublattices in blue and red. } \end{figure} \begin{figure}[h!] \begin{tabular}{c} \includegraphics[width=6cm]{hexagonvectors-b1.jpg} \end{tabular} \caption{\label{fig:3} (Color online) Fundamental cell of the hexagonal lattices and primitive vectors. Blue and red sites (dark and light gray) may represent different types of atoms. } \end{figure} It has been noted in the literature of condensed matter physics \cite{katsnelson2007, geim2007} that electrons in hexagonal lattices (see figure \ref{fig:2}) can be described by effective $2+1$ dimensional Dirac equations. It turns out that there are inequivalent conical points at the edges of the first Brillouin zone (in this case an hexagon) of the honeycomb lattice, where the dispersion relations of propagating waves resemble a relativistic energy-momentum relation \cite{semenoff1984}: \begin{eqnarray} E=\epsilon - \epsilon_0 \approx \pm \sqrt{\Delta^2 (\v k \pm \v k_D)^2 + m^2} \label{31} \end{eqnarray} where $\v k$ is the Bloch momentum of a wave in the crystal, $\v k_D$ is the point of maximal approach of positive and negative surfaces (the famous Dirac points \cite{haldane2008, bittner2010, gomes2012}), $\Delta$ is the nearest-neighbor coupling in the corresponding tight-binding model (in condensed matter physics $\Delta$ is related to the Fermi velocity), $\epsilon_0$ is the center of the lowest energy band and $m$ is the difference between binding energies of atoms at each triangular sublattice (examples with two species include boron nitride, while $m=0$ describes graphene). In addition to this appealing dispersion relation, one also has an effective spin given by the probability of being in sites of type A or B (see figure \ref{fig:3}). Incidentally, this spin is represented by $\mbox{\boldmath$\sigma$\unboldmath}$ matrices, in full correspondence with our previous considerations of Dirac equations in $2+1$ dimensions. \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=6cm]{blueredcones-b1.jpg} \end{tabular} \caption{\label{fig:4} Dispersion relation in the reciprocal honeycomb lattice. Six conical points can be distinguished. Opposite points are inequivalent.} \end{figure} \begin{figure}[h!] \begin{tabular}{c} \includegraphics[width=6cm]{massivecones-b1.jpg} \end{tabular} \caption{\label{fig:5} Dispersion relation for the massive case: the gap between the blue (upper) and red (lower) bands originates from a difference of on-site energies between A and B. } \end{figure} \subsection{Parity in effective theories with two fermions \label{p2fermions}} In such effective theories we have two types of Dirac equations fulfilling the dispersion relation (\ref{31}): \begin{eqnarray} \left\{ \gamma_0 p_0 - \gamma_1 p_1 - \gamma_2 p_2 - m \right\} \psi^{+}=0,\nonumber \\ \left\{ \gamma_0 p_0 + \gamma_1 p_1 - \gamma_2 p_2 - m \right\} \psi^{-}=0, \label{32} \end{eqnarray} where $\v p$ is now the momentum around the point $\pm \v k_D$, with eigenvalues $\Delta (\v k \mp \v k_D)$. There are no translations in the reciprocal triangular sublattice that could take us from $\v k_D$ to $-\v k_D$, and we have seen previously that the wavefunctions cannot be transformed into each other.
The full theory, however, is invariant under the interchange $+ \leftrightarrow -$. Schematically, we may describe both relations in (\ref{32}) by a single bi-spinorial equation: \begin{eqnarray} \left(\begin{array}{cc} \gamma_{\mu} p^{\mu} - m & 0 \\ 0 & -\gamma_1 (\gamma_{\mu} p^{\mu})\gamma_1 - m \end{array} \right)\left(\begin{array}{c} \psi^{+}(x_{\lambda}) \\ \psi^{-}(x_{\lambda}) \end{array} \right)=0. \nonumber \\ \label{33} \end{eqnarray} We can perform now a parity operation to finally understand why these electrons obey a chiral theory: if $x_1 \mapsto -x_1$ and $p_1 \mapsto -p_1$, the roles of $\pm$ will be interchanged, i.e. \begin{eqnarray} \left(\begin{array}{cc} -\gamma_1 (\gamma_{\mu} p^{\mu})\gamma_1 - m & 0 \\ 0 & \gamma_{\mu} p^{\mu} - m \end{array} \right)\left(\begin{array}{c} \psi^{+}(t,-x_1,x_2) \\ \psi^{-}(t,-x_1,x_2) \end{array} \right)=0. \nonumber \\ \label{34} \end{eqnarray} The complete theory is invariant if we apply the $4 \times 4$ swapping operator \begin{eqnarray} \Gamma \equiv \left(\begin{array}{cc} 0 & \v I_2 \\ \v I_2 & 0 \end{array} \right) \label{35} \end{eqnarray} to the bi-spinor \begin{eqnarray} \Psi(t,-x_1,x_2) \equiv \left(\begin{array}{c} \psi^{+}(t,-x_1,x_2) \\ \psi^{-}(t,-x_1,x_2) \end{array} \right) \label{36} \end{eqnarray} and to the augmented Dirac operator (as a similarity transformation) \begin{eqnarray} D(p_0,p_1,p_2) \equiv \left(\begin{array}{cc} \gamma_{\mu} p^{\mu} - m & 0 \\ 0 & -\gamma_1 (\gamma_{\mu} p^{\mu})\gamma_1 - m \end{array} \right).\nonumber \\ \label{37} \end{eqnarray} We explain the invariance as follows. By virtue of the relations $\Gamma^2 = \v I_4 $, $\Gamma D(p_0,-p_1,p_2) \Gamma = D(p_0,p_1,p_2)$, we have that if \begin{eqnarray} D(p_0,p_1,p_2) \Psi(t,x_1,x_2) = 0, \label{38} \end{eqnarray} then \begin{eqnarray} D(p_0,-p_1,p_2) \Psi(t,-x_1,x_2) = 0 \label{39} \end{eqnarray} and \begin{eqnarray} D(p_0,p_1,p_2) \Gamma \Psi(t,-x_1,x_2)=0. \label{40} \end{eqnarray} The exchange of $\pm$ does the trick. At the level of Hamiltonians the theory is also invariant: defining \begin{eqnarray} \mbox{$\cal H\,$}(\v p) \equiv \left(\begin{array}{cc} \mbox{\boldmath$\alpha$\unboldmath} \cdot \v p + m \beta & 0 \\ 0 & \sigma_2 (\mbox{\boldmath$\alpha$\unboldmath} \cdot \v p) \sigma_2 + m \beta \end{array} \right) \label{41} \end{eqnarray} with stationary functions \begin{eqnarray} \Psi(t, \v x) = \mbox{e}^{-iEt} \Phi(\v x), \label{42} \end{eqnarray} we obtain $\Gamma \mbox{$\cal H\,$} (-p_1,p_2)\Gamma = \mbox{$\cal H\,$}(p_1,p_2)$ and \begin{eqnarray} \mbox{$\cal H\,$}(p_1,p_2) \Gamma \Phi(-x_1,x_2) = E \left[\Gamma \Phi(-x_1,x_2)\right] \label{43} \end{eqnarray} as expected. There is nothing artificial about this procedure, if we regard the theory as made of two types of fermions with equal probability of existence. However, this interpretation leads invariably to more than one particle in the hexagonal sheet (in fact, many of them). This makes sense only in a second-quantized scheme of solid state physics. It is thus natural to ask whether a single-particle formulation may have a similar chiral symmetry. The answer is positive, if we take into account the {\it complete\ }spectrum of the theory, without the conical approximations (\ref{31}) related to effective Dirac equations. 
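
The exchange symmetry expressed by (\ref{41})--(\ref{43}) can be verified numerically for arbitrary sample values of $p_1$, $p_2$ and $m$, using $\alpha_1=\sigma_1$, $\alpha_2=\sigma_2$, $\beta=\sigma_3$ as above. The following Python sketch (illustrative only) builds the augmented hamiltonian of (\ref{41}) and checks that conjugation by the swapping operator $\Gamma$ maps $\mbox{$\cal H\,$}(-p_1,p_2)$ to $\mbox{$\cal H\,$}(p_1,p_2)$, so that parity combined with species exchange is a symmetry of the two-fermion theory.
\begin{verbatim}
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def calH(p1, p2, m):
    """Augmented two-fermion hamiltonian of eq. (41)."""
    upper = s1 * p1 + s2 * p2 + m * s3              # alpha.p + m beta
    lower = s2 @ (s1 * p1 + s2 * p2) @ s2 + m * s3  # sigma_2 (alpha.p) sigma_2 + m beta
    H = np.zeros((4, 4), dtype=complex)
    H[:2, :2], H[2:, 2:] = upper, lower
    return H

Gamma = np.block([[np.zeros((2, 2)), I2], [I2, np.zeros((2, 2))]])  # swap of +/- species

p1, p2, m = 0.9, -0.4, 0.3   # arbitrary sample values
assert np.allclose(Gamma @ calH(-p1, p2, m) @ Gamma, calH(p1, p2, m))
print("Gamma H(-p1,p2) Gamma = H(p1,p2): parity plus species exchange is a symmetry")
\end{verbatim}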
Furthermore, it also holds that even without the conical approximation of the dispersion relations, the theory still has a spinorial formulation (spin up and down are A and B) where the effective matrices can be defined solely by the geometry of the lattice \cite{sadurni2010}. We shall play with this formulation in what follows, with the aim of extracting once more the spinorial representations of discrete transformations, but this time in the context of crystals. \subsection{Parity in a complete tight-binding model with one fermion} \subsubsection{The general model with Dirac matrices} The full tight-binding model can be constructed starting from very simple considerations \cite{wallace1947}. For the sake of clarity we discuss it here along the lines indicated in \cite{sadurni2010}. The honeycomb lattice is defined by two interpenetrating triangular sublattices with primitive vectors $\v a_1 = (a /2)(-\v i - \sqrt{3}\v j), \v a_2 = (a /2)(\v i - \sqrt{3}\v j)$. Each point has three nearest neighbors; the origin is connected to such sites by the vectors $\v b_1 = (a/\sqrt{3}) \v j, \v b_2=(a/2)(-\v i - (1/\sqrt{3})\v j), \v b_3 = -\v b_1 - \v b_2$. See figures \ref{fig:2} and \ref{fig:3}. We can label the atomic states\footnote{These are isolated atomic states, but it is more appropriate to use Wannier functions. For maximally localized states, see \cite{marzari1997}.} by site vectors $\v A$ and $\v B$ corresponding to each sublattice, i.e. $|\v A\>$ and $|\v B\>$. They are linear combinations of $\v a_1, \v a_2$ with integer coefficients and the term $\v b_1$ is added in the case of $\v B$. The most common way to write a nearest-neighbor tight-binding model in first quantization is the following: \begin{eqnarray} H &=& \Delta \sum_{\v A, i=1,2,3} | \v A \> \< \v A + \v b_i | + h.c. \nonumber \\ &+& E_A \sum_{\v A} | \v A \> \< \v A | + E_B \sum_{\v B} | \v B \> \< \v B |, \label{44} \end{eqnarray} where $E_A$ and $E_B$ are the binding energies of atoms in lattice A and B respectively. A more convenient way to write this hamiltonian can be achieved by introducing translation operators and some definitions. The goal is to express (\ref{44}) in a way similar to a Dirac hamiltonian. We need Dirac matrices $\mbox{\boldmath$\alpha$\unboldmath} = \mbox{\boldmath$\sigma$\unboldmath}$, and we may define them in terms of localized states \begin{eqnarray} \sigma_{1} = \sum_{\v A} \left[| \v A \> \< \v A + \v b_1 | + | \v A + \v b_1 \> \< \v A | \right], \nonumber \\ \sigma_{2} = -i \sum_{\v A} \left[ | \v A \> \< \v A + \v b_1 | - | \v A + \v b_1 \> \< \v A |\right], \nonumber \\ \sigma_{3} = \sum_{\v A} \left[| \v A \> \< \v A | - | \v A + \v b_1 \> \< \v A + \v b_1 | \right], \label{45} \end{eqnarray} which satisfy the SU$(2)$ algebra $\left[ \sigma_i, \sigma_j \right]=2i \epsilon_{ijk}\sigma_k$ and the Clifford condition $\left\{ \sigma_i, \sigma_j \right\} = 2 \v I_2 \delta_{ij}$. Similarly, we define operators analogous to momenta in the form \begin{eqnarray} P_1 = \frac{\Delta}{2} \sum_{\v A, i} |\v A +\v b_i \> \< \v A + \v b_1 | + |\v A +\v b_i - \v b_1 \> \< \v A | + \mbox{h.c.} , \nonumber \\ P_2 = \frac{\Delta}{2i} \sum_{\v A, i} |\v A +\v b_i \> \< \v A + \v b_1 | + |\v A +\v b_i - \v b_1 \> \< \v A | + \mbox{h.c.} \nonumber \\ \label{46} \end{eqnarray} It is important to note that $P_1$ and $P_2$ are made of translation operators $T_i = \exp \left( i \v a_i \cdot \v p \right)$ connecting sites of the same subtriangular lattice, i.e.
\begin{eqnarray} P_1 &=& \frac{\Delta}{2} \left[ 2 \v I + T_1 + T_1^{\dagger} + T_2 + T_2^{\dagger} \right] ,\nonumber \\ P_2 &=& \frac{\Delta}{2i} \left[ T_1 - T_1^{\dagger} + T_2 - T_2^{\dagger} \right]. \label{47} \end{eqnarray} With these identifications, we finally arrive at the hamiltonian \begin{eqnarray} H = \mbox{\boldmath$\alpha$\unboldmath} \cdot \v P + m \beta + \epsilon_0 \label{48} \end{eqnarray} where $m=(E_A-E_B)/2$ and $\epsilon_0 = (E_A+E_B)/2$. Here, we are only one step away from an effective Dirac theory, since the expansions of the exponentials $T_i$ in $P_1$ and $P_2$ around $\Delta \v k_D$, yield linear expressions in $p_1$ and $p_2$ respectively (conical points). However, the full theory with hamiltonian (\ref{48}) has eigenvalues \begin{eqnarray} \epsilon = \epsilon_0 \pm \sqrt{ \Delta^2 | 1 + \mbox{e}^{i \v k \cdot \v a_1} + \mbox{e}^{i \v k \cdot \v a_2} |^2 + m^2}, \label{49} \end{eqnarray} which can be computed using Bloch waves $\sum_{\v A} \mbox{e}^{i \v k \cdot \v A} \<\v x | \v A\>$ in each spinor component. Such spinors diagonalize the following $2 \times 2$ blocks in the hamiltonian \begin{eqnarray} \left(\begin{array}{cc} \epsilon_0 + m& \Delta \left[ 1 + \mbox{e}^{i \v k \cdot \v a_1} + \mbox{e}^{i \v k \cdot \v a_2} \right] \\ \Delta \left[1 + \mbox{e}^{-i \v k \cdot \v a_1} + \mbox{e}^{-i \v k \cdot \v a_2} \right] & \epsilon_0 - m \end{array} \right). \nonumber \\ \label{49.1} \end{eqnarray} \subsubsection{The new $\v P$ as a pseudovector} Now we are ready to discuss the parity transformation $x_1 \mapsto -x_1, x_2 \mapsto x_2$. We have $p_1 \mapsto - p_1$, $p_2 \mapsto p_2$, but in view of the property $\v a_1 \cdot \v p \mapsto \v a_2 \cdot \v p$ and vice versa, the translation operators are now mapped into each other \begin{eqnarray} T_1 \mapsto T_2, \quad T_2 \mapsto T_1, \label{50} \end{eqnarray} leading to a pseudovectorial $\v P$: \begin{eqnarray} P_1 \mapsto P_1 \quad P_2 \mapsto P_2. \label{51} \end{eqnarray} With these relations, the invariance of the full hamiltonian (\ref{48}) is ensured. Incidentally, the Dirac point at $\v k_D = (4 \pi / 3 a)\v i $ is mapped to $- (4 \pi / 3 a)\v i$, which is the inequivalent Dirac point at the opposite vertex. However, both vertices are contained in our single particle theory and its invariance is again confirmed. As to the wavefunctions, the spatiotemporal part is given by Bloch waves and only a change $k_1 \mapsto -k_1$ is needed. The spinorial part remains invariant. \begin{figure}[b] \begin{tabular}{c} \includegraphics[width=9.0cm]{hexagons2-b1.jpg} \end{tabular} \caption{\label{fig:9} Parity symmetry breaking by sheet deformation. The bonds represented by vectors $\v a_1$ and $\v a_2$ have different lengths and different couplings.} \end{figure} \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=5cm]{usualcone-b1.jpg} \end{tabular} \caption{\label{fig:10} Upper view of the dispersion relation surface, showing a symmetric hexagon. $\Delta=1$. } \end{figure} \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=5cm]{deformedcone-b1.jpg} \end{tabular} \caption{\label{fig:11} Upper view of an asymmetric dispersion relation induced by sheet deformation. $\Delta_1=1, \Delta_2 = 1/2$.} \end{figure} \subsection{Discrete symmetry breaking} There are several ways to introduce interactions which violate discrete symmetries. 
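
Before turning to the symmetry-breaking terms, the statements above about the unperturbed bands can be checked numerically. The following Python sketch evaluates the dispersion relation (\ref{49}) for the illustrative values $\Delta=1$, $m=0$, $\epsilon_0=0$ and $a=1$, confirms that the gap closes at $\pm \v k_D = \pm(4 \pi / 3 a)\v i$, and confirms that the spectrum is invariant under $k_1 \mapsto -k_1$.
\begin{verbatim}
import numpy as np

a = 1.0                                    # lattice constant
a1 = 0.5 * a * np.array([-1.0, -np.sqrt(3.0)])
a2 = 0.5 * a * np.array([ 1.0, -np.sqrt(3.0)])

def bands(k, Delta=1.0, m=0.0, eps0=0.0):
    """Upper/lower bands of eq. (49) for Bloch momentum k = (k1, k2)."""
    f = 1.0 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
    root = np.sqrt(Delta**2 * abs(f)**2 + m**2)
    return eps0 + root, eps0 - root

kD = np.array([4.0 * np.pi / (3.0 * a), 0.0])   # Dirac point
up, dn = bands(kD)                              # massless case: gap closes at k_D
assert abs(up - dn) < 1e-12
up, dn = bands(-kD)                             # ... and at the opposite vertex -k_D
assert abs(up - dn) < 1e-12

rng = np.random.default_rng(0)
for k in rng.uniform(-2 * np.pi, 2 * np.pi, size=(200, 2)):
    kP = np.array([-k[0], k[1]])                # parity: k1 -> -k1
    assert np.allclose(bands(k), bands(kP))     # spectrum is parity invariant
print("gap closes at +/- k_D and eps(-k1,k2) = eps(k1,k2)")
\end{verbatim}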
In particle physics we may quote famous examples \cite{garwin1957, lees2012} in which a partial discrete symmetry is violated, such as parity (weak interactions) or time reversal and charge conjugation (CP violation). There are even more exotic proposals \cite{colladay1997} that suggest CPT violation as an effect that emerges due to novel theories. In this paper we restrict ourselves to the importance of dimensionality and its implications in effective theories on the lattice. A most fascinating consequence of reduced dimensionality is the so-called chiral anomaly \cite{niemi1983, quackenbush1989}, which indeed is represented by two types of electrons in hexagonal lattices that suffer transitions from one type to the other (interpreted as tunneling) due to quantum corrections. In connection with explicit symmetry breaking, i.e. at the level of the hamiltonian, it is easy to see that lattice deformations do the job in two different forms: 1) by breaking A$\leftrightarrow $B invariance, leading to the appearance of an effective mass as we saw previously, and 2) by introducing bond asymmetries (see fig. \ref{fig:9}), e.g. by applying a shear. \subsubsection{Two fermions} In section \ref{p2fermions} we saw that the hamiltonian of the theory could be expressed by an augmented operator $\mbox{$\cal H\,$}$. The exchange invariance of the theory can be broken easily by introducing a non-diagonal operator in the space of spinors $\psi^{\pm}$. An example of such an interaction which does not commute with $\Gamma$ can be proposed to be proportional to \begin{eqnarray} \bar \Gamma = \left( \begin{array}{cc} 0 & -i \v I_2 \\ i\v I_2 & 0 \end{array}\right). \label{52} \end{eqnarray} Evidently, this leads to transitions between the two species. Diagonal terms which do not commute with $\Gamma$ can be conceived as well, but they do not correspond to a coupling between the two inequivalent Dirac points. \subsubsection{Full band theory with one fermion} A very general way to break the invariance of $\mbox{$\cal H\,$}$ under parity is by the introduction of vectorial interactions. When such potentials are external, i.e. not dynamical variables of the world, their transformation properties are determined solely by the coordinates. For example, if \begin{eqnarray} \left\{ \gamma_0 p_0 - \gamma_1 P_1 - \gamma_2 P_2 - m + V_{ \mbox{\scriptsize int}} \right\} \psi = 0 \label{53} \end{eqnarray} then $V_{ \mbox{\scriptsize int}} \equiv \gamma_{\mu} V^{\mu}$ would do the job, as long as $V^{\mu}$ transforms as a vector under parity (remember that $\v P$ is a pseudovector). Another way to break parity symmetry is by introducing complex couplings $\Delta$, such as those used to simulate gauge fields \cite{uehlinger2013}, in particular external magnetic fields. The asymmetry in the lattice bonds can be introduced generally as \begin{eqnarray} P_1 &=& \frac{1}{2} \left[ 2 \Delta_0 + \Delta_1 T_1 + \Delta_1 T_2 \right] + \mbox{h.c.} ,\nonumber \\ P_2 &=& \frac{1}{2i} \left[ \Delta_1 T_1 + \Delta_2 T_2 \right] + \mbox{h.c.}, \label{54} \end{eqnarray} where $\Delta_i$ are complex. If $\Delta_1 \neq \Delta_2$, then the exchange $\v a_1 \leftrightarrow \v a_2$ is no longer a symmetry of the hamiltonian. Generically, there is no way in which the application of operators depending on $\gamma$ matrices may restore the symmetry, and the theory is not invariant.
There are two cases to be distinguished: When only the phases of $\Delta_1, \Delta_2$ are different, we recognize that they can be redefined by the application of unitary transformations forming a gauge group U$(1)$. This represents indeed a magnetic field. When the moduli are different, i.e. $|\Delta_1| \neq |\Delta_2|$, then the bonds mediated by the vectors $\v b_2$ and $\v b_3$ are different, a type of asymmetry that can be introduced by a constant deformation that modifies the fundamental cell, but not the periodicity of the medium. The overall effect in such theories amounts to a modification of the dispersion relation. This effect has been extensively investigated \cite{montambaux2009} with the purpose of translating and merging inequivalent Dirac points. A comparison of energy surfaces is given in figures \ref{fig:10} and \ref{fig:11}. Another interesting possibility comes in the form of multiple neighbor couplings. It turns out that their presence can break the symmetry between upper and lower bands around conical points, indicating that the effective CPT symmetry of the theory (the one that relates particles with antiparticles or electrons with holes) can be broken. The explicit way to achieve this is by adding terms to $\mbox{$\cal H\,$}$ as follows \begin{eqnarray} \mbox{$\cal H\,$} = \epsilon_0 + m \sigma_3 + \mbox{\boldmath$\alpha$\unboldmath} \cdot \v P + \bar \Delta (T_1 + T_2 + T_1 T_2^{\dagger} + \mbox{h.c.}).\nonumber \\ \label{55} \end{eqnarray} In this expression, the last term does not contain Dirac matrices, and it couples the six second neighbors of each site by connecting them through the vectors $\pm \v a_1, \pm \v a_2, \pm (\v a_1 - \v a_2)$. The constant $\bar \Delta$ modulates the interaction. The resulting dispersion relation and a comparison between energy cones are given in figures \ref{fig:6}, \ref{fig:7} and \ref{fig:8}. Here we should note that a parity transformation leaves such terms invariant (this is again the exchange $\v a_1 \leftrightarrow \v a_2$), but the application of $PT$ at the level of the Dirac equation \begin{eqnarray} && \left\{ \gamma_0p_0 - \mbox{\boldmath$\gamma$\unboldmath} \cdot \v P - m -\bar \Delta \gamma_0 (T_1 + T_2 + T_1 T_2^{\dagger} + \mbox{h.c.}) \right\} \nonumber \\ &\times & \psi(t,x_1,x_2) = 0 \label{56} \end{eqnarray} reveals that \begin{eqnarray} \gamma_2 \left\{ -\gamma_0 p_0 + \gamma_1 P_1 - \gamma_2 P_2 - m \right\}\psi(-t,-x_1,x_2) - \nonumber \\ \bar \Delta \gamma_2 \gamma_0 (T_1 + T_2 + T_1 T_2^{\dagger} + \mbox{h.c.}) \psi(-t,-x_1,x_2) = 0 \label{57} \end{eqnarray} or put another way \begin{eqnarray} && \left\{ \gamma_0p_0 - \mbox{\boldmath$\gamma$\unboldmath} \cdot \v P - m + \bar \Delta \gamma_0 (T_1 + T_2 + T_1 T_2^{\dagger} + \mbox{h.c.}) \right\} \nonumber \\ &\times & \gamma_2 \psi(-t,-x_1,x_2) = 0. \label{58} \end{eqnarray} This equation is not equivalent to (\ref{56}), and the only possible way to restore the sign of the last term is by coupling inversion $\bar \Delta \mapsto - \bar \Delta$. In a world where the actors are transformed but the stage is fixed, such a coupling inversion is not allowed and the dispersion relation must have an up-down asymmetry. Obviously, when the stage is also reversed, we recover CPT invariance of our complete world. \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=6cm]{asymmetriccones-b1.jpg} \end{tabular} \caption{\label{fig:6} Asymmetric bands produced by the introduction of next-to-nearest neighbor interactions.
The upper and lower surfaces are different.} \end{figure} \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=5cm]{coneasymmetry-b1.jpg} \end{tabular} \caption{\label{fig:7} Asymmetric bands induced by second neighbors, visualized around conical points. Although the complete system must be CPT symmetric, the effective theory of the electron is not. } \end{figure} \begin{figure}[t] \begin{tabular}{c} \includegraphics[width=5cm]{conicalsymmetry-b1.jpg} \end{tabular} \caption{\label{fig:8} Usual cones with up-down symmetry. Compare with fig. \ref{fig:7}.} \end{figure} \section{Discussion} The role of discrete symmetries in both particle physics and condensed matter systems should not be underestimated. In this paper we have reviewed the subject at the level of the Dirac equation in first quantization. It is important to mention that a frequent approach to symmetries in quantum field theory comes from the invariance of the action that generates the Euler-Lagrange equations, including the Dirac equation. Invariance of the action leads indeed to invariance of the theory, but the converse is not necessarily true; the subtleties of this and other properties arising in a second-quantized scheme have been left aside for the sake of a simple treatment. We encourage our readers with particle physics inclinations to consult references \cite{beringer2012} with respect to state-of-the-art CPT invariance tests. As to the honeycomb lattice, there is a clear message arising from our results: lattice deformations and long range interactions constitute a source of asymmetry that can be used to our favor as a testbed for new effects. However, we must warn the reader that the validity of conical approximations in graphene has been experimentally established for energies in the vicinity of the band center. Thus, the effects arising due to a full-band theory may be visible in other honeycomb realizations. The so-called artificial graphene \cite{kalesaki2014} is worthy of attention. \begin{acknowledgments} E. S. and E. R.-M. would like to express their gratitude to CONACyT for financial support under project CB2012-180585. \end{acknowledgments} \nocite{*} \providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Introduction} \thispagestyle{empty} In this note we prove: \begin{thrm}\label{thrmMain1} There exists a constant $C$ such that the number of isomorphism classes of continuous irreducible $n$-dimensional complex representations of $\mathrm{SL}_d(\bZ_p)$ is less than $Cn^2$ for any positive integer $d$ and prime number $p$. \end{thrm} Larsen and Lubotzky conjectured in \cite[Section 11]{LL} the existence of universal upper bounds. Such a bound, of the form $Cn^{22}$, was then established by Aizenbud and Avni in \cite{AA}. The present paper improves this latter result. For a topological group $\Gamma$, let $r_n(\Gamma)$ be the number of isomorphism classes of continuous irreducible $n$-dimensional complex representations. Assuming that this number is finite for every $n$, one defines the {\it representation zeta function} of $\Gamma$ to be the Dirichlet generating function $$ \zeta_{\Gamma}(s) = \sum_{n\ge 1} r_n(\Gamma)n^{-s}. $$ It is known that if such functions converge then they converge in an open complex half-plane. The smallest $s_0\in\bR$ such that $\zeta_{\Gamma}(s)$ converges for complex numbers $s$ with real part $Re(s) > s_0$ is called the {\em abscissa of convergence} of $\zeta_\Gamma(s)$ and denoted by $\alpha_\Gamma$. If $\zeta_{\Gamma}(s)$ does not converge, we say that $\alpha_\Gamma = \infty$. Let $K$ be a non-archimedean local field containing $\bQ$. By Jaikin-Zapirain \cite{J}, if $\Lambda$ is a compact open subgroup of ${\mathrm{SL}}_d(K)$ which is also a uniform pro-$p$ group, then $\zeta_{\Lambda}(s)$ is a rational function in $p^{-s}$ whose denominator is a product of binomials of the form $1-p^{a-bs}$ for some integers $a, b$. This implies that $\alpha_\Lambda$ is finite and rational. Since every compact open subgroup $\Gamma$ of ${\mathrm{SL}}_d(K)$ contains a finite index subgroup like $\Lambda$ above, also $\alpha_\Gamma$ is finite and rational (and equal to $\alpha_\Lambda$ by \cite[Lemma 2.2]{LM}). Moreover, if \[ R_N(\Gamma) = \sum_{n = 1}^{N} r_n(\Gamma) \] for $N\in \bN$, then $\log (R_N(\Gamma))/ \log N$ tends to $\alpha_\Gamma$ as $N$ tends to infinity. Theorem \ref{thrmMain1} is then true for any compact open subgroup $\Gamma$ of ${\mathrm{SL}}_d(K)$, and it is a consequence of the following more general theorem: \begin{thrm} \label{thrmMain} Let $d$ be a positive integer. Let $K$ be a non-archimedean local field containing $\bQ$. Let $\Gamma$ be a compact open subgroup of $\mathrm{SL}_d(K)$. Then $\alpha_\Gamma < 2$. \end{thrm} In \cite{AA}, the bound $\alpha_\Gamma<22$ had been obtained with $\Gamma$ as in the theorem, along with other bounds for compact open subgroups for all semi-simple algebraic groups defined over ${\bQ}_p$. Our result complements a rather short list of known results besides \cite{AA}. When $\Gamma={\mathrm{SL}}_d(\bZ_p)$, $$ \alpha_{\Gamma}\text{ is }\left\{ \begin{array}{lll} \ge 1/15 & & \text{by \cite{LL},}\\ =1 & \text{ if } d=2 & \text{by \cite{J} and \cite{AKOV12}},\\ =2/3 & \text { if }d=3 \text{ and } p > 3 & \text{by \cite{AKOV13}},\\ =1/2 & \text { if }d=4 \text{ and } p > 2 & \text{by \cite{Z}.} \end{array}\right. $$ The proof of Theorem \ref{thrmMain} is based on Aizenbud-Avni \cite{AA}, together with the additional observation by Bellamy-Schedler \cite{BS} that the ${\mathrm{SL}}_d$-character variety of a compact Riemann surface admits algebraic symplectic singularities.
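
As a purely numerical illustration of these definitions (with a hypothetical count $r_n=n$, not the actual representation growth of $\mathrm{SL}_d(\bZ_p)$), the following Python sketch shows $\log(R_N)/\log N$ approaching the abscissa of convergence, which here equals $2$, and shows the partial sums of $\zeta(s)$ remaining bounded for $s>2$ while growing without bound for $s<2$.
\begin{verbatim}
import math

def r(n):
    # hypothetical representation count with polynomial growth r_n = n;
    # then zeta(s) = sum n^{1-s} has abscissa of convergence 2
    return n

for N in (10**2, 10**4, 10**6):
    R_N = sum(r(n) for n in range(1, N + 1))
    print(N, math.log(R_N) / math.log(N))        # tends to the abscissa, here 2

for s in (1.8, 2.2):
    partial = [sum(r(n) * n**(-s) for n in range(1, N + 1))
               for N in (10**3, 10**4, 10**5)]
    print(s, partial)   # unbounded for s < 2, bounded (convergent) for s > 2
\end{verbatim}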
The new idea of this paper is that it is possible to use the machinery of \cite{AA} to deduce upper-bounds for the abscissa of convergence by looking at the singularities of the ${\mathrm{SL}}_d$-character variety rather than at those of the ${\mathrm{SL}}_d$-representation variety. As we will see in the next section, the methods used by \cite{AA} are only capable of computing the values of the representation zeta function at even natural numbers, therefore the present result is the best possible upper-bound obtainable with their techniques. Theorem \ref{thrmMain} has the following notable corollaries as pointed out by Aizenbud-Avni. \begin{cor}\label{corRat} Let $\pi_1(\Sigma_n)$ be the fundamental group at a fixed base point of a compact Riemann surface of genus $n$. Then the representation variety $$\homo (\pi_1(\Sigma_n),{\mathrm{SL}}_d)$$ has rational singularities over $\bQ$ for $n\ge 2$. \end{cor} This follows directly from Theorem \ref{thrmMain} together with \cite[Theorem IV]{AA}. The statement for the cases $n\ge 12$ was shown in \cite[Theorem VIII]{AA}. \begin{cor} Let $d$ be a positive integer. Then there exists a finite set $S$ of prime numbers such that for every global field $k$ of characteristic not in $S$ and every finite set $T$ of places of $k$ containing all archimedean ones, $$\alpha ({\mathrm{SL}}_d(O_{k,T}))\le 2,$$ where $$O_{k,T}=\{x\in k\mid \forall v\not\in T, \|x\|_v\le 1\},$$ with the exception of the case $d=2$ and $k$ equal to $\bQ$ or an imaginary quadratic extension of $\bQ$. In particular, $\alpha ({\mathrm{SL}}_d(\bZ))\le 2$ for $d\ge 3$. \end{cor} This follows directly from Corollary \ref{corRat} together with \cite[Theorem II]{AA2}, where a condition was phrased in terms of the congruence subgroup property for ${\mathrm{SL}}_d(O_{k,T})$. This property holds except for $d=2$ and $k$ equal to $\bQ$ or an imaginary quadratic extension of $\bQ$ by \cite{BMS}, hence the statement of the above corollary. The statement of the corollary with $\alpha\le 22$ was shown in \cite{AA2}. Here $\alpha ({\mathrm{SL}}_d(\bZ))$ is defined to be the abscissa of convergence of the representation zeta function of the group ${\mathrm{SL}}_d(\bZ)$, which in this case enumerates all irreducible complex representations and not only the continuous ones. Our argument for Theorem \ref{thrmMain} extends to other semi-simple algebraic groups if the associated character variety has rational singularities, that is, if the result of \cite{BS} generalizes. In this case, Corollary \ref{corRat} also generalizes to: \begin{prop} \label{prop:semisimple} Let $G$ be a connected, simply connected, semi-simple algebraic group defined over $\bQ$. Let $n\ge 2$. Then the representation variety $$\homo (\pi_1(\Sigma_n),G)$$ has rational singularities if and only if the character variety $$ \homo (\pi_1(\Sigma_n),G)\sslash G $$ has rational singularities. \end{prop} See Section \ref{secPf} for the definitions. The ``only if" direction is due to Boutot \cite{Bo}. The ``if" direction follows as in the proof of Corollary \ref{corRat}, by noting that the only new thing we have used is the fact that the ${\mathrm{SL}}_d$-character variety has rational singularities over $\bQ$. If $n\ge 374$, both spaces in the corollary admit rational singularities regardless of $G$, by \cite[Theorem B]{AA}. Hence the corollary could be a step to get rid of this uniform lower bound. 
Throughout the article, for a scheme $X$ and an algebra $A$, we denote by $X(A)$ the set of morphisms of $\bZ$-schemes $\spec(A)\rightarrow X$. A variety over a field $k$ is a $k$-scheme of finite type that is separated, reduced, and irreducible. \medskip {\it Acknowledgement.} The authors thank Benjamin Martin for his comments on an earlier version of this paper. The second author would like to thank Avraham Aizenbud and Nir Avni for answering a question on their techniques to pass from a bound for the local abscissa to a bound for the global abscissa. The authors also thank the referees for very important comments. The first author was partly sponsored by a research grant STRT/13/005 and a Methusalem grant METH/15/026 from KU Leuven, and by the research projects \allowbreak G0B2115N and G0F4216N from the Research Foundation - Flanders (FWO). The second author is currently supported by the research project G.0792.18N of FWO. \section{Reduction to finite volumes} Let $G$ be an algebraic group defined over $\bZ$. Let $K$ be a non-archimedean local field containing $\bQ$. The set of $K$-rational points $G(K)$ of $G$ is naturally a $K$-analytic manifold, since in characteristic zero every algebraic group is smooth. Let $\Sigma_n$ be a compact Riemann surface of genus $n$, and let $\pi_1(\Sigma_n)$ be the fundamental group of $\Sigma_n$ based at a fixed point. For a compact open subgroup $\Gamma$ of the $K$-analytic group $G(K)$, define $$ R(n,\Gamma)^o:=\{\rho\in \homo (\pi_1(\Sigma_n), \Gamma)\mid \text{ the closure of the image of }\rho\text{ is open}\}. $$ In this case, the quotient $$R(n,\Gamma)^o/\Gamma$$ by the conjugation action admits the structure of a $K$-analytic manifold, together with a specific volume form $v_\Gamma$ by \cite[Section 4.3]{AA}. We will use the following fascinating relation between the volume of $R(n,\Gamma)^o/\Gamma$ and values of $\zeta_\Gamma(s)$ due to Aizenbud-Avni: \begin{thrm}\label{thrmVI}{\rm{(}\cite[Theorem VI]{AA}\rm{)} } Let $G$ be a connected, simply connected, semi-simple algebraic group over $\bQ$. Let $K$ be a non-archimedean local field containing $\bQ$. Let $\Gamma$ be a compact open subgroup of $G(K)$. Then there exists a non-zero constant $c_{G,\Gamma}$ such that $$ \int_{R(n,\Gamma)^o/\Gamma} |v_\Gamma | = c_{G,\Gamma}\cdot \zeta_\Gamma (2n-2) $$ for $n\ge 2$, if either of the two sides of the equation is finite. In other words, if one side converges, the other side converges too, and equals the value prescribed by the equation. \end{thrm} \begin{proof} Strictly speaking, the statement of Theorem VI in \cite{AA} is phrased slightly differently. There are two differences with respect to the way we stated it: The first difference is that a certain ``FRS property'' is assumed in {\it loc.\ cit.} This property was used by them to apply \cite[Theorem 3.4]{AA} to guarantee the finiteness on both sides of the equality on top of p.303 of \cite{AA}. If we assume however that one of those two sides is finite, the other is finite and equal to the former side since the equality is guaranteed by \cite[Corollary 3.6 (2)]{AA}. The extra assumption in \cite[Theorem VI]{AA} that the characteristic of the residue field of $K$ must be $>2$ was not used in their proof, hence we can skip it. Similarly, the first equality in their proof of \cite[Theorem VI]{AA} on p.302 uses \cite[Proposition I]{AA} and it is stated with a finiteness assumption on $\zeta_\Gamma (2n-2)$.
This finiteness assumption can be dropped, in the sense that the equality is true as soon as one of the sides is finite, as is shown in their proof of \cite[Proposition I]{AA}. \end{proof} If $\zeta_\Gamma (s)$ converges for $s=s_0$, then it converges for $Re(s)>Re(s_0)$ and the function thus defined is holomorphic, \cite[Corollary 1, p. 66]{S}. In particular, to show Theorem \ref{thrmMain} it is enough by Theorem \ref{thrmVI} to prove the following, which is the main technical result of this article: \begin{thrm}\label{mainTec} With the assumptions as in Theorem \ref{thrmVI}, the volume of $R(n,\Gamma)^o/\Gamma$ with respect to $v_\Gamma$ is finite when $\Gamma$ is a compact open subgroup of $\mathrm{SL}_d(K)$ and $n\ge 2$. \end{thrm} The proof of this theorem will be based on the essential features of the form $v_\Gamma$ which we recall next. \section{Proof of Theorem \ref{mainTec}}\label{secPf} Let $G={\mathrm{SL}}_d$ for a positive integer $d$. This is an algebraic group defined over $\bZ$. Let $$ R(n,G) := \homo (\pi_1(\Sigma_n),G) $$ be the representation variety. This is an affine scheme of finite type defined over $\bZ$ which is a fine moduli space for the functor from commutative rings to sets given by \begin{align*} A\mapsto R(n,G)(A)&=\homo (\pi_1(\Sigma_n),G(A)) =\\ &= \{(g_1,h_1,\ldots, g_n,h_n)\in G(A)^{\times 2n}\mid [g_1,h_1]\cdots[g_n,h_n]=1 \}. \end{align*} There is a group scheme action of $G$ on $R(n,G)$ defined over $\bZ$ given by conjugation. Let $$M(n,G):=R(n,G)_\bQ\sslash G_\bQ=\spec A^{G_\bQ}$$ be the categorical quotient, where the subscript $(\_)_\bQ$ denotes the base change to $\bQ$, $A$ is the ring of global sections of the structure sheaf of $R(n,G)_\bQ$, and $A^{G_\bQ}$ is the ring of invariants $$ A^{G_\bQ}:=\{a\in A\mid m(a)=a\otimes 1\}, $$ where $m:A\rightarrow A\otimes_\bQ \Gamma(G_\bQ,\cO_{G_\bQ})=A\otimes_\bQ(\Gamma(G,\cO_G)\otimes\bQ)$ is the algebra homomorphism giving the action $G_\bQ\times R(n,G)_\bQ\rightarrow R(n,G)_\bQ$. By \cite[Theorem 1.1 on p.27]{Mu}, $M(n,G)$ is also a universal categorical quotient. In particular, for a field extension $\bQ\subset k$, the base change $$ M(n,G)_k=M(n,G)\times_{\spec \bQ} \spec k =\spec (A^{G_\bQ}\otimes_\bQ k) $$ is the spectrum of the $k$-algebra $$ (A\otimes_\bQ k)^{G_{ k}}, $$ where $G_{ k}=G\times_{\spec\bQ} \spec k$. In particular $ M(n,G)_\bC $ is an affine $\bC$-scheme of finite type, equal to the spectrum of the $\bC$-algebra of invariants $ (A\otimes\bC)^{G(\bC)}, $ commonly denoted $R(n,G(\bC))\sslash G(\bC)$ in the literature.\par We will make use of the following result due to Bellamy-Schedler \cite[Theorem 1.18]{BS}: \begin{thrm}\label{thrmSymp}{\rm {(Bellamy-Schedler)}} $M(n,G)_\bC$ is a complex variety with algebraic symplectic singularities if $G=\mathrm{SL}_d$ and $n\ge 1$. \end{thrm} By definition, a normal complex variety has (complex) algebraic symplectic singularities if there exists an algebraic symplectic 2-form on the smooth locus extending to an algebraic 2-form on any resolution of singularities. Recall that, by Beauville \cite{B}, holomorphic, and hence also complex algebraic, symplectic singularities are Gorenstein rational singularities.
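
To make the defining relation of $R(n,G)$ concrete, one can enumerate its points over a small finite ring by brute force; this is only an illustration of the moduli description, since the rest of the paper works over fields of characteristic zero. The following Python sketch counts the quadruples $(g_1,h_1,g_2,h_2)$ in $\mathrm{SL}_2$ of the field with two elements satisfying $[g_1,h_1][g_2,h_2]=1$, i.e.\ the points of $R(2,\mathrm{SL}_2)$ over that field.
\begin{verbatim}
import itertools
import numpy as np

# All elements of SL_2 over the field with two elements (determinant 1 mod 2).
G = [np.array(m, dtype=int).reshape(2, 2) for m in itertools.product([0, 1], repeat=4)]
G = [m for m in G if (m[0, 0] * m[1, 1] - m[0, 1] * m[1, 0]) % 2 == 1]

def inv(m):
    # inverse of a 2x2 matrix of determinant 1: the adjugate, reduced mod 2
    return np.array([[m[1, 1], -m[0, 1]], [-m[1, 0], m[0, 0]]]) % 2

def comm(g, h):
    return (g @ h @ inv(g) @ inv(h)) % 2

I = np.eye(2, dtype=int)
count = sum(
    1
    for g1, h1, g2, h2 in itertools.product(G, repeat=4)
    if np.array_equal((comm(g1, h1) @ comm(g2, h2)) % 2, I)
)
print(len(G), count)   # 6 group elements; 'count' points of R(2, SL_2) over this field
\end{verbatim}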
For our purposes, working over $\bC$ is an underkill, so next we argue that by looking at the coefficients of the polynomials defining our scheme and the symplectic form, we can restrict to $\bQ$: \begin{lemma}\label{lemQ} If $G=\mathrm{SL}_d$ and $n\ge 1$, then $M(n,G)$ is a variety over $\bQ$ with Gorenstein rational singularities over $\bQ$. \end{lemma} \begin{proof} By Theorem \ref{thrmSymp}, $M(n,G)_\bC$ is a normal variety over $\bC$. This implies that $M:=M(n,G)$ is a normal variety over $\bQ$, see \cite[\href{http://stacks.math.columbia.edu/tag/038L}{Tag 038L}]{St}. In particular, it is a variety over $\bQ$. Let $h:\tilde{M}\rightarrow M$ be a strong resolution of singularities over $\bQ$ of $M$, that is, a proper birational morphism which is an isomorphism above the regular locus of $M$. To show that $M$ has rational singularities over $\bQ$, we will show that $R^ih_*\cO_{\tilde{M}}=0$ for $i>0$. Let $h_\bC:\tilde{M}_\bC\rightarrow M_\bC$ be the base change of $h$ to $\bC$. Since regularity is preserved after base change to $\bC$, $h_\bC$ is a strong resolution of singularities of $M_\bC$ over $\bC$. Since $M_\bC\rightarrow M$ is a base change of the flat morphism $\spec(\bC)\rightarrow\spec(\bQ)$, it is also a flat morphism. By flat base change \cite[\href{http://stacks.math.columbia.edu/tag/02KE}{Tag 02KE}]{St}, $R^i(h_\bC)_*\cO_{\tilde{M}_\bC}=(R^ih_*\cO_{\tilde{M}})\otimes_\bQ\bC$ for all $i\ge 0$. By Theorem \ref{thrmSymp}, $R^i(h_\bC)_*\cO_{\tilde{M}_\bC}=0$ for $i>0$ since $M_\bC$ has rational singularities over $\bC$. It follows that $R^ih_*\cO_{\tilde{M}}=0$ for $i>0$, and hence $M$ has rational singularities over $\bQ$. Finally, $M$ is Gorenstein since $M_\bC$ is, by \cite[\href{http://stacks.math.columbia.edu/tag/0C02}{Tag 0C02}]{St}. \end{proof} We recall now the essential features of the volume form $v_\Gamma$, following \cite[4.3]{AA}. Firstly, there is an open subscheme $$R(n,G)^{o}\subset R(n,G)$$ defined over $\bZ$, invariant under the action of $G$ by conjugation, such that $R(n,G)^{o}_\bQ$ is a smooth variety over $\bQ$, and such that for any algebraically closed field $k$ of characteristic zero, $R(n,G)^{o}(k)$ consists of representations $\pi_1(\Sigma_n)\rightarrow G(k)$ with image generating a Zariski dense subgroup. Moreover, the geometric quotient $$M(n,G)^{o}=R(n,G)^{o}_\bQ/ G_\bQ$$ exists and is a smooth $\bQ$-subvariety of $M(n,G)$. On $M(n,G)^{o}$ one has an algebraic 2-form. Over $\bC$, this is the classical 2-form constructed by Atiyah, Bott, and Goldman, which extends to the whole smooth locus of $M(n,G)_\bC$ to give the symplectic structure from Theorem \ref{thrmSymp}. We do not repeat here the definition, but only point out that from the proof in \cite{BS} of Theorem \ref{thrmSymp}, one sees easily that this algebraic symplectic 2-form on the smooth locus of $M(n,G)_\bC$ is defined over $\bQ$, hence it gives a 2-form $$\eta_G\in \bigwedge^2\Omega^1_{M(n,G)^{sm}/\bQ},$$ on the smooth locus of $M(n,G)$. We will now use: \begin{prop}{{\rm}(}\cite[Corollary 3.10]{AA}{{\rm )}} Let $X$ be a variety with rational singularities over a non-archimedean local field of characteristic zero $K$, and let $\omega$ be a top differential form on $X^{sm}$. Then for any $A\subset X(K)$, the integral $$ m(A)=\int_{A\cap X^{sm}(K)}|\omega| $$ defines a Radon measure, i.e. finite on compact subsets. 
\end{prop} Here, if $\omega=gdx_1\wedge\ldots dx_n$ on a compact open subset $U$ of $X^{sm}(K)$ analytically diffeomorphic to an open subset of $K^n$, then $|\omega |(U)$ is by definition $\int_U|g|d\lambda$, where $|g|$ is the normalized absolute value on $K$ and $\lambda$ is the standard additive Haar measure on $K^n$. This definition extends uniquely to a non-negative Borel measure $|\omega|$ on $X^{sm}(K)$, see \cite[3.1]{AA}. To use this proposition, consider the base change of $M(n,G)$ to $K$. Then Lemma \ref{lemQ} holds with $\bQ$ replaced by $K$. This together with the previous proposition implies: \begin{cor} \label{cor:Radon} Let $n\ge 2$ and $G=\mathrm{SL}_d$. Let $$v_G=(\eta_G)^{\wedge \dim M(n,G) /2 }$$ be the top form on $M(n,G)^{sm}$ generated by $\eta_G$. Then for any $A\subset M(n,G)(K)$, the integral $$m(A)=\int_{A \cap M(n,G)^{sm}(K)} |v_{G}| $$ defines a Radon measure. \end{cor} \begin{proof}[Proof of Theorem \ref{mainTec}] Let $p$ be the residue field characteristic of $K$ and let $\mathcal{O}_K$ be its ring of integers. The group $G(K)$ has a natural analytic structure, which, by \cite[16.6.(i)]{dixsauman1999analytic}, implies that it is an $\mathcal{O}_K$-analytic group in the sense of \cite[Definition~13.8]{dixsauman1999analytic}. The topological group $\Gamma$, therefore, can be given the structure of a $p$-adic analytic group by \cite[Theorem~13.23]{dixsauman1999analytic}; and hence, by \cite[Corollary~8.34]{dixsauman1999analytic}, it contains a finite index open {uniform pro-$p$ subgroup}, say $\Gamma_1$. See \cite[Definition~4.1]{dixsauman1999analytic} for the definition of uniform pro-$p$ group. Since the abscissas $\alpha_\Gamma$ and $\alpha_{\Gamma_1}$ are equal by \cite[Lemma~2.2]{LM}, the representation zeta functions of $\Gamma$ and $\Gamma_1$ have the same domain of convergence. We may therefore assume that $\Gamma$ is a uniform pro-$p$ group. Consider now the natural map of $K$-analytic manifolds $$ q : R(n,\Gamma)^o/\Gamma \rightarrow M(n,G)^{sm}(K). $$ The map $q$ is \'{e}tale onto the image, with finite fibres (cf.\ Lemma~\ref{lem:finite_fibres}) and by definition the volume form $v_\Gamma$ on the source of this map is the pull-back of $v_G$. We will show in Lemma \ref{lemBDD} below that the number of points in the fibres of $q$ is bounded. Hence by Corollary \ref{cor:Radon}, to prove Theorem \ref{mainTec} it is enough to find a compact subset in $M(n,G)(K)$ containing the image under $q$ of $R(n,\Gamma)^o/\Gamma$. For this purpose, denote by $$ Q: R(n,G)(K)\rightarrow M(n,G)(K) $$ the map defined by the categorical quotient, and by $$ \Phi: G(K)^{\times 2n}\rightarrow G(K) $$ the map $$(g_1,h_1,\ldots, g_n,h_n)\mapsto [g_1,h_1]\cdot\ldots\cdot[g_n,h_n].$$ Note that $$ R(n,G)(K)=\Phi^{-1}(1). $$ We will show that the image $$ Q(\Phi^{-1}(1)\cap \Gamma^{\times 2n}) $$ is compact and contains the image $q(R(n,\Gamma)^o/\Gamma)$. This will finish the proof. Since the image of a compact set under a continuous map is compact, to show the first claim it suffices to show that $\Phi^{-1}(1)\cap \Gamma^{\times 2n}$ is compact. Indeed, by hypothesis, $\Gamma\subset G(K)$ is compact, and so $\Gamma^{\times 2n}\subset G(K)^{\times 2n}$ is also compact. Since $\Phi^{-1}(1)$ is closed in $G(K)^{\times 2n}$, $\Phi^{-1}(1)\cap \Gamma^{\times 2n}$ is closed in $\Gamma^{\times 2n}$ and hence compact. For the second claim, note that \begin{equation}\label{eqRo} R(n,\Gamma)^o=[R(n,G)^o(K)]\cap [\Phi^{-1}(1)\cap \Gamma^{\times 2n}]. 
\end{equation} Hence $$ q(R(n,\Gamma)^o/\Gamma)=Q(R(n,\Gamma)^o) $$ and the latter is contained in $Q(\Phi^{-1}(1)\cap \Gamma^{\times 2n})$. \end{proof} \begin{rmk} As explained in the introduction, Corollary~\ref{corRat} now follows directly from Theorem~\ref{mainTec} and \cite{AA}. A similar argument also shows Proposition~\ref{prop:semisimple}. Indeed let $G$ be a connected, simply connected, semi-simple algebraic group defined over $\bQ$, and let $n\in\bZ_{>0}$ be such that the character variety $M(n,G)$ has rational singularities. Corollary~\ref{cor:Radon}, holds also in this more general setting and the proof of Theorem~\ref{mainTec} also carries over: the argument reducing the problem to the case in which $\Gamma$ is a uniform pro-$p$ group goes through unchanged, and the map $q$ has fibres of bounded cardinality too, as we shall see in Section~\ref{sec:bounded_fibres}. It follows that for all non-archimedean local fields $K$ of characteristic $0$ and all compact open $\Gamma\leq G(K)$, the abscissa $\alpha_\Gamma < 2n -2$. So, by \cite[Theorem~IV]{AA}, $\homo(\pi_1(\Sigma_n),G)$ has rational singularities. \end{rmk} \section{Boundedness of $\Gamma$-orbits in a $G(K)$-orbit} \label{sec:bounded_fibres} We address here the last technical issue left open in the proof of Theorem \ref{mainTec}. Let $G$ be a semi-simple algebraic group defined over $\bQ$ and let $K$ be a non-archimedean local field of characteristic zero. \begin{lemma}\label{lemBDD} Let $\Gamma$ be an open uniform pro-$p$ subgroup of $G(K)$. Then there exists $N_\Gamma>0$ such that the number of points in every fibre of the \'{e}tale map $$ q : R(n,\Gamma)^o/\Gamma \rightarrow M(n,G)^{sm}(K). $$ is smaller than $N_\Gamma$. \end{lemma} The rest of this section is dedicated to the proof of this lemma. \newcommand{\orbit}[2]{\mathcal{O}_{#1}{(#2)}} \newcommand{K}{K} \newcommand{\con}[2]{#1.#2} \newcommand{\mathbf{Z}}{\mathbf{Z}} Recall that $q$ is an \'etale map onto the image between two analytic manifolds, which means that it is a local analytic diffeomorphism. We start by proving that the fibres of $q$ are indeed finite. \begin{lm} \label{lem:finite_fibres} The fibres of $q$ have finite cardinality. \begin{proof} Let $x \in R(n,G)^{o}(K)$ and let $X$ be its $G(K)$-orbit. The fibre of $q$ above $X$ is $(X \cap \Gamma^{\times 2n})/\Gamma$, where $\Gamma$ acts as a subgroup of $G(K)$. By \cite[Lemma~4.8~(3)]{AA} $X$ is Zariski closed in $G(K)$ and so $X \cap \Gamma^{\times 2n}$ is compact, which implies that the orbit space $(X \cap \Gamma^{\times 2n})/\Gamma$ is compact as well. This proves that the fibre in hand is finite because $q$ is a local diffeomorphism, hence its fibres are discrete. \end{proof} \end{lm} Let $g,h \in G(K)$, we write $\con{g}{h} = g h g^{-1}$. By extension, if $x = (x_1,y_1\dots,x_n,y_n) \in G(K)^{\times 2n}$, we write $\con{g}{x} = (\con{g}{x_1},\con{g}{y_1}\dots,\con{g}{x_n},\con{g}{y_n})$. Fix throughout $x = (x_1,y_1\dots,x_n,y_n)\in R(n,\Gamma)^{o}$ and let $X$ be its $G(K)$-orbit as in the proof of the previous lemma. Let also $H_x = \overline {\langle x_1,y_1,\dots,x_n,y_n\rangle}$ be the subgroup of $\Gamma$ topologically generated by the entries of $x$, where the overline will denote closure from now on. Recall that $H_x$ is open in $\Gamma$ because $x\in R(n,\Gamma)^{o}$, and so it has finite index in $\Gamma$. In order to show that $q$ has fibres of bounded cardinality, we shall show that $X \cap \Gamma^{\times 2n}$ splits in a fixed number of $\Gamma$-orbits. 
We start by finding a better description for this set. Indeed, we shall show that, for $g\in G(K)$, the $2n$-tuple $\con{g}{x}$ lies in $\Gamma^{\times 2n}$ if and only if $g$ normalizes $\Gamma$. For this purpose, notice that $$\con{g}{x}\in\Gamma^{\times 2n}\iff \con{g}{H_x} = H_{\con{g}{x}}\leq \Gamma.$$ In other words it suffices to show that $g$ normalizes $\Gamma$ whenever we find an open subgroup $H\leq\Gamma$ such that $\con{g}{H}\leq \Gamma$. Before doing so, we recall some properties of uniform pro-$p$ groups that we shall need for this proof. The group $\Gamma$ is uniform and therefore finitely generated by definition. Let $\lbrace \gamma_1, \dots ,\gamma_r\rbrace$ be a minimal topological generating set for $\Gamma$. By definition, a uniform pro-$p$ group is powerful. So, by \cite[Proposition~3.7]{dixsauman1999analytic}, \[ \Gamma = \overline{ \langle\gamma_1\rangle}\cdots\overline{ \langle\gamma_r\rangle}. \] Hence, for $\gamma\in\Gamma$ there are $\lambda_1,\dots,\lambda_r\in\mathbf{Z}_p$ such that $\gamma = \gamma_1^{\lambda_1}\cdots\gamma_r^{\lambda_r}$. Here, for $m\in\mathbf{Z}_p$, we write $\gamma^{m}$ to mean the element $(\gamma^{m_1},\gamma^{m_2},\dots)$ in the pro-finite completion of $\langle \gamma\rangle$, where $m = (m_1, m_2,\dots)\in\mathbf{Z}_p = \varprojlim_{j\in\mathbf{Z}_{> 0}} \mathbf{Z}/p^j\mathbf{Z}$. By \cite[Theorem~4.9]{dixsauman1999analytic} the function $\Gamma \rightarrow \mathbf{Z}_p^r$ sending $\gamma$ to $(\lambda_1,\dots,\lambda_r)$ is a homeomorphism and it becomes an isomorphism of $\mathbf{Z}_p$-modules if $\Gamma$ is endowed with the $\mathbf{Z}_p$-module structure described in \cite[Section~4.3]{dixsauman1999analytic} where left multiplication is defined by $m \gamma = \gamma^m$. The next lemma introduces a left- and right-translation invariant metric on the locally compact group $G(K)$. Let \[ V_i = \overline{\langle \gamma_1^{p^{i}},\dots,\gamma_r^{p^{i}}\rangle}\,\,\,(i\in\mathbf{Z}_{\geq 0}). \] Then $\lbrace V_i\rbrace_{i\in\mathbf{Z}_{\geq 0}}$ is a decreasing countable compact (open) base at the identity. Moreover, if $\mu$ is the Haar measure on $G(K)$ such that $\mu(\Gamma) = 1$, $\mu(V_i) = p^{-ir}$ for all $i\in\mathbf{Z}_{\geq 0}$. In what follows we denote by $\lvert m\rvert_p$ the $p$-adic absolute value of $m\in\mathbf{Z}_p$ and with $\|(\lambda_1,\dots,\lambda_r)\|_p = \max_{i = 1,\dots, r} \lvert \lambda_i\rvert_p$ the $p$-adic norm of $(\lambda_1,\dots,\lambda_r)\in\mathbf{Z}_p^r$. In addition, if $A$ and $B$ are two sets, we denote by $A \triangle B = (A \cup B)\smallsetminus (A\cap B)$ the symmetric difference of $A$ and $B$. \begin{lm} \label{lem:metric} The function \[ \rho(a,b) = \sup_{i\in\mathbf{Z}_{\geq 0}}\mu(aV_i \triangle bV_i) \] defines a left- and right-translation invariant metric on $G(K)$. Moreover, if $\gamma = \prod_{j = 1}^{r} \gamma_j^{\lambda_j}$ for $\lambda_1,\dots,\lambda_r\in\mathbf{Z}_p$, then $\rho(1, \gamma) =2 (p^{-1} \| (\lambda_1,\dots,\lambda_r)\|_p)^r$. \begin{proof} With the exception of the right-invariance, the first part is just \cite[Lemma~1]{str1974metrics}. To show right-invariance and the second part, we use the properties of the filtration $\lbrace V_i\rbrace_{i\in \mathbf{Z}_{\geq 0}}$. First of all, let $a,b\in G(K)$. For all $i\in \mathbf{Z}_{\geq 0}$, $V_i$ is a subgroup of $G(K)$, so \[ aV_i\triangle bV_i = \begin{cases} aV_i \cup bV_i & \text{ if } ab^{-1}\notin V_i \\ \emptyset & \text{ otherwise}. 
\end{cases} \] Thus $\mu(aV_i\triangle bV_i) = 2 \mu(V_i) = 2 p^{-r i}$ if $ ab^{-1}\notin V_i$ and $0$ otherwise. This implies that, for $g\in G(K)$, $\mu(agV_i\triangle bgV_i) = \mu(aV_i\triangle bV_i)$, because $ab^{-1} = agg^{-1}b^{-1}$. Thus $\rho$ is right-translation invariant too. The last observation also shows the last part of the statement, as we notice that $\gamma\in V_i$ if $p^{-i} \geq \| (\lambda_1,\dots,\lambda_r)\|_p$ and $\gamma\notin V_i$ otherwise. \end{proof} \end{lm} Let $N$ be the normalizer of $\Gamma$ in $G(K)$. \begin{lm} \label{lem:normalizer} If $H$ is an open subgroup of $\Gamma$ and $g\in G(K)$ is such that $\con{g}{H}\leq \Gamma$, then $g\in N$. \begin{proof} Since $\Gamma$ is uniform, it is homeomorphic to $\mathbf{Z}_p^r$ and hence we may assume that $H$ is topologically generated by $\gamma_1^{\eta},\dots, \gamma_r^{\eta}$ for some $\eta\in\mathbf{Z}$. Moreover, by hypothesis, $\con{g}{H}\leq \Gamma$; so for each $i \in\lbrace 1,\dots, r\rbrace$ \[ (\con{g}{\gamma_i})^{\eta} = \gamma_1^{\lambda_{i1}}\cdots \gamma_r^{\lambda_{ir}}, \] for some $\lambda_{i1},\dots,\lambda_{ir}\in\mathbf{Z}_p$. The invariance under left- and right-translations of the metric $\rho$ implies that $\rho(1, \con{g}{(\gamma_i)^{\eta}} ) = \rho(1,\gamma_i^{\eta})$ so, by Lemma~\ref{lem:metric}, \[ \vert \eta\rvert_p \allowbreak = \| (\lambda_{i1},\dots,\lambda_{ir})\|_p. \] Hence $\eta^{-1}(\lambda_{i1},\dots,\lambda_{ir})\in \mathbf{Z}_p^{r}$ and therefore $\con{g}{\gamma_i} \in \Gamma$ because the function associating an element of $\Gamma$ with its exponents for the generating set $\lbrace\gamma_1,\dots,\gamma_r\rbrace$ is a $\mathbf{Z}_p$-module isomorphism between the $\mathbf{Z}_p$-module $\Gamma$ and $\mathbf{Z}_p^r$. \end{proof} \end{lm} Lemma~\ref{lem:normalizer} shows that $g\in N$ whenever the conjugation by $g\in G(K)$ sends $H_x$ to a subgroup of $\Gamma$. This means exactly that for each $y \in X\cap \Gamma^{\times 2n}$, there is $h\in N$ such that $y = \con{h}{x}$, and hence that $X\cap \Gamma^{\times 2n} = \con{N}{x}$. The next lemma finishes the proof of Lemma~\ref{lemBDD}. Let $Z = Z(G(K))$. Then $\Gamma Z\leq N$ because $\Gamma$ and $Z$ are both subgroups of $N$ and commute with each other. \begin{lm} The fibres of $q$ have cardinality $[N:\Gamma Z]$. \begin{proof} We show that the set of $\Gamma$-orbits $(X \cap \Gamma^{\times 2n})/\Gamma$ is in bijection with the set of left $\Gamma Z$-cosets $N/\Gamma Z$. For $y\in X\cap \Gamma$, let $\orbit{\Gamma}{y}$ denote its $\Gamma$-orbit. We shall show that the following rule defines a bijective map: \[ \varphi:\xymatrix@R=3pt{N/\Gamma Z \ar[r] &(X \cap \Gamma^{\times 2n})/\Gamma\\ h\Gamma Z \ar@{|->}[r] &\orbit{\Gamma}{h.x}.} \] We start by proving that $\varphi$ is well-defined. Let $h \in N$ and $\gamma z \in\Gamma Z$. Since $N$ normalizes $\Gamma$, there exists $\gamma'\in \Gamma$ such that $h\gamma = \gamma' h$. Thus $\orbit{\Gamma}{\con{h}{x}} = \orbit{\Gamma}{\con{\gamma'h}{x}} = \orbit{\Gamma}{\con{h\gamma z}{x}}$ because $Z$ acts trivially on $\Gamma^{\times 2n}$.\par The map $\varphi$ is surjective because $X\cap \Gamma = \con{N}{x}$ as Lemma~\ref{lem:normalizer} and subsequent discussion show. We now show that $\varphi$ is injective. Take $h,h'\in N$ such that $\orbit{\Gamma}{\con{h}{x}} = \orbit{\Gamma}{\con{h'}{x}}$. There is $\gamma\in \Gamma$ such that \[ \gamma \con{h'}{x} = \con{h}{x}. 
\] Hence $h^{-1}\gamma h = z\in Z$ because by the proof of \cite[Lemma~4.8~(3)]{AA} the stabilizer of $x$ for the $G(K)$-action on $R(n,G)^{o}(K)$ is $Z$. It follows that $\gamma h' = h z = z h$ and so $h^{-1}h' = \gamma^{-1} z\in \Gamma Z$, showing that $\varphi$ is injective. \end{proof} \end{lm}
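
The coset computation behind Lemma~\ref{lem:metric} can be checked numerically in an abelian toy model, with $\Gamma$ replaced by $\mathbf{Z}_p^r$, $V_i = p^i\mathbf{Z}_p^r$ and everything truncated modulo $p^N$; this model has $\mu(V_i)=p^{-ri}$ and $\gamma\in V_i$ exactly when $\|\lambda\|_p\le p^{-i}$, which is all that the computation uses. The following Python sketch (illustrative only, not the non-abelian setting of the lemma) compares the supremum defining $\rho(1,\gamma)$ with the closed form $2(p^{-1}\|\lambda\|_p)^r$ for sample parameters.
\begin{verbatim}
import itertools
from fractions import Fraction

p, r, N = 3, 2, 5                 # prime, rank, truncation exponent
lam = (9, 6)                      # exponents of gamma: p-adic valuations 2 and 1

def vp(x):
    v = 0
    while x % p == 0:             # p-adic valuation of a nonzero integer
        x //= p
        v += 1
    return v

norm = Fraction(1, p ** min(vp(x) for x in lam))          # ||lambda||_p = 1/3 here

group = list(itertools.product(range(p ** N), repeat=r))  # model of Gamma: (Z/p^N)^r
mu = Fraction(1, len(group))                              # normalized Haar measure

vals = []
for i in range(N):
    Vi = {g for g in group if all(x % p ** i == 0 for x in g)}
    shifted = {tuple((x + s) % p ** N for x, s in zip(g, lam)) for g in Vi}
    vals.append(mu * len(shifted ^ Vi))                   # mu(gamma V_i symm. diff. V_i)
rho = max(vals)
print(rho, 2 * (norm / p) ** r)   # both equal 2/81 for these parameters
\end{verbatim}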
\section{Introduction} The RR Lyrae variables are known, primarily from studies of globular clusters, to lie on or immediately above the Horizontal branch (HB) in an HR diagram. Globular cluster studies also show a class of variable stars lying in an instability strip in an HR diagram which extends approximately 3 mag above the HB. Variables with similar characteristics are also found in the general field, both in the halo and disc. All these variables, both in clusters and the field are classed together as ``type II Cepheids" (CephIIs). These stars have been divided into three classes according to their periods. Those of short period (roughly $P < 7$ days) are called BL Her stars, whilst longer period ones (up to $P \sim 20$ days) are called W Vir stars. At even longer periods, many of the CephIIs show characteristic alternations of deep and shallow minima and are classed as RV Tau stars. This subdivision of CephIIs has not been universally adopted. Thus Sandage \& Tammann (2006) review and summarize a system of classification based on light-curve parameters that relate to their population characteristic and these partially correlate with their metallicities. It should be noted that the ``population II Cepheids" with which Sandage \& Tammann are primarily concerned are a subset of the ``type II Cepheids". Maas et al. (2007) have shown that the shorter period CephIIs in the general field differ from those of longer period in their detailed chemical composition. The short period stars are generally believed (Gingold 1976, 1985) to be evolving across the instability strip from the HB towards the AGB. The longer period stars, on the other hand, are believed to be on blueward excursions into the instability strip from the AGB due to shell flashing. In the present paper we discuss the luminosities of RR Lyrae and CephIIs on the basis of the revised Hipparcos trigonometrical parallaxes (van Leeuwen 2007a, see also van Leeuwen 2007b) and newly derived pulsation parallaxes for three CephII variables. \section{Period-Luminosity and Metallicity-Luminosity Relations} Here we present various relations which are required in the interpretation of our data. \subsection{Relationships for RR Lyrae variables} It has long been thought that the luminosities of RR Lyrae variables can be expressed in the form: \begin{equation} M_{V} = a [Fe/H] + b \end{equation} However, the values of $a$ and $b$ have been much disputed, as has the question of the linearity of the equation. In the following we adopt: \begin{equation} M_{V} = 0.214 [Fe/H] + (19.39 -Mod(LMC)). \end{equation} This is based on RR Lyraes in the LMC (Gratton et al. 2004). Adopting an LMC modulus of 18.39\footnote{This includes a correction for metallicity effects based on Marci et al. 2006.} as derived from classical Cepheids (Benedict et al. 2007, van Leeuwen et al. 2007), the constant term becomes:\\ $b=+1.0$.\\ The LMC RR Lyraes, on which this relation is based, cover a range in [Fe/H] from $\sim -0.8$ to $-2.2$, but are mainly concentrated between $-1.3$ and $-1.8$. There is evidence, however, that the slope of the relation is not universal. Clementini et al. (2005) find that in the Sculptor dwarf spheroidal, over roughly the same metallicity range, the slope is $0.092 \pm 0.027$ compared with the LMC $0.214 \pm 0.047$ and they suggest that the Sculptor RR Lyraes are on average more evolved than those in the LMC. 
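
As a worked example of how eq. 2 is applied (the input values below are illustrative, not measurements used in this paper), the following Python snippet evaluates $M_{V}$ for a given metallicity and converts a dereddened apparent magnitude into a distance modulus, adopting the conventional total-to-selective extinction ratio $R_V=3.1$.
\begin{verbatim}
def M_V(feh):
    # Absolute V magnitude from eq. (2): M_V = 0.214 [Fe/H] + 1.0
    return 0.214 * feh + 1.0

# Hypothetical RR Lyrae star: [Fe/H] = -1.5, mean apparent magnitude V = 10.50,
# reddening E(B-V) = 0.03, with the conventional ratio R_V = 3.1 assumed.
feh, V, ebv = -1.5, 10.50, 0.03
A_V = 3.1 * ebv
mu = V - A_V - M_V(feh)                   # distance modulus
print(round(M_V(feh), 3), round(mu, 3))   # M_V = 0.679, mu = 9.728
\end{verbatim}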
That there is a period-luminosity relation for RR Lyraes in the $K$ band (PL($K$)), possibly independent, or nearly independent of metallicity, goes back at least to the work of Longmore et al. (1986) on globular clusters. The most recent version of such a relation was given by Sollima et al. (2006) again based on globular clusters. The relative distances of the clusters came from main-sequence fitting and the zero-point of their final relation was set by a trigonometrical parallax of RR Lyrae itself (Benedict et al. 2002). They found: \begin{eqnarray} M_{K_{s}} = -2.38 (\pm 0.04) \log P + 0.08 (\pm 0.11) [Fe/H] - \nonumber\\ 1.05 (\pm 0.13), \end{eqnarray} where $K_{s}$ is the $K_{s}$ magnitude in the 2MASS system. The term in [Fe/H] is small and not statistically significant. \subsection{Relationships for type II Cepheids} In the past various PL relations for CephIIs at visual wavelengths have been suggested based primarily on globular cluster work. More recently, it was shown from globular cluster data that a well defined PL($K_{s}$) relation, with small scatter, applied (Matsunaga et al. 2006). The globular cluster distances were determined from a relation for horizontal branch stars similar to eq. 2 and we may write the Matsunaga CephII relation as: \begin{equation} M_{K_{s}} = -2.41 (\pm 0.05) \log P + c, \end{equation} where $c = 17.39 - Mod(LMC)$\\ and $c = -1.0$ for $Mod(LMC) = 18.39$ as above.\\ The (internal) standard error of the constant term is $\pm 0.02$ at the mean $\log P$ (1.120). Matsunaga et al. pointed out that RR Lyraes in clusters lay on an extrapolation of this relation to shorter periods. The subsequent work of Sollima et al. (2006) confirms this (compare eqs. 3 and 4). Matsunaga et al. examined their data for a metallicity effect on the PL($K$) zero-point and found a term, $-0.10 \pm 0.06$. This is clearly not significant and is of opposite sign to the metallicity term in the RR Lyrae relation (eq. 3) which is also not significant. This suggests that a combined RR Lyrae/CephII PL($K$) is virtually metal independent in globular clusters. Some caution is necessary in accepting this, however, since there are only four CephIIs in the Matsunaga sample with $[Fe/H] > -1.0$ and these all have periods greater than 10 days\footnote{But see the discussion of the field variables below.}. In addition to the above the following three PL relations at optical wavelengths will be required later. They are based on CephIIs in NGC\,6441 and 6388 and are taken directly from Pritzl et al. (2003). \begin{equation} M_{V} = -1.64(\pm 0.05) \log P + 0.05 (\pm 0.05), \end{equation} \begin{equation} M_{B} = -1.23(\pm 0.09) \log P + 0.31 (\pm 0.09), \end{equation} \begin{equation} M_{I} = -2.03(\pm 0.03) \log P - 0.36 (\pm 0.01). \end{equation} \section{The RR Lyrae Variables} \subsection{Data} Table~\ref{142rr} lists the data for 142 RR Lyrae variables. 
\begin{center} \onecolumn \begin{longtable}{rrrcrrrrcccc} \caption[Basic data used in the analysis.]{Basic data used in the analysis.}\label{142rr} \\ \hline \multicolumn{1}{c}{Hipparcos} & \multicolumn{1}{c}{name} & \multicolumn{1}{c}{$\pi$} & $\Delta \pi$ & \multicolumn{1}{c}{$V$} & \multicolumn{1}{c}{$J$} & \multicolumn{1}{c}{$H$}& \multicolumn{1}{c}{$K_{s}$} & P & [Fe/H] & E$_{(B-V)}$ & typ\\ & & \multicolumn{2}{c}{(mas)}& \multicolumn{4}{c}{(mag)} & (day)& & (mag)\\ \hline \endfirsthead \hline \multicolumn{1}{c}{Hipparcos} & \multicolumn{1}{c}{name} & \multicolumn{1}{c}{$\pi$} & $\Delta \pi$ & \multicolumn{1}{c}{$V$} & \multicolumn{1}{c}{$J$} & \multicolumn{1}{c}{$H$}& \multicolumn{1}{c}{$K_s$} & P & [Fe/H] & E$_{(B-V)}$ & typ\\ & & \multicolumn{2}{c}{(mas)}& \multicolumn{4}{c}{(mag)} & (day)& & (mag)\\ \hline \endhead \multicolumn{12}{l}{{Continued on Next Page\ldots}} \\ \endfoot \\ \hline \endlastfoot 226 & RU Scl & 0.99 & 1.96 & 10.220 & 9.474 & 9.294 & 9.229 & 0.493347 & --1.27 & 0.018 & \\ 320 & UU Cet & 1.59 & 5.73 & 12.080 & 11.137 & 10.863 & 10.837 & 0.606080 & --1.28 & 0.021 & \\ 1878 & SW And & --0.01 & 1.84 & 9.710 & 8.809 & 8.578 & 8.505 & 0.442262 & --0.24 & 0.038 & \\ 2655 & RX Cet & 3.24 & 4.74 & 11.440 & 10.606 & 10.378 & 10.319 & 0.573685 & --1.28 & 0.025 & \\ 4541 & W Tuc & 5.37 & 2.41 & 11.410 & 10.594 & 10.373 & 10.344 & 0.642260 & --1.57 & 0.021 & \\ 4725 & RU Cet & 7.14 & 4.62 & 11.680 & 10.597 & 10.487 & 10.465 & 0.586267 & --1.66 & 0.023 & \\ 5803 & RU Psc & 1.30 & 2.08 & 10.190 & 9.347 & 9.162 & 9.117 & 0.390333 & --1.75 & 0.043 & c \\ 6029 & XX And & --0.79 & 2.50 & 10.680 & 9.727 & 9.488 & 9.409 & 0.722755 & --1.94 & 0.039 & \\ 6094 & VW Scl & 2.34 & 2.79 & 11.030 & 10.418 & 10.193 & 10.136 & 0.510913 & --0.84 & 0.016 & \\ 6115 & AM Tuc & --1.93 & 2.28 & 11.670 & 10.865 & 10.617 & 10.563 & 0.405769 & --1.49 & 0.023 & c \\ 7149 & RR Cet & 0.48 & 1.85 & 9.730 & 8.829 & 8.623 & 8.520 & 0.553030 & --1.45 & 0.022 & \\ 7398 & VX Scl & 3.71 & 3.64 & 12.020 & 11.094 & 10.894 & 10.853 & 0.637058 & --2.25 & 0.014 & \\ 8163 & SV Scl & 5.50 & 2.37 & 11.380 & 10.718 & 10.596 & 10.543 & 0.377380 & --1.77 & 0.014 & c \\ 8939 & CI And & 0.77 & 5.87 & 12.280 & 11.182 & 11.018 & 11.185 & 0.484728 & --0.69 & 0.062 & \\ 9932 & SS For & 3.57 & 1.98 & 10.190 & 9.546 & 9.305 & 9.246 & 0.495424 & --0.94 & 0.014 & \\ 10491 & RV Cet & 2.16 & 2.70 & 10.920 & 9.903 & 9.580 & 9.520 & 0.623350 & --1.60 & 0.024 & \\ 11517 & RZ Cet & --0.04 & 4.92 & 11.850 & 11.031 & 10.787 & 10.737 & 0.510606 & --1.36 & 0.029 & \\ 12199 & CS Eri & 2.70 & 1.10 & 9.000 & 8.144 & 8.014 & 7.973 & 0.311332 & --1.41 & 0.018 & c \\ 14601 & X Ari & 0.99 & 1.90 & 9.570 & 8.365 & 8.042 & 7.941 & 0.651154 & --2.43 & 0.180 & \\ 14856 & SV Eri & 3.18 & 2.53 & 9.960 & 8.958 & 8.710 & 8.642 & 0.713865 & --1.70 & 0.085 & \\ 16321 & SX For & --5.39 & 2.38 & 11.120 & 10.035 & 9.847 & 9.772 & 0.605342 & --1.66 & 0.012 & \\ 19993 & AR Per & --1.32 & 2.02 & 10.510 & 9.012 & 8.710 & 8.642 & 0.425551 & --0.30 & 0.108 & \\ 22442 & RX Eri & 1.31 & 1.70 & 9.690 & 8.737 & 8.485 & 8.429 & 0.587246 & --1.33 & 0.058 & \\ 22466 & U Pic & 3.21 & 2.21 & 11.380 & 10.689 & 10.464 & 10.381 & 0.440373 & --0.72 & 0.009 & \\ 22750 & BB Eri & 5.44 & 3.58 & 11.520 & 10.321 & 10.147 & 10.110 & 0.569909 & --1.32 & 0.048 & \\ 22952 & U Lep & 2.32 & 2.97 & 10.570 & 9.814 & 9.565 & 9.542 & 0.581479 & --1.78 & 0.027 & \\ 24471 & RY Col & 3.35 & 1.79 & 10.900 & 10.254 & 9.732 & 9.699 & 0.478832 & --0.91 & 0.026 & \\ 29528 & RX Col & --4.02 & 5.53 & 
12.720 & 11.634 & 11.393 & 11.313 & 0.593780 & --1.70 & 0.082 & \\ 34743 & TZ Aur & --3.70 & 6.39 & 11.910 & 10.975 & 10.771 & 10.731 & 0.391676 & --0.79 & 0.037 & \\ 35281 & AA CMi & 1.40 & 5.22 & 11.570 & 10.570 & 10.384 & 10.281 & 0.476327 & --0.15 & 0.011 & \\ 35584 & HH Pup & 2.39 & 2.53 & 11.290 & 10.248 & 10.044 & 9.975 & 0.390748 & --0.50 & 0.158 & \\ 35667 & RR Gem & 0.43 & 3.24 & 11.380 & 10.566 & 10.306 & 10.275 & 0.397292 & --0.29 & 0.054 & \\ 37779 & HK Pup & --2.90 & 3.60 & 11.370 & 10.240 & 10.010 & 9.915 & 0.734229 & --1.11 & 0.160 & \\ 37805 & TW Lyn & --6.62 & 8.60 & 12.000 & 11.075 & 10.854 & 10.778 & 0.481862 & --0.66 & 0.051 & \\ 38561 & SZ Gem & 6.04 & 4.19 & 11.750 & 11.072 & 10.798 & 10.748 & 0.501143 & --1.46 & 0.013 & \\ 39009 & UY Cam & 0.19 & 1.99 & 11.530 & 11.002 & 10.872 & 10.859 & 0.267044 & --1.33 & 0.022 & c \\ 39849 & XX Pup & --0.15 & 3.81 & 11.250 & 10.321 & 10.118 & 10.084 & 0.517203 & --1.33 & 0.068 & \\ 40186 & DD Hya & --5.41 & 5.88 & 12.220 & 11.457 & 11.241 & 11.228 & 0.501771 & --0.97 & 0.013 & \\ 41936 & TT Cnc & 2.42 & 5.55 & 11.350 & 10.330 & 10.047 & 9.968 & 0.563430 & --1.57 & 0.043 & \\ 44428 & TT Lyn & --1.48 & 1.75 & 9.860 & 8.908 & 8.655 & 8.611 & 0.597429 & --1.56 & 0.017 & \\ 45709 & RW Cnc & 1.05 & 4.98 & 11.850 & 10.677 & 10.561 & 10.530 & 0.547224 & --1.67 & 0.020 & \\ 48503 & T Sex & 2.24 & 1.56 & 10.040 & 9.438 & 9.268 & 9.200 & 0.324706 & --1.34 & 0.044 & c \\ 49628 & RR Leo & 5.01 & 3.16 & 10.730 & 10.021 & 9.778 & 9.730 & 0.452392 & --1.60 & 0.037 & \\ 50073 & WZ Hya & 4.50 & 5.17 & 10.900 & 9.945 & 9.669 & 9.610 & 0.537713 & --1.39 & 0.075 & \\ 50289 & WY Ant & --0.27 & 2.51 & 10.870 & 9.970 & 9.744 & 9.674 & 0.574341 & --1.48 & 0.059 & \\ 53213 & AF Vel & 0.57 & 3.19 & 11.440 & 10.354 & 10.079 & 10.040 & 0.527414 & --1.49 & 0.250 & \\ 55825 & W Crt & --1.95 & 3.43 & 11.540 & 10.774 & 10.590 & 10.539 & 0.412015 & --0.54 & 0.040 & \\ 56088 & TU UMa & 0.56 & 1.68 & 9.820 & 8.919 & 8.740 & 8.660 & 0.557658 & --1.51 & 0.022 & \\ 56350 & AX Leo & --3.10 & 7.00 & 12.260 & 11.302 & 11.048 & 10.951 & 0.726845 & --1.72 & 0.033 & \\ 56409 & SS Leo & 2.50 & 4.01 & 11.030 & 10.259 & 10.008 & 9.943 & 0.626335 & --1.79 & 0.018 & \\ 56734 & SU Dra & 1.27 & 1.53 & 9.780 & 8.898 & 8.676 & 8.619 & 0.660418 & --1.80 & 0.010 & \\ 56742 & BX Leo & 7.73 & 6.17 & 11.610 & 10.889 & 10.743 & 10.709 & 0.362757 & --1.28 & 0.023 & c \\ 56785 & ST Leo & --0.45 & 3.47 & 11.490 & 10.690 & 10.480 & 10.446 & 0.477990 & --1.17 & 0.038 & \\ 57625 & X Crt & --3.98 & 4.50 & 11.480 & 10.482 & 10.213 & 10.148 & 0.732842 & --2.00 & 0.027 & \\ 58907 & IK Hya & 1.39 & 1.62 & 10.110 & 9.144 & 8.863 & 8.760 & 0.650371 & --1.24 & 0.061 & \\ 59208 & UU Vir & 2.24 & 2.91 & 10.560 & 9.596 & 9.436 & 9.414 & 0.475597 & --0.87 & 0.018 & \\ 59411 & AB UMa & 0.12 & 1.94 & 10.940 & 9.934 & 9.678 & 9.623 & 0.599593 & --0.49 & 0.022 & \\ 59946 & SW Dra & 2.24 & 1.42 & 10.480 & 9.594 & 9.362 & 9.319 & 0.569671 & --1.12 & 0.014 & \\ 61029 & UZ CVn & 6.50 & 7.59 & 12.120 & 11.219 & 10.941 & 10.885 & 0.697791 & --1.89 & 0.019 & \\ 61031 & SV Hya & 3.79 & 2.16 & 10.530 & 9.673 & 9.455 & 9.366 & 0.478542 & --1.50 & 0.080 & \\ 61225 & S Com & 5.16 & 3.66 & 11.630 & 10.823 & 10.678 & 10.619 & 0.586585 & --1.91 & 0.019 & \\ 61809 & U Com & 7.40 & 4.05 & 11.740 & 11.186 & 10.984 & 10.987 & 0.292736 & --1.25 & 0.014 & c \\ 63054 & AT Vir & 1.32 & 3.03 & 11.340 & 10.547 & 10.363 & 10.332 & 0.525785 & --1.60 & 0.030 & \\ 64875 & ST Com & --3.68 & 3.55 & 11.460 & 10.461 & 10.258 & 10.186 & 
0.598927 & --1.10 & 0.024 & \\ 65063 & AV Vir & 2.22 & 4.73 & 11.820 & 10.853 & 10.615 & 10.566 & 0.656910 & --1.25 & 0.028 & \\ 65344 & AM Vir & --1.79 & 3.17 & 11.520 & 10.509 & 10.253 & 10.199 & 0.615063 & --1.37 & 0.067 & \\ 65445 & AU Vir & 0.06 & 4.99 & 11.590 & 11.085 & 10.918 & 10.847 & 0.339616 & --1.50 & 0.028 & c \\ 65547 & SX UMa & 1.90 & 1.81 & 10.840 & 10.288 & 10.135 & 10.071 & 0.307139 & --1.81 & 0.010 & c \\ 66122 & RV UMa & --0.30 & 1.85 & 10.770 & 10.058 & 9.854 & 9.831 & 0.468069 & --1.20 & 0.018 & \\ 67087 & RZ CVn & --2.03 & 2.99 & 11.570 & 10.733 & 10.518 & 10.478 & 0.567403 & --1.84 & 0.014 & \\ 67227 & RV Oct & 1.75 & 2.17 & 10.980 & 9.879 & 9.614 & 9.526 & 0.571169 & --1.71 & 0.180 & \\ 67354 & SS CVn & 2.14 & 3.83 & 11.840 & 11.185 & 10.951 & 10.936 & 0.478510 & --1.37 & 0.006 & \\ 67976 & V499 Cen & --0.01 & 2.97 & 11.120 & 10.225 & 9.926 & 9.922 & 0.521205 & --1.43 & 0.085 & \\ 68188 & ST CVn & --1.28 & 4.11 & 11.370 & 10.626 & 10.459 & 10.449 & 0.329065 & --1.07 & 0.012 & c \\ 68292 & UY Boo & 1.45 & 3.00 & 10.940 & 9.981 & 9.755 & 9.723 & 0.650889 & --2.56 & 0.033 & \\ 68908 & W CVn & 2.95 & 2.42 & 10.550 & 9.667 & 9.454 & 9.371 & 0.551753 & --1.22 & 0.005 & \\ 69759 & TV Boo & --0.05 & 2.09 & 10.970 & 10.373 & 10.282 & 10.248 & 0.312557 & --2.44 & 0.010 & c \\ 70702 & ST Vir & --5.10 & 5.66 & 11.520 & 10.914 & 10.748 & 10.671 & 0.410806 & --0.67 & 0.039 & \\ 70751 & AF Vir & --9.08 & 5.23 & 11.800 & 10.939 & 10.769 & 10.684 & 0.483735 & --1.33 & 0.023 & \\ 71186 & RS Boo & 1.62 & 1.91 & 10.370 & 9.744 & 9.559 & 9.507 & 0.377339 & --0.36 & 0.012 & \\ 72115 & TW Boo & --2.23 & 2.28 & 11.290 & 10.407 & 10.192 & 10.170 & 0.532277 & --1.46 & 0.013 & \\ 72342 & AE Boo & 0.33 & 2.00 & 10.650 & 9.974 & 9.819 & 9.762 & 0.314893 & --1.39 & 0.023 & c \\ 72444 & TY Aps & 1.78 & 3.07 & 11.850 & 10.819 & 10.532 & 10.456 & 0.501695 & --0.95 & 0.169 & \\ 72691 & BT Dra & --1.26 & 2.08 & 11.640 & 10.735 & 10.478 & 10.397 & 0.588673 & --1.75 & 0.010 & \\ 72721 & XZ Aps & --4.19 & 5.48 & 12.380 & 11.284 & 11.006 & 10.923 & 0.587275 & --1.06 & 0.135 & \\ 74556 & AP Ser & --0.16 & 4.32 & 11.110 & 10.462 & 10.305 & 10.268 & 0.340805 & --1.58 & 0.042 & c \\ 75225 & TV CrB & 1.89 & 5.75 & 11.870 & 11.037 & 10.814 & 10.774 & 0.584629 & --2.33 & 0.033 & \\ 75234 & FW Lup & 1.58 & 1.18 & 9.060 & 7.995 & 7.836 & 7.671 & 0.484169 & --0.20 & 0.077 & \\ 75942 & ST Boo & --0.13 & 1.80 & 11.010 & 10.185 & 9.981 & 9.930 & 0.622286 & --1.76 & 0.021 & \\ 75982 & VY Ser & --0.77 & 1.99 & 10.130 & 9.205 & 8.944 & 8.826 & 0.714101 & --1.79 & 0.040 & \\ 76313 & CG Lib & --0.50 & 5.67 & 11.550 & 10.437 & 10.208 & 10.125 & 0.306787 & --1.19 & 0.297 & c \\ 77663 & VY Lib & --1.84 & 4.04 & 11.730 & 10.480 & 10.174 & 10.070 & 0.533941 & --1.34 & 0.192 & \\ 77830 & AN Ser & --4.47 & 4.79 & 10.940 & 10.096 & 9.898 & 9.842 & 0.522069 & --0.07 & 0.040 & \\ 77997 & AT Ser & 0.18 & 5.30 & 11.480 & 10.533 & 10.248 & 10.214 & 0.746570 & --2.03 & 0.037 & \\ 78417 & AR Her & 2.08 & 3.25 & 11.240 & 10.605 & 10.413 & 10.391 & 0.469981 & --1.30 & 0.013 & \\ 79974 & RV CrB & 3.77 & 3.21 & 11.410 & 10.555 & 10.418 & 10.336 & 0.331593 & --1.69 & 0.039 & c \\ 80402 & V445 Oph & 5.60 & 5.33 & 11.050 & 9.649 & 9.401 & 9.262 & 0.397023 & --0.19 & 0.287 & \\ 80853 & VX Her & --0.78 & 2.65 & 10.690 & 9.848 & 9.651 & 9.590 & 0.455362 & --1.58 & 0.044 & \\ 80990 & UV Oct & 2.32 & 1.12 & 9.500 & 8.592 & 8.362 & 8.297 & 0.542587 & --1.74 & 0.091 & \\ 81238 & RW Dra & 1.38 & 2.44 & 11.710 & 10.779 & 10.596 & 10.622 & 0.442909 & 
--1.55 & 0.011 & \\ 83244 & RW TrA & 5.74 & 3.19 & 11.400 & 10.375 & 10.111 & 10.059 & 0.374039 & --0.13 & 0.105 & \\ 84233 & VZ Her & 3.49 & 2.12 & 11.480 & 10.746 & 10.590 & 10.496 & 0.440331 & --1.02 & 0.027 & \\ 87681 & TW Her & --3.36 & 2.22 & 11.280 & 10.528 & 10.322 & 10.239 & 0.399599 & --0.69 & 0.042 & \\ 87804 & WY Pav & 1.08 & 6.99 & 12.180 & 10.836 & 10.647 & 10.553 & 0.588573 & --0.98 & 0.126 & \\ 88064 & S Ara & --2.11 & 3.31 & 10.780 & 9.867 & 9.601 & 9.560 & 0.451879 & --0.71 & 0.124 & \\ 88402 & MS Ara & 8.81 & 5.20 & 12.070 & 11.036 & 10.763 & 10.664 & 0.524982 & --1.48 & 0.146 & \\ 89326 & V675 Sgr & --1.28 & 2.75 & 10.330 & 9.313 & 9.053 & 9.003 & 0.642280 & --2.28 & 0.130 & \\ 89372 & BC Dra & 1.51 & 1.99 & 11.600 & 10.435 & 10.172 & 10.096 & 0.719590 & --2.00 & 0.068 & \\ 89450 & V455 Oph & --1.47 & 6.69 & 12.360 & 11.395 & 11.160 & 11.088 & 0.453882 & --1.07 & 0.144 & \\ 90053 & IO Lyr & --0.84 & 2.95 & 11.850 & 10.841 & 10.591 & 10.538 & 0.577121 & --1.14 & 0.074 & \\ 91634 & CN Lyr & --3.91 & 2.52 & 11.480 & 10.282 & 10.055 & 9.919 & 0.411383 & --0.58 & 0.178 & \\ 92244 & V413 CrA & --1.75 & 3.26 & 10.600 & 9.497 & 9.248 & 9.148 & 0.589343 & --1.26 & 0.075 & \\ 93476 & MT Tel & 1.17 & 1.46 & 8.980 & 8.323 & 8.176 & 8.076 & 0.316900 & --1.85 & 0.038 & c \\ 94134 & XZ Dra & 2.28 & 1.20 & 10.250 & 9.398 & 9.221 & 9.148 & 0.476497 & --0.79 & 0.062 & \\ 94869 & BK Dra & 0.67 & 1.52 & 11.190 & 10.336 & 10.124 & 10.071 & 0.592076 & --1.95 & 0.052 & \\ 95497 & RR Lyr & 3.79 & 0.19 & 7.760 & 6.759 & 6.546 & 6.489 & 0.566839 & --1.39 & 0.030 & \\ 95702 & BN Vul & 3.56 & 3.08 & 11.020 & 9.138 & 8.793 & 8.677 & 0.594138 & --1.61 & 0.173 & \\ 96101 & V440 Sgr & --0.09 & 3.43 & 10.340 & 9.402 & 9.153 & 9.082 & 0.477479 & --1.40 & 0.085 & \\ 96112 & XZ Cyg & 1.83 & 1.01 & 9.680 & 8.990 & 8.793 & 8.722 & 0.466610 & --1.44 & 0.096 & \\ 96581 & BN Pav & 6.43 & 6.05 & 12.600 & 11.593 & 11.344 & 11.279 & 0.567117 & --1.32 & 0.073 & \\ 98265 & BP Pav & 3.50 & 6.34 & 12.540 & 11.648 & 11.386 & 11.366 & 0.527128 & --1.48 & 0.059 & \\ 101356 & V341 Aql & --4.86 & 5.62 & 10.850 & 9.886 & 9.687 & 9.606 & 0.578017 & --1.22 & 0.086 & \\ 102593 & DX Del & 0.40 & 1.94 & 9.940 & 9.048 & 8.746 & 8.685 & 0.472619 & --0.39 & 0.092 & \\ 103364 & UY Cyg & 2.55 & 2.91 & 11.110 & 10.060 & 9.805 & 9.777 & 0.560714 & --0.80 & 0.129 & \\ 103755 & RV Cap & 0.85 & 3.82 & 11.040 & 9.703 & 9.717 & 9.753 & 0.447698 & --1.61 & 0.041 & \\ 104613 & V Ind & 1.09 & 2.06 & 9.960 & 9.274 & 9.028 & 8.985 & 0.479604 & --1.50 & 0.043 & \\ 104930 & SW Aqr & --3.93 & 4.09 & 11.180 & 10.413 & 10.142 & 10.057 & 0.459299 & --1.63 & 0.076 & \\ 105026 & Z Mic & 0.69 & 3.53 & 11.650 & 10.478 & 10.179 & 10.112 & 0.586925 & --1.10 & 0.094 & \\ 105285 & YZ Cap & 4.62 & 2.78 & 11.300 & 10.532 & 10.437 & 10.429 & 0.273461 & --1.06 & 0.063 & c \\ 106645 & SX Aqr & 2.42 & 3.58 & 11.780 & 10.973 & 10.689 & 10.639 & 0.535712 & --1.87 & 0.048 & \\ 106649 & RY Oct & --1.87 & 4.88 & 12.060 & 11.118 & 10.917 & 10.859 & 0.563475 & --1.83 & 0.113 & \\ 107078 & CG Peg & 3.16 & 2.49 & 11.180 & 10.216 & 10.007 & 9.970 & 0.467133 & --0.50 & 0.074 & \\ 107935 & AV Peg & 2.88 & 2.44 & 10.500 & 9.609 & 9.406 & 9.346 & 0.390378 & --0.08 & 0.067 & \\ 108057 & SS Oct & 9.09 & 3.32 & 11.910 & 10.041 & 9.835 & 9.752 & 0.621852 & --1.60 & 0.285 & \\ 108839 & BV Aqr & 7.24 & 4.15 & 10.900 & 10.228 & 10.017 & 10.075 & 0.363653 & --1.42 & 0.034 & \\ 111839 & RZ Cep & 0.60 & 1.48 & 9.470 & 8.168 & 7.959 & 7.883 & 0.308688 & --1.77 & 0.078 & c \\ 112994 & BH 
Peg & --0.72 & 2.38 & 10.460 & 9.385 & 9.114 & 9.067 & 0.640991 & --1.22 & 0.077 & \\ 115135 & DN Aqr & --1.08 & 2.82 & 11.200 & 10.158 & 9.934 & 9.900 & 0.633757 & --1.66 & 0.025 & \\ 115870 & RV Phe & 1.75 & 4.71 & 11.940 & 11.106 & 10.828 & 10.768 & 0.596416 & --1.69 & 0.007 & \\ 116664 & BR Aqr & 0.71 & 3.48 & 11.420 & 10.648 & 10.421 & 10.370 & 0.481872 & --0.74 & 0.027 & \\ 116942 & VZ Peg & 4.89 & 3.75 & 11.900 & 11.219 & 11.059 & 11.010 & 0.306493 & --1.80 & 0.045 & c \\ 116958 & AT And & --2.25 & 1.85 & 10.710 & 9.478 & 9.181 & 9.087 & 0.616917 & --1.18 & 0.110 & \\ \end{longtable} \end{center} \twocolumn The stars are those listed by Fernley et al. (1998) and we have generally adopted their $V$ magnitudes and [Fe/H] values. The parallaxes and their standard errors are from the revised Hipparcos catalogue (van Leeuwen 2007). Details regarding the formation of the table, particularly the derivation of mean $JHK_{s}$ values from the single 2MASS values, are given in Appendix B\footnote{Since our analysis of the RR Lyrae data was completed, Sollima et al. (2007) have published mean $J,H,K_{s}$ data for RR Lyrae itself. They measured against 2MASS stars as standards and found: $6.74 \pm 0.02$, $6.60 \pm 0.03$ and $6.50 \pm 0.02$. The values we derived (Table~1) are 6.76, 6.55 and 6.49. The Sollima et al. results provide a useful confirmation of our procedure. Since their value of $K_{s}$ is negligibly different from our value we have kept our value in the following.}. DH Peg, which is in the Fernley et al. list, has been omitted because its status is doubtful. It may be a dwarf Cepheid (Fernley et al. 1990). There are a number of other stars which are listed as RR Lyrae stars in the Hipparcos catalogue. In some cases this classification is incorrect or doubtful. For instance DX Cet is actually a $\delta$~Sct star (Kiss et al. 1999). This star is, in fact, of special interest as having a parallax with a small percentage error and falling on the PL relation for fundamental mode $\delta$ Sct pulsators (van Leeuwen 2007). A discussion of stars whose classification as RR Lyrae type is probably incorrect or uncertain will be given elsewhere (Kinman, in preparation). The parallaxes and magnitudes of the very few Hipparcos stars which are probably RR Lyraes and were not in the Fernley list are such that they would make no significant contribution to the results given in this paper. It seemed better therefore to omit them and thus, for instance, have the homogeneous set of [Fe/H] results given by Fernley et al. The reddenings, $E(B-V)$, listed are the means of the two values discussed in section 3.2. These two values agree closely, the maximum difference (0.06 mag) being that for BN Vul, a star at low galactic latitude. For RZ Cep, which is also close to the plane, the difference is 0.03 mag. All other stars show smaller differences. We assume in the following that,\\ $A_{V} = 3.06E(B-V)$\\ and with data on the 2MASS system we adopt,\\ $A_{J} = 0.764E(B-V)$,\\ $A_{H} = 0.450E(B-V)$,\\ $A_{K_{s}} = 0.285E(B-V)$.\\ These values are from Laney \& Stobie (1993) as adjusted for $K_{s}$ by Gieren et al. (1998). The table indicates the c-type variables. The fundamental periods of these stars were obtained by multiplying the observed period by 1.342. \subsection{Results} The revised Hipparcos parallax of RR Lyrae is $\pi = 3.46 \pm 0.64$. Benedict et al. (2002a) found $\pi = 3.82 \pm 0.20$ from HST observations. In the present paper we adopt a weighted mean of these values, $\pi = 3.79 \pm 0.19$. 
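The adopted value is the usual inverse-variance weighted mean of the two measurements (a restatement of the numbers just quoted, not a new determination):
\[
\pi = \frac{3.46/0.64^{2} + 3.82/0.20^{2}}{1/0.64^{2} + 1/0.20^{2}} = 3.79~{\rm mas}, \qquad
\sigma_{\pi} = \left(\frac{1}{0.64^{2}} + \frac{1}{0.20^{2}}\right)^{-1/2} = 0.19~{\rm mas}.
\]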
This takes the quoted standard errors, each of which has its own uncertainty, at their face value. Giving higher weight to the globally-determined revised Hipparcos value would increase the derived brightness of the star by $\leq 0.2$ mag. We then obtain the following absolute magnitudes after adding a Lutz-Kelker correction of --0.02, which was calculated on the same basis as that adopted by Benedict et al.:\\ $M_{V} = + 0.54$, $M_{K_{s}} = -0.64$, \\ each with standard error of $\pm 0.11$. In deriving the above figures we have adopted the data for RR Lyrae in Table~1. The reddening, $E(B-V) = 0.030$, given there agrees with the value derived directly from its parallax distance and the Drimmel et al. (2003) formulation discussed below (0.031). There are 142 stars, including RR Lyrae itself, in Table~\ref{142rr}. Reduced parallax solutions (see, e.g. Feast 2002) were carried out for this group of stars. The reddenings were estimated for each star using the Drimmel et al. (2003) three-dimensional Galactic extinction model, including the rescaling factors that correct the dust column density to account for small-scale structure seen in the DIRBE data but not described explicitly by the model. Two initial estimates were made of the distance of a star using the tabulated mean $K_{s}$ or $V$ magnitudes and preliminary PL($K_{s}$) or $M_{V}-[Fe/H]$ relations, both of which correspond to an LMC modulus of $\sim 18.5$. The results were iterated (see e.g. Whitelock et al. 2008). The values of $E(B-V)$ tabulated and used are the means of the final results from $K_{s}$ and $V$. A reduced parallax solution of eq. 1 for the 142 stars, adopting $a = 0.214$, then leads to:\\ $M_{V} = +0.54$,\\ at the mean metallicity of the sample ($\overline{[Fe/H]} = -1.38$). Similarly, reduced parallax solutions lead to,\\ $M_{K_{s}} = -0.63$\\ at the mean $\log P$ of the sample ($\overline{ \log P} = -0.252$), adopting a PL$(K_{s})$ slope of --2.41 as in eq. 4. The standard error of these derived absolute magnitudes is $\pm 0.10$. (Note that no Lutz-Kelker correction is required in this case). These results are essentially identical to those for RR Lyrae itself and indeed the solution is completely dominated by this one star. Omitting RR Lyrae leads to solutions with very large standard errors. In the following we simply use the results based on RR Lyrae alone, but using the full set of stars would obviously make no difference. We then find,\\ $b = 19.39 - Mod(LMC) = +0.84 \pm 0.11$\\ for eq. 1 with $a = 0.214$ as in eq. 2. This gives absolute magnitudes brighter by $0.16 \pm 0.12$ than those given by eq. 2 with an LMC modulus of $18.39 \pm 0.05$. The standard error does not take into account the scatter about the $M_{V}$ - [Fe/H] relation, which can be substantial (see e.g. Gratton et al. fig. 19). This result is consistent with the prediction of Catelan \& Cort\'{e}s (2008) that RR Lyrae is overluminous for its metallicity by $0.06 \pm 0.01$ mag compared with the average members of this class. Note that if we adopted their preferred reddening for RR Lyrae we would reduce the overluminosity implied by our result from $0.16\pm 0.12$ to $0.12\pm 0.12$. Main-sequence fitting procedures (Gratton et al. 2003) lead to $b= +0.89 \pm 0.07$. However, other work (e.g. Salaris et al. 2007) has suggested a smaller distance modulus for 47 Tuc, a cluster on which the result of Gratton et al. partly depends. Thus their value of $b$ may need increasing slightly.
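For readers who wish to reproduce this arithmetic, the following minimal sketch (Python; the function and its name are purely illustrative and form no part of any published pipeline) recovers the RR Lyrae values from the adopted parallax, the Table~1 photometry and reddening, and the extinction ratios of section 3.1:
\begin{verbatim}
from math import log10

def abs_mag(m, ebv, ratio, plx_mas, lk=0.0):
    # m: apparent magnitude; ebv: E(B-V); ratio: A_X/E(B-V);
    # plx_mas: parallax in mas; lk: Lutz-Kelker correction.
    mu = 5.0 * log10(100.0 / plx_mas)      # distance modulus
    return m - ratio * ebv - mu + lk

# RR Lyrae, adopted pi = 3.79 +/- 0.19 mas, E(B-V) = 0.030:
print(abs_mag(7.760, 0.030, 3.060, 3.79, lk=-0.02))  # ~ +0.54 (M_V)
print(abs_mag(6.489, 0.030, 0.285, 3.79, lk=-0.02))  # ~ -0.65 (M_Ks; quoted as
                                                     #   -0.64 after rounding
                                                     #   of the inputs)
\end{verbatim}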
The statistical parallaxes from Popowski \& Gould (1998) lead to a value of $b = +1.10 \pm 0.12$, that is to absolute magnitudes $0.10 \pm 0.12$ fainter than eq. 2. The parallax data on RR Lyrae lead to a constant term in eq. 3 of $-1.12$. This is 0.07 mag brighter than the value given by Sollima et al., which was based on the HST parallax of RR Lyrae alone and a slightly different $K_{s}$ magnitude. Following the discussion in Sollima et al. (2006), which takes into account metallicities of the LMC variables, the parallax result leads to a distance modulus of the LMC which is $0.22 \pm 0.14$ larger than that deduced from the classical Cepheids ($18.39 \pm 0.05$). A main uncertainty in the Cepheid result was in the metallicity correction adopted, and the RR Lyrae parallax result may indicate that this was overestimated. However, the errors are such that within the uncertainties the classical Cepheid and RR Lyrae variable scales are substantially in agreement. \begin{table} \centering \caption{Data for Type II Cepheids: Hipparcos Parallaxes} \label{IICep_hip} \begin{tabular}{lll} \hline & VY Pyx & $\kappa$ Pav\\ \hline $\log P $ &0.093 &0.959 \\ $[Fe/H]$ &--0.44 & 0.0 \\ $B$ &7.85 & 4.98 \\ $V$ &7.30 & 4.35\\ $I$ & & 3.67\\ $J$ &6.00 & 3.17 \\ $K_{s}$ &5.65 &2.78\\ $E(B-V)$ &0.049 & 0.017\\ $\pi$ (mas) &5.00 & 6.51\\ $\sigma_{\pi}$ (mas) &0.44 & 0.77\\ $ Mod $ &6.59 & 5.93\\ $\sigma_{Mod} $ & 0.19 & 0.26\\ $ LK $ &--0.06 & --0.12\\ $M_{B}$ &+1.09 & --1.14\\ $M_{V}$ & +0.54 & --1.86\\ $M_{I}$ & & --2.41\\ $M_{K_{s}}$ &--0.92 & --3.27\\ \hline \end{tabular} \end{table} \section{The Type II Cepheids} \subsection{Trigonometrical parallaxes} The relevant data for the two CephIIs on our programme are collected in Table~2. The metallicity of VY Pyx is from Maas et al. (2007). The value quoted for $\kappa$ Pav is from Luck \& Bond (1989). Both stars are comparatively metal-rich. The $BV$ photometry of VY Pyx is from Sanwal \& Sarma (1991), whilst $J$ and $K_{s}$ are single 2MASS values. In view of the low visual amplitude of VY Pyx ($\Delta V = 0.27$), these should be close to mean values. The magnitudes, light curve and period agree satisfactorily with the Hipparcos photometry (ESA 1997). For $\kappa$ Pav the intensity mean $B$, $V$ and $I$ were derived from the literature cited in Table~3, with $I$ in the Cousins system. $J,K_{s}$ for this star are from the intensity means given in section 4.2.2 transformed to the 2MASS system using the relations derived by Carpenter (2001, as updated on the 2MASS Web page). The reddenings for both stars were estimated on the Drimmel et al. (2003) model described in section 3.2, with distances adopted from the revised Hipparcos parallaxes ($\pi \pm \sigma_{\pi}$), which are also listed. The distance moduli ($Mod$) and their uncertainties come directly from the parallaxes. The Lutz-Kelker ($LK$) corrections needed in deriving the absolute magnitudes are calculated on the same system as used for RR Lyrae (section 3). In discussing the various absolute magnitudes listed we shall use for their standard errors the values derived for the distance moduli. It should be borne in mind that these may be slightly underestimated due to any uncertainty in photometry, reddening and Lutz-Kelker correction.
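For reference, a distance modulus and its standard error follow from a parallax in the usual way,
\[
Mod = 5\log_{10}\!\left(\frac{100}{\pi\,[{\rm mas}]}\right), \qquad
\sigma_{Mod} \simeq \frac{5}{\ln 10}\,\frac{\sigma_{\pi}}{\pi},
\]
so that for $\kappa$ Pav, $\pi = 6.51 \pm 0.77$ mas gives $Mod = 5.93 \pm 0.26$, as listed in Table~2.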
\begin{table*} \centering \caption{Pulsation Parallax solutions for Classical Cepheids and $\kappa$ Pav} \begin{tabular}{llccccccccc} \hline Star & Period & $<K_{o}>$ & $<J_{o}>$ & $<V_{o}>$ & $R_{1}$ & $R_{2}$ & $M_{K}$ & $\pi_{1}$ & $\pi_{2}$ & $p$ \\ \hline $\delta$ Cep & 5.3662475 & 2.295 & 2.678 & 3.667 & $41.3 \pm 1.0$ & $42.5 \pm 1.0$ & $-4.86$ & $3.71 \pm .12$ & $3.72 \pm .09$ & $1.27 \pm .05$ \\ X Sgr & 7.012675 & 2.453 & 2.833 & 3.819 & $49.3 \pm 1.6$ & $47.3 \pm 1.4$ & $-5.16$ & $3.17 \pm .14$ & $3.01 \pm .09$ & $1.20 \pm .06$ \\ $\beta$ Dor & 9.842578 & 1.947 & 2.405 & 3.616 & $62.1 \pm 1.7$ & $63.0 \pm 1.0$ & $-5.64$ & $3.26 \pm .14$ & $3.04 \pm .07$ & $1.18 \pm .06$ \\ $\zeta$ Gem & 10.14992 & 2.128 & 2.605 & 3.884 & $62.7 \pm 1.7$ & $65.4 \pm 1.6$ & $-5.67$ & $2.74 \pm .12$ & $2.76 \pm .07$ & $1.28 \pm .06$ \\ $l$ Car & 35.54327 & 1.046 & 1.639 & 3.225 & $162.3 \pm 4.0$ & $165.7 \pm 3.0$ & $-7.59$ & $2.03 \pm .16$ & $1.87 \pm .04$ & $1.17 \pm .10$ \\ $\kappa$ Pav & 9.0880 & 2.795 & 3.201 & 4.291 & $26.5 \pm 0.8$ & $26.3 \pm 0.6$ & $-3.81$ & $6.51 \pm .77$ & $4.78 \pm .13$ & $0.93 \pm .11$ \\ \hline \end{tabular} The columns contain: (1) star name, (2) period in days, (3,4,5) intensity mean magnitudes corrected for reddening ($<K_{o}>$, $<J_{o}>$ in the SAAO system), (6,7) radii in solar units derived from $K$, $J-K$ ($R_{1}$) and $K$, $V-K$ ($R_{2}$) with $p = 1.27$, (8) the absolute magnitude $M_{K}$ derived from the mean radius, (9) the trigonometrical parallax and its s.e., (10) the pulsation parallax and its (internal) s.e., (11) $p$. The errors of the mean radius and the trig. parallax have been added in quadrature for $\sigma_{p}$. \\ References: $\delta$ Cep, 1,2,3,A,B,C; X Sgr, 1,4,5,6, D-N; $\beta$ Dor, 7-10, O,P; $\zeta$ Gem, 1,3,13,A,C,Q; $l$ Car, 7,9,10,11,M,R; $\kappa$ Pav, 7,9,13,15,16,P.\\ Optical Photometry references: (1) Moffett \& Barnes 1984, (2) Barnes et al. 1997, (3) Kiss 1998, (4) Shobbrook 1992, (5) Arellano Ferro et al. 1998, (6) Berdnikov \& Turner 2001, (7) Dean et al. 1977, (8) Pel 1976, (9) Dean 1981, (10) Shobbrook 1992, (11) Bersier 2002, (12) Szabados 1981, (13) Dean 1977, (14) Berdnikov 1997, (15) ESA 1997, (16) Cousins \& Lagerweij 1971. Radial Velocity references: (A) Bersier et al. 1994, (B) Butler 1993, (C) Kiss 1998, (D) Moore 1909, (E) Duncan 1932, (F) Stibbs 1955, (G) Feast 1967, (H) Lloyd Evans 1968, (I) Lloyd Evans 1980, (J) Barnes et al. 1987, (K) Wilson et al. 1989, (L) Sasselov \& Lester 1990, (M) Bersier 2002, (N) Mathias et al. 2006, (O) Taylor \& Booth 1998, (P) Wallerstein et al. 1992, (Q) Gorynya et al. 1998, (R) Taylor et al. 1997. \end{table*} There are other stars classified as CephIIs in the Hipparcos catalogue in addition to $\kappa$ Pav and VY Pyx, but their $\sigma_{\pi} / \pi$ values are relatively high and in some cases it is uncertain whether they belong to the CephII class. We have therefore not attempted to use these stars. \subsection{Pulsation parallaxes} \subsubsection{The Projection factor, $p$} The Baade-Wesselink method for radius determination has seen only limited use for CephIIs, even at optical wavelengths, and table~2 in Balog et al. (1997) suggests that such results as have been reported are somewhat inconsistent with each other. For classical Cepheids, the reasons for using IR photometry in determining pulsation parallaxes or Baade-Wesselink radii have been given by Laney \& Stobie (1995, henceforth LS95), and by Gieren, Fouqu\'{e} \& Gomez (1997), among others. This technique has not been used previously in determining radii, luminosities, etc.
for CephIIs, except for a few preliminary results given by Laney (1995). Whilst modern pulsation parallaxes are often of high internal consistency, it has been difficult to estimate possible systematic uncertainties. Significant progress in dealing with such systematic uncertainties has become possible since the advent of reasonably accurate parallaxes for nearby classical Cepheids (Benedict et al. 2002b, 2007, van Leeuwen et al. 2007), as these allow a particular pulsation parallax method to be calibrated empirically. Several recent papers (Merand et al. 2005, Groenewegen 2007, Nardetto et al. 2007, Fouqu\'{e} et al. 2007) have tackled the determination of the projection factor ($p$-factor), which has long been one of the principal sources of uncertainty in pulsation parallaxes. Other papers have discussed angular diameter measurements and the surface-brightness colour relation, but these are not as directly relevant to the method used here, as the radii derived in this paper have been calculated using the technique described in Balona (1977), where the surface-brightness coefficient is a free parameter. Conversion from radii to luminosities uses a methodology described below, and is in effect included in the calibration of the $p$-factor. As in LS95, solutions have been derived with a modified version of Luis Balona's software which allows for a non-negligible amplitude, and where photometric magnitudes and colours, as well as radial velocities, are assigned individual errors. All radii used were derived using $K$ as the magnitude and $V-K$ or $J-K$ as the colour, as this approach was shown to be free of serious phase-dependent systematic error by LS95. These authors also show that inclusion or exclusion of the rising branch has a negligible systematic effect on the derived radii, although excluding the rising branch increases the uncertainty in the results. Here $J,K$ are on the SAAO system (see below, Appendix A). Adopted radii are the means of the ($K,V-K$) and ($K,J-K$) values. The adopted formal error in the radius is derived by taking the square root of the mean of the squares of the individual errors in the ($K,V-K$) and ($K,J-K$) radii. The first necessary step is to derive an appropriate value of the $p$-factor {\it for the specific method used here}. Our radius-determination methodology is different from those used by Merand et al. (2005), Groenewegen (2007) and Nardetto et al. (2007), and the radial velocities (selected from the literature) are not based on a single selected line, as described by Nardetto et al. (2007). As a first approximation, $p=1.27$ (Merand et al. 2005, Groenewegen 2007) was adopted, and radii were calculated for five of the classical Cepheids in table~2 of van Leeuwen et al. 2007). Polaris has a limited, variable amplitude and we are unaware of suitable data for an accurate radius solution. For FF Aql the possible influence of a binary companion and the low quality of the $JHK$ data were enough to drop it from the list. The other stars in the van Leeuwen et al. list have higher $\sigma_{\pi}/\pi$ than our five stars. For the remaining five stars, the ($K$, $V-K$) and ($K$, $J-K$) radii were calculated with $p=1.27$, then converted into luminosities. This was done using the tables given in Hindsley \& Bell (1990) to establish the $K$-band absolute magnitudes for a star of one solar radius and the appropriate dereddened $V-K$ and $J-K$ colours, then taking the mean. 
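To make the procedure concrete, the following schematic sketch (Python) shows how the integrated velocity curve, scaled by $p$, supplies the radius displacement and how the surface-brightness coefficient, zero-point and mean radius are then fitted. It is a simplified reconstruction under our own assumptions, not the software of Balona (1977) or LS95, and the synthetic numbers at the end are invented purely to check that the fit recovers known inputs:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

RSUN_KM, DAY_S = 6.957e5, 86400.0

def delta_radius(t_days, vrad_kms, gamma_kms, p):
    # Radius displacement (solar radii): dR = -p * integral (v_rad - gamma) dt
    dv = -(vrad_kms - gamma_kms) * p
    steps = 0.5 * (dv[1:] + dv[:-1]) * np.diff(t_days) * DAY_S
    return np.concatenate(([0.0], np.cumsum(steps))) / RSUN_KM

def fit_radius(mag_k, colour, d_r):
    # Balona-type model:  K = a*colour + b - 5 log10(Rbar + dR),
    # with the surface-brightness coefficient a, the zero-point b and
    # the mean radius Rbar (solar units) all free parameters.
    def resid(theta):
        a, b, rbar = theta
        return mag_k - (a * colour + b - 5.0 * np.log10(rbar + d_r))
    return least_squares(resid, x0=[2.0, 0.0, 50.0]).x

# Synthetic check (invented numbers): a 60 Rsun star, 4 Rsun displacement.
# In a real solution d_r would come from delta_radius() applied to the
# observed, phase-matched radial-velocity curve.
ph  = np.linspace(0.0, 1.0, 50)
d_r = 4.0 * np.sin(2.0 * np.pi * ph)
col = 1.1 + 0.2 * np.cos(2.0 * np.pi * ph)
k   = 2.5 * col - 1.0 - 5.0 * np.log10(60.0 + d_r)
print(fit_radius(k, col, d_r))        # ~ [2.5, -1.0, 60.0]
\end{verbatim}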
As discussed in LS95, the $K$ surface brightness as a function of $J-K$ or $V-K$ is very insensitive to surface gravity or microturbulence, which means that neither the radius solution nor the derived luminosity is significantly affected by assumptions about mean or time-varying values for these quantities in the stellar atmosphere. A similar procedure was followed for $\kappa$ Pav, the only CephII which has good $JHK$ and radial velocity data and a usable parallax measurement -- though this is of lower quality than for the five classical Cepheids. Dereddening was done using the reddening coefficients derived by Laney \& Stobie (1993), and $BVIc$ reddenings for each star as calibrated recently by Laney \& Caldwell (2007), using metal abundances from the tables in that paper, or for $\kappa$ Pav the value from Luck \& Bond (1989). The resulting small uncertainty in the colours has only a small effect on the $K$ surface brightness, as it is only a weak function of either $V-K$ or $J-K$. Figs.~\ref{L1}-\ref{L6} show the match of radius displacements calculated from the radius solution and $VJK$ photometry to the integrated radial velocity curve. As would be expected from LS95, there are no serious phase anomalies or discrepancies. Any serious problems with shock waves, etc. that distorted the solutions should appear in these diagrams, but there is no real sign of such an effect -- even for X Sgr (Sasselov \& Lester 1990, Mathias et al. 2006) or $\kappa$ Pav. For any other value of the projection factor, the curves would appear identical to those shown except that the vertical scale would be slightly different. \begin{figure} \includegraphics[width=8.5cm]{L1.ps} \caption{Radius displacements for $\delta$ Cep calculated from the $K$, $J-K$ (open circles) and $K$, $V-K$ (filled circles) radius solutions and photometry, vs. the integrated radial velocity curve (solid line). A projection factor of $p$ = 1.27 was used.}\label{L1} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{L2.ps} \caption{As Fig.~\ref{L1}, but for X Sgr.}\label{L2} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{L3.ps} \caption{As Fig.~\ref{L1}, but for $\beta$ Dor.}\label{L3} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{L4.ps} \caption{As Fig.~\ref{L1}, but for $\zeta$ Gem.}\label{L4} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{L5.ps} \caption{As Fig.~\ref{L1}, but for {\it l} Car.}\label{L5} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{L6.ps} \caption{As Fig.~\ref{L1}, but for $\kappa$ Pav.}\label{L6} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{L7.ps} \caption{Gamma velocities for X Sgr, phased according to the ephemeris and period of Szabados (1990). The squares represent data from Bersier (2002) and Sasselov \& Lester (1990). The triangle is the value from Mathias et al. (2006).}\label{L7} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{L8.ps} \caption{ The projection factor, $p$, plotted against $\log P$ for the classical Cepheids, $\delta$ Cep, X Sgr, $\beta$ Dor, $\zeta$ Gem, and {\it l} Car (filled circles) and the CephII $\kappa$ Pav (open circle). The line shows the trend of $p$ with period suggested by Nardetto et al. 
(2007), but adjusted to the zero-point given by the five classical Cepheids.}\label{L8} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{L9.ps} \caption{As Fig.~\ref{L1}, but for V553 Cen and adopting $p$ = 1.23.}\label{L9} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{L10.ps} \caption{As Fig.~\ref{L1}, but for SW Tau and adopting $p$ = 1.23.}\label{L10} \end{figure} In all cases, it was necessary to establish the phase and period behaviour of the star, so that there were no systematic shifts between the phases or zero-points of the optical photometry, infrared photometry and radial velocities. For X Sgr, it was also necessary to redetermine the orbital velocity curve, in view of the doubts expressed by Mathias et al. (2006). All velocities in the literature for this star, including the most recent, appear to be consistent with the orbital period determined by Szabados (1990), and it proved possible to separate the orbital and pulsational velocities effectively (Fig.~\ref{L7}), though better data are desirable. The $JHK$ data used are listed in Appendix A. Radii, luminosities and pulsation parallaxes for the five classical Cepheids and $\kappa$ Pav, derived as above for $p=1.27$, are given in Table~3, together with the sources for the optical photometry and radial velocities. Also in this table are the trigonometrical parallaxes from van Leeuwen et al. (2007) and the present paper. Requiring that the $p$-factor for each star be adjusted to produce agreement between the pulsation and trigonometric parallaxes leads to the empirical $p$-factors for each star listed in Table 3, together with the associated errors due to the uncertainties in both the radius and the trigonometric parallax. These values of $p$ are plotted against $\log P$ in Fig.~\ref{L8}. For all five classical Cepheids, the derived $p$-factor falls within a narrow range, and the mean is $1.22\pm 0.02$, weighting the stars equally. An average, weighted according to the inverse square of the error, gives $1.23\pm 0.03$, where the weight of $l$ Car has been set to one and its error has been divided by the square root of the sum of the weights for all five stars. A trend with period may be present, as claimed in Nardetto et al. (2007), though our sample is too small to derive a useful, statistically significant value of a term in $\log P$. If we assume that there is a $\log P$ term of --0.075 (given by Nardetto et al. as appropriate for velocities based on a mix of lines of varying depth), the weighted intercept at $\log P = 1.0$ is $1.23 \pm 0.03$.
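The rescaling implied by this calibration can be verified directly from Table~3 (a sketch in Python; the arrays simply restate the tabulated values and are not an independent data set):
\begin{verbatim}
import numpy as np

# Table 3, five classical Cepheids (delta Cep, X Sgr, beta Dor, zeta Gem, l Car)
pi_trig = np.array([3.71, 3.17, 3.26, 2.74, 2.03])   # trigonometric, mas
pi_puls = np.array([3.72, 3.01, 3.04, 2.76, 1.87])   # pulsation (p = 1.27), mas
p_tab   = np.array([1.27, 1.20, 1.18, 1.28, 1.17])
sig_p   = np.array([0.05, 0.06, 0.06, 0.06, 0.10])

# The fitted radius, and hence the pulsation distance, scales linearly with p,
# so matching the trigonometric parallax simply rescales the trial value 1.27:
print(1.27 * pi_puls / pi_trig)       # ~ p_tab (to ~0.01; Table 3 is rounded)

print(p_tab.mean())                   # 1.22, the unweighted mean
w = (sig_p[-1] / sig_p) ** 2          # inverse-variance weights, l Car = 1
print((w * p_tab).sum() / w.sum(),    # 1.23, the weighted mean
      sig_p[-1] / np.sqrt(w.sum()))   # 0.03, its standard error
\end{verbatim}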
\begin{table*} \centering \caption{Pulsation Parallax Results for Type II Cepheids} \begin{tabular}{llccccccc} \hline Star & Per & $<K_{o}>$ & $<H_{o}>$ & $<J_{o}>$ & $<V_{o}>$ & $R_{1}$ & $R_{2}$ & $D$ \\ \hline SW Tau & 1.583565 & 7.887 & 7.931 & 8.147 & 8.800 & $8.02 \pm 0.27$ & $8.03 \pm 0.15$ & $732\pm 20 \pm 16$ \\ V553 Cen & 2.060464 & 6.878 & 6.963 & 7.290 & 8.455 & $10.53 \pm 0.33$ & $10.20 \pm 0.25$ & $541 \pm 15 \pm 12$\\ $\kappa$ Pav & $9.0902$ & 2.795 & 2.863 & 3.201 & 4.291 & $26.48 \pm 0.78$ & $26.32 \pm 0.62$ & $204 \pm 5 \pm 4$ \\ \hline \end{tabular} The columns are: (1) star name, (2) period in days (for $\kappa$ Pav this is the mean of the three periods used for the optical photometry), (3,4,5,6) intensity mean magnitudes with the infrared values on the SAAO system, (7,8) radii in solar units from, $K,J-K$ ($R_{1}$) and $K,V-K$ ($R_{2}$), (9) distance in pc based on a mean of $R_{1}$ and $R_{2}$ and with $p=1.23$ (the first standard error reflects the uncertainty in the derived radius, the second the uncertainty in $p$). \end{table*} The derived $p$-factor for $\kappa$ Pav, on the other hand, is strikingly discrepant, so low as to be physically unrealistic, especially given that the colours and surface gravity are in reasonable accord with those given for classical Cepheids by Laney \& Stobie (1994) and Fernie (1995) respectively, while the metallicity is solar (Luck \& Bond 1989) and the radius displacement diagram (Fig.~\ref{L6}) resembles those of the 5 classical Cepheids. However, the parallax for this star is more uncertain than for the five classical Cepheids, and the derived $p$-factor is in fact only about 2 $\sigma$ from the weighted mean of the 5 classical Cepheids. A $p$-factor of 1.23 was adopted for all three CephIIs considered here\footnote{ See also the discussion in section 5.1.}. Details of the radius and luminosity determinations follow. Magnitudes, radii, absolute magnitudes and other relevant data are given in Tables~4 and 5. \subsubsection{$\kappa$ Pav} The best-fitting period for the IR data in Table A1 (JD 2445928-2447769) was 9.0814 d, and the scatter around a low-order (2 to 5) Fourier fit to the resulting magnitudes and colours was about 0.009-0.011 mag. This is rather higher than normal for such a bright star, and suggests a modest amount of phase jitter may have been present. Contemporaneous radial velocity data were available in the literature (Wallerstein et al. 1992), covering almost exactly the same range of Julian dates. A modest number of velocities with slightly later JD were shifted into phase agreement at the adopted period. The light curve of $\kappa$ Pav is known for sudden changes (Wallerstein et al. 1992), so a need for phase adjustments is not surprising. The sources of the visual photometry are given in Table~3. All datasets have been phased at their appropriate periods, then shifted into phase and zero-point agreement with Dean et al. (1977) and Dean (1981). This composite dataset was used to derive a 6th order Fourier fit to the $V$ light curve, with maximum light in $V$ set to phase 0. None of the optical photometry data sets was contemporary with the infrared data. 
Derived periods and epochs were:\\ 2440140.119 + 9.0947E (Cousins \& Lagerweij)\\ 2441959.49903 + 9.08352E (Dean et al., Dean)\\ 2448164.8647 + 9.092405E (Shobbrook, Hipparcos, Berdnikov, Berdnikov \& Turner) A $V$ magnitude was then calculated for each infrared observation, using an epoch for the IR data which ensured that a Fourier fit to the $V-K$, and $J-K$ data gave phases for minimum light in agreement with those for $B-V$ and $V-I$, a technique for phase alignment validated by LS95. The resulting ($K,J-K$) and ($K,V-K$) radii agree within less than one percent, and there are no significant phase-dependent anomalies (Fig.~\ref{L6}). $E(B-V)=0.017\pm 0.022$ was derived from the $B-V$ and $V-I$ magnitude means (and the solar metallicity given by Luck \& Bond (1989)), using the Cousins reddening method as re-calibrated by Laney \& Caldwell (2007). While this method has not been specifically calibrated for CephIIs, $\kappa$ Pav falls into much the same range in temperature, surface gravity and metallicity as classical Cepheids. This reddening is virtually the same as that derived by the Drimmel method (0.019). The reddening value is in any event not critical -- it affects the luminosity and distance determinations {\it only} through the weak dependence of $K$ surface brightness on the dereddened $V-K$ and $J-K$ colour indices. Dereddened $V-K$ and $J-K$ colours were used to calculate the surface brightness at $K$ as described above, using $\log g$ of 1.2 (Luck \& Bond 1989), and converted to absolute magnitudes at $V, J$ and $K$ using the mean radius and the dereddened empirical colours. 2MASS $J$, $H$ and $K_{s}$, absolute magnitudes were calculated using the transformations on the 2MASS website, as they also were for V553 Cen and SW Tau, below. \begin{table} \centering \caption{ Data for Type II Cepheids: Pulsation Parallaxes} \begin{tabular}{llll} \hline & $\kappa$ Pav & V553 Cen & SW Tau\\ \hline $\log P$ & 0.959 & 0.314 & 0.200 \\ $[Fe/H]$ & 0.0 & +0.24 & +0.22 \\ $B$ & 4.98 & 9.15 & 10.32 \\ $V$ & 4.35 & 8.46 & 9.66 \\ $I$ & 3.67 & 7.76 & 8.94\\ $K_{s}$ & 2.78 & 6.86 & 7.95\\ $K_{W}$ & 2.55 & 6.63 & 7.73 \\ $E(B-V)$ & 0.017 & 0.00 & 0.282 \\ $Mod$ & 6.55 & 8.67 & 9.32\\ $\sigma_{Mod}$ & 0.07 & 0.08 & 0.08\\ $M_{B}$ & --1.64 & +0.48 & --0.15\\ $M_{V}$ & --2.25 & --0.21 & --0.53\\ $M_{I}$ & --2.91 & --0.90 & --0.88\\ $M_{K_{s}}$ & --3.77 & --1.80 & --1.46 \\ \hline \end{tabular} \end{table} \subsubsection{V553 Cen} The period behaviour is simpler than for $\kappa$ Pav, and seems adequately described by: 2448437.1154 + 2.060464E (2444423-2450364)\\ 2443108.6572 + 2.060608E (2440700-2443686)\\ These phases were adopted for the IR photometry (Table A1), for optical photometry by Wisse \& Wisse (1970), Lloyd Evans et al. (1972), Dean et al. (1977), Dean (1981), Eggen (1985), Diethelm (1986), Gray \& Olsen (1991), ESA (1997), Berdnikov \& Turner (1995) and Berdnikov (1997), and for radial velocities by Wallerstein \& Gonzalez (1996) and Lloyd Evans et al. (1972). All optical photometry was adjusted in zero point to match Dean et al. (1977) and Dean (1981), and the radial velocities to match Wallerstein and Gonzalez. The mean $E(B-V)$ for solar metallicity and a microturbulence of 2.5 $\rm km\,s^{-1}$ (Wallerstein and Gonzalez 1996) is $0.00\pm 0.02$ from 54 observations with $B-V$ and $V-I$. These authors also derive $\log g \sim 1.8$. The Drimmel procedure gives $E(B-V) = 0.08$. 
The derived $(K,J-K)$ and $(K,V-K)$ radii agree within the errors, and the lack of significant phase-dependent anomalies can be seen in Fig.~\ref{L9}. \subsubsection{SW Tau} The period seems essentially constant at 1.583565 d over the relevant interval, with an epoch of 2445013.2696 for maximum light in $V$. Optical photometry has been taken from Barnes et al. (1997), Moffett \& Barnes (1984), and Stobie \& Balona (1979), and the zero-point shifted to match Stobie \& Balona. For $B-V$ and $V-I$ magnitude means of 0.653 and 0.796 on the Cousins system, with $[Fe/H]=+0.2$ and a microturbulence of 3.0 $\rm km\,s^{-1}$ (Maas et al. 2007), $E(B-V)$ is $0.282\pm 0.031$. The $\log g$ from Maas et al. is about 2.0. The Drimmel procedure gives $E(B-V) = 0.26$. IR data for SW Tau on the CIT system were taken from Barnes et al. (1997) and transformed to the Carter system by the formulae given in Laney \& Stobie (1993). These were then combined with the SAAO $JHK$ observations, and matched to the SAAO zero point. As would be expected, the resulting shifts were small. Radial velocities used are those from Gorynya et al. (1998) and from Bersier et al. (1994). The derived ($K,J-K$) and ($K,V-K$) radii agree within the errors, and the lack of significant phase-dependent anomalies can be seen in Fig.~\ref{L10}. \section{Discussion} \subsection{$\kappa$ Pav} The trigonometrical and pulsational parallaxes of $\kappa$ Pav are $6.51 \pm 0.77$ and $4.90 \pm 0.17$, a difference of $1.61 \pm 0.79$. This 2$\sigma$ difference is sufficiently large to raise some concerns. The Hipparcos result is from a type 3 solution. In such a solution account is taken of possible variability-induced motion. Further investigation shows evidence (Fig. \ref{vL}) for a magnitude-dependent difference between the DC and AC Hipparcos magnitudes. These magnitude systems and the interpretation of differences between them are described in the Hipparcos catalogue (ESA 1997). The results for $\kappa$ Pav suggest the presence of a close companion, consistent with the need for a type 3 solution. Given the method of reduction employed, the revised Hipparcos parallax should be reliable within the quoted uncertainty. The possibility that $\kappa$ Pav was a spectroscopic binary was suggested by Wallerstein et al. (1992) from a comparison of their work with much earlier observations. There is, however, no evidence of short period variations in $\gamma$ velocity in their data, which extended over a considerable time span (JD 2445860-2448283), or in the additional data we have used. The five-colour photometry of Janot-Pacheco (1976) shows no evidence of a bright companion. The present work provides internal checks on the possibility of a bright companion. A bright red companion would produce abnormally low surface brightness coefficients in the ($K,J-K$) and, especially, the ($K,V-K$) solutions. A companion of similar colour to the variable would affect the two solutions more equally. In fact, these two surface brightness coefficients are slightly higher for $\kappa$ Pav than for the other two CephIIs in the programme, though not significantly so. A blue companion would tend to make the ($K,V-K$) radius smaller than the ($K,J-K$) one. The ($V,B-V$) radius would be smaller still and have an unusually large surface brightness coefficient, as seen in the classical Cepheid binary KN Cen (LS95). In $\kappa$ Pav there is no significant difference between the ($K,V-K$) and ($K,J-K$) radii. The ($V,B-V$) radius is smaller by 13 percent.
This is a marginal effect and indicates that any blue companion has a relative brightness considerably fainter than in the case of KN Cen. Thus, in summary, no serious anomalies were found in the pulsation parallax analysis besides the problem of phase shifts. However, some caution is necessary in discussing this star. In the following, we discuss the results separately for the two estimates of the parallax. \begin{figure} \includegraphics[width=8.5cm]{vL.ps} \caption{The relation between the Hipparcos AC and DC magnitudes for $\kappa$ Pav. The increasing discrepancy between the AC and DC magnitudes towards fainter magnitudes is an indication of the presence of a close companion that becomes more visible as $\kappa$ Pav becomes fainter.} \label{vL} \end{figure} \subsection{Infrared period-luminosity relations} Table~\ref{irpl} lists the differences of the parallax-based absolute magnitudes from the PL($K_{s}$) relation (eq. 4). We adopt $c= -1.0$ corresponding to an LMC modulus of 18.39. Besides the CephII stars, Table~\ref{irpl} lists, in addition, the results for RR Lyrae. As already noted, Matsunaga et al. (2006) suggested that the RR Lyrae variables lay on the same PL($K_{s}$) relation as the CephIIs, and this suggestion was strengthened by the work of Sollima et al. (2006). Two standard errors are given: $\sigma_{1}$ is the value derived from the parallax solution and $\sigma_{2}$ combines this in quadrature with the scatter in the PL($K_{s}$) relation as given by Matsunaga et al. (2006) (0.14). This latter value is an upper limit to the intrinsic scatter of the Matsunaga et al. relation since it includes uncertainties in the moduli of the globular clusters they used, etc. The first part of Table~6 shows the results from the trigonometrical parallaxes and the second part the results from the pulsation parallaxes. \begin{table} \centering \caption{Differences from Infrared PL relations}\label{irpl} \begin{tabular}{rrrr} \hline Star & $\Delta M_{K}$ & $\sigma_{1}$ & $\sigma_{2} $\\ \hline & & (a)&\\ \hline RR Lyrae & --0.24 & 0.11 & 0.17\\ VY Pyx & +0.30 & 0.19 & 0.24\\ $\kappa$ Pav & +0.04 & 0.26 & 0.30\\ \hline & & (b) &\\ \hline $\kappa$ Pav & --0.46 & 0.07 & 0.16\\ V553 Cen & --0.05 & 0.08& 0.16\\ SW Tau & +0.02 & 0.08 & 0.16\\ \hline \end{tabular} (a) Results using trigonometrical parallaxes.\\ (b) Results using pulsational parallaxes.\\ \end{table} Given the uncertainties in the trigonometrical parallaxes, the results in the first part of Table~\ref{irpl} show satisfactory agreement with the predictions of the infrared PL relation. The two short period stars with pulsation parallaxes (SW Tau, P = 1.58; V553 Cen, P = 2.06) agree closely with predictions. This agreement is sufficiently good to hint that the intrinsic scatter in the relations is less than the adopted 0.14, in agreement with the discussion above. Indeed, if the possible period dependence of the projection factor $p$ discussed in section 4.2.1 applies, these two stars lie even more closely on a line with the Matsunaga et al. slope. They would then be 0.09 mag (V553 Cen) and 0.08 mag (SW Tau) brighter than that predicted using a zero-point based on an LMC modulus of 18.39. Both SW Tau and V553 Cen are carbon-rich stars of near solar metallicity. SW Tau has [Fe/H] = +0.22 (Maas et al. 2007) and V553 Cen has [Fe/H] = +0.04 (Wallerstein \& Gonzalez 1996). The light-curve classification scheme proposed by Diethelm (1990, and other papers referenced there) indicates that, as one would expect, these two stars are disc objects.
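For clarity (a restatement of how Table~6 was formed, not an additional result), each entry is the parallax-based absolute magnitude minus the prediction of eq. 4 with $c = -1.0$; for the pulsation result for $\kappa$ Pav, for example,
\[
\Delta M_{K_{s}} = -3.77 - (-2.41 \times 0.959 - 1.0) = -0.46 .
\]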
On the other hand, the short-period globular-cluster stars ($P < 5$ days) in Matsunaga et al. (2006) are all of low metallicity ([Fe/H] in the range --1.15 to --1.94). Thus, within the uncertainties, the PL($K_{s}$) relation for CephIIs is insensitive to population differences (metallicity, mass), at least at the short-period end. The pulsation parallax of $\kappa$ Pav leads to an infrared absolute magnitude that differs significantly from the PL relations derived from the globular clusters and with an LMC modulus of 18.39. Since the formal uncertainty of the pulsation-based absolute magnitude is 0.07 mag, the deviation is $6.5\sigma$, and even taking into account the upper limit on the intrinsic scatter in the PL($K_{s}$) relation there is nearly a three sigma deviation. Evidently, if this result is accepted, then some CephIIs in the field can deviate significantly from the PL($K_{s}$) based on globular cluster variables. Since the metallicity of $\kappa$ Pav is near solar and the results of Matsunaga et al. depend on metal-poor objects, a (large) metallicity effect might be the cause. As there is little metallicity dependence among the metal-poor objects (see section 2.2) this would imply a very non-linear dependence on metallicity. An age/mass difference would be another possible cause (possibly operating more strongly among the longer period CephIIs like $\kappa$ Pav than among the shorter period ones). If one adopts the results from the three pulsation parallaxes, an LMC modulus of $18.55 \pm 0.15$ is implied, neglecting any metallicity effect on CephII luminosities. This agrees with the RR Lyrae result given above, which implies a modulus of $18.55 \pm 0.12$. Neither of these values is significantly different from the classical Cepheid result ($18.39 \pm 0.05$). However, the smaller distance for $\kappa$ Pav indicated by the revised Hipparcos result and the discussion of section 5.1 suggests that, for the present, the results for this star should be viewed with some caution. Additional pulsation parallaxes of CephIIs with periods near 10 days and/or an improved trigonometrical parallax of $\kappa$ Pav would no doubt throw more light on this problem. \subsection{A Type II Cepheid distance scale} In section 5.2 we compared the Galactic CephII distance scale with that implied by the Classical Cepheid scale (with metallicity corrections). In this section we derive distance moduli for the LMC and for the Galactic Centre, based directly on CephIIs. The two stars V553 Cen and SW Tau give a mean zero-point $c$ in eq. 4 of $-1.01 \pm 0.06$, where the standard error comes from the standard errors of the two stars. If the pulsation parallax result for $\kappa$ Pav is included, the zero-point becomes $c = -1.16 \pm 0.15$, where the standard error is based on the inter-agreement of the three stars. Matsunaga et al. (2006) list 2MASS, single-epoch, $J,H,K_{s}$ photometry of LMC CephII stars with known periods from Alcock et al. (1998). There are 21 such stars with $\log P < 1.50$. Longer period stars are not considered here as they may be RV Tau stars. After correcting by $A_{K_{s}} = 0.02$ mag for absorption, these data were fitted to a line of the slope derived by Matsunaga et al. (eq. 4), viz: \begin{equation} K_{s}^{o} = -2.41 \log P + \gamma. \end{equation} We then find $\gamma = 17.31 \pm 0.08$ or, if one somewhat discrepant star is omitted, $\gamma = 17.36 \pm 0.07$.
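The conversion from these zero-points to distances, used below, is the standard one (stated here only for convenience):
\[
Mod(LMC) = \gamma - c, \qquad R_{0} = 10^{(Mod(GC) + 5)/5}~{\rm pc},
\]
so that, for example, $\gamma = 17.36$ combined with $c = -1.01$ gives an LMC modulus of 18.37.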
With the values of $c$ derived above, these lead to the following estimates of the LMC modulus: for the 21 LMC stars, a modulus of $18.31 \pm 0.10$ from V553 Cen and SW Tau, or $18.47 \pm 0.17$ if we include $\kappa$ Pav. Leaving out the discrepant LMC star, we obtain, for the two- and three-star solutions, $18.37 \pm 0.09$ and $18.52 \pm 0.16$, respectively. Pending further work on $\kappa$ Pav, the best value is probably $18.37 \pm 0.09$, but none of the values deviates significantly from the Classical Cepheid value of $18.39 \pm 0.05$. Groenewegen et al. (2008) have recently estimated mean $K_{s}$ values and periods for 39 CephIIs in the Galactic Bulge. After correction for absorption they fit their data to an equation equivalent to eq. 8 above. Their result gives $\gamma = 13.404 \pm 0.013$. This, together with the results for V553 Cen and SW Tau, leads to a modulus of the Galactic Centre of $14.42 \pm 0.06$ and to a Galactic Centre distance of $R_{0} = 7.64 \pm 0.21$ kpc. If we include $\kappa$ Pav we obtain $14.56 \pm 0.15$ and $R_{0} = 8.18 \pm 0.56$ kpc. The first value, which at present should probably be considered the preferred one, is close to that obtained by Eisenhauer et al. (2005) from the motion of a star close to the central black hole. With the suggested relativistic correction of Zucker et al. (2006) this is $R_{0} = 7.73 \pm 0.32$ kpc. The value with $\kappa$ Pav included does not differ significantly from this latter result. \subsection{Optical period-luminosity relations} The relations derived for CephIIs in the globular clusters NGC\,6441 and NGC\,6388 at optical wavelengths (eqs. 5, 6, 7 above) are quite narrow (see Pritzl et al. 2003, fig. 8). On the other hand, plots of period-luminosity diagrams in $B,V$ or $I$ for all known data for globular clusters and the LMC (e.g. Pritzl et al. fig. 9) show very considerable scatter. Pritzl et al. suggested that at least part of this scatter might be due to poor photometry. This left open the question as to whether general PL relations are as narrow as those they found for their two clusters. Table 7 lists the deviations of our programme stars from eqs. 5, 6 and 7. Table 7a gives the results from the trigonometrical parallaxes and Table 7b those from the pulsation parallaxes. In the case of the trigonometrical result for $\kappa$ Pav the deviations are within the expected uncertainty (0.26), whereas they are large for the pulsation parallax result, which has a small internal error (0.07). As discussed in section 5.1, we prefer to leave a solution of this matter to further work. The pulsation parallax results for V553 Cen and SW Tau are of special interest since their formal uncertainties are small (0.08). These two stars have deviations of opposite signs from both the optical and infrared relations (Tables 7 and 6). The difference between these two deviations thus gives an estimate of the lower limit of the PL width at different wavelengths, independent of PL zero-point considerations. These differences are: 0.77 mag at $B$, 0.51 at $V$, 0.20 at $I$ and 0.07 at $K_{s}$. The results for VY Pyx, though of lower accuracy, agree with these results. This increase in the dispersion with decreasing wavelength is, as in the case of classical Cepheids, naturally explained by the existence of a finite instability strip. The optical differences just quoted are significantly greater than the rms scatter about the PL relations in NGC\,6441 and NGC\,6388 given by Pritzl et al. (2003), which are 0.10, 0.07 and 0.06 in $B$, $V$ and $I$.
The possibility that the greater optical differences estimated from V553 Cen and SW~Tau are due to the adoption of incorrect reddening corrections for these two stars seems unlikely. The lower scatter in the case of the clusters is thus probably due to the smaller range in the masses of the cluster variables compared with the field. The evolutionary state of the metal-rich, short-period, CephIIs in the field has long constituted something of a puzzle (see for instance section 4 of Wallerstein 2002). As briefly summarized in section 1, the short period CephII stars are thought to be moving through an instability strip as they evolve from the blue HB towards the AGB. Old metal-rich globular clusters have, in general, only stubby red HB and it is not clear how stars of the ages and metallicities of these systems could evolve into the CephII instability strip. NGC\,6441 and NGC\,6388 are well known as metal-rich systems which do have extended blue HBs. There has been much discussion in the literature on the cause of this anomaly in these and similar clusters. One possibility is that the effect is due to enhanced helium abundance derived from earlier generations of stars in the clusters (see for instance; Lee et al. 2007, Caloi \& D'Antona 2006, based on earlier work by Rood 1973 and others). This seems unlikely to apply to field, short-period, metal-rich, CephIIs. Thus either an alternative explanation has to be found which will apply to both the field and cluster stars, or, some other means will need to be found to move the field stars into the instability strip. \begin{table} \centering \caption{Deviation from optical relations.}\label{dev_opt} \begin{tabular}{rrrr} \hline star & eq. 6 & eq. 5 & eq. 7\\ \hline & $\Delta M_{B}$ & $\Delta M_{V}$ & $\Delta M_{I}$\\ \hline & & (a) &\\ \hline VY Pyx & +0.89 & +0.64 & \\ $\kappa$ Pav & -0.27 & -0.34 & -0.10 \\ \hline & & (b) &\\ \hline $\kappa$ Pav & -0.77 & -0.77 & -0.60 \\ V553 Cen & +0.56 & +0.26 & +0.09 \\ SW Tau & -0.21 & -0.25 & -0.11 \\ \hline \end{tabular} (a) Results using trigonometrical parallaxes.\\ (b) Results using pulsational parallaxes.\\ \end{table} \section{Conclusions} Parallaxes of RR Lyrae variables from the revised Hipparcos catalogue (van Leeuwen 2007) have been investigated. The parallax of RR Lyrae itself obtained by combining the revised Hipparcos value with an HST determination (Benedict et al. 2002) outweighs that of all other members of the class. It yields $M_{K_{s}} = -0.64 \pm 0.11$ which is $0.16 \pm 0.12$ mag brighter than that implied by observations of RR Lyrae variables in the LMC with a modulus of $18.39 \pm 0.05$ derived from classical Cepheids (Benedict et al. 2007, van Leeuwen et al. 2007 ). For 142 Hipparcos RR Lyrae variables mean $J,H,K_{s}$ based on phased-corrected 2MASS values are given. These should be useful when discussing the proper motions and radial velocities of the stars. Revised Hipparcos parallaxes for the CephIIs $\kappa$ Pav and VY Pyx are given, and pulsation parallaxes for $\kappa$ Pav, V553 Cen and SW Tau derived. Extensive new $J,H,K$ photometry of some of these stars and of some classical Cepheids is tabulated. The latter data are used to establish 1.23 as the most appropriate ``$p$-factor" to use in the pulsational analysis of Cepheids. The short-period, metal- and carbon-rich, disc population CephIIs V553 Cen and SW Tau have pulsation-based absolute magnitudes of high internal accuracy ($\pm 0.08$ mag). 
They fit closely (mean deviation 0.02 mag) the PL($K_{s}$) relation derived by Matsunaga et al. (2006) from CephIIs in globular clusters and with a zero-point fixed by adopting an LMC modulus of 18.39. The Hipparcos parallax of the short period star VY Pyx, although it has higher uncertainty, agrees with this result. This suggests that at least at short periods the CephIIs in the Galactic disc and in Globular clusters fit the same PL($K_{s}$) relation rather closely. The scatter of V553 Cen and SW Tau about the optical PL relations derived by Pritzl et al. (2003) for the globular clusters NGC\,6388 and NGC\,6441 is much greater than that about the Matsunaga PL($K_{s}$) relation, showing the expected increase in PL widths with decreasing wavelength. This scatter about the optical relations is also much greater than that of the CephIIs in NGC\,6388/6441 themselves. Since the values of [Fe/H] are very similar for V553 Cen and SW Tau this is unlikely to be due to a metallicity effect. It presumably indicates a larger spread in masses for the short period CephIIs in the general field than for those in the clusters. The Hipparcos and pulsation parallaxes of the long-period star $\kappa$ Pav differ by about 2$\sigma$. If the pulsation parallax is adopted, the value of $M_{K_{s}}$ (which is of high internal accuracy, $\sigma = 0.07$ mag) is more than $6\sigma$ from the Matsunaga relation with a zero-point fixed by an LMC modulus of 18.39 and would suggest a significant mass or metallicity effect at about this period ($\sim 10$ days). There are indications that this star may have a close companion. In view of this, further work on the star and of others of similar period is desirable before discussing in detail the implications for long-period CephIIs. The results for V553 Cen and SW Tau together with published data on CephIIs in the LMC and the Galactic Bulge lead to an LMC modulus of $18.37 \pm 0.09$ and to a distance to the Galactic centre of $R_{0}=7.64 \pm 0.21$ kpc. Including the data for $\kappa$ Pav would increase these estimates by $\sim 0.15$ mag. \section*{Acknowledgments} This publication makes use of data products from the Two Micron All Sky Survey, which is a joint product of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. We are grateful to the referee for their comments.
\section{Introduction} \input{intro} \section{Related Work} \input{related_work} \section{Features and Classifiers} \label{sec:data} \subsection{Feature Extraction} We extract a multitude of features from voice. We use the OpenSMILE toolkit \cite{eyben2010opensmile} with the inbuilt \textit{emobase2010.conf} configuration file. This configuration file extracts features such as intensity and loudness, cepstral features (e.g. MFCC), LPC, pitch and voice quality (e.g. jitter, shimmer). It thus yields a 1582-dimensional feature vector per audio recording, which we refer to as the Ovec features in Section \ref{sec:results}. We also use features extracted from the PANN model \cite{kong2019panns}, pre-trained on audio datasets, and from the model pre-trained on the VGG dataset \cite{45611}. We extract feature representations from the PASE model \cite{pase, pase+} and combine them for analysis. The PASE representation is taken from the CNN block containing 64 units prior to the linear feed-forward network, and has dimensionality 256 $\times$ the number of frames. The PASE features are then normalized with respect to the training set so that each dimension has zero mean and unit variance. We extract the spectral features of the audio files using librosa \cite{mcfee2015librosa} as our audio processing library. The spectral features include: \begin{itemize}[leftmargin=*] \itemsep0em \item \texttt{Zero Crossing Rate} - The rate at which the speech signal crosses the zero value. \item \texttt{Spectral Centroids} - The weighted mean of the frequencies in the speech spectrogram. \item \texttt{Spectral Roll Off} - The roll-off frequency, i.e. the frequency below which 85\% of the signal power is contained. \item \texttt{Tempo} - A measure of the beats per minute in the signal. \item \texttt{Root Mean Square Energy} - The RMS energy per frame of the signal, computed from the audio samples. \item \texttt{MFCC} - Mel-Frequency Cepstral Coefficients, i.e. the coefficients of the Mel-frequency cepstrum, are among the most commonly used spectral features. We use the first 20 coefficients in this experiment. \item \texttt{MFCC Delta First Order} - First-order (temporal) MFCC delta features. \item \texttt{MFCC Delta Second Order} - Second-order (acceleration) MFCC delta features. \end{itemize} For all these spectral features except Tempo, we compute several statistical summaries, such as min, max, mean, rms (root mean square), median, inter-quartile range, first, second and third quartiles, standard deviation, skew and kurtosis, to obtain a better representation of the data. This yields a 14-dimensional statistical vector per spectral feature. For the MFCCs we concatenate the 14-dimensional vector of each coefficient, obtaining a 280-dimensional vector. In total, we obtain 833 spectral features, which we refer to as the Custom features in Section \ref{sec:results}. We also extract YAMNet-based and Open-L3-based \cite{Cramer:LearnMore:ICASSP:19} features for each voice recording, to compare their performance with the VGGish-based \cite{hershey2017cnn} features.
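To make the custom spectral features concrete, the following is a minimal extraction sketch. It assumes librosa and scipy are available; the function names are ours, only a subset of the statistical summaries is shown, and it is not intended to reproduce the exact 833-dimensional vector used in our experiments.
\begin{verbatim}
import numpy as np
import librosa
from scipy import stats

def summarize(x):
    # a subset of the per-track statistics described above
    x = np.ravel(x)
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return np.array([x.min(), x.max(), x.mean(),
                     np.sqrt(np.mean(x ** 2)),      # rms
                     q2, q3 - q1, q1, q3, x.std(),
                     stats.skew(x), stats.kurtosis(x)])

def custom_features(path, sr=8000, n_mfcc=20):
    y, sr = librosa.load(path, sr=sr)
    tracks = [librosa.feature.zero_crossing_rate(y),
              librosa.feature.spectral_centroid(y=y, sr=sr),
              librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.85),
              librosa.feature.rms(y=y)]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    tracks += [mfcc, librosa.feature.delta(mfcc),
               librosa.feature.delta(mfcc, order=2)]
    stats_per_row = [summarize(row) for t in tracks for row in np.atleast_2d(t)]
    tempo = float(librosa.beat.tempo(y=y, sr=sr)[0])  # kept as a single scalar
    return np.concatenate(stats_per_row + [[tempo]])
\end{verbatim}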
\subsection{Classifier Description} \subsubsection{CNN model with PASE and spectrogram features} We build CNN models as our base classifiers. Given the limited scale of our collected dataset, an important requirement is that the algorithm generalizes well. In the CNN experiments, we use two sets of features: the spectrogram and the problem-agnostic speech encoder (PASE) \cite{pase, pase+} features. PASE features are designed to be general, robust and transferable representations that capture the meaningful information in speech, and are less likely to encode superficial patterns that fit the training data only. \subsubsection{Machine learning based binary classifiers} We also use simpler machine-learning binary classifiers, such as Random Forest, Support Vector Machines and Logistic Regression, to perform the classification. \section{Data and Experiments} \label{sec:experiments} \subsection{Dataset and its description} In this section, we describe the collection process and the statistics of the COVID-19 voice dataset. We use a dataset collected under clinical supervision and curated by Merlin Inc., a private firm in Chile. The subjects typically presented symptoms such as coughing, sneezing and breathing difficulties, so these related symptoms were also recorded along with the voice information to account for symptomatic and asymptomatic COVID-19 diagnosis. The COVID-positive or -negative label was taken from the subject's lab-certified test results. Additionally, the metadata contained information about pre-existing conditions such as smoking habits and asthma, and detailed comments about the speaker's health at the time of data collection. The data samples were recorded on a smartphone and sampled at 8 kHz. The dataset consists of 421 positive cases and 989 negative cases; each case represents a unique speaker. To control the phonemic variation of the recordings, we asked the subjects to record themselves speaking the alphabet (a-z), counting from 1 to 20, and producing coughs. To limit physical contact and to simulate a realistic application scenario, subjects recorded themselves in a quiet room environment. The total duration of the positive recordings is 17.5 hours and that of the negative recordings is 20.5 hours, giving over 37 hours of data in total. \subsection{Experiments} We perform numerous experiments for the detection of COVID-19 where the input is only the voice signal. Each speaker has six voice recordings, namely cough, the elongated vowels /AH/, /UW/ and /IY/, alphabet and count. In all experiments, the speakers in the training set were separate from those in the test set to ensure that the models do not inadvertently simply capture speaker identity. Thus, we split the data into $k$-fold cross-validation sets based on speaker identity. We run various classifiers on our data, such as Random Forest (RF), Support Vector Machines (SVMs) with a Radial Basis Function kernel, and Logistic Regression (LR). We analyse all the features with each of these classifiers, including combinations of features such as Ovec and Custom. The Random Forest is run with $k$ equal to 3, 5 and 10. We run a grid search on all these classifiers to find the best parameters, in particular experimenting with $\gamma$. We report the performance in Section \ref{sec:results}. \section{Results and Discussion} \label{sec:results} Of the 1410 speakers in the dataset (8460 audio recordings), complete metadata was available for 815 speakers, of which 296 are male and 519 female. \subsection{Age based Analysis} We broadly categorize the ages into four groups, including the speakers whose age is missing, as shown in Table \ref{tab:gender_table}.
\begin{itemize}[leftmargin=*] \itemsep0em \item Group 1 : age $<=$ 30 years \item Group 2 : 30 $<$ age $<=$ 40 years \item Group 3 : 40 years $<$ age \item Group 4 : age missing \end{itemize} For patients with age $>$ 40, our classifier recognizes COVID-19 accurately in 65\% of the cases, whereas for those with age $<$ 30 it does so in 56\% of the cases. \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Gender} & \textbf{Group1} & \textbf{Group2} & \textbf{Group3} & \textbf{Group4} \\ \hline \hline Male & 65 & 80 & 80 & 71\\ Female & 127 & 129 & 145 & 115 \\ \hline \end{tabular} \caption{Gender-based age analysis} \label{tab:gender_table} \vspace{-2mm} \end{table} The classifier detects COVID-19 more reliably for females than for males. \begin{figure}[h] \centering \includegraphics[width=8.5cm, height=4cm]{images/symptoms_0_plot_2.png} \includegraphics[width=8.5cm, height=4cm]{images/symptoms_1_plot_2.png} \caption{The top figure shows the distribution of symptoms in COVID-19 positive patients and the lower figure the distribution of symptoms in COVID-19 negative individuals (breath-diff stands for breathing difficulty). Each row represents a symptom and the columns represent the frequency of occurrence of that symptom, marked by dots.} \end{figure} We find that sore throat is present in most of the COVID-19 cases; when it is present, we detect COVID-19 in 57\% of the cases. When sneezing is present, the classifier detects COVID-19 in 47\% of the cases, and when cough is a known symptom for the speaker, our classifier detects COVID-19 in 73\% of the cases. \subsection{Smokers and People with Asthma} Having a previous history of asthma or smoking might put people at a higher risk of COVID-19 \cite{asthma}. Our data include people who had a history of asthma and/or were smokers; Table \ref{tab:asthma_table} shows the corresponding statistics. Our experimental results indicate that asthma is more correlated with COVID-19 than smoking: for patients with COVID-19 and a smoking habit, our classifier detects COVID-19 in 66\% of the cases, whereas for those with COVID-19 and asthma we are able to detect COVID-19 accurately in 80\% of the cases. \begin{table}[h] \centering \begin{tabular}{c|c|c} \textbf{Gender} & \textbf{Has Asthma} & \textbf{Is Smoker} \\ \hline \hline Male & 12 & 95\\ Female & 38 & 139 \end{tabular} \caption{Gender-based asthma and smoker statistics} \vspace{-4mm} \label{tab:asthma_table} \end{table}
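The per-subgroup percentages quoted above are, in effect, recall values computed within a metadata subgroup. A minimal sketch of this bookkeeping is shown below (pandas assumed; the column names and toy values are purely illustrative, not those of our metadata files).
\begin{verbatim}
import pandas as pd

# per-speaker labels, metadata flags and model predictions (illustrative)
df = pd.DataFrame({
    "speaker":   ["s1", "s2", "s3", "s4"],
    "covid":     [1, 1, 1, 0],
    "asthma":    [1, 0, 1, 0],
    "smoker":    [0, 1, 0, 1],
    "predicted": [1, 0, 1, 0],
})

def detection_rate(frame, condition):
    # fraction of COVID-positive speakers with the given condition
    # that the classifier flags as positive (recall within the subgroup)
    sub = frame[(frame["covid"] == 1) & (frame[condition] == 1)]
    return float((sub["predicted"] == 1).mean())

print(detection_rate(df, "asthma"), detection_rate(df, "smoker"))
\end{verbatim}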
\subsection{Difference between diagnosis day and data collection day} The data collection process introduces a delay between the COVID-19 diagnosis and the recording of the voice sample. Each speaker is labelled with the day they were diagnosed and the day they recorded their data. Table \ref{tab:diagnosis_table} shows the distribution of the number of people according to when they were diagnosed (e.g. within 1 week, between 1 and 2 weeks, or more than 2 weeks before the recording). \begin{table}[h] \centering \begin{tabular}{ |c|c| } \hline \textbf{Diagnosis Day Range} & \textbf{Number of People} \\ \hline \hline Diagnosis day $<=$ 7 days & 95 \\ \hline 7 days $<$ Diagnosis day $<=$ 14 days & 209 \\ \hline Diagnosis day $>$ 14 days & 159 \\ \hline Diagnosis day not known & 37 \\ \hline \end{tabular} \caption{Difference between diagnosis day and data collection day} \vspace{-4mm} \label{tab:diagnosis_table} \end{table} In general, we find that COVID-19 can be detected more reliably for patients whose voice samples were collected within 14 days of diagnosis than for those whose samples were collected after this period. \begin{table}[] \centering \resizebox{1.0\columnwidth}{!}{ \begin{tabular}{|c|c|c|} \hline \textbf{Classifier} & \textbf{Feature} & \textbf{AUC} \\ \hline \hline Random Forest & Ovec & 0.76 \\ Random Forest & PANN & 0.78 \\ Random Forest & VGGish & 0.59 \\ Random Forest & Open-L3 & 0.64 \\ Random Forest & YAMNet & 0.72 \\ Random Forest & Custom Feat & 0.82 \\ Random Forest & Ovec + Custom Feat & 0.84 \\ Random Forest & Ovec + Custom Feat + VGGish & 0.82 \\ Random Forest & Ovec + Custom Feat + YAMNet & 0.86 \\ CNN & Spectrogram & 0.69 \\ CNN & PASE & 0.73 \\ \hline \end{tabular} } \caption{Results obtained on the full dataset. Here Ovec refers to features obtained using OpenSMILE, PASE to features from the PASE architecture, and Custom Feat to the custom spectral features. The Random Forest classifier is robust and has improved performance in comparison to the CNN-based classifiers.} \label{tab:my_label} \vspace{-4mm} \end{table} \subsection{Audio type analysis} We analyze classifier performance according to the audio type, to identify which type of recording is most suited to capture the voice signatures related to COVID-19. Our results indicate that the vowels /IY/ and /UW/ are better at detecting COVID-19 than the other types of audio samples. Cough samples are also useful for detecting COVID-19. \subsection{Overall Classifier results} Table \ref{tab:my_label} supports our hypothesis that a binary classifier such as Random Forest can perform better for COVID-19 detection than its CNN counterparts when faced with limited data. In our experiments, Open-L3 features perform similarly to the VGGish feature representations. Figure \ref{fig:Best} shows the best result, an AUC score of 0.94 and an ROC score of 0.85, obtained with the Random Forest classifier using the Ovec and Custom features on a trimmed-down dataset (about 20\% of the samples were heard to contain noise in the recordings). \begin{figure}[h] \centering \begin{subfigure}[b]{0.23\textwidth}\centering \includegraphics[width=\textwidth,height=3cm]{images/roc_auc.png} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth}\centering \includegraphics[width=\textwidth,height=3cm]{images/prec_recall.png} \end{subfigure} \hspace*{\fill} \caption{An AUC of 0.94 and an ROC score of 0.85 are obtained using the Random Forest classifier with the Ovec and Custom feature vectors.} \label{fig:Best} \end{figure}
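For reference, a minimal sketch of the speaker-disjoint $k$-fold evaluation behind the AUC values reported above is given below. It assumes scikit-learn; the function name and the hyper-parameters shown are illustrative rather than the tuned values from our grid search.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

def speaker_disjoint_auc(X, y, speaker_ids, n_splits=5):
    # X: per-recording feature matrix (e.g. Ovec + Custom), y: COVID-19 labels.
    # GroupKFold on speaker_ids keeps every speaker entirely in train or test.
    aucs = []
    for tr, te in GroupKFold(n_splits=n_splits).split(X, y, speaker_ids):
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        clf.fit(X[tr], y[tr])
        scores = clf.predict_proba(X[te])[:, 1]
        aucs.append(roc_auc_score(y[te], scores))
    return float(np.mean(aucs))
\end{verbatim}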
\section{Conclusion} \label{sec:discuss} Motivated by the urgent need for alternative methods to augment medical testing, we designed an experimental setup for COVID-19 analysis using a wide variety of voice samples. Our analysis provides insight into the type of voice sample that works best for detecting COVID-19, namely the elongated /IY/ (``eee'') and /UW/ (``ooo'') sounds. The COVID-19 dataset used in this study was acquired through self-recording with a smartphone application, making acquisition feasible at large scale. The preliminary results suggest that this globally accessible form of data collection is feasible for SARS-CoV-2 detection, although it does not replace RT-PCR or RAT tests. Our results show that binary classifiers such as Random Forest are better suited than data-intensive neural network classifiers to separating COVID-19 from non-COVID-19 speakers, and that they produce better and more robust results given the limited amount of data available for analysis. \bibliographystyle{IEEEtran}
\section{Introduction} Multiple Input - Multiple Output (MIMO) communications \cite{foschini98} have been adopted in many recent wireless standards, such as IEEE 802.16 \cite{ieee11} and 3GPP LTE \cite{3gpp09}, with the aim of boosting the data rates provided to the customers. A promising solution to achieve spectrally-efficient communications is the universal frequency reuse (UFR) scheme, in which all cells operate on the same frequency channel. However, the downlink capacity of conventional cellular systems with UFR is limited by inter-cell interference. As a result, it is necessary to introduce coordination among the base stations (BSs) so that they can jointly manage the interference in all cells to improve the system performance \cite{gesbert07}. Such coordination among the BSs in the downlink is also known as network MIMO \cite{Karakayali06} or Coordinated Multipoint (CoMP) \cite{Sawahashi10}. Some other approaches in the literature have exploited less complex linear schemes, such as Block Diagonalization (BD) \cite{zhang09} or MMSE \cite{armanda11}. The main drawback of all these systems is that they require the channel state information (CSI) and the transmit data to be simultaneously known to all cooperating BSs, at the cost of increased signaling overhead. Some recent approaches have been proposed to avoid CSI and data sharing. Non-coherent joint processing \cite{sun11} does not require cell-to-cell CSI exchange, at the expense of a higher processing cost at the receivers, which perform successive interference cancelation. In \cite{bjornson10}, the authors analyze the case of distributed cooperation where each BS has only local CSI. In this correspondence we consider a cellular scenario with an arbitrary number of multiantenna transmitters (the BSs) and single-antenna receivers (the users). We focus on an {\it intermediate approach} where the BSs optimize the downlink throughput using only the CSI. Since channel variations are much slower than those of the data, the amount and the frequency of information exchange are greatly reduced. Unfortunately, the sum rate maximization problem is non-convex and thus is difficult to solve efficiently. The authors of \cite{huh10} propose to solve the single-cell downlink rate maximization problem first (with dirty paper coding (DPC) and zero-forcing (ZF) precoding), and then impose interference limits on the users at the cell edges. In this case, the interference limits for the users are set in a rather heuristic fashion, and the BSs do not coordinate their beamforming. References \cite{yu11} and \cite{venturino10} are two recent works that propose heuristic algorithms that try to provide solutions to similar problems by directly solving the non-convex optimization problem. In this correspondence we provide theoretical insights into the coordinated downlink beamforming problem by identifying a set of lower bounds (one bound per BS) on the non-convex system sum rate. The benefits of such per-BS lower bounds are twofold: 1) the individual BSs can distributedly optimize their respective lower bounds, instead of jointly optimizing the original system sum rate, to approach a solution to the sum rate maximization problem; 2) individual BSs can monitor the improvement of the total sum rate by evaluating their respective lower bounds. Utilizing this set of lower bounds, we propose algorithms for the BSs to coordinately optimize their beams.
In a special case where each cell has a single user, each lower bound becomes {\it concave}, and we show that the lower bound maximization problem can be solved exactly. This result allows us to obtain a stationary solution of the original sum rate maximization problem. In the general case with multiple users per cell, we propose an algorithm that extends the Iterative Coordinated Beamforming (ICBF) algorithm proposed in \cite{venturino10}, with the important difference that the BSs act sequentially instead of simultaneously, and no ``inner iteration" is needed. The simulation results show that the proposed algorithms have sum rate performance similar to that of the ICBF algorithm, while requiring significantly less information exchange among the BSs in the backhaul network. The correspondence is organized as follows. In Section \ref{secSystemModel}, we give the system description and provide a general lower bound for each user. In Sections \ref{secSingleUser} and \ref{secMultipleUser}, we propose algorithms for the BSs to compute their beamformers in different network configurations. In Section \ref{secSimulation}, we provide numerical results to demonstrate the performance of the proposed algorithms. The correspondence concludes in Section \ref{secConclusion}. {\it Notations}: For a Hermitian matrix $\mathbf{X}$, $\mathbf{X}\succeq 0$ signifies that $\mathbf{X}$ is positive semi-definite. We use ${\mbox{\textrm{Tr}}}(\mathbf{X})$, $|\mathbf{X}|$, $\mathbf{X}^H$, $\mathbf{X}^{\dag}$ and ${\mbox{\textrm{Rank}}}(\mathbf{X})$ to denote the trace, the determinant, the Hermitian transpose, the pseudoinverse, and the rank of a matrix, respectively. $[\mathbf{X}]_{i,i}$ denotes the $(i,i)$th element of the matrix $\mathbf{X}$. $\mathbf{I}_n$ is used to denote an $n\times n$ identity matrix. We use $[y,\mathbf{x}_{-i}]$ to denote a vector $\mathbf{x}$ with its $i^{th}$ element replaced by $y$. We use $\mathbb{R}^{N\times M}$ and $\mathbb{C}^{N\times M}$ to denote the sets of real and complex $N\times M$ matrices; we use $\mathbb{S}^{N}$ and $\mathbb{S}^{N}_{+}$ to denote the sets of $N\times N$ Hermitian and Hermitian positive semi-definite matrices, respectively. Define $M\oslash t\triangleq \{t\mod M\}+1$ as an integer taking values from $1,\cdots,M$. \vspace{-0.3cm} \section{Problem Formulation and System Model} \label{secSystemModel} We consider a multi-cell cellular network with a set $\mathcal{M}\triangleq\{1,\cdots, M\}$ of base stations (BSs)/cells; each BS is equipped with $K_m$ transmit antennas; each cell $m$ has a set $\mathcal{N}_m$ of distinct users; let $\mathcal{N}$ denote the set of all users, and each user is equipped with a single receive antenna. We use $(m,i)$ and $-(m,i)$ to denote the $i$th user in the $m$th cell and all the users except user $(m,i)$, respectively. Without loss of generality, we assume that all the cells have the same number of users, and all the BSs are equipped with the same number of antennas: $|\mathcal{N}_m|=N,~K_m=K,~\forall~m\in\mathcal{M}$. The signal $\mathbf{x}_m\in\mathbb{C}^{K}$ transmitted by BS $m$ is $\mathbf{x}_m=\sum_{i\in\mathcal{N}_m}\mathbf{w}_{m,i}b_{m,i}$, where $b_{m,i}$ is the complex information symbol sent by BS $m$ to user $i\in\mathcal{N}_m$, using beam vector $\mathbf{w}_{m,i}\in\mathbb{C}^{K}$. Assume $E[|b_{m,i}|^2]=1$ for all $(m,i)$, and $E[b_{m,i}b^*_{q,j}]=0$ for all $(m,i)\ne (q,j)$. Assume that each BS $m\in\mathcal{M}$ has a total transmission power constraint: $\sum_{i\in\mathcal{N}_m}||\mathbf{w}_{m,i}||^2\le\bar{p}_m$.
Let $\mathbf{h}_{q,m_i}\in\mathbb{C}^{K}$ denote the complex channel between the $q$th BS and the $i$th user in the $m$th cell. Let $n_{m,i}\in\mathbb{C}$ denote the circularly-symmetric Gaussian noise with variance ${c}_{m,i}$. The signal received by a user $(m,i)$ can be expressed as \begin{align} y_{m,i} &=\mathbf{h}^{H}_{m,m_i}\mathbf{w}_{m,i}b_{m,i}+\underbrace{\sum_{j\ne i}\mathbf{h}^{H}_{m,m_i}\mathbf{w}_{m,j}b_{m,j}}_{\textrm{Intra-cell Interference}}+\underbrace{\sum_{q\ne m, j\in\mathcal{N}_q}\mathbf{h}^{H}_{q,m_i}\mathbf{w}_{q,j}b_{q,j}}_{\textrm{Inter-cell Interference}}+n_{m,i}\label{eqReceivedSignal}. \end{align} The rate achievable for user $(m,i)$ is given by{\small \begin{align} R_{m,i}(\mathbf{w}_{m,i},\mathbf{w}_{-(m,i)}) &\triangleq\log\left(1+\frac{\mathbf{w}^H_{m,i}\mathbf{H}_{m,m_i}\mathbf{w}_{m,i}} {{c}_{m,i}+\sum_{(q,j)\ne (m,i)}\mathbf{w}^H_{q,j}\mathbf{H}_{q,m_i}\mathbf{w}_{q,j}}\right)\label{eqRateScalar}\\ &=\log\left(1+\frac{\mathbf{h}^H_{m,m_i}\mathbf{W}_{m,i}\mathbf{h}_{m,m_i}} {{c}_{m,i}+\sum_{(q,j)\ne (m,i)}\mathbf{h}^H_{q,m_i}\mathbf{W}_{q,j}\mathbf{h}_{q,m_i}}\right)\triangleq R_{m,i}(\mathbf{W}_{m,i},\mathbf{W}_{-(m,i)})\label{eqRateMatrix} \end{align}} where $\mathbf{W}_{m,i}\triangleq\mathbf{w}_{m,i}\mathbf{w}^H_{m,i}$ is the transmission covariance of user $(m,i)$, and $\mathbf{H}_{m,m_i}\triangleq \mathbf{h}_{m,m_i}\mathbf{h}^H_{m,m_i}$ is the channel matrix. Clearly, $\mathbf{W}_{m,i}\succeq 0$ and ${\mbox{\textrm{Rank}}}(\mathbf{W}_{m,i})=1$. Define the total interference plus noise at user $(m,i)$ as{\small \begin{align} I_{m,i}(\mathbf{W}_{-(m,i)})&\triangleq {c}_{m,i}+\sum_{j\ne i}\mathbf{h}^H_{m,m_i}\mathbf{W}_{m,j}\mathbf{h}_{m,m_i}+\sum_{q\ne m, j\in\mathcal{N}_q}\mathbf{h}^H_{q,m_i}\mathbf{W}_{q,j}\mathbf{h}_{q,m_i}\nonumber\\ &={c}_{m,i}+\sum_{j\ne i}\mathbf{w}^H_{m,j}\mathbf{H}_{m,m_i}\mathbf{w}_{m,j}+\sum_{q\ne m, j\in\mathcal{N}_q}\mathbf{w}^H_{q,j}\mathbf{H}_{q,m_i}\mathbf{w}_{q,j}\triangleq I_{m,i}(\mathbf{w}_{-(m,i)}). \end{align}} We assume that $I_{m,i}(\mathbf{W}_{-(m,i)})$ is perfectly known at user $(m,i)$ and at BS $m$, but not at the neighboring BSs. As suggested by \cite{zhang09}, this interference plus noise term can be estimated at each mobile user by various methods, and fed back to its associated BS. Define the collection of matrices $\mathbf{W}_m\triangleq\{\mathbf{W}_{m,i}\}_{i\in\mathcal{N}_m}$, $\mathbf{W}_{-m}\triangleq\{\mathbf{W}_{q,j}\}_{j\in\mathcal{N}_q, q\ne m}$, and $\mathbf{W}\triangleq\{\mathbf{W}_m\}_{m\in\mathcal{M}}$, then the sum rate of all users in cell $m$ can be expressed as: $ {R}_m(\mathbf{W}_{m},\mathbf{W}_{-m})\triangleq\sum_{i\in\mathcal{N}_m} R_{m,i}(\mathbf{W}_{m,i}, \mathbf{W}_{-(m,i)})$. The sum rate of all users in the network is $ {R}(\mathbf{W})\triangleq\sum_{q\in\mathcal{M}} R_{q}(\mathbf{W}_{q},\mathbf{W}_{-q})$. We are interested in the following non-concave sum rate maximization problem\footnote{This problem can also be expressed in an equivalent vector form, with $\{\mathbf{w}_m\}_{m\in\mathcal{M}}$ as design variables.}:{\small \begin{align} \max_{\mathbf{W}}& \quad R(\mathbf{W})\tag{SRM}\\ {\textrm{s.t.}}&\quad{\mbox{\textrm{Tr}}}\left[\sum_{i\in\mathcal{N}_m}\mathbf{W}_{m,i}\right]\le \bar{p}_m,~\forall~m\in\mathcal{M}\nonumber\\ & \quad \mathbf{W}_{m,i}\succeq 0,~{\mbox{\textrm{Rank}}}(\mathbf{W}_{m,i})\le 1,~~\forall~(m,i)\nonumber.
\end{align}} We mention that all the following discussions are equally applicable to the problem of {\it weighted} sum rate optimization, in which there is a set of non-negative weights associated to the users' rates in the objective. However, we mainly consider the (SRM) problem for simplicity of presentation. In order to approach the problem (SRM), we first establish some useful results that characterize the users' rate \eqref{eqRateMatrix}. \newtheorem{P1}{Proposition} \begin{P1}\label{propConvex} {\it For all $(q,j)\ne(m,i)$, {\small $R_{m,i}(\mathbf{W}_{m,i},\mathbf{W}_{-(m,i)})$} is a convex function of {\small $\mathbf{W}_{q,j}$} on $\mathbb{S}^{K}_{+}$, and a concave function of {\small $\mathbf{W}_{m,i}$} on $\mathbb{S}^{K}_{+}$.} \end{P1} \begin{proof} In order to show the convexity result, it is sufficient to prove that whenever $\mathbf{D}\in \mathbb{S}^{K}$, $\mathbf{D}\ne \mathbf{0}$ and $\mathbf{W}_{q,j}+t\mathbf{D}\succeq 0$, the following function is convex in $t$ \cite[Chapter 3]{cover05} {\small \begin{align} R_{m,i}(t)&\triangleq\log\left(1+\frac{\mathbf{h}^H_{m,m_i}\mathbf{W}_{m,i}\mathbf{h}_{m,m_i}} {{c}_{m,i}+\sum_{(p,l)\ne (q,j), (p,l)\ne (m,i)}\mathbf{h}^H_{p,m_i}\mathbf{W}_{p,l}\mathbf{h}_{p,m_i}+ \mathbf{h}^H_{q,m_i}(\mathbf{W}_{q,j}+t\mathbf{D})\mathbf{h}_{q,m_i}}\right). \end{align}} Let us simplify the expression a bit by defining the constant $c=\mathbf{h}^H_{m,m_i}\mathbf{W}_{m,i}\mathbf{h}_{m,m_i}\ge0$ (note that $\mathbf{W}_{m,i}\succeq 0$). The first and the second derivatives of $R_{m,i}(t)$ w.r.t. $t$ can be expressed as{\small \begin{align} \frac{d R_{m,i}(t)}{d t} &=-\frac{1/\ln(2)}{\left(I_{m,i}(\mathbf{W}_{-(m,i)})+ t\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i}+c\right)}\frac{c \mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i}}{\left(I_{m,i}(\mathbf{W}_{-(m,i)})+ t\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i}\right)}.\label{eqFirstDerivativeT}\\ \frac{d^2 R_{m,i}(t)}{d t^2}&=\frac{1/\ln(2)}{(I_{m,i}(\mathbf{W}_{-(m,i)})+ t\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i}+c)^2}\frac{c (\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i})^2}{I_{m,i}(\mathbf{W}_{-(m,i)})+ t\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i}}\nonumber\\ &+ \frac{1/\ln(2)}{I_{m,i}(\mathbf{W}_{-(m,i)})+t\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i}+c}\frac{c (\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i})^2}{(I_{m,i}(\mathbf{W}_{-(m,i)})+ t\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i})^2}. \end{align}} Clearly $I_{m,i}(\mathbf{W}_{-(m,i)})+ t\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i}> 0$ for all $\mathbf{W}_{q,j}+t\mathbf{D}\succeq 0$. We also have that $\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i}$ is real and $(\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i})^2\ge 0$, due to the assumption that $\mathbf{D}\in\mathbb{S}^K$, and the subsequent implication that $(\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i})^{H}=\mathbf{h}^H_{q,m_i}\mathbf{D}\mathbf{h}_{q,m_i}$. We conclude that whenever $\mathbf{D}\in\mathbb{S}^{K}$ and $\mathbf{W}_{q,j}+t\mathbf{D}\succeq 0$, $\frac{d^2 R_{m,i}(t)}{d t^2}\ge 0$, which implies that $R_{m,i}(\mathbf{W}_{m,i},\mathbf{W}_{-(m,i)})$ is convex in $\mathbf{W}_{q,j}$ for all $(q,j)\ne (m,i)$. The fact that $R_{m,i}(\mathbf{W}_{m,i},\mathbf{W}_{-(m,i)})$ is concave in $\mathbf{W}_{m,i}$ can be shown similarly as above. \end{proof} Note that the above property is only true in the space of covariance matrix $\mathbf{W}_m$, but not in the transmit beamformer space $\mathbf{w}_m$. 
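As a quick sanity check of Proposition 1, the convex-concave structure can also be verified numerically. The sketch below (Python/numpy; all function names are ours) draws random channels and covariances and tests the corresponding midpoint inequalities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
K = 4                                  # transmit antennas per BS

def rand_psd(K):
    A = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
    return A @ A.conj().T              # Hermitian positive semi-definite

def rand_ch(K):
    return rng.standard_normal((K, 1)) + 1j * rng.standard_normal((K, 1))

h_own, h_int = rand_ch(K), rand_ch(K)  # h_{m,m_i} and h_{q,m_i}
c = 1.0                                # noise variance c_{m,i}

def rate(W_own, W_int):
    # the rate R_{m,i} defined above, with a single interfering covariance
    sig = float(np.real(h_own.conj().T @ W_own @ h_own))
    interf = c + float(np.real(h_int.conj().T @ W_int @ h_int))
    return np.log2(1.0 + sig / interf)

W1, W2, W_int = rand_psd(K), rand_psd(K), rand_psd(K)
V1, V2, W_own = rand_psd(K), rand_psd(K), rand_psd(K)

# concave in the user's own covariance (interference fixed):
assert rate(0.5*(W1 + W2), W_int) >= 0.5*(rate(W1, W_int) + rate(W2, W_int)) - 1e-9
# convex in the interfering covariance (own covariance fixed):
assert rate(W_own, 0.5*(V1 + V2)) <= 0.5*(rate(W_own, V1) + rate(W_own, V2)) + 1e-9
\end{verbatim}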
This convex-concave property of the individual users' transmission rate is instrumental in deriving a set of lower bounds for the system sum rate. For a particular user $(m,i)$, the system sum rate $R(\mathbf{W})$ can be expressed as{\small \begin{align} R(\mathbf{W})=\underbrace{R_{m,i}(\mathbf{W}_{m,i},\mathbf{W}_{-(m,i)})}_{\textrm{ concave in $\mathbf{W}_{m,i}$ }}+\underbrace{\sum_{(q,j)\ne(m,i)}R_{q,j}(\mathbf{W}_{m,i},\mathbf{W}_{-(m,i)})}_{\textrm{ convex in $\mathbf{W}_{m,i}$}}.\label{eqConvexConcave} \end{align}} Defined {\small $R_{-(m,i)}\left({\mathbf{W}}\right)\triangleq\sum_{(q,j)\ne(m,i)}R_{q,j}\left({\mathbf{W}}\right)$}. We can find a lower bound for $R(\mathbf{W})$ by linearizing the $R_{-(m,i)}\left({\mathbf{W}}\right)$ with respect to $\mathbf{W}_{m,i}$ around a fixed $\widehat{\mathbf{W}}$. Utilizing the fact that $R_{-(m,i)}\left({\mathbf{W}}\right)$ is convex in $\mathbf{W}_{m,i}$, we obtain {\small \begin{align} \sum_{(q,j)\ne(m,i)}R_{q,j}\left(\mathbf{W}_{m,i},\widehat{\mathbf{W}}_{-(m,i)}\right) &\ge R_{-(m,i)}\left(\widehat{\mathbf{W}}\right)- \sum_{(q,j)\ne(m,i)}{\mbox{\textrm{Tr}}}\left[T_{q,j}\left(\widehat{\mathbf{W}}\right)\mathbf{H}_{m,q_j}(\mathbf{W}_{m,i}- \widehat{\mathbf{W}}_{m,i})\right]\label{eqApproximation}\\ \textrm{with}\quad\quad\quad T_{q,j}\left(\widehat{\mathbf{W}}\right)&\triangleq \frac{1/\ln(2)}{I_{q,j}(\widehat{\mathbf{W}}_{-(q,j)})+ \mathbf{h}^H_{q,j}\widehat{\mathbf{W}}_{q,j}\mathbf{h}_{q,j}} \frac{\mathbf{h}^H_{q,q_j}\widehat{\mathbf{W}}_{q,j}\mathbf{h}_{q,q_j}}{I_{q,j}(\widehat{\mathbf{W}}_{-(q,j)})}\ge 0 \label{eqTax}. \end{align}} Let us define a concave function of $\mathbf{W}_{m,i}$ {\small \begin{align} U_{m,i}(\mathbf{W}_{m,i},\widehat{\mathbf{W}}_{-(m,i)})&\triangleq R_{m,i}(\mathbf{W}_{m,i},\widehat{\mathbf{W}}_{-(m,i)}) +R_{-(m,i)}\left(\widehat{\mathbf{W}}\right)- \sum_{(q,j)\ne(m,i)}{\mbox{\textrm{Tr}}}\left[T_{q,j}\left(\widehat{\mathbf{W}}\right)\mathbf{H}_{m,q_j}(\mathbf{W}_{m,i}- \widehat{\mathbf{W}}_{m,i})\right].\nonumber \end{align}} Then from \eqref{eqConvexConcave}, \eqref{eqApproximation} and the definition of $U_{m,i}(.)$, we must have{\small \begin{align} U_{m,i}(\mathbf{W}_{m,i},\widehat{\mathbf{W}}_{-(m,i)})\le R(\mathbf{W}_{m,i},\widehat{\mathbf{W}}_{-(m,i)}),~\forall \ \mathbf{W}_{m,i}\succeq 0 \label{eqLowerBound} \end{align}} where the equality is achieved when $\mathbf{W}_{m,i}=\widehat{\mathbf{W}}_{m,i}$. We refer to this lower bound as the ``per-user" lower bound, as it is defined w.r.t. each user $(m,i)$. Such lower bound is useful, because if we can find a $\mathbf{W}^*_{m,i}$ that satisfies {\small$ U_{m,i}({\mathbf{W}}^*_{m,i},\widehat{\mathbf{W}}_{-(m,i)})> U_{m,i}(\widehat{\mathbf{W}}_{m,i},\widehat{\mathbf{W}}_{-(m,i)})$}, then the system sum rate must increase, as{\small \begin{align} R(\mathbf{W}^*_{m,i},\widehat{\mathbf{W}}_{-(m,i)})\ge U_{m,i}(\mathbf{W}^*_{m,i},\widehat{\mathbf{W}}_{-(m,i)} )> U_{m,i}(\widehat{\mathbf{W}}_{m,i},\widehat{\mathbf{W}}_{-(m,i)} )=R(\widehat{\mathbf{W}}_{m,i},\widehat{\mathbf{W}}_{-(m,i)}).\label{eqRateIncreasePerUser} \end{align}} \vspace{-0.5cm} \section{Multi-cell Network with Single User In Each Cell}\label{secSingleUser} We first consider an important scenario in which each BS transmits to a single user. This scenario may arise in a heterogeneous network when each BS transmits to a relay in its cell. 
As there is a single user in each cell, we simplify the notation by using $U_m(.)$, $T_q(.)$, $I_m(.)$ instead of $U_{m,i}(.)$, $T_{q,i}(.)$ and $I_{m,i}(.)$, respectively. We use $\mathbf{W}_m$ to denote the covariance of BS $m$ to its user; we use $\mathbf{H}_{m,q}$ to denote the channel between BS $m$ to the user in the cell of BS $q$. Notice that the per-user bound identified in Section \ref{secSystemModel} becomes {\it per-BS} bound, as each BS has a single user in this scenario. For simplicity, define $\sum_{q\ne m}T_{q}\left(\widehat{\mathbf{W}}\right)\mathbf{H}_{m,q}=\mathbf{A}_m\succeq 0$, then the per-BS bound can be expressed as: {\small \begin{align} U_{m}(\mathbf{W}_{m},\widehat{\mathbf{W}}_{-m})\triangleq R_{m}(\mathbf{W}_{m},\widehat{\mathbf{W}}_{-m}) +R_{-m}\left(\widehat{\mathbf{W}}\right)- {\mbox{\textrm{Tr}}}\left[\mathbf{A}_m(\mathbf{W}_{m}- \widehat{\mathbf{W}}_{m})\right]. \end{align}} Define the feasible set for BS $m$ as $\mathcal{F}_m\triangleq\{\mathbf{W}_m: {\mbox{\textrm{Tr}}}\left[\mathbf{W}_{m}\right]\le \bar{p}_m,~ \mathbf{W}_{m}\succeq 0,~{\mbox{\textrm{Rank}}}(\mathbf{W}_{m})\le 1\}$. The idea is to let the BSs take turns to optimize their respective lower bounds $\{U_m(.)\}$. Assuming other BSs' transmissions are fixed as $\widehat{\mathbf{W}}_{-m}$, the Lower Bound Maximization problem (LBM) for BS $m$ is{\small \begin{align} &\max_{\mathbf{W}_m\in\mathcal{F}_m}U_m(\mathbf{W}_m, \widehat{\mathbf{W}}_{-m})\tag{LBM}. \end{align}} Notice that after relaxing the rank constraint, the problem (LBM) is a concave problem in the variable $\mathbf{W}_m$. In the sequel, we will refer to the problem (LBM) {\it without} the rank constraint as (R-LBM), and define its feasible set as $\mathcal{F}^{R}_m\triangleq \{\mathbf{W}_m: {\mbox{\textrm{Tr}}}\left[\mathbf{W}_{m}\right]\le \bar{p}_m,~ \mathbf{W}_{m}\succeq 0\}$. The problem (R-LBM) is a concave determinant maximization (MAXDET) problem \cite{vandenberghe98}, and can be solved efficiently using convex program/SDP solvers such as CVX \cite{cvx}. However, in practice such general purpose solver may still induce heavy computational burden. Moreover, the resulting optimal solution of the relaxed problem may have rank greater than one. Fortunately, these difficulties can be resolved. We have found an explicit construction that generates a rank-1 solution of the problem (R-LBM) (hence the optimal solution of problem (LBM)). The rank reduction problem of downlink beamforming has been recently studied in \cite{huang10}, \cite{wiesel08} and \cite{huh10}. However the algorithms proposed in those works cannot be directly used to obtain a rank-1 solution to (LBM): reference \cite{huang10} considers problems with linear objective functions; references \cite{huh10} and \cite{wiesel08} consider the relaxation of the MAXDET problem {\it without} the linear penalty terms \footnote{With linear penalty in the form of ${\small -{\mbox{\textrm{Tr}}}\left[\mathbf{A}_m(\mathbf{W}_{m}- \widehat{\mathbf{W}}_{m,q})\right]}$, equation (43) is no longer equivalent to equation (44) in \cite{wiesel08}.}. 
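For completeness, a toy instance of the relaxed problem (R-LBM) can indeed be handled by an off-the-shelf convex solver, as noted above. The sketch below uses cvxpy with the SCS solver (a tooling assumption on our part; any MAXDET-capable solver would do) and then inspects the eigenvalues of the returned solution:
\begin{verbatim}
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
K, p_bar, I_m = 4, 10.0, 1.5              # antennas, power budget, I_m(W_hat_{-m})

h = rng.standard_normal((K, 1)) + 1j * rng.standard_normal((K, 1))   # h_{m,m}
B = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
A_m = B @ B.conj().T                      # stands in for sum_q T_q(W_hat) H_{m,q}
W_hat = np.zeros((K, K))                  # previous iterate W_hat_m

W = cp.Variable((K, K), hermitian=True)
signal = cp.real(h.conj().T @ W @ h)      # affine in W
objective = cp.log(1 + signal / I_m) / np.log(2) \
            - cp.real(cp.trace(A_m @ (W - W_hat)))
prob = cp.Problem(cp.Maximize(objective),
                  [W >> 0, cp.real(cp.trace(W)) <= p_bar])
prob.solve(solver=cp.SCS)                 # handles the exponential and PSD cones

eigs = np.linalg.eigvalsh(W.value)
print("numerical rank:", int(np.sum(eigs > 1e-6 * eigs.max())))
\end{verbatim}
Inspecting the eigenvalues confirms that the relaxation is tight for this instance, consistent with the discussion above, although a generic solver offers no such guarantee in general.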
Removing all the terms in the objective of (R-LBM) that are not related to ${\mbox{$\mathbf{W}$}}_m$, we can write the partial Lagrangian of the problem (R-LBM) as{\small \begin{align} L(\mathbf{W}_m,\mu_m)=\log\bigg|\mathbf{I}+\mathbf{W}_m\mathbf{H}_{m,m}\frac{1}{I_{m}(\widehat{{\mbox{$\mathbf{W}$}}}_{-m})} \bigg|-\mbox{Tr}[(\mathbf{A}_m+\mu_n\mathbf{I})\mathbf{W}_m]+\mu_m\bar{p}_m \end{align}} where $\mu_m\ge0$ is the Lagrangian multiplier associated with the power constraint. Notice the fact that $\mathbf{A}_m\succeq 0$, then for any $\mu_m>0$, we can perform the Cholesky decomposition $\mathbf{A}_m+\mu_m\mathbf{I}=\mathbf{L}^H\mathbf{L}$, which results in $\mbox{Tr}[(\mathbf{A}_m+\mu_m\mathbf{I})\mathbf{W}_m]=\mbox{Tr}[\mathbf{L}\mathbf{W}_m\mathbf{L}^H]$. Define $\bar{\mathbf{W}}_m(\mu_m)=\mathbf{L}\mathbf{W}_m\mathbf{L}^H$, we have{\small \begin{align} L(\mathbf{W}_m,\mu_m)&=\log\bigg|\mathbf{I}+\mathbf{L}^{-1}\bar{\mathbf{W}}_m(\mu_m) \mathbf{L}^{-H}\mathbf{H}_{m,m}\frac{1}{I_{m}(\widehat{{\mbox{$\mathbf{W}$}}}_{-m})}\bigg|- \mbox{Tr}[\bar{\mathbf{W}}_m(\mu_m)]+\mu_m\bar{p}_m\nonumber\\ &\stackrel{(a)}=\log\left|\mathbf{I}+\bar{\mathbf{W}}_m(\mu_m)\mathbf{V}\mathbf{\Delta}\mathbf{V}^H\right|- \mbox{Tr}[\bar{\mathbf{W}}_m(\mu_m)]+\mu_m\bar{p}_m\nonumber\\ &\stackrel{(b)}=\log\left|\mathbf{I}+\widehat{\mathbf{W}}_m(\mu_m)\mathbf{\Delta}\right|- \mbox{Tr}[\mathbf{V}\widehat{\mathbf{W}}_m(\mu_m)\mathbf{V}^H]+\mu_m\bar{p}_m\nonumber\\ &=\log\left|\mathbf{I}+\widehat{\mathbf{W}}_m(\mu_m)\mathbf{\Delta}\right|- \mbox{Tr}[\widehat{\mathbf{W}}_m(\mu_m)]+\mu_m\bar{p}_m=L(\widehat{\mathbf{W}}_m(\mu_m)) \end{align}} where in $(a)$ we have used the eigendecomposition: $\mathbf{L}^{-H}\mathbf{H}_{m,m}\mathbf{L}^{-1}\frac{1}{I_{m}(\widehat{{\mbox{$\mathbf{W}$}}}_{-m})} =\mathbf{V}\mathbf{\Delta}\mathbf{V}^H$; in $(b)$ we have defined $\widehat{\mathbf{W}}_m(\mu_m)=\mathbf{V}^H\bar{\mathbf{W}}_m(\mu_m)\mathbf{V}$. Let $\widehat{\mathbf{W}}_m^*(\mu_m)$ denote an optimal solution to the problem $\max_{\widehat{\mathbf{W}}_m(\mu_m)\succeq 0} L(\widehat{\mathbf{W}}_m(\mu_m))$. We claim that there must exist a $\widehat{\mathbf{W}}^*_m(\mu_m)$ that is {\it diagonal}. Note that ${\mbox{\textrm{Rank}}}(\mathbf{H}_{m,m})=1$ implies ${\mbox{\textrm{Rank}}}(\mathbf{\Delta})\le 1$. Thus $\widehat{\mathbf{W}}^*_m(\mu_m)\mathbf{\Delta}$ has at most a {\it single column}. This implies that we can remove the off diagonal elements of $\mathbf{I}+\widehat{\mathbf{W}}^*_m(\mu_m)\mathbf{\Delta}$ without changing the values of $\left|\mathbf{I}+\widehat{\mathbf{W}}^*_m(\mu_m)\mathbf{\Delta}\right|$. Consequently, for any given $\widehat{\mathbf{W}}^*_m(\mu_m)$, we can construct a diagonal optimal solution $\widehat{\mathbf{W}}^{*,D}_m(\mu_m)$ by removing all its off diagonal elements. This operation removes all the off diagonal elements of $\mathbf{I}+\widehat{\mathbf{W}}^*_m(\mu_m)\mathbf{\Delta}$, and it does not change either $\left|\mathbf{I}+\widehat{\mathbf{W}}^*_m(\mu_m)\mathbf{\Delta}\right|$ or $\textrm{Tr}[\widehat{\mathbf{W}}^*_m(\mu_m)]$. Consequently $\widehat{\mathbf{W}}^{*,D}_m(\mu_m)$ is also optimal. When restricting $\widehat{\mathbf{W}}^{*}_m(\mu_m)$ to be diagonal, we can find its closed-form expression{\small \begin{align} [\widehat{\mathbf{W}}_m^*(\mu_m)]_{i,i}=\left[\frac{[\mathbf{\Delta}]_{i,i}-1}{[\Delta]_{i,i}}\right]^+,\textrm{if $[\mathbf{\Delta}]_{i,i}\ne 0$;}\quad [\widehat{\mathbf{W}}_m^*(\mu_m)]_{i,i}=0,\ \textrm{otherwise},\label{eqWCompute} \end{align}} where $[x]^+=\max\{0,x\}$. 
Then we can obtain ${\mathbf{W}}^*_m(\mu_m)=\mathbf{L}^{-1}\mathbf{V}\widehat{\mathbf{W}}_m^{*}(\mu_m)\mathbf{V}^H\mathbf{L}^{-H}$. Combining the fact that ${\mbox{\textrm{Rank}}}(\mathbf{\Delta})\le 1$ with \eqref{eqWCompute} we conclude ${\mbox{\textrm{Rank}}}(\widehat{\mathbf{W}}^*(\mu_m))\le 1$, and consequently ${\mbox{\textrm{Rank}}}({\mathbf{W}}^*_m(\mu_m))\le 1$, {\it for any $\mu_m>0$}. It is relatively straightforward to show that $\mbox{Tr}[\mathbf{W}_m^*(\mu_m)]$ is strictly decreasing with respect to $\mu_m$. Consequently if the optimal multiplier $\mu_m^*>0$, then a bisection method can be used to find $\mu^*_m$ that satisfies the feasibility conditions $\mbox{Tr}[\mathbf{W}_m^*(\mu^*_m)]\le \bar{p}_m$. Furthermore, we can also show that when $\mu^*_m=0$, $\mathbf{A}_m$ must have full rank. In this case, we can find the Cholesky decomposition $\mathbf{A}_m=\mathbf{L}\mathbf{L}^H$, and the above construction can still be used to directly obtain $\mathbf{W}^*_m(0)$ (without bisection), that satisfy ${\mbox{\textrm{Rank}}}(\mathbf{W}_m^*(0))\le 1$. In conclusion, for any $\mu_m^*\ge 0$, we obtain ${\mbox{\textrm{Rank}}}({\mathbf{W}}^*_m(\mu^*_m))\le 1$. Table \ref{tableUtilityMaximization} summarizes the above procedure. \begin{table}[htb] \begin{center} \vspace{-0.1cm} \caption{ The Optimization of (LBM)} \label{tableUtilityMaximization} { \begin{tabular}{|l|} \hline S1) Choose $\mu^u_m$ and $\mu^l_m$ such that $\mu_m^*$ lies in $[\mu^l_m,~\mu^u_m]$.\\ S2) Let $\mu^{mid}_m=(\mu^l_m+\mu^u_m)/2$. Compute decomposition:\\ ~~~~~$\mathbf{L}^H\mathbf{L}=\mathbf{A}_m+\mu^{mid}_m\mathbf{I}$\\ ~~~~~$\mathbf{V}\Delta\mathbf{V}^H= \mathbf{L}^{-H}\mathbf{H}_{m,m}\mathbf{L}^{-1}\frac{1}{I_{m}(\widehat{{\mbox{$\mathbf{W}$}}}_{-m})}$.\\ S3) Compute $\widehat{\mathbf{W}}^*_m(\mu^{mid}_m)$ by \eqref{eqWCompute}.\\ S4) Compute $\mathbf{W}_m^*(\mu^{mid}_m)=\mathbf{L}^{-1}\mathbf{V}\widehat{\mathbf{W}}_m^{*}(\mu^{mid}_m) \mathbf{V}^H\mathbf{L}^{-H}$.\\ S5) If $\mbox{Tr}(\mathbf{W}_m^*(\mu^{mid}_m))>\bar{p}_m$, let $\mu^{l}_m=\mu^{mid}_m$; otherwise let $\mu^{u}_m=\mu^{mid}_m$.\\ S6) If $|\mbox{Tr}(\mathbf{W}_m^*(\mu^{mid}_m))-\bar{p}_m|<\epsilon$ or $|\mu^{u}_m-\mu^{l}_m|<\epsilon$, stop; otherwise go to S2). \\ \hline \end{tabular}} \end{center} \vspace{-0.5cm} \end{table} In the following, we identify a special structure of the problem (R-LBM) that allows it to admit a rank-1 solution. To this end, we tailor the rank reduction procedure (abbreviated as RRP) proposed in \cite{huang10} to fit our problem \footnote{Note that the RRP procedure in \cite{huang10} cannot be directly applied to our problem. This is because in \cite{huang10}, the RRP is used to identify rank-1 solution of semidefinite programs with linear objective and constraints. Our problem is different in that the objective function is of a logdet form. }. Assume that using standard optimization package we obtain an optimal solution $\widetilde{\mathbf{W}}^*_m$ to the convex problem (R-LBM), with ${\mbox{\textrm{Rank}}}(\widetilde{\mathbf{W}}^*_m)=r>1$. Let $\widetilde{\mathbf{W}}^{(1)}_m=\widetilde{\mathbf{W}}^*_m$, and let $r^{(1)}=r$. At iteration $t$ of the the RRP, we perform a eigen decomposition $\widetilde{\mathbf{W}}^{(t)}_m=\mathbf{V}^{(t)}{\mathbf{V}^{(t)}}^H$, where $\mathbf{V}^{(t)}\in\mathbb{C}^{K\times r^{(t)}}$. 
If $R^{(t)}>1$, find $\mathbf{D}^{(t)}\in\mathbb{S}^{r^{(t)}}$ such that the following three conditions are satisfied \begin{align} &{\mbox{\textrm{Tr}}}(\mathbf{D}^{(t)}{\mathbf{V}^{(t)}}^H\mathbf{H}_{m,m}\mathbf{V}^{(t)})=0\label{eqDH}\\ &{\mbox{\textrm{Tr}}}(\mathbf{D}^{(t)}{\mathbf{V}^{(t)}}^H\mathbf{A}_m\mathbf{V}^{(t)})=0\label{eqDA}\\ &{\mbox{\textrm{Tr}}}(\mathbf{D}^{(t)}{\mathbf{V}^{(t)}}^H\mathbf{V}^{(t)})=0\label{eqDI}. \end{align} If such $\mathbf{D}^{(t)}$ cannot be found, exit. Otherwise, let $\lambda(\mathbf{D}^{(t)})$ be the eigenvalue of $\mathbf{D}^{(t)}$ with the largest absolute value, and construct $\widetilde{\mathbf{W}}^{(t+1)}_m=\mathbf{V}^{(t)}(\mathbf{I}_r-\frac{1}{\lambda(\mathbf{D}^{(t)})}\mathbf{D}^{(t)}) {\mathbf{V}^{(t)}}^H\succeq 0$. Clearly, ${\mbox{\textrm{Rank}}}(\mathbf{I}_r-\frac{1}{\lambda(\mathbf{D}^{(t)})})\le r^{(t)}-1$, as a result, ${\mbox{\textrm{Rank}}}(\widetilde{\mathbf{W}}^{(t+1)}_m)\le {\mbox{\textrm{Rank}}}(\widetilde{\mathbf{W}}^{(t)}_m)-1$, i.e., the rank has been reduced by at least one. Utilizing \eqref{eqDH}--\eqref{eqDI}, we obtain{\small \begin{align} &\mathbf{h}^H_{m,m}\widetilde{\mathbf{W}}^{(t+1)}_m\mathbf{h}_{m,m} ={\mbox{\textrm{Tr}}}[\mathbf{H}_{m,m}\widetilde{\mathbf{W}}^{(t+1)}_m]\nonumber ={\mbox{\textrm{Tr}}}\left[\mathbf{H}_{m,m}\mathbf{V}^{(t)}(\mathbf{I}_r-\frac{1}{\lambda(\mathbf{D}^{(t)})}\mathbf{D}^{(t)}) {\mathbf{V}^{(t)}}^H\right]\nonumber\\ &\quad\quad={\mbox{\textrm{Tr}}}[\mathbf{H}_{m,m}\widetilde{\mathbf{W}}^{(t)}_m] =\mathbf{h}^H_{m,m}\widetilde{\mathbf{W}}^{(t)}_m\mathbf{h}^{H}_{m,m}\label{eqHCancel}\\ &{\mbox{\textrm{Tr}}}[\mathbf{A}_m\widetilde{\mathbf{W}}^{(t+1)}_m] ={\mbox{\textrm{Tr}}}\left[\mathbf{A}_m\mathbf{V}^{(t)}(\mathbf{I}_r-\frac{1}{\lambda(\mathbf{D}^{(t)})}\mathbf{D}^{(t)}) {\mathbf{V}^{(t)}}^H\right] ={\mbox{\textrm{Tr}}}[\mathbf{A}_m\widetilde{\mathbf{W}}^{(t)}_m]\label{eqACancel}\\ &{\mbox{\textrm{Tr}}}[\widetilde{\mathbf{W}}^{(t+1)}_m] ={\mbox{\textrm{Tr}}}\left[\mathbf{V}^{(t)}(\mathbf{I}_r-\frac{1}{\lambda(\mathbf{D}^{(t)})}\mathbf{D}^{(t)}) {\mathbf{V}^{(t)}}^H\right] ={\mbox{\textrm{Tr}}}[\widetilde{\mathbf{W}}^{(t)}_m]\label{eqWCancel}. \end{align}} Equation \eqref{eqHCancel} and \eqref{eqACancel} ensure that the objective value of (R-LBM) does not change, i.e., $U_m(\widetilde{\mathbf{W}}^{(t+1)}_m, \widehat{{\mbox{$\mathbf{W}$}}}_{-m})=U_m(\widetilde{\mathbf{W}}^{(t)}_m, \widehat{{\mbox{$\mathbf{W}$}}}_{-m})$. Equation \eqref{eqWCancel} ensures ${\mbox{\textrm{Tr}}}[\widetilde{\mathbf{W}}^{(t+1)}_m]={\mbox{\textrm{Tr}}}[\widetilde{\mathbf{W}}^{(t)}_m]\le \bar{p}_m$. Combined with the fact that $\widetilde{\mathbf{W}}^{(t+1)}_m\succeq 0$, we have that $\widetilde{\mathbf{W}}^{(t+1)}_m$ is also an optimal solution to the problem (R-LBM). Evidently, performing the above procedure for at most $r$ times, we will obtain a rank-1 solution $\mathbf{W}^*_m$ that solves the problem (LBM). Now the question is that under what condition can we find $\mathbf{D}^{(t)}$ that satisfies \eqref{eqDH}--\eqref{eqDI} in each iteration $t$. Note that $\mathbf{D}^{(t)}$ is a $r^{(t)}\times r^{(t)}$ Hermitian matrix, hence finding $\mathbf{D}^{(t)}$ that satisfies \eqref{eqDH}--\eqref{eqDI} is equivalent to solving a system of three linear equations with $(R^{(t)})^2$ unknowns \footnote{The number of unknowns for the real part of $\mathbf{D}^{(t)}$ is $\frac{(R^{(t)}+1)R^{(t)}}{2}$, and the number of unknowns for the imaginary part of $\mathbf{D}^{(t)}$ is $\frac{(R^{(t)}-1)R^{(t)}}{2}$.}. 
As long as $(R^{(t)})^2>3$, the linear system is underdetermined and such $\mathbf{D}^{(t)}$ can be found. Consequently, the RRP procedure, when terminated, gives us a $\mathbf{W}^*_m$ with ${\mbox{\textrm{Rank}}}^2(\mathbf{W}^*_m)\le 3$. As the rank of a matrix is an integer, we must have ${\mbox{\textrm{Rank}}}(\mathbf{W}^*_m)=1$. It is important to note, however, that the ability of the RRP procedure to recover a rank-1 solution for problem (R-LBM) lies in the fact that {\it we only have three linear terms of $\mathbf{W}_m$ in both the objective and the constraints}. This results in solving a linear system with {\it three} equations in each iteration of the RRP procedure. If we have an additional linear constraint of the form ${\mbox{\textrm{Tr}}}(\mathbf{B}\mathbf{W}_m)\le c$ for some constant $c$, the RRP procedure may produce a solution $\mathbf{W}^*_m$ with ${\mbox{\textrm{Rank}}}^2(\mathbf{W}^*_m)\le 4$, which does not guarantee ${\mbox{\textrm{Rank}}}(\mathbf{W}^*_m)=1$. We have used the RRP procedure to identify the structure of problem (R-LBM) that allows for the existence of a rank-1 solution. However, in practice this procedure is of limited use as it requires solving (R-LBM) to begin with. Therefore we will use our own algorithm listed in Table \ref{tableUtilityMaximization} to directly obtain a rank-1 solution of (R-LBM). Summarizing the above discussion, we propose the following algorithm, named Successive and Sequential Convex Approximation Beam Forming (SSCA-BF): 1) {\bf Initialization}: Let $t=0$, randomly choose a set of feasible covariances $\mathbf{W}_m^{0},~\forall~m\in\mathcal{M}$. 2) {\bf Information Exchange}: Choose $m=M\oslash t$, let each BS $q\ne m$ compute and transfer $T_q(\mathbf{W}^t)$ to BS $m$. 3) {\bf Maximization}: BS $m$ uses the procedure in Table \ref{tableUtilityMaximization} to obtain a solution ${\mathbf{W}}_m^{t+1}$ of problem (LBM) with the objective function $U_m(\mathbf{W}_m, {\mathbf{W}}^t_{-m})$. Let $\mathbf{W}^{t+1}=[\mathbf{W}_m^{t+1}, \mathbf{W}^t_{-m}]$. 4) {\bf Continue}: If $|R(\mathbf{W}^{t+1})-R(\mathbf{W}^{t+1-M})|<\epsilon$, stop. Otherwise, set $t=t+1$, go to Step 2). In Step 4), $\epsilon>0$ is the stopping criterion. The above algorithm is distributed in the sense that as long as BS $m$ has the information specified in Step 2) and the channels $\{\mathbf{H}_{m,q}\}_{q\ne m}$, it can carry out the computation by itself. \newtheorem{T1}{Theorem} \begin{T1}\label{theromConvergenceSingleUser} {\it The sequence {\small $\{R(\mathbf{W}^t)\}$} produced by the SSCA-BF algorithm is non-decreasing and converges. Moreover, every limit point of the sequence {\small $\{\mathbf{W}^t\}$} is a stationary solution to the problem (SRM).} \end{T1} \begin{proof} Fix an iteration $t$ and let $m=M\oslash t$. Due to the fact that we are able to solve the problem (LBM) exactly, we have $U_m(\mathbf{W}^{t+1}_{m},{\mathbf{W}}^t_{-m})\ge U_m({\mathbf{W}}^t_m,{\mathbf{W}}^t_{-m})$. Using \eqref{eqRateIncreasePerUser} and the fact that $U_m({\mathbf{W}}^t_m,{\mathbf{W}}^t_{-m})=R({\mathbf{W}}^t)$, we have {\small \begin{align} R({\mathbf{W}}^{t+1})=R({\mathbf{W}}^{t+1}_{m},{\mathbf{W}}^t_{-m})\ge U_{m}({\mathbf{W}}^{t+1}_{m},{\mathbf{W}}^t_{-m} )\ge U_{m}({\mathbf{W}}^t_{m},{\mathbf{W}}^t_{-m} )= R({\mathbf{W}}^t).\label{eqRateIncrease} \end{align}} Since the system sum rate is upper bounded, the nondecreasing sequence $\{R(\mathbf{W}^t)\}_{t=1}^{\infty}$ converges.
Take any converging subsequence of $\{{\mbox{$\mathbf{W}$}}^{t}\}_{t=1}^{\infty}$, and denote it as $\{{\mbox{$\mathbf{W}$}}^{l}\}_{l=1}^{\infty}$. Define ${\mbox{$\mathbf{W}$}}^*=\lim_{l\to\infty}{\mbox{$\mathbf{W}$}}^l$. For all BS $m\in\mathcal{M}$, we must have $U_m({\mbox{$\mathbf{W}$}}_m^*,{\mbox{$\mathbf{W}$}}^*_{-m})\ge U_m({\mbox{$\mathbf{W}$}}_m,{\mbox{$\mathbf{W}$}}^*_{-m}),~\forall~{\mbox{$\mathbf{W}$}}_m\in\mathcal{F}_m$, i.e., \begin{align} {\mbox{$\mathbf{W}$}}^*_m\in\arg\max_{\mathbf{W}_m\in\mathcal{F}_m} U_m({\mbox{$\mathbf{W}$}}_m,{\mbox{$\mathbf{W}$}}^*_{-m}), \ \forall \ m\in\mathcal{M}. \end{align} Checking the KKT conditions of the above $M$ optimization problems, it is straightforward to see that they are equivalent to the KKT conditions of the original problem (SRM). It follows that ${\mbox{$\mathbf{W}$}}^*$ is a KKT point of the problem (SRM). In summary, any limit point of the sequence $\{{\mbox{$\mathbf{W}$}}^{t}\}_{t=1}^{\infty}$ is a KKT point of the problem (SRM). \end{proof} \vspace{-0.1cm} \section{Multi-cell Network with Multiple Users In Each Cell}\label{secMultipleUser} In this section, we consider the network with multiple users per cell. In this scenario, we can no longer perform the SSCA-BF algorithm {\it cyclically among all the users} to maximize the system sum rate. The reason is that different users associated with the same BS share a {\it coupled constraint} {\small ${\mbox{\textrm{Tr}}}(\sum_{i\in\mathcal{N}_m}\mathbf{W}_{m,i})\le\bar{p}_m$}. For example, consider a network with a single BS $m$ and multiple users. Suppose at time $0$, $\mathbf{W}^0_{m,i}=\mathbf{0},~\forall~i\in\mathcal{N}_m$. Suppose BS $m$ optimizes user $(m,1)$ first (solving problem (LBM) for user $(m,1)$ with constraints {\small ${\mbox{\textrm{Tr}}}(\mathbf{W}_{m,1})+{\mbox{\textrm{Tr}}}(\sum_{j\ne 1, j\in\mathcal{N}_m}\mathbf{W}^0_{m,j})\le\bar{p}_m$} and $\mathbf{W}_{m,1}\succeq 0$). The covariance so obtained has the form $\mathbf{W}^*_{m,1}=\bar{p}_m\frac{\mathbf{h}_{m,m_1}\mathbf{h}^H_{m,m_1}}{\|\mathbf{h}_{m,m_1}\|^2}$, and must have the property ${\mbox{\textrm{Tr}}}(\mathbf{W}^*_{m,1})=\bar{p}_m$. Then all the subsequent computations ($t=1,\cdots$) within BS $m$ yield $\mathbf{W}^*_{m,i}=\mathbf{0}$, $\forall~i\ne1$, because each of the problems has to satisfy the joint power constraint. In order to avoid the above problem, we propose to compute the covariance matrices {\it BS by BS}, instead of user by user, i.e., to update the set $\mathbf{W}_{m}=\{\mathbf{W}_{m,i}\}_{i\in\mathcal{N}_m}$ at the same time, and cycle through the BSs. To this end, we first identify a set of {\it per-BS} lower bounds that will be useful in the subsequent development. \newtheorem{P3}{Proposition} \begin{P3}\label{propLowerBoundBS} {\it For all feasible ${\mathbf{W}}_m$ and a fixed $\widehat{\mathbf{W}}$ we have the following inequality {\small \begin{align} R_{m}(\mathbf{W}_{m},\widehat{\mathbf{W}}_{-m})+R_{-m}\left(\widehat{\mathbf{W}}\right)- \sum_{i\in\mathcal{N}_m}\sum_{q\ne m}\sum_{j\in\mathcal{N}_q}{\mbox{\textrm{Tr}}}\left[T_{q,j}\left(\widehat{\mathbf{W}}\right)\mathbf{H}_{m,q_j}(\mathbf{W}_{m,i}- \widehat{\mathbf{W}}_{m,i})\right]\le R(\mathbf{W}_{m},\widehat{\mathbf{W}}_{-m})\label{eqLowerBoundBS} \end{align}} where the equality is achieved when $\mathbf{W}_{m}=\widehat{\mathbf{W}}_{m}$.
Define the left hand side of \eqref{eqLowerBoundBS} as $\bar{U}_{m}(\mathbf{W}_{m},\widehat{\mathbf{W}}_{-m})$, which is the lower bound associated with BS $m$.}\end{P3} \begin{proof} We can verify, as in Proposition \ref{propConvex}, that $R_{-m}\left(\mathbf{W}_{m},{\mathbf{W}}_{-m}\right)$ is {\it jointly convex} in the set of matrices $\{\mathbf{W}_{m,i}\}_{i\in\mathcal{N}_m}$. Then the lower bound in \eqref{eqLowerBoundBS} can be obtained by Taylor expansion. Due to space limitations, we do not repeat the proof here. \end{proof} Unfortunately, unlike the lower bound $U_{m}(.)$ obtained for the single user per BS case, $\bar{U}_{m}(.)$ is {\it not} concave in $\mathbf{W}_{m}$, due to the non-concavity of {\small$R_m(\mathbf{W}_{m},\mathbf{W}_{-m})$} w.r.t. $\mathbf{W}_m$. In the following, we propose a heuristic algorithm to optimize the per-BS lower bound. We first express the lower bound $\bar{U}_{m}(\mathbf{W}_{m},\widehat{\mathbf{W}}_{-m})$ in an equivalent form (where $\mathbf{w}_m\triangleq\{\mathbf{w}_{m,i}\}_{i\in\mathcal{N}_m}$){\small \begin{align} \bar{U}_{m}(\mathbf{w}_{m},\widehat{\mathbf{w}}_{-m})&\triangleq R_{m}(\mathbf{w}_{m},\widehat{\mathbf{w}}_{-m})+R_{-m}\left(\widehat{\mathbf{w}}\right)- \sum_{i\in\mathcal{N}_m}\sum_{q\ne m}\sum_{j\in\mathcal{N}_q}T_{q,j}\left(\widehat{\mathbf{w}}\right) \left(\mathbf{w}^H_{m,i}\mathbf{H}_{m,q_j}\mathbf{w}_{m,i}- \widehat{\mathbf{w}}^H_{m,i}\mathbf{H}_{m,q_j}\widehat{\mathbf{w}}_{m,i}\right)\nonumber. \end{align}} Then the lower bound optimization problem of an individual BS $m$ is \begin{align} \max_{\mathbf{w}_m}&\quad \bar{U}_{m}(\mathbf{w}_{m},\widehat{\mathbf{w}}_{-m})\label{LBM-BS}\\ \textrm{s.t.}&\quad \sum_{i\in\mathcal{N}_m}\mathbf{w}^H_{m,i}\mathbf{w}_{m,i}\le\bar{p}_m\nonumber \end{align} Setting the derivative of the Lagrangian of the problem \eqref{LBM-BS} w.r.t. $\mathbf{w}_{m,i}$ to zero, we obtain {\small\begin{align} &\ln(2)\left(\sum_{q\ne m}\sum_{j\in\mathcal{N}_q}T_{q,j}(\widehat{\mathbf{w}}_q, \widehat{\mathbf{w}}_{-q}) \mathbf{H}_{m,q_j}+\sum_{l\ne i,l\in\mathcal{N}_m}T_{m,l}(\mathbf{w}_m,\widehat{\mathbf{w}}_{-m}) \mathbf{H}_{m,m_l}+\mu_m\mathbf{I}_{p}\right)\mathbf{w}_{m,i}\nonumber\\ &= \frac{\mathbf{H}_{m,m_i}\mathbf{w}_{m,i}}{\sum_{q\ne m}\sum_{j\in\mathcal{N}_q}\widehat{\mathbf{w}}^H_{q,j}\mathbf{H}_{q,m_i}\widehat{\mathbf{w}}_{q,j}+ \sum_{l\in\mathcal{N}_m }{\mathbf{w}}^H_{m,l}\mathbf{H}_{m,m_i}{\mathbf{w}}_{m,l}},~\forall~i\in\mathcal{N}_m\label{eqLagrangianZero} \end{align}} where $\mu_m\ge 0$ is the dual variable associated with the power constraint, and $T_{m,l}\left(\mathbf{w}_m,\widehat{\mathbf{w}}_{-m}\right)$ is defined as{\small \begin{align} T_{m,l}\left(\mathbf{w}_m,\widehat{\mathbf{w}}_{-m}\right)&= \frac{1/\ln(2)}{\sum_{q\ne m, j\in\mathcal{N}_q}\widehat{\mathbf{w}}^H_{q,j}\mathbf{H}_{q,m_l}\widehat{\mathbf{w}}_{q,j}+ \sum_{i\in\mathcal{N}_m }{\mathbf{w}}^H_{m,i}\mathbf{H}_{m,m_l}{\mathbf{w}}_{m,i}}\times\nonumber\\ &\frac{\mathbf{w}^H_{m,l}\mathbf{H}_{m,m_l}{\mathbf{w}}_{m,l}} {\sum_{q\ne m, j\in\mathcal{N}_q}\widehat{\mathbf{w}}^H_{q,j}\mathbf{H}_{q,m_l}\widehat{\mathbf{w}}_{q,j}+\sum_{i\ne l, i\in\mathcal{N}_m }{\mathbf{w}}^H_{m,i}\mathbf{H}_{m,m_l}{\mathbf{w}}_{m,i}}.
\end{align}} A tuple $(\mu_m, \{\mathbf{w}_{m,i}\}_{i\in\mathcal{N}_m})$ that satisfies the equations in \eqref{eqLagrangianZero} as well as the complementarity and feasibility conditions $\mu_m\ge 0, \mu_m(\bar{p}_m-\sum_{i\in\mathcal{N}_m}\mathbf{w}^H_{m,i}\mathbf{w}_{m,i})= 0$ and $\bar{p}_m-\sum_{i\in\mathcal{N}_m}\mathbf{w}^H_{m,i}\mathbf{w}_{m,i}\ge 0$ is a stationary solution to the problem \eqref{LBM-BS}. Let us define{\small \begin{align} \mathbf{M}_{m,i}(\mu_m, \widehat{\mathbf{w}})\triangleq \ln(2)\left(\sum_{(q,j)\ne (m,i)}T_{q,j}(\widehat{\mathbf{w}}_m, \widehat{\mathbf{w}}_{-m}) \mathbf{H}_{m,q_j}+\mu_m\mathbf{I}_{p}\right). \end{align}} It is shown in \cite[Proposition 1]{venturino10} that the optimal beam vector $\mathbf{w}_{m,i}$ that satisfies \eqref{eqLagrangianZero} must satisfy the following identity \begin{align} \mathbf{w}_{m,i}=\beta_{m,i}(\mu_m)\mathbf{M}^\dag_{m,i}(\mu_m, \widehat{\mathbf{w}})\mathbf{h}_{m,m_i}\label{eqWUpdate} \end{align} for some constant $\beta_{m,i}(\mu_m)$ that can be computed as{\small \begin{align} \beta_{m,i}(\mu_m)=\sqrt{ \frac{\left[\mathbf{h}^H_{m,m_i}\mathbf{M}^{\dag}_{m,i}(\mu_m, \widehat{\mathbf{w}})\mathbf{h}_{m,m_i}-I_{m,i}(\widehat{\mathbf{w}}_{-(m,i)})\right]^+} {(\mathbf{h}^H_{m,m_i}\mathbf{M}^{\dag}_{m,i}(\mu_m, \widehat{\mathbf{w}})\mathbf{h}_{m,m_i})^2}}\label{eqBeta}. \end{align}} As a result, we can compute $\{\mathbf{w}_{m,i}\}_{i\in\mathcal{N}_m}$ by first computing $\beta_{m,i}(\mu_m)$ according to \eqref{eqBeta}, and then using bisection (as in the classic water-filling algorithm) to find an appropriate $\mu_m\ge 0$ such that the power constraint for BS $m$ is satisfied. To this end, we propose a Sequential Beamforming (S-BF) algorithm: 1) {\bf Initialization}: Let $t=0$, randomly choose a set of feasible transmission beams $\mathbf{w}_m^{0},~\forall~m\in\mathcal{M}$. 2) {\bf Information Exchange}: Choose $m=\left((t+1)~\mathrm{mod}~M\right)+1$, let each BS $q\ne m$ compute and transfer $\{T_{q,j}(\mathbf{w}^t)\}_{j\in\mathcal{N}_q}$ to BS $m$ through the backhaul network. 3) {\bf Computation}: BS $m$ updates its beam vectors according to \eqref{eqWUpdate} and \eqref{eqBeta}, with $\widehat{\mathbf{w}}={\mathbf{w}}^t$. Use bisection to find $\mu_m$ that ensures the power constraint. Obtain the solution ${\mathbf{w}}_{m}^{*}$. 4) {\bf Update}: If $\bar{U}_{m}({\mathbf{w}}^*_{m},{\mathbf{w}}^t_{-m} )\ge \bar{U}_{m}({\mathbf{w}}^t)$, set $\mathbf{w}^{t+1}=[{\mathbf{w}}_m^{*},\mathbf{w}^{t}_{-m}]$; otherwise, set $\mathbf{w}^{t+1}={\mathbf{w}}^{t}$. 5) {\bf Continue}: If $|R(\mathbf{w}^{t+1})-R(\mathbf{w}^{t+1-M})|<\epsilon$, stop. Otherwise, set $t=t+1$, go to Step 2). Note that in Step 4) we check if the lower bound is increased. If this is indeed the case, we accept the new set of beams $\mathbf{w}^*_m$. This procedure ensures $R(\mathbf{w}^{t+1})\ge R(\mathbf{w}^t)$. The S-BF algorithm is a variant/extension of the ICBF algorithm proposed in \cite{venturino10}: Steps 2) and 3) of S-BF constitute a sequential version of the ICBF algorithm. However, the S-BF algorithm does have several advantages/differences with respect to the ICBF algorithm: \emph{i)} The ICBF tries to solve the KKT system of the problem (SRM), while S-BF tries to optimize the per-BS lower bound {\it for each BS}; \emph{ii)} In the S-BF algorithm the BSs update sequentially, while in the ICBF algorithm the BSs update at the same time.
One important consequence of this difference in updating schedule is the amount of information exchange needed in each iteration: in our algorithm, all BSs only need to send a single copy of their local information to a {\it single} BS, while in the ICBF algorithm, they need to send it to {\it all other} BSs. As will be shown in Section \ref{secSimulation}, the total information exchange needed for both the S-BF and SSCA-BF algorithms is significantly less than that of the ICBF algorithm; \emph{iii)} Due to the utilization of the per-BS lower bound in Step 4), the system sum rate of the proposed S-BF algorithm monotonically increases and converges, while the ICBF algorithm does not possess such a convergence guarantee; \emph{iv)} In the S-BF algorithm, there is no ``inner iteration'' in which all the BSs update their beam vectors at the same time to reach some {\it intermediate convergence} (note that in the ICBF algorithm, the convergence of the inner iteration is {\it not} guaranteed). Such an ``inner iteration'' is undesirable because \emph{a)} it is hard to decide, in a distributed fashion, whether convergence has been reached, and \emph{b)} in each such inner iteration, extra feedback information needs to be exchanged between the BSs and their users. \vspace{-0.3cm} \section{Numerical Results}\label{secSimulation} In this section, we give numerical results demonstrating the performance of the proposed algorithms. We mainly consider a network with a set $\mathcal{W}$ of BSs, where $|\mathcal{W}|=14$ (see Fig. \ref{figTopology} for the system topology of the network with randomly generated user locations). $4$ of the BSs are coordinated for transmission (in the set $\mathcal{M}$), i.e., $M=4$. The transmission of all other BSs (in the set $\mathcal{W}\setminus\mathcal{M}$) is regarded as noise. The BS-to-BS distance is 2 km. Let $d_{q,m_i}$ be the distance between BS $q$ and the $i$th user in the $m$th cell. The channel coefficients are modeled as zero-mean circularly symmetric complex Gaussian vectors with $\left({200}/{d_{q,m_i}}\right)^{3.5}L_{q,m_i}$ as the variance of each part, where $10\log_{10}(L_{q,m_i})$ is a real Gaussian random variable modeling the shadowing effect with zero mean and standard deviation 8. The environmental noise power is modeled as the power of thermal noise plus the power of the noise/interference generated by the non-coordinating BSs: $ c_{m,i}=\sigma^2+\sum_{w\in\mathcal{W}\setminus\mathcal{M}}\left({200}/{d_{w,m_i}}\right)^{3.5}L_{w,m_i}\bar{p}_w$. We take $\bar{p}_m=1$ for all $m\in\mathcal{W}$, and define the SNR as $10\log_{10}(\bar{p}_m/\sigma^2)$. The stopping criterion is set to $\epsilon=10^2$ for all the algorithms. \begin{figure*}[htb] \vspace*{-.5cm} \begin{minipage}[t]{0.49\linewidth} \centering {\includegraphics[width= 1\linewidth]{Fig/Topology.eps} \vspace*{-0.8cm}\caption{Topology of simulated network.}\label{figTopology} \vspace*{-0.1cm}} \end{minipage}\hfill \begin{minipage}[t]{0.49\linewidth} \centering {\includegraphics[width= 1\linewidth]{Fig/figCompareRate_4_BS.eps} \vspace*{-0.8cm}\caption{Comparison of system throughput of Different Algorithms. $K=5$, $N=5$, $M=4$. Users $i\in\mathcal{N}_m$ uniformly placed within $d_{m,m_i}\in[200,~1000]$ meters within each BS.}\label{figSumRate1} \vspace*{-0.1cm}} \end{minipage} \vspace*{-0.4cm} \end{figure*} In Fig. \ref{figSumRate1} and Fig. \ref{figSumRate2}, we consider networks with $N=K=5$ and $N=K=10$, where the users $i\in\mathcal{N}_m$ that are associated with BS $m$ are uniformly placed within $d_{m,m_i}\in[200,~1000]$ meters.
We show the sum rate performance of the S-BF algorithm compared with the ICBF algorithm in \cite{venturino10} and with the non-coordinating schemes in which the BSs individually perform zero-forcing beamforming and channel matched filter beamforming. In Fig. \ref{figSumRate3} we consider a network with $N=K=5$ and $d_{m,m_i}\in[200,~300],~\forall~m, i$. Clearly all the coordinated schemes achieve similar throughput performance, which is significantly higher than that of the non-coordinated schemes. \begin{figure*}[htb] \vspace*{-.5cm} \begin{minipage}[t]{0.49\linewidth} \centering {\includegraphics[width= 1\linewidth]{Fig/figCompareRate_4_BS_3.eps} \vspace*{-0.6cm}\caption{Comparison of system throughput of Different Algorithms. $K=10$, $N=10$, $M=4$. Users $i\in\mathcal{N}_m$ uniformly placed within $d_{m,m_i}\in[200,~1000]$ meters within each BS.}\label{figSumRate2} \vspace*{-0.1cm}} \end{minipage}\hfill \begin{minipage}[t]{0.49\linewidth} \centering {\includegraphics[width= 1\linewidth]{Fig/figCompareRate_4_BS_2.eps} \vspace*{-0.6cm}\caption{Comparison of system throughput of different Algorithms. $K=5$, $N=5$, $M=4$. Users $i\in\mathcal{N}_m$ uniformly placed within $d_{m,m_i}\in[200,~300]$ meters within each BS.}\label{figSumRate3} \vspace*{-0.1cm}} \end{minipage} \vspace*{-0.5cm} \end{figure*} We then compare the amount of inter-cell information needed for different coordinated schemes. We define the {\it unit of information transfer} as the total information needed from the set of coordinated BSs for updating the beam vectors for {\it a single BS} $m\in\mathcal{M}$. Clearly, in each iteration of the S-BF algorithm, a single unit of information needs to go through the backhaul network, while in the ICBF algorithm, $M$ units of information are needed. In Fig. \ref{figTimeConvergence1} and Fig. \ref{figTimeConvergence2}, we show the average number of iterations and the average total units of information needed by the different coordinated schemes until convergence. We observe that the total units of information needed for the proposed SSCA-BF and S-BF algorithms are around $25\%$ less than for the ICBF algorithm when $M=4$, and around $40\%$ less when $M=9$. \footnote{The network with $M=9$ is generated similarly to the case of $M=4$, i.e., the center $9$ BSs are coordinating, while the other BSs around them are non-coordinating and their transmissions are considered as noise.} We also emphasize that, typically, several {\it inner iterations} are needed per outer iteration of ICBF, and we have not counted the extra information needed between the BSs and the users in these inner iterations. As a result, in Fig. \ref{figTimeConvergence1} and Fig. \ref{figTimeConvergence2} we see that the {\it total iterations} needed by the ICBF algorithm are close to those of the S-BF algorithm. In all the simulations presented above, the results are obtained by averaging over $500$ randomly generated user locations and channel realizations. \begin{figure*}[htb] \vspace*{-.5cm} \begin{minipage}[t]{0.49\linewidth} \centering {\includegraphics[width= 1\linewidth]{Fig/figTimeConvergence.eps} \vspace*{-0.7cm}\caption{Comparison of the Number of Iterations/Information Units Needed for Convergence. $K=5$, $N=5$, $M=4$.}\label{figTimeConvergence1} \vspace*{-0.6cm}} \end{minipage}\hfill \begin{minipage}[t]{0.49\linewidth} \centering {\includegraphics[width= 1\linewidth]{Fig/figTimeConvergence2.eps} \vspace*{-0.7cm}\caption{Comparison of the Number of Iterations/Information Units Needed for Convergence.
$K=5$, $N=5$, $M=9$.}\label{figTimeConvergence2} \vspace*{-0.6cm}} \end{minipage} \vspace{-0.6cm} \end{figure*} \vspace{-0.2cm} \section{Conclusion}\label{secConclusion} In this correspondence, we have studied the sum rate maximization problem using beamforming in a multi-cell MISO network. We have explored the structure of the problem and identified a set of lower bounds for the system sum rate. For the case of a single user per cell, we proposed an algorithm that reaches a KKT point of the sum rate maximization problem. For the case of multiple users per cell, we proposed an algorithm that achieves high system throughput with reduced backhaul information exchange among the BSs. \vspace{-0.3cm} \bibliographystyle{IEEEbib}
2,869,038,156,512
arxiv
\section{Introduction} Opinion dynamics in social networks has become a problem of increasing research interest during the last decades. This can be explained by the multiplication of digital social networks that allow a faster and more persistent influence of opinions. In this context, governmental institution and private companies use marketing over social networks as a key tool for promoting their products or ideas. However, to the best of our knowledge, there is no formal analysis pointing out the improvements that can be achieved by using the network topology in the design of the marketing strategy. Indeed, most of the existing studies focus on the analysis of models without control, {\it i.e. } they study the convergence, dynamical patterns or asymptotic configurations of the open-loop opinion dynamics. Various mathematical models \cite{DeGroot,Friedkin,Deffuant2000,krause2002,Altafini,NilCoSa2016} have been proposed to capture different features of these complex dynamics. Empirical models based on in vitro and in vivo experiments have also been developed \cite{Davis,Ohtsubo,SamPloSOne}. One controversial problem is related to emergence of consensus in social networks. Social studies pointed out that, in general, opinions tend to converge one toward another during interactions. Therefore, is not surprising that consensus received a particular attention in opinion dynamics literature \cite{Axelrod1997,GalamMoscovici1991}. While some mathematical models naturally lead to consensus \cite{DeGroot,Friedkin}, others lead to network clustering \cite{krause2002,Altafini,MG10}. In order to enforce consensus, some recent studies propose the control of one or a few agents, see \cite{Camponigro,Dietrich}. Besides these methods of controlling opinion dynamics towards consensus, we also find recent attempts to control the discrete-time dynamics of opinions such that as many agents as possible reach a certain set after a finite number of influences \cite{Hegselmann}. Another relatively new line of research is based on the change of opinions through the change of susceptibility and resistance parameters \cite{Fogg2002,MariekeEtAl2015,Abebe2018}. Basically, each individual is characterized by certain parameters that make it more or less easy to influence. In \cite{Abebe2018} (and some reference therein) the authors zoom in the model and see how the persuasion can be realized by acting on the susceptibility of individuals. Instead, we are looking directly at the outcome of the persuasion strategy and use this information in the long term evolution of opinion dynamics. Viral marketing refers to the practice where a seller attempts to artificially create word-of-mouth advertising among potential customers, and the effectiveness of this trend has been well established by social scientists and economists \cite{leskovec2007dynamics,arthur2009pricing}. In \cite{masucci2014strategic}, the authors consider multiple influential entities competing to control the opinion of consumers under a game theoretical setting. However, this work assumes an undirected graph and a voter model for opinion dynamics resulting in strategies that are independent of the node centrality. On the other hand, \cite{varma2017opinion} considers a similar competition with opinion dynamics over a directed graph and no budget constraints.\\ In this paper, we consider a different problem that requires minimizing the distance between opinions and the desired value using a given control/marketing budget. 
Moreover, we assume that the maximal marketing influence cannot instantaneously shift the opinion of one individual to the desired value. Basically, we consider a continuous time opinion dynamics and we want to design a marketing strategy that minimizes the distance between opinions and the desired value after a given finite number of discrete-time campaigns under budget constraints.The main motivation for this choice is the time scale of relevant events. The campaign refers to sales before some events and their duration is much smaller than the duration of the spreading of opinions related to the advertised products. There exist many practical situations where the use of an hybrid OD model seems completely natural. For instance, during a presidential campaign it is common to measure the opinions of the electors through polls just before, and just after, a time-localized event such as a big political meeting or a TV debate (see e.g., \cite{mcclurg2006electoral}). Despite its natural relevance, to the best of the authors’ knowledge, no hybrid controlled OD model has been proposed to study the opinion dynamics in social networks under an external influence. To solve this control design problem we write the overall closed-loop dynamics as a linear-impulsive system and we show that the optimal strategy is to influence as much as possible the most central/popular individuals (see \cite{Bonacich} for a formal definition of centrality) of the network as far as the graph modeling the social network is weakly connected ({\it i.e. } it contains at least a directed spanning tree). We also point out that the budget allocation has to take into account the size of clusters (maximal subsets of weakly connected agents, see \cite{MG10}) when the graph is disconnected. \\ To the best of our knowledge, our work is different from all the existing results on opinion dynamics control. Unlike the few previous works on the control of opinions in social networks, we do not control the state of the influencing entity. Instead, we consider that value as fixed and we control the influence weight that the marketer has on different individuals of the social network. By doing so, we emphasize the advantages of targeted marketing with respect to broadcasting (uniform) strategies when budget constraints have to be taken into account. Moreover, we show that, although the individual control action $u_i(t_k)$ at time $t_k$ can be chosen in the interval $[0,\bar{u}]$, the optimal choice is discrete: either $0$ or $\bar{u}$.\\ The rest of the paper is organized as follows. Section \ref{sec:model} formulates the opinion dynamics control problem under consideration. A useful preliminary result for solving a specific optimization problem with constraints is given in Section \ref{prelim}. To motivate our analysis, we emphasize in Section \ref{ex_mot} the improvements that can be obtained by targeted advertising with respect to a uniform/broadcasting control. Section \ref{main} contains the results related to the optimal control strategy. We first analyze the case when the campaign budget is given a priori and must be optimally partitioned among the network agents. Secondly, we look at the case when the campaign budget is unknown but the campaigns are distanced in time. Thirdly, we consider the case of large networks that can be approximated as a union of clusters/sub-networks. All these three cases point out that the optimal control contains only $0$ or $\bar{u}$ actions. 
These results motivate us to study in Section \ref{Sec:discrete_action} the space-time distribution of the budget under the assumption that all the components of $u(t_k)$ are either $0$ or $\bar{u}$. We conclude that the budget has to be allotted according to the influence power of each agent which in turn depends on the initial condition, centrality and the size of the clusters in which it lies. Numerical examples and concluding remarks end the paper. \section{Problem statement} \label{sec:model} We consider an entity (for example, a company) that is interested in attracting consumers to some product (electrical cars, healthy food, etc). Consumers belong to a social network and we refer to any consumer as an agent. For the sake of simplicity, we consider a fixed social network over the set of vertices $\mathcal{V}= \{1,2,\dots,N\}$ of $N$ agents. In other words, we identify each agent with its index in the set $\mathcal{V}$. To agent $i \in \mathcal{V}$ we assign a normalized scalar opinion $x_i(t) \in [0,1]$ that evolves under the influence of neighbors' opinions and external entity persuasion/advertising action. We use $x(t) = (x_1(t),x_2(t),\dots,x_N(t))^\top$ to denote the state of the network at any time $t$, where $x(t) \in \mathcal{X}$ and $\mathcal{X}= [0,1]^N$. In order to obtain a larger market share with a minimum investment, the external entity applies an action vector on marketing campaigns at discrete time instants. The set of campaigns time instants is finite: $\mathcal{T}=\{t_0,t_1,\dots,t_M\}$. The number of campaigns $M$ is considered to be finite but arbitrarily large because we are interested in the finite (arbitrarily large) time behavior of the network. A given action therefore corresponds to a given marketing campaign aiming at influencing the consumer's opinion. Between two consecutive campaigns, the consumer's opinion is only influenced by the other consumers of the networks. We assume that $t_{k}-t_{k-1}=\delta_k\in[\delta_{\min},\delta_{\max}]$ where $0<\delta_{\min}<\delta_{\max}$ are two fixed real numbers. Throughout the paper we consider $d\in\{0,1\}$ be the desired opinion that the external entity would like to be adopted for all the consumers. We also consider $\forall i\in\mathcal{V}$ the following dynamics: \begin{equation}\label{eq_dynamics} \left\{\begin{split} &\dot{x}_i(t)=\sum_{j=1}^Na_{ij} [x_j(t)-x_i(t)], \quad t\in[t_k,t_{k+1})\\ &x_i(t_k)=u_i(t_k)d+[1-u_i(t_k)] x_i(t_k^-) \end{split}\right.,\ \forall k\in{\mathbb N}, \end{equation} where $u_i(t_k)\in[0,\bar{u}],\ \forall i\in\mathcal{V}$, where $\bar{u} \in (0,1)$ is a saturation on each component of the control, and \begin{equation} \sum_{k=0}^M \sum_{i=1}^N u_i(t_k)\le B \end{equation} where $B$ represents the total budget of the external entity for the marketing campaigns. It is worth pointing out that external influences are modeled through a sequence of impulsive dynamics (second equation of \eqref{eq_dynamics}). This corresponds either to the case when the duration of the campaign is much shorter than the time between two consecutive campaigns or to the case in which the real dynamics during the campaign is neglected and only the resulting state is used as an entry for the next inter-campaign period. It can also be noticed that the state-jump resulting from external influence is both related to the budget allocated to Agent $i$ at time $t_k$ i.e., $u_i(t_k)$ and the value of the state before advertising $x_i(t_k^-)$. 
While the former proportionality is intuitive the later expresses an increasing resistance of individuals while approaching the advertised state.\\ It is also important to highlight that we assume a uniform behavior of the agents with respect to external influence. In real social networks, some agents (central ones for instance) may be harder to influence. This means that for a given value of the external influence their state jump will be smaller than the jump of other agents under the same external influence. This can be done by adding a scaling factor in the second equation of \eqref{eq_dynamics}. Dynamics \eqref{eq_dynamics} can be rewritten using the collective variable $X(t)=[d, x(t)^\top]^\top$ as: \begin{equation}\label{eq_collective_dynamics} \left\{\begin{split} &\dot{X}(t)=-\mathcal{L} X(t)\\ &X(t_k)=\mathcal{P} X(t_k^-) \end{split}\right. , \end{equation} where \[\mathcal{L}=\left(\begin{array}{cc} 0 & {\bf 0}_{1,N}\\ {\bf 0}_{N,1} & L\end{array}\right),\ \mathcal{P}=\left(\begin{array}{cc} 1 & {\bf 0}_{1,N}\\ u(t_k) & I_N-\mathrm{diag}(u(t_k))\end{array}\right)\] with $\mathrm{diag}(u(t_k))\in{\mathbb R}^{N\times N}$ being the diagonal matrix having the components of $u(t_k)$ on the diagonal. Here, $L$ is the Laplacian matrix associated to the graph formed by the adjacency matrix elements $a_{i,j}$, i.e., $L_{ij} = -a_{ij}$ for $i \neq j$ and $L_{ii} = \sum_{i \neq j} a_{ij}$. \begin{definition} The (vector) centrality of Agent $i$ is the $i^{th}$ component of the left eigenvector $v$ of $L$ associated with the eigenvalue $0$ and satisfying $v^\top{\bf 1}_N=1$. \end{definition} \begin{remark}It is worth noticing that: \begin{itemize} \item $\mathcal{L}$ is a Laplacian matrix corresponding to a network of $N+1$ agents. The first agent represents the external entity and is not connected to any other agent while the rest of the agents represents the consumers and interact through the social network defined by the influence weights $a_{ij}$. \item $\mathcal{P}$ is a row stochastic matrix that can be interpreted as a Perron matrix associated with the directed graph having the external entity as a central node. This node is not influenced by the network and keeps its state constant. On the other hand it possibly influences (notice that components of $u(t_k)$ can be 0) all the other nodes. Consequently, without budget constraints, the network can reach, at least asymptotically, the state $d {\bf 1}_N$. \end{itemize} \end{remark} Several space-time control strategies can be implemented under budget constraints. For instance, we can spend the same budget for each agent {\it i.e. } $u_i(t_k)=u_j(t_k),\ \forall (i,j) \in \mathcal{V}^2$, we can also allocate the entire budget for specific agents of the network. Moreover, the budget can be spent either on few or many campaigns. Our objective is to design a space-time control strategy that minimizes the following cost function \begin{equation} J^{\infty}=\sum_{i=1}^N \lim_{t\rightarrow\infty}|x_i(t)-d| \end{equation} This can be interpreted as follows. If the entity (a company for example) is interested in convincing the public to buy some product or change their habits (practice sports or quit smoking for instance), it will try to move the asymptotic consensus value of the network as close as possible from the desired value, i.e. minimize $J^\infty$. In some other cases, like an election campaign which targets to get the opinions close to $d$ within a finite time $T$, we will minimize $J^T=\sum_{i=1}^N |x_i(T)-d|$. 
Therefore, in the absence of additional campaigns, the agents will exchange their opinion through the network and asymptotically converge to certain local or global agreements. It is worth noting that, after the last campaign, the system state converges exponentially fast. Indeed, in the absence of campaigns, the system dynamics is just $\dot{x}(t)=-Lx(t)$ which has the consensus manifold as a global uniformly exponentially stable attractor. This means that $x_i(T)$ is a good approximation of $\lim_{t\rightarrow\infty}x_i(t)$ when $T$ is sufficiently large. \section{Preliminaries}\label{prelim} Before starting the analysis of the multi-agent system in the presence of external influence, we state a key lemma which will help us to find the optimal solutions for the considered scenarios for the budget allocation problem. \begin{lemma}\label{lem1} Given an optimization problem (OP) under the following standard form \begin{equation} \begin{array}{cl} \underset{y \in \mathbb{R}^N}{\text{minimize}} & C(y)\\ \text{subject to} & y_i - \bar{y} \leq 0, \ \forall i \in \{1, ...,N\} \\ & - y_i \leq 0, \ \forall i \in \{1, ...,N\} \\ & \displaystyle{\sum_{i=1}^N} y_i - B \leq 0 \end{array} \label{eq:stdOP} \end{equation} where $N \in \mathbb{N}, N \geq 1$, $\bar{y}<1$, $B \geq 0$ and $C(y)$ is a decreasing convex function in $y_i$ such that one of the following two conditions hold.\\ \textbf{Case 1:} $\forall \ i \in \{1,\dots,N\},\ \exists g(y) \geq 0$ such that $$ \displaystyle\frac{\partial C(y)}{\partial y_i} = -c_i g(y)$$\\[1mm] for some $c_i \in \mathbb{R}$.\\ \textbf{Case 2:} $ \displaystyle\frac{\partial C(y)}{\partial y_i} =- \frac{1}{1-y_i}$ for all $i \in \{1,\dots,N\}$.\\[1mm] Then an optimal solution $y^*$ to this OP is given by water-filling as follows \begin{equation} y_{\omicron(i)}^* = \left\{ \begin{array}{lll} \bar{y} & \text{if} & i \leq \left\lfloor \frac{B}{\bar{y}}\right \rfloor \vspace{.1cm} \\ B- \bar{y} \left\lfloor \frac{B}{\bar{y}}\right\rfloor & \text{if} & i = \left\lfloor \frac{B}{\bar{y}}\right\rfloor +1 \\ 0 & \text{otherwise} & \end{array} \right. \label{eq:gensol} \end{equation} where $\omicron: \{1,\dots,N \} \mapsto \{1,\dots,N\}$ represents an ordering function which can be any bijection for Case 2 and, one satisfying $ c_{\omicron(1)} \geq c_{\omicron(2)} \geq \dots \geq c_{\omicron(N)}$ for Case 1. \end{lemma} {\bf Proof:} Note that all the constraint functions of the considered OP are affine, which corresponds to sufficient conditions for applying KKT conditions. Since the OP is convex, KKT conditions are necessary and sufficient for optimality. By denoting the Lagrangian by \begin{equation} \ell (y, \lambda, \lambda', \widehat{\lambda}) = C(y) + \sum_{i=1}^N \lambda_i ( y_i - \bar{y} ) - \sum_{i=1}^N \lambda_i' y_i + \widehat{\lambda} ( \displaystyle{\sum_{i=1}^N} y_i - B ), \end{equation} the first necessary and sufficient condition for optimality writes: \begin{equation} -\nabla C(y^\star) = \displaystyle{\sum_{i=1}^N \lambda_i^{\star} \nabla (y_i^\star-\bar{y}) - \sum_{i=1}^N (\lambda_i^\star)' \nabla y_i^\star} + \widehat{\lambda}^\star \nabla \left(\displaystyle \sum_{i=1}^N y_i^\star - B \right). \label{eq:KKT1} \end{equation} The primal feasibility conditions write \[ 0 \leq y_i^\star \leq \bar{y} \, \, \forall i \in \{1,\dots,N\} \] and \begin{equation} \sum_{i=1}^N y_i^\star \leq B. 
\label{eq:sumconst} \end{equation} All the KKT multipliers must satisfy the dual feasibility conditions: $\lambda_i^\star \geq 0$, $(\lambda_i^\star)' \geq 0$, $ \widehat{\lambda}^\star \geq 0$ for all $i\in \{1,\dots,N\}$. At last, the complementary slackness conditions are given by \[ \hspace{-2cm}\begin{split} &\lambda_i^\star (y_i^\star-\bar{y}) =0, \\ &(\lambda_i^\star)' y_i^\star =0 , \\ &\widehat{\lambda}^\star \left[ \left(\sum_{i=1}^N y_i^\star \right) - B \right] =0. \end{split}\] \textbf{Case 1:} Let us assume that $\displaystyle\frac{\partial C(y)}{\partial y_i} = -c_i g(y)$. Then, we can simplify (\ref{eq:KKT1}) for component $i$ to get \begin{equation} c_i g(y^\star) = \widehat{\lambda}^\star + \lambda_i^\star - (\lambda_i^\star)' \end{equation} which must hold for all $i \in \{1,\dots,N\}$. Since $g(y^\star)$ is identical for all $i \in \{ 1, \dots,N \}$ and is non-negative, we must have $\lambda_i^\star$, $ (\lambda_i^\star)'$ and $\widehat{\lambda}^\star$ chosen so that the above equation holds. Take $y^\star$ from (\ref{eq:gensol}). Set $\lambda_j^\star=(\lambda_j^\star)'=0$ for $j= \omicron\left(\left\lfloor \frac{B}{\bar{y}}\right\rfloor +1\right) $ as it is the only component with a possibly non-saturated solution; this forces $\widehat{\lambda}^\star=c_j g(y^\star)\ge 0$. For any component $i=\omicron(l)$ with $l\le \left\lfloor \frac{B}{\bar{y}}\right\rfloor$, we have $c_i \geq c_j$ and the conditions can be satisfied by setting $y_i^\star=\bar{y}$ together with $\lambda_i^\star=(c_i-c_j)g(y^\star)\ge0$ and $(\lambda_i^\star)'=0$. On the other hand, for any component $i=\omicron(l)$ with $l> \left\lfloor \frac{B}{\bar{y}}\right\rfloor +1$, we set $y^*_i=0$ and the KKT conditions are satisfied with $\lambda_i^\star=0$ and $(\lambda_i^\star)' =(c_j-c_i)g(y^\star)\ge 0$. The solution from (\ref{eq:gensol}) can also be verified to satisfy (\ref{eq:sumconst}) and therefore, we have it satisfying all the KKT conditions.\\ \textbf{Case 2:} When $\displaystyle\frac{\partial C(y)}{\partial y_i} =\frac{-1}{1-y_i}$, the role of $c_i$ is replaced by $\displaystyle\frac{1}{1-y_i}$ and the agent with $y_i^\star=0$ has the lowest absolute value on the left side of the stationarity condition ({\it i.e. } $1$), while the agent with the saturation $y_i^\star=\bar{y}$ has the highest absolute value (i.e., $\displaystyle\frac{1}{1- \bar{y}}$). Due to symmetry, any agent can be chosen to have the min or max saturation. \hfill $\blacksquare$ Lemma 1 provides a water-filling type optimal allocation policy for the problem considered. The solution in Case 1 is to select the best agent in terms of the coefficient $c_n$, allocate the maximum possible $y_n$ to it, then allocate the remaining budget to the next best agent and so on. This implies sorting the agents based on $c_n$, which is done using the function $\omicron$, and saturating $y_n$ for the first $ \left\lfloor \frac{B}{\bar{y}}\right\rfloor$ agents, assigning the remaining budget to the next agent and $0$ to the rest. \section{Performance analysis of the considered benchmark strategy}\label{ex_mot} As a reference strategy, we consider broadcasting-based marketing. For every campaign, it consists in allocating the available campaign budget uniformly among all the consumers. However, the campaign budget is allowed to vary over time, that is, from campaign to campaign, under the total budget constraint. In order to highlight the potential of designing more advanced strategies, we show here that for some particular network topologies it is possible to quantify analytically the potential gain brought by implementing targeted marketing ({\it i.e. } using space-time strategies) over broadcasting strategies.
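The water-filling allocation \eqref{eq:gensol} of Lemma \ref{lem1} is straightforward to implement; the following minimal sketch (assuming \texttt{numpy}; the function name and the numbers in the example are purely illustrative) makes the ordering-and-saturation mechanism explicit.
\begin{verbatim}
import numpy as np

def water_filling(c, B, y_bar):
    """Water-filling of Lemma 1, Case 1: sort the agents by decreasing
    coefficient c_i and saturate them at y_bar until the budget B runs out."""
    c = np.asarray(c, dtype=float)
    y = np.zeros_like(c)
    order = np.argsort(-c)            # plays the role of the bijection "omicron"
    full = int(np.floor(B / y_bar))   # number of fully saturated agents
    y[order[:full]] = y_bar
    if full < len(c):
        y[order[full]] = B - y_bar * full   # leftover budget
    return y

# Example: 5 agents, budget B = 1.2, saturation y_bar = 0.5
print(water_filling([0.4, 0.1, 0.3, 0.05, 0.15], B=1.2, y_bar=0.5))
# -> [0.5  0.   0.5  0.   0.2]
\end{verbatim}
The same routine is reused in the sequel with the coefficients $c_i$ replaced by the various notions of influence power introduced in Propositions \ref{prop:connected}--\ref{prop:cluster}.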
First, let us compute the optimal revenue that we can get by broadcasting strategies {\it i.e. } $u_i(t_k)=u_j(t_k)\triangleq \alpha_k,\ \forall i,j\in \mathcal{V}$. We suppose that the graph representing the social network contains a directed spanning tree ({\it i.e. } a directed graph in which, except the root which is not influenced, each node is influenced by a single other node called parent). Let $v$ be the left eigenvector of $L$ associated with the eigenvalue 0 and satisfying $v^\top {\bf 1}_N=1$. Therefore, in the absence of any control action, one has that $\lim_{t \to \infty} x(t) = v^\top x(0){\bf 1}_N\triangleq {x}^{\infty}_0$. Let us also introduce the following notation: \begin{equation} {x}^{\infty}_k=\lim_{t\rightarrow\infty}e^{-L(t-t_k)}x(t_k)=v^\top x(t_k){\bf 1}_N,\quad \forall k\in{\mathbb N}. \end{equation} Following \eqref{eq_collective_dynamics} and using $\delta_k=t_{k+1}-t_k,\ D_k=diag(u(t_{k}))$ one deduces that: \begin{equation} \begin{split} {x}^{\infty}_{k+1}=v^\top x(t_{k+1}){\bf 1}_N=v^\top\left[u(t_{k+1})d+(I_N-D_{k+1})x(t_{k+1}^-)\right]{\bf 1}_N =v^\top\left[u(t_{k+1})d+(I_N-D_{k+1})e^{-L\delta_k}x(t_k)\right]{\bf 1}_N. \end{split} \end{equation} Since $v^\top L={\bf 0}_N$ one has that $v^\top e^{-L\delta_k}=v^\top$ and consequently one obtains that \begin{equation}\label{recurrence1} {x}^{\infty}_{k+1}- {x}^{\infty}_k=v^\top\left(u(t_{k+1})d-D_{k+1}e^{-L\delta_k}x(t_k)\right){\bf 1}_N. \end{equation} In the case of broadcasting one has $u(t_k)=\alpha_k {\bf 1}_N$ and $D_k=\alpha_k I_N$, where $\alpha_k \in [0,\bar{u}]$ for all $k \in \{0,\dots,M\}$. Therefore, using $v^\top{\bf 1}_N=1$, \eqref{recurrence1} becomes \begin{equation} {x}^{\infty}_{k+1}- {x}^{\infty}_k=\alpha_{k+1}(d{\bf 1}_N- {x}^{\infty}_k), \end{equation} which can be equivalently rewritten as \begin{equation}\label{recurrence2} (d{\bf 1}_N- {x}^{\infty}_{k+1})=(1-\alpha_{k+1})(d{\bf 1}_N- {x}^{\infty}_k). \end{equation} Using \eqref{recurrence2} recursively one obtains that \begin{equation} J^{\infty}(\alpha) =|{\bf 1}_N^\top(d{\bf 1}_N- {x}^{\infty}_{M})|=\prod_{\ell=0}^{M}(1-\alpha_\ell)|{\bf 1}_N^\top(d{\bf 1}_N- {x}^{\infty}_0)| = \left(N d - {\bf 1}_N^\top {x}^{\infty}_0 \right) \prod_{\ell=0}^{M}(1-\alpha_\ell). \end{equation} where $J^\infty(\alpha)$ denotes the cost associated with a broadcasting strategy using $\alpha_k$ at campaign $t_k$. \begin{proposition} The cost $J^{\infty}(\alpha) $ obtained when implementing broadcasting is minimized by using the maximum possible investments as soon as possible, i.e., for all $k \in \{0,\dots,M\}$, \begin{equation} \alpha_k = \left\{ \begin{array}{lll} \bar{u} & \text{if} & k \leq \left\lfloor \frac{B}{N\bar{u}}\right \rfloor \vspace{.1cm} \\ \frac{B}{N}- \bar{u} \left\lfloor \frac{B}{N\bar{u}}\right\rfloor & \text{if} & k = \left\lfloor \frac{B}{N\bar{u}}\right\rfloor +1 \\ 0 & \text{otherwise} & \end{array} \right.\label{eq:solvbrod} \end{equation} \end{proposition} {\bf Proof:} Minimizing $J^{\infty}(\alpha)$ under broadcasting strategy assumption is equivalent with the minimization of $\prod_{\ell=0}^{k+1}(1-\alpha_\ell)$. This is equivalent to minimizing \begin{equation} C(\alpha) =\log\left(\prod_{\ell=0}^{k+1}(1-\alpha_\ell) \right) \end{equation} and we have \begin{equation} \frac{\partial C}{\partial \alpha_\ell} = -\frac{1}{1-\alpha_{\ell}} \end{equation} This results in an OP which satisfies the conditions to use Lemma 1 Case 2. 
\hfill $\blacksquare$ It is noteworthy that for $\alpha_\ell\in[0,1)$ one has that \begin{equation}\label{ineqB} \prod_{\ell=0}^{k+1}(1-\alpha_\ell)\ge 1-\sum_{\ell=0}^{k+1}\alpha_\ell\ge 1-\frac{B}{N}. \end{equation} The last inequality in \eqref{ineqB} comes from the broadcasting hypothesis $u_i(t_\ell)=\alpha_\ell,\ \forall i\in\mathcal{V}$ and consequently the budget spent in the $\ell$-th campaign is $N\cdot \alpha_\ell$. Therefore, the total budget for $k+2$ campaigns is $N\sum_{\ell=0}^{k+1}\alpha_\ell$ and cannot exceed $B$. \\ Thus \begin{equation} J=|{\bf 1}_N^\top(d{\bf 1}_N- {x}^{\infty}_{k+1})|\ge (1-\frac{B}{N})|{\bf 1}_N^\top(d{\bf 1}_N- {x}^{\infty}_0)|. \end{equation} The interpretation of \eqref{ineqB} is that for the broadcasting strategy the minimal cost $J$ is obtained when the whole budget is spent in one marketing campaign (provided this is possible {\it i.e. } $B\leq N\bar{u}$), otherwise the first inequality in \eqref{ineqB} becomes strict, meaning that \begin{equation} J> (1-\frac{B}{N})|{\bf 1}_N^\top(d{\bf 1}_N- {x}^{\infty}_0)|. \end{equation} Let us now suppose that the graph under consideration is a directed spanning tree having the first node as root. Then, using targeted marketing in which the external entity influences only the root, we will show that, under the same budget constraints, the cost $J$ will be smaller. Indeed, for this graph topology one has $v=(1,0,\ldots,0)^\top$ yielding $ {x}^{\infty}_k=x_1(t_k){\bf 1}_N$. Moreover, the dynamics of $x_1(\cdot)$ reads: \begin{equation}\label{eq_dynamics_x1} \left\{\begin{split} &\dot{x}_1(t)=0, \qquad t\in[t_k,t_{k+1})\\ &x_1(t_k)=u_1(t_k)d+(1-u_1(t_k))x_1(t_k^-) \end{split}\right.,\ \forall k\in{\mathbb N}. \end{equation} Therefore, \begin{equation} x_1(t_k)=u_1(t_k)d+(1-u_1(t_k))x_1(t_{k-1}) \end{equation} yielding \begin{equation} d-x_1(t_k)= [1-u_1(t_k)][d-x_1(t_{k-1})], \end{equation} which is equivalent to \eqref{recurrence2}. As we have seen before, for broadcasting strategies one has $\sum_{\ell=0}^{k+1}\alpha_\ell\le\frac{B}{N}$, while when targeting only the root the constraint becomes $\sum_{\ell=0}^{k+1}u_1(t_\ell)\le B$. Therefore, for any given broadcasting strategy $(\alpha_0,\alpha_1,\ldots,\alpha_{k+1})$ there exists a strategy targeting only the root that consists in repeating $N$ times the sequence $(\alpha_0,\alpha_1,\ldots,\alpha_{k+1})$. Doing so, one obtains \begin{equation} (d{\bf 1}_N- {x}^{\infty}_{k+1})=\left[\prod_{\ell=0}^{k+1}(1-\alpha_\ell)\right]^N(d{\bf 1}_N- {x}^{\infty}_0), \end{equation} which leads to a cost smaller than the one obtained when using broadcasting-based marketing. \section{General optimal control strategy}\label{main} First, we rewrite the optimal control problem as an optimization problem by treating the control $u(t_k)$ as an $N(M+1)$-dimensional vector to optimize. We denote $u_{i,k}=u_i(t_k)$ to represent the control for agent $i \in \mathcal{V}$ at time $t_k$. Then our problem can be rewritten as \begin{equation} \begin{array}{cl} \underset{u \in \mathbb{R}^{N(M+1)}}{\text{minimize}} & \ J^\infty(u) \\ \text{subject to} & u_{i,k} - \bar{u} \leq 0, \ \ \ \forall i \in \mathcal{V}, k \in \{0,\dots,M\}\\ & - u_{i,k} \leq 0, \ \ \ \forall i \in \mathcal{V}, k \in \{0,\dots,M\}\\ & \displaystyle{\sum_{i=1}^N \sum_{k=0}^M} u_{i,k} - B \leq 0 \end{array} \label{eq:genOP} \end{equation} Here, $J^\infty(u)$ is seen as a multilinear function. Before solving problem \eqref{eq:genOP} we want to get further insights on the structure of the optimal solution, which will lead to important simplifications.
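For a quick numerical illustration, the cost of any candidate space--time allocation can be approximated by simulating \eqref{eq_dynamics} with a long inter-campaign horizon. The sketch below (assuming \texttt{numpy}/\texttt{scipy}; the star graph, horizon and numerical values are illustrative) reproduces the broadcasting-versus-root-targeting comparison of the previous section.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def simulate(L, x0, d, controls, delta=50.0):
    """Hybrid dynamics (eq_dynamics): jump x <- u*d + (1-u)*x at each
    campaign, then free opinion dynamics x' = -L x until the next one.
    `controls` is the list of budget vectors u(t_k); a large `delta`
    approximates the asymptotic (consensus) state after each campaign."""
    x = np.array(x0, dtype=float)
    prop = expm(-L * delta)             # inter-campaign propagation
    for u in controls:
        x = u * d + (1.0 - u) * x       # campaign jump
        x = prop @ x                    # free opinion dynamics
    return x

# Directed star: node 0 is the root, every other node copies it.
N, d, u_bar, B = 5, 1.0, 0.3, 1.5
L = np.eye(N); L[0, 0] = 0.0; L[1:, 0] = -1.0
x0 = np.full(N, 0.2)

# Broadcasting: the whole budget spent uniformly in a single campaign.
x_bcast = simulate(L, x0, d, [np.full(N, B / N)])

# Targeting: the same budget spent on the root only, u_bar per campaign.
root = np.zeros(N); root[0] = u_bar
x_target = simulate(L, x0, d, [root] * int(B / u_bar))

print(np.abs(x_bcast - d).sum(), np.abs(x_target - d).sum())
# Targeting the root yields the smaller residual cost.
\end{verbatim}
Exhaustively searching over such allocations quickly becomes intractable, which motivates the structural analysis that follows.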
Therefore, instead of solving the general optimization problem \eqref{eq:genOP}, we first consider splitting our problem into time-allocation and space-allocation. \begin{assumption}\label{ass1} The graph $\mathcal{G}=(\mathcal{V},L)$ is weakly connected (sometimes referred to as quasi-strongly connected) {\it i.e. } it contains at least one directed spanning tree. \end{assumption} Assumption \ref{ass1} is standard in the analysis of multi-agent systems and guarantees that information flows over the entire network. In our analysis this assumption is not essential but we start analyzing networks that satisfy it and, in a second step, we solve the budget allocation problem over disconnected networks. When Assumption \ref{ass1} holds, if we know that for campaign $k$ a maximum budget of $\beta_k \leq B$ has been allocated ({\it i.e. }, for a given time-allocation), we find the optimal control strategy for the $k-$th campaign. Moreover, for long campaign duration ({\it i.e. } $t_{k+1}-t_k$ large) and given time budget allocation $(\beta_0,\dots,\beta_M)$, we provide a computationally oriented optimal space allocation of the budget. Based on these results, we propose a discrete-action space-time control strategy. Next, we extend the results to the case when Assumption \ref{ass1} does not hold and the network consists of a union of weakly connected clusters. \subsection{Minimizing the per-campaign cost} In this section we consider that Assumption \ref{ass1} holds and the budget $\beta_k$ for each campaign is a priori given. The objective is to find the spatial allocation of the budget that optimizes the cost $|{\bf 1}_N^\top(d{\bf 1}_N- {x}^{\infty}_k)|$ associated with the time allocation $(\beta_0,\dots,\beta_M)$. Consequently, the following budget constraint has to be considered at campaign $k$: \begin{equation} \sum_{i=1}^N u_i(t_k) \leq \beta_k. \label{eq:stagebudgetcons} \end{equation} The associated cost for the campaign $k$ is written as \begin{equation} \begin{array}{l} J^{\infty}_k(u(t_k)) =|{\bf 1}_N^\top(d{\bf 1}_N- {x}^{\infty}_k)|=N\cdot|d - \displaystyle\sum_{i=1}^N v_i x_i(t_k)| =N\cdot \left|d - \displaystyle\sum_{i=1}^N v_i (u_i(t_k) d + [1-u_i(t_k)] x_i(t_k^-) ) \right|. \end{array} \end{equation} This rewriting of the cost allows us to define the right quantity to measure the influence power of an agent, which translates the gain the marketer can make by investing on this agent. The corresponding quantity is defined and used in the next proposition. \begin{proposition}\label{prop:connected} Define the influence power of Agent $i$ as $p^k_i = v_i |d -x_i(t_k^-)|$. Denote by $\pi_k: \mathcal{V} \to \mathcal{V}$, a bijection which sorts the agents based on decreasing $p^k_i$, {\it i.e. } $p^k_{\pi_k(1)}\geq p^k_{\pi_k(2)}\geq \dots \geq p^k_{\pi_k(N)}$. Under Assumption \ref{ass1} the cost $J^{\infty}_k(u(t_k))$ is minimized by the following investment profile \begin{equation} u_{\pi(i)}^* (k)= \left\{ \begin{array}{lll} \bar{u} & \text{if} & i \leq \left\lfloor \frac{\beta_k}{\bar{u}}\right \rfloor \vspace{.1cm} \\ \beta_k - \bar{u} \left\lfloor \frac{\beta_k}{\bar{u}}\right\rfloor & \text{if} & i = \left\lfloor \frac{\beta_k}{\bar{u}}\right\rfloor +1 \\ 0 & \text{otherwise} & \end{array} \right. \label{eq:solvbrod} \end{equation}. 
\end{proposition} {\bf Proof:} Note that minimizing $J^{\infty}_k(u(t_k))$ is equivalent to the minimization of \[ C(u(t_k))= \left( d - \sum_{i=1}^N v_i (u_i(t_k) d + (1-u_i(t_k))x_i(t_k^-) ) \right)^2 \] with the constraints $0 \leq u_i(t_k) \leq \bar{u}$ for all $i \in \mathcal{V}$ and (\ref{eq:stagebudgetcons}). We notice that \begin{equation} \frac{\partial C}{ \partial u_i(t_k)} = - 2 v_i (d-x_i(t_k^-)) (d- x_k^{\infty}) \end{equation} where we used the notation $x_k^{\infty}=v^\top x(t_k)$.\\ If $d =1$, then $x_i(t_k^-) \leq 1,\ \forall i\in\mathcal{V}$ and $$(d-x_i(t_k^-)) (d- x_k^{\infty})\ge0.$$ On the other hand, if $d=0$, we have $x_i(t_k^-) \geq 0,\ \forall i\in\mathcal{V}$ and $$(d-x_i(t_k^-)) (d- x_k^{\infty})\ge0.$$ Therefore, we can rewrite the above equation as \begin{equation}\label{eq:KKTfinal} \frac{\partial C}{ \partial u_i(t_k)} = -2\gamma_i g(u(t_k)),\qquad g(u(t_k))=|d- x_k^{\infty}| \end{equation} where $\gamma_i\triangleq v_i|d-x_i(t_k^-)|$ coincides with the influence power $p^k_i$, which satisfies the conditions to use Case 1 of Lemma \ref{lem1}. \hfill $\blacksquare$ \subsection{Space allocation for long campaign duration} In the following we consider that Assumption \ref{ass1} holds and a finite number of marketing campaigns with a priori fixed budget are scheduled such that $t_{k+1}-t_k$ is very large for each $k \in \{0,1,\dots,M-1\}$. Due to Assumption \ref{ass1} and the long duration of the campaigns, we can assume that $x_i(t_{k+1}^-) = x_k^{\infty}$ for all $i \in \mathcal{V}$ and $k \in \{0,1,\dots,M-1\}$. Under this assumption, we write \begin{equation} x_i(t_1^-) = x_0^{\infty}(u(t_0))= \sum_{j=1}^N v_j (d u_j(t_0)+ x_j(t_0^-) (1-u_j(t_0)) ) \end{equation} for any $i \in \mathcal{V}$. Subsequently, we have \begin{equation}\label{longstagedynamics} x_k^{\infty}(u(t_0),u(t_1),\dots,u(t_k))= \displaystyle\sum_{i=1}^N v_i \left[ d u_i(t_k)+ x_{k-1}^{\infty}(u(t_0),\dots,u(t_{k-1}))(1-u_i(t_k)) \right] \end{equation} for all $k \in \{1,2,\dots,M\}$. Our objective is to minimize $$J^\infty(u)=N\cdot \big|x_M^{\infty}(u(t_0),\dots,u(t_M)) -d\big|$$ and this can be done using Proposition \ref{prop:longstage} below. \begin{proposition}\label{prop:longstage} Define $\rho_k : \mathcal{V} \to \mathcal{V}$ a bijection such that $\rho_0 = \pi_0$ (defined in Proposition \ref{prop:connected}) and for all $k \in \{1,2,\dots,M\}$, $\rho_k$ gives the agent index after sorting over $v_i$, {\it i.e. } $v_{\rho_k(1)} \geq v_{\rho_k(2)} \geq \dots \geq v_{\rho_k(N)}$. Let Assumption \ref{ass1} hold and the time budget allocation be given by $\beta= (\beta_0,\dots,\beta_M)$ such that $\displaystyle\sum_{k=0}^M \beta_k \leq B$ and $\beta_k \leq N \bar{u}$. Then, the optimal allocation per agent minimizing the cost $J^\infty(u)$ is given by \begin{equation} u_{\rho_k(i)}^* (k)= \left\{ \begin{array}{lll} \bar{u} & \text{if} & i \leq \left\lfloor \frac{\beta_k}{\bar{u}}\right \rfloor \vspace{.1cm} \\ \beta_k - \bar{u} \left\lfloor \frac{\beta_k}{\bar{u}}\right\rfloor & \text{if} & i = \left\lfloor \frac{\beta_k}{\bar{u}}\right\rfloor +1 \\ 0 & \text{otherwise} & \end{array} \right. \label{eq:solvbrod} \end{equation}. \end{proposition} {\bf Proof:} Minimizing $J^\infty(u)$ is equivalent to the minimization of \[ C(u)= \left( d - x_M^{\infty}(u(t_0),\dots,u(t_M)) \right)^2 \] with the constraints given in (\ref{eq:genOP}). We have \[ \frac{\partial C(u)}{\partial u_{i,k}} = 2 (d- x_M^{\infty}) \frac{\partial x_M^{\infty}}{\partial u_{i,k}} \] for any $i \in \mathcal{V},\ k \in \{0,1,\dots,M\}$.
Observe that \[ \frac{\partial x_k^{\infty}}{\partial u_{i,k}} = v_i ( d - x_{k-1}^{\infty} ) \] for $k\in \{1,\dots,M\}$ and \[ \frac{\partial x_0^{\infty}}{\partial u_{i,0}} = \pm \gamma_i \] with the sign being negative if $x_i(t_0^-) >d$ and positive otherwise. We also have \[ \frac{\partial x_k^{\infty}}{\partial u_{i,k-1}} = \frac{\partial x_{k-1}^{\infty}}{\partial u_{i,k-1}} \sum_{i \in \mathcal{V}} v_i (1-u_{i,k}) \] Using the equations above iteratively, we have \[ \frac{\partial x_M^{\infty}}{\partial u_{i,k}} = v_i ( d - x_{k-1}^{\infty} ) \prod_{j=k+1}^M \sum_{i \in \mathcal{V}} v_i (1-u_{i,j}) \] for $k \geq 1$ and \[ \frac{\partial x_M^{\infty}}{\partial u_{i,0}} = \pm \gamma_i \prod_{j=1}^M \sum_{i \in \mathcal{V}} v_i (1-u_{i,j}) \] Therefore, \[ \frac{\partial C(u)}{\partial u_{i,k}} = 2 (d- x_M^{\infty}) v_i ( d - x_{k-1}^{\infty}(u) ) \prod_{j=k+1}^M \sum_{i \in \mathcal{V}} v_i (1-u_{i,j}) \] for $k \geq 1$ and \[ \frac{\partial C(u)}{\partial u_{i,0}} = 2 |d- x_M^{\infty}(u)| \gamma_i \prod_{j=1}^M \sum_{i \in \mathcal{V}} v_i (1-u_{i,j}) \] Assume that the optimal time-allocation of budget is known and is given by $\beta=(\beta_0,\beta_1,\dots,\beta_M)$ such that \[ \sum_{i=1}^N u_{i,k} = \beta_k,\quad \forall k \in \{0,1,\dots,M\} \] Then, the optimal spatial allocation problem within any campaign $k$ is an OP satisfying Case 1 of Lemma \ref{lem1}. \hfill $\blacksquare$ Due to the long stage duration, the opinions are in consensus for all $t_k$, except for the case of $k=0$, which is the first campaign. This means that in the first stage, the agents are sorted based on both their initial opinions and their centralities, but from the next stage onwards, only their centralities are considered. \subsection{Budget allocation for clusterized networks}\label{sec:cluster} In practice, when the social network becomes large, several issues may appear. Computational complexity limitations may prevent from operating with $NM-$dimensional spaces. Also, uncertainties or inaccuracies on $L$ or the centrality vector may appear. Motivated by these two observations, we formulate here the solution in the case where the network is structured into clusters which may be naturally present or arise from a dynamical process. Clusters can be determined from cluster detection algorithms such as the one proposed in \cite{blondel2008,MG10}) and emphasize a time-scale separation in the dynamics of the system. The subsequent analysis could use the time-scale modeling (\cite{ChowKokotovic,Arcak2007,MMN-Time-scale2016}) but we would have to carefully deal with the jumps introduced by the advertising campaigns. Instead of doing that, in this paper we simplify the analysis and the presentation by neglecting the weakest interconnections that may appear between agents belonging to different clusters. In other words we analyze the behavior of the reduced-order dynamics. Therefore, the network is assumed to be the union of a certain number of weakly connected clusters; in that case, we show that the cluster size has to be taken into account to optimally allocate the available budget. Let us consider that $\mathcal{V}=\bigcup_{i=1}^m \mathcal{C}_i$, where $\mathcal{C}_1,\ldots,C_m$ are disjoint subsets of agents which are weakly connected. For any $i\in\{1,\ldots,m\}$ we also denote by $N_i$ the cardinality of cluster $\mathcal{C}_i$ and by $x_{C_i}\in{\mathbb R}^{N_i}$ the column vector collecting all the states of the agents in cluster $\mathcal{C}_i$. 
Since Assumption \ref{ass1} does not hold, only local agreements corresponding to each cluster are obtained. Basically, $L=\mathrm{diag}(L_1,\ldots,L_m)$ where $L_i\in{\mathbb R}^{N_i\times N_i}$ is the Laplacian matrix associated with the interactions in cluster $\mathcal{C}_i$. In the sequel we denote by $x_{\mathcal{C}_i,k}^\infty$ the agreement value of cluster $\mathcal{C}_i$ starting from the initial condition $x_{\mathcal{C}_i}(t_k)$. In other words, if the last advertising campaign takes place at time $t_k$, then the system converges to the following state: \begin{equation} {x}_k^\infty=\lim_{t\rightarrow\infty}e^{-L(t-t_k)}x(t_k)=\left(\begin{array}{c}x_{\mathcal{C}_1,k}^\infty{\bf 1}_{N_1}\\ \vdots\\x_{\mathcal{C}_m,k}^\infty{\bf 1}_{N_m}\end{array}\right),\quad \forall k\in{\mathbb N}. \end{equation} A fixed number of advertising campaigns is assumed and the last campaign takes place at time $t_M$. The overall cost to minimize can be expressed as: \begin{equation} J^\infty(u)=\sum_{i=1}^mN_i\cdot \big|x_{\mathcal{C}_i,M}^{\infty}(u(t_0),\dots,u(t_M)) -d\big|. \end{equation} In order to characterize the optimal control strategy in the case of disconnected networks we introduce some additional notation: $v^i$ is the left eigenvector of $L_i$ associated with the simple eigenvalue 0 and satisfying $(v^i)^\top{\bf{1}_{N_i}}=1$. It can be noticed that \begin{equation}\label{longstagedynamicscluster} x_{\mathcal{C}_i,k}^{\infty}(u(t_0),u(t_1),\dots,u(t_k))= \displaystyle\sum_{j\in\mathcal{C}_i} v^i_j \left[ d u_j(t_k)+ x_{\mathcal{C}_i,k-1}^{\infty}(u(t_0),\dots,u(t_{k-1}))(1-u_j(t_k)) \right] \end{equation} for all $k \in \{1,2,\dots,M\}$. This observation is exploited in the next proposition to define an appropriate measure of the influence power of an agent. Here again, the corresponding quantity is used to express the optimal budget allocation policy. \begin{proposition}\label{prop:cluster} Let $\mathcal{V}=\bigcup_{i=1}^m \mathcal{C}_i$, where $\mathcal{C}_1,\ldots,\mathcal{C}_m$ are disjoint subsets of agents which are weakly connected. Let also the time budget allocation be given by $\beta= (\beta_0,\dots,\beta_M)$ such that $\displaystyle\sum_{k=0}^M \beta_k \leq B$ and $\beta_k \leq N \bar{u}$. For each agent $j\in\mathcal{C}_i$ define the influence power as $s_j^k=N_i\cdot v^i_j\cdot|d-x_j(t_k^-)|$. Finally, define $\sigma_k:\mathcal{V}\mapsto\mathcal{V}$ a bijection which sorts the agents based on decreasing $s_j^k$ {\it i.e. } $s^k_{\sigma_k(1)} \geq s^k_{\sigma_k(2)} \geq \dots \geq s^k_{\sigma_k(N)}$. Then, the optimal allocation per agent minimizing the cost $J^\infty(u)$ is given by \begin{equation} u_{\sigma_k(i)}^* (k)= \left\{ \begin{array}{lll} \bar{u} & \text{if} & i \leq \left\lfloor \frac{\beta_k}{\bar{u}}\right \rfloor \vspace{.1cm} \\ \beta_k - \bar{u} \left\lfloor \frac{\beta_k}{\bar{u}}\right\rfloor & \text{if} & i = \left\lfloor \frac{\beta_k}{\bar{u}}\right\rfloor +1 \\ 0 & \text{otherwise} & \end{array} \right. \label{eq:cluster} \end{equation} \end{proposition} {\bf Proof:} The result follows again by applying Case 1 of Lemma \ref{lem1}. We avoid unnecessary details and we just point out that \begin{equation} \frac{\partial J^\infty(u)}{\partial u_{j,k}} =-s^k_j \end{equation} leading to the desired result when $g$ is identically 1 in Lemma \ref{lem1}.
\hfill $\blacksquare$ \section{Discrete-action space-time control strategy}\label{Sec:discrete_action} Motivated by the results in Propositions \ref{prop:connected} and \ref{prop:longstage}, in this section we consider that $u_i(t_k)\in\{0,\bar{u}\}, \forall i\in\mathcal{V}, k\in{\mathbb N}$ and $B= Q \bar{u}$ with $Q\in{\mathbb N}$ given a priori. The objective is to numerically find the best space-time control strategy for a given initial state $x_0$ of the network. \subsection{Brute force algorithm} We will consider in turn the cases of short and long campaigns. In the short-campaign case, given a time allocation consisting of the budgets $\beta_k = b_k \bar{u}$ at each campaign, either Proposition \ref{prop:connected} (if the directed graph is weakly connected) or Proposition \ref{prop:cluster} (if the network is clustered) tells us how to allocate each campaign budget optimally across the agents. Denote all possible budgets at one campaign by $\mathcal{B} = \{0, \dotsc, \min\{N, Q\}\}$. A simple algorithm is then to search in a brute-force manner all possible time allocations $\boldsymbol{b} = (b_0, \dotsc, b_M) \in \mathcal{B}^{M+1}$, subject to the constraint $\sum_k b_k \leq Q$. For each such vector $\boldsymbol{b}$, we simulate the system from $x_0$ with dynamics \eqref{eq_dynamics} where the budget $b_k$ is allocated with Proposition \ref{prop:connected} or \ref{prop:cluster}. After the last campaign, we compute with the appropriate formula the final, infinite-time state of the network $x_F(\boldsymbol{b})$. We retain a solution with the best average cost: $$ \min_{\boldsymbol{b}} \frac{1}{N} \sum_{i=1}^N \vert x_{i,F}(\boldsymbol{b}) - d \vert $$ where subscript $i$ is the agent index (recall that the agents all have the same opinion at infinite time if the network is weakly connected, and if it is clustered each cluster has its own opinion). Note that we report the average cost over the agents, $J^\infty/N$, instead of the sum $J^\infty$ because this version is easier to interpret as a mean deviation of each agent from the target state. Furthermore, the simulation can be done in closed form, using the fact that $x^-(t_{k+1}) = e^{-L \delta_k} x(t_k)$. The complexity of this search is $O(N^3 (M+1) (\min\{N, Q\}+1)^{M+1})$, dominated by the exponential term. Therefore, this approach will only be feasible for small values of $N$ or $Q$, and especially of $M$. Considering now the long-campaign case for weakly connected networks, we could still implement a similar brute-force search, but using dynamics \eqref{longstagedynamics} for inter-campaign propagation and Proposition \ref{prop:longstage} for allocation over agents. However, now we can do better by taking advantage of the fact that for all $k > 1$, the opinions of all the agents reach identical values. Using this, we will derive a more efficient, dynamic programming solution to the optimal control problem: $$ \min_{\boldsymbol{b}} \vert x_{1,F}(\boldsymbol{b}) - d \vert $$ where the long-campaign dynamics apply but by a slight abuse of notation we keep it the same as above. \subsection{Dynamic programming algorithm} When the graph is weakly connected and we are in the long-campaign case, we are able to provide a dynamic programming (DP) algorithm that is much more efficient than the brute force search above. 
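Before giving the DP construction, we note that the brute-force search described in the previous subsection can be summarised by a short program. The following is a minimal Python sketch for the weakly connected, short-campaign case; it assumes \texttt{numpy}/\texttt{scipy}, a constant campaign length \texttt{delta}, and the centrality vector $v$ as input, and all function and variable names are ours (illustrative only, not taken from any existing implementation):
\begin{verbatim}
# Minimal sketch of the brute-force search (assumptions: weakly connected
# graph, equal campaign lengths delta, budgets in multiples of u_bar).
import numpy as np
from itertools import product
from scipy.linalg import expm

def apply_campaign(x, n_targets, u_bar, v, d):
    # Spatial allocation: give u_bar to the n_targets agents with the
    # largest influence power v_i * |d - x_i| (water-filling rule).
    u = np.zeros_like(x)
    order = np.argsort(-v * np.abs(d - x))
    u[order[:n_targets]] = u_bar
    return u * d + (1.0 - u) * x           # opinion jump at the campaign

def brute_force(x0, L, v, delta, M, Q, u_bar, d):
    N = len(x0)
    best_cost, best_alloc = np.inf, None
    for b in product(range(min(N, Q) + 1), repeat=M + 1):
        if sum(b) > Q:                      # respect the total budget
            continue
        x = np.array(x0, dtype=float)
        for k in range(M + 1):
            x = apply_campaign(x, b[k], u_bar, v, d)
            if k < M:                       # free dynamics until t_{k+1}
                x = expm(-L * delta) @ x
        cost = abs(v @ x - d)               # consensus deviation = mean cost
        if cost < best_cost:
            best_cost, best_alloc = cost, b
    return best_alloc, best_cost
\end{verbatim}
This directly mirrors the complexity estimate above: the outer loop ranges over the $(\min\{N,Q\}+1)^{M+1}$ possible time allocations.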
Owing to the long campaigns, the agents have already reached consensus by $t_1$, so we can use a scalar variable $y_k=|d-x^{\infty}_{k-1}|$ to represent the cost (or, equivalently, the state of the network) before campaign $k$, for all $k \in \{1,2,\dots,M+1\}$, i.e., from the second campaign onwards. After the initial-campaign decision at $k=0$, $y_1$ is computed in a special way, since the network is not yet at consensus. To see how, consider a fixed, given initial opinion $x(t_0)$. Then, $y_1=f_0(b_0)$, where $f_0$ gives the network cost after allocating the first-campaign budget $b_0$ using Proposition \ref{prop:connected}: \begin{equation} f_0(b_0)=\sum_{i=1}^{b_0} v_{\rho_0(i)} (1-\bar{u}) |x_{\rho_0(i)}(t_0) -d| + \sum_{i=b_0+1}^N v_{\rho_0(i)} |x_{\rho_0(i)}(t_0) -d| \label{eq:cost0stage} \end{equation} Now, to compute $y_k$ after the decisions at subsequent campaigns $k \in \{1,2,\dots,M\}$, {\it i.e. } for $t_1,t_2,\dots,t_M$, define the function $f: \mathbb{Z}_{\geq 0} \to (0,1]$: \begin{equation} f(b) = \left(1-\bar{u} \sum_{i=1}^{b} v_{\rho_1(i)} \right). \end{equation} If $b_k$ denotes the budget allocated to campaign $k$, then it can be shown that after the optimal spatial allocation described in Proposition \ref{prop:longstage}, \begin{equation} y_{k+1}=y_k f(b_k), \end{equation} for all $k \in \{1,2,\dots,M\}$. This lets us write the final cost as \begin{equation} y_{M+1} = y_1 \prod_{k=1}^M f(b_k). \end{equation} In addition to the network state, we need an integer state $r_k \in \{0, \dotsc, Q\}$ that keeps track of the remaining budget. This variable is initialized to the total budget $r_0=Q$ and evolves according to $r_{k+1} = r_k - b_k$. For any given $y_1$ obtained after using a budget $b_0$ during campaign $0$, minimization of the final cost involves minimizing $\prod_{k=1}^M f(b_k)$, with the constraint $\sum_{k=1}^M b_k \leq r_1$. Since $f(b) >0$ for any $b \geq 0$, we can minimize the final cost by minimizing the logarithm of the product mentioned above, i.e., the minimization of the final cost is equivalent to the following optimization problem: \begin{equation}\label{eq:logproblem} \begin{array}{c} \min_{b_0,b_1,\dots,b_M} \log(f_0(b_0)) + \sum_{k=1}^M \log(f(b_k)),\\ \text{such that } \sum_{k=0}^M b_k \leq Q ,\\ \text{where } b_k \in \{0,1,\dots,\min\{Q,N\}\}, \ \forall k \in \{0,\dots,M\} \end{array} \end{equation} where the budget allocated to each campaign is upper-bounded by the budget available and by the number of agents in the network, as assumed in Proposition \ref{prop:cluster}. In order to implement the DP algorithm, we keep a value function $V_k$ that represents, at each $k$, the sum of logarithms from step $k$ onwards. This value function only depends on the remaining budget: \begin{equation} \begin{array}{l} V_M(r_M)= \log(f(r_M)), \\ V_k(r_k)= \min_{b_k \in \{0,1,\dots,\min\{N,r_k\} \}} \left[ \log(f(b_k)) +V_{k+1}(r_k-b_k) \right] \,\,\,\, \text{for } k \in \{M-1, M-2,\dots,1\},\\ V_0 = \min_{b_0 \in \{0,1,\dots,\min\{N,Q\} \}} \left[\log(f_0(b_0)) +V_{1}(Q-b_0)\right] \end{array} \end{equation} To understand this algorithm, note first that because $f$ is a strictly decreasing function, the optimal budget to use at campaign $M$ is all the remaining budget, which leads to the final-campaign DP initialization $V_M(r_M)=\log(f(r_M))$. Then, at intermediate campaigns, we simply minimize the summation in \eqref{eq:logproblem} with a DP rule.
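As an illustration, the backward pass just defined admits a direct tabular implementation. The following minimal Python sketch assumes that the arrays \texttt{log\_f} and \texttt{log\_f0}, holding $\log(f(b))$ and $\log(f_0(b))$ for the relevant values of $b$, have been precomputed; all names are ours and purely illustrative:
\begin{verbatim}
# Backward pass of the DP over the remaining budget r (sketch).
def backward_pass(log_f, log_f0, N, Q, M):
    # V[k][r]: optimal sum of logarithms from campaign k onwards with
    # remaining budget r; row 0 is unused (campaign 0 is handled separately).
    V = [[0.0] * (Q + 1) for _ in range(M + 1)]
    for r in range(Q + 1):
        V[M][r] = log_f[min(N, r)]     # spend everything left at campaign M
    for k in range(M - 1, 0, -1):
        for r in range(Q + 1):
            V[k][r] = min(log_f[b] + V[k + 1][r - b]
                          for b in range(min(N, r) + 1))
    # Initial campaign: uses f_0 instead of f.
    V0, b0 = min((log_f0[b] + V[1][Q - b], b)
                 for b in range(min(N, Q) + 1))
    return V, V0, b0
\end{verbatim}
The tables $V_k$ are then used in the forward pass described next to recover an optimal time allocation.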
Finally, for the initial campaign $k=0$, instead of using $f$, the cost for any initial budget $b_0$ is calculated explicitly by taking $\log(f_0(b_0))$ defined in \eqref{eq:cost0stage}. Note that $V_0$ is a scalar constant, that represents the total optimal cost of the solution. Once $V_k$ is available, an optimal solution is found by a forward pass, as follows: \begin{equation} \begin{array}{ll} b_0^* = \arg\min_{b_0 \in \{ 0, 1, \dots, \min\{Q, N\}\}}\left[ \log(f_0(b_0)) +V_{1}(Q-b_0) \right]\\ b_k^* = \arg\min_{b_k \in \{0, 1, \dots, \min\{r_k, N\}\}} \left[\log(f(b_k)) +V_{k+1}(r_k-b_k)\right], \quad \text{for } k=1, \dotsc, M-1\\ b_M^* = r_M\\ \end{array} \end{equation} The complexity of the backward pass for value function computation is $O \left(MNQ\min\{N, Q\}\right)$ (the complexity of the forward pass is much smaller). To develop an intuition, take the case $N < Q$; then the algorithm is quadratic in $N$ and linear in $M$ and $Q$. This allows us to apply the algorithm to much larger problems than the brute-force search above. \color{black} \subsection{Numerical results} In this section, we begin by exemplifying on a small-scale problem: the short-campaign brute-force algorithm; long-campaign DP; and the clustered case (again with the brute-force method). After that, to illustrate the scalability of the proposed DP method with respect to the size of the network and to the number of campaigns, we devise a larger experiment with more agents and campaigns, and apply DP to it. \begin{figure}[!htb] \vspace*{1em} \centering \includegraphics[width=0.4\textwidth]{graph_connected} \includegraphics[trim=1cm 7cm 1cm 5cm, clip,width=0.55\textwidth]{vx0_smallgraph} \caption{Left: small-scale weakly connected graph. Right: agent centralities, sorted in descending order.}\label{smallscale} \end{figure} The small-scale problem has $N=15$ agents, and we start with the weakly connected graph from Figure \ref{smallscale}, left. The target opinion $d$ is $1$. The initial opinions of the agents are distributed on an equidistant grid, starting from $0$ for agent $1$, up to $1$ for agent $15$ (so agents with smaller indices have opinions closer to zero). The centrality of each agent is shown in Figure \ref{smallscale}, right. There are $4$ campaigns, corresponding to $M=3$, and the budget $Q=N=15$ and $\bar{u}=0.2$. For a short campaign length $\delta_k = 0.5\ \forall k$, the brute-force approach gets the results from Figure \ref{shortstage}. The final cost (each individual agent's difference from the desired opinion) is $0.3259$. Examining now the list of agents influenced, we see that these agents are generally among those with large centralities. Nevertheless, relatively lower-centrality agents are preferred when their opinion is far from the target (as is the case for agents $1$ and $4$, whose initial opinion is small). To better see the advantages of a well-designed advertising strategy, we compare the results above with the uncontrolled case (no advertising), and to the broadcast strategy (which consists of spending the entire budget at the initial time, with $\bar u$ allocated to each agent). The cost without using any control in this situation is $0.5135$, and the cost with the broadcast strategy is $0.4108$, i.e., we observe a $20$\% gain over the uncontrolled system using the broadcast and around $20$\% gain over the broadcast using the optimal strategy. 
\begin{figure}[!htb] \vspace*{1em} \centering \includegraphics[trim=1cm 7cm 1cm 5cm, clip,width=0.55\textwidth]{shortstage} \caption{Results for short campaigns. The bottom bar plot shows the budget allocated by the algorithm at each campaign, with the agents influenced in each campaign shown above each bar. The top plot shows the opinions of the agents, with an additional, long campaign converging to the average opinion (so the last campaign duration is not to scale). The circles indicate the opinions \emph{right before} applying the control at each campaign; note the discontinuous transitions of the opinions after control.}\label{shortstage} \vspace{-1cm} \end{figure} For the second experiment, in the same network of agents, we consider long campaigns, i.e., $t_{k+1}-t_k \to \infty$. We apply DP, with the results shown in Figure \ref{longstage_dp}. The solution is different from the short-campaign case, which is especially visible at campaigns $k \geq 1$, where only the most central agents are influenced. The final cost is slightly larger, $0.3457$. To better understand the meaning of the long campaigns, note that the network can be (informally speaking) associated with a time constant $T$ equal to the inverse of the smallest real part among all the eigenvalues of the Laplacian $L$ excluding the zero eigenvalue, and as soon as $t_{k+1}-t_k$ is significantly larger than $4T$, the network effectively reaches consensus in-between campaigns so we may consider we are in the long-campaign case. For the particular graph here, $T \approx 3.28$. Note that we can directly compare this long-campaign result with no-control and broadcasting above (since those strategies are independent of campaign length), and we still see significant improvement over both.\\ \begin{figure}[!htb] \vspace*{1em} \centering \includegraphics[trim=1cm 8cm 1cm 5cm, clip,width=0.55\textwidth]{longstage_dpmult} \vspace{-0.3cm} \caption{Results for long campaigns. The continuous opinion dynamics is plotted for a sufficiently long time to observe the long campaign behavior, {\it i.e. } the convergence of opinions of the agents (which means the horizontal axis is not to scale).}\label{longstage_dp} \end{figure} Next, to illustrate the results of Section \ref{sec:cluster}, we take the graph in Figure \ref{smallscale} and remove all the links between agents $1$ to $4$ and the rest of the agents, obtaining the graph in Figure \ref{clustered}. This new graph has two clusters, the first consisting of agents $1$ to $4$, and the second of the rest of the agents. Four campaigns of length $0.5$ are considered, like before. The brute-force algorithm is applied with $Q=N=15$, starting from an initial state of the network where all agents have opinion $0.5$ (this is done so that they all have the same initial deviation from the desired state, which better exposes the influence of their centrality and group size). The results are shown in Figure \ref{fig:clusterresults}. It is interesting to observe that despite their lower centrality, many agents in cluster $2$ (e.g.\ 7, 10, and 14) are given preference over agents 1, 2, and 4 in cluster $1$. This is because the number of agents is larger in the second cluster, and the selection criterion \eqref{eq:cluster} takes this into account. 
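To see the effect of the cluster size in the selection criterion, consider two hypothetical agents with the same deviation $|d-x_j(t_k^-)|=0.5$ from the target: one in cluster $1$ (of size $N_1=4$) with centrality $0.30$, and one in cluster $2$ (of size $N_2=11$) with centrality $0.15$ (these centrality values are illustrative and not taken from Figure~\ref{clustered}). Their influence powers are $s = 4\cdot 0.30\cdot 0.5 = 0.60$ and $s = 11\cdot 0.15\cdot 0.5 = 0.825$, respectively, so the agent in the larger cluster is ranked higher even though its centrality is only half as large.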
To compare, the final cost without control is $0.5$ (all agents keep the consensus value $0.5$), the cost with the broadcast strategy is $0.4$, and the cost with the optimal strategy is around $0.34$, i.e., about $85$\% of the broadcast cost. \begin{figure}[!htb] \vspace*{1em} \centering \includegraphics[width=0.4\textwidth]{graph_clustered} \includegraphics[trim=1cm 7cm 1cm 5cm, clip, width=0.55\textwidth]{vx0_clustered} \caption{Left: clustered graph. Right: agent centralities in the two clusters.}\label{clustered} \end{figure} \begin{figure}[!hbt] \vspace*{1em} \centering \includegraphics[trim=1cm 7cm 1cm 5cm, clip,width=0.55\textwidth]{cluster} \caption{Results for the clustered problem.}\label{fig:clusterresults} \end{figure} Finally, we move on to a problem that tests the scalability of DP on large graphs with many campaigns. Specifically, we take $100$ agents and $20$ campaigns. Link weights are generated from a uniform distribution over $[0,1]$, after which any link with a weight smaller than $0.3$ is removed. Initial opinions are equidistantly spaced in $[0,1]$ as before, and the total budget is $Q=N=100$. The final cost here is $0.38$, instead of $0.5$ without any control. The obtained cost is close to that of the broadcast strategy (which is $0.4$) because all nodes have very similar centralities. Consequently, the DP cost is 76\% of the cost without any control and 95\% of the cost with the broadcast strategy. As expected, a large part of the budget (47\%) is spent in the first campaign on agents whose initial opinions are closest to 0. Note that the brute-force approach would be entirely infeasible in this problem, while the execution time of DP in Matlab is around 1.7\,s on an Intel i7-3540M CPU. \section{Conclusions} In this paper, we have proposed a mathematical formulation of the problem of target marketing over social networks. We have shown how to exploit properties of the social network graph in the design of the marketing budget allocation over the agents and over the marketing campaigns. A marketer should mainly consider the initial opinion of an agent, its centrality, and (when relevant) the number of agents in the cluster it belongs to. Based on this, we have defined appropriate quantities which measure the influence power of an agent and which allow the marketer to define the order in which to allocate its influence budget. The derived budget allocation policies are shown to have a water-filling-type structure. The conducted numerical analysis yields valuable insights on how to invest a budget over consumers and time. For instance, the key consumers to be influenced are immediately apparent, the number of campaigns to be performed is easily obtained, and the impact of a well-designed marketing campaign (versus allocating the available budget uniformly over consumers and campaigns) can be quantified. \section*{References} \bibliographystyle{elsarticle-num}
\section{Introduction} Parameterised Boolean Equation Systems (PBESs) are sequences of fixed point equations with data variables. They form a very expressive formalism for encoding a wide range of problems, such as the verification of modal \MUCALC formulae \cite{kozen1983:results, bradfield2001:modal} for process algebraic specifications with data (see, e.g., \cite{groote2005:modelchecking, groote2005:parameterised}) and checking for (branching) bisimilarity of process equations \cite{chen2007:equivalence}. PBESs have been described extensively in \cite{groote2005:parameterised}. A method for solving PBESs directly has been presented in \cite{groote2005:modelchecking}, but usually PBESs are solved by first instantiating the system to a plain Boolean Equation System (BES) and then solving the BES. Instantiation of PBESs is described in \cite{vandam2008:instantiation, ploeger2011:verification}, where clever rewriters and enumeration of quantifier expressions play an important role. We focus on instantiation to a Parity Game (PG), which is a restricted BES with equations that are either conjunctive or disjunctive. Although no polynomial-time algorithm for solving parity games is known (the problem is in $\text{NP}\cap\text{co-NP}$), effective parity game solvers exist (see, e.g., \cite{friedmann2009:solving}), especially when the alternation depth is low, and the instantiation step is currently the bottleneck of the whole procedure in many practical cases. There are clear similarities between instantiation of PBESs and state space generation, a well-known problem in model checking. In both, an abstract description gives rise to a large graph, which has to be stored efficiently. Also, in both cases the description often consists of a combination of reasonably independent components or equations. This `locality' can be used to speed up the generation of successor nodes. Inspired by these similarities, we apply in this paper optimisations from model checking to the PBES instantiation problem, devising a more efficient method. We use \LTSMIN, a language-independent toolset for state space exploration which enables efficient state space generation and offers both symbolic exploration tools based on Binary Decision Diagrams (BDDs) and distributed exploration tools (see, e.g., \cite{blom2010:ltsmin}). The tools make use of knowledge about the dependencies for better efficiency, which can be specified for every language in a separate language module. Instantiating PBESs to parity games in our enhanced method has two phases: \begin{enumerate}\noitemsep \item Transforming the PBES into an equivalent system that consists of expressions that are either purely conjunctive or purely disjunctive. We call such a system a \emph{Parameterised Parity Game} (PPG). The result of this operation is that any instantiation of the PPG will result directly in a parity game. \item Instantiating the PPG to a PG using \LTSMIN. To this end, we defined a PBES language module for \LTSMIN, in which we specify a state vector representation of instantiated PBES variables (and the corresponding node in the generated game graph) and the dependencies between (parts of) the equations and the parts of the state vector.
\end{enumerate} \begin{figure}[tbh] \centering \scalebox{.85}{ \input{images/bqnf2pg.tikz} } \caption{Overview of the verification approach, consisting of various transformations, an instantiation step, and available reductions and solvers.} \label{fig:verification} \end{figure} An overview of the method is shown in Figure~\ref{fig:verification}. We consider PBESs in \emph{Bounded Quantifier Normal Form} (BQNF), which is a subset of all PBESs, but any PBES can be rewritten automatically to a system in BQNF with the same solution. PBESs and their normal forms are described in Section~\ref{section:background}. The contributions of this article are the transformation from BQNF to PPG and the instantiation from PPG to PG. Both steps are not trivial. We will explain here where the obstacles lie. In general, each system of PBES equations in BQNF can be transformed automatically into a system consisting of equations in PPG while preserving the solution. An equation can be transformed to PPG by introducing fresh equations for subexpressions and replacing the subexpressions by the corresponding variable. However, it is important not to separate quantifiers from the expressions that restrict the data elements that have to be considered, so called \emph{bounds}. If a bound for a quantifier over an infinite data sort is replaced by a variable, the instantiator might generate an infinite number of successors for a node in the game graph. See Section~\ref{section:transformation} for our solution. For the instantiation step we implemented a PBES language module for \LTSMIN using the Partitioned Interface for the Next State function (\PINS). This includes partitioning each PBES equation into \emph{transition groups} and defining a \emph{dependency matrix} that specifies the dependencies between transition group and parts of the state vector. We then have a high-performance instantiation tool that offers both distributed and symbolic generation of a parity game. This requires some delicacy, as splitting a formula too much may result in infinite computation (as in the transformation phase) and not splitting enough could result in a dependency matrix that is too dense, which ruins the effect of transition caching and symbolic computation. The implementation is described in Section~\ref{section:instantiation}. In Section~\ref{section:experiments} we present performance results for a number of case studies, comparing our sequential, distributed and symbolic implementations based on \LTSMIN to the existing PBES instantiation tools in the \MCRLTWO toolset. In almost all cases memory usage is orders of magnitude better for our tool. In all cases also the execution time is much better. \section{Background}\label{section:background} In this section we will treat PBESs, normal forms for PBESs, and Parity Games. \subsection{PBES} \begin{definition} \emph{Predicate formulae} $\varphi$ are defined by the following grammar: \[ \varphi \Coloneqq b \mid \propvar{X}(\vec{e}) \mid \neg \varphi \mid \varphi \oplus \varphi \mid \mathsf{Q} d \oftype D \suchthat \varphi \] where $\oplus \in \set{\land, \lor, \impl}$, $\mathsf{Q} \in \set{\forall, \exists}$, $b$ is a data term of sort $\Bool$, $\propvar{X} \in \X$ is a predicate variable, $d$ is a data variable of sort $D$, and $\vec{e}$ is a vector of data terms. We will call any predicate formula without predicate variables a \emph{simple formula}. We denote the class of predicate formulae $\PF$. 
\end{definition} \begin{definition} A \emph{First-Order Boolean Equation} is an equation of the form: \[ \sigma \propvar{X}(\vec{d} \oftype D) = \varphi \] where $\sigma \in \set{\mu, \nu}$ is a minimum ($\mu$) or maximum ($\nu$) fixed point operator, $\vec{d}$ is a vector of data variables of sort $D$, and $\varphi$ is a predicate formula. \end{definition} \begin{definition} A \emph{Parameterised Boolean Equation System (PBES)} is a sequence of First-Order Boolean Equations: \[ \eqsys = (\sigma_1 \propvar{X}_1(\vec{d_1} \oftype D_1) = \varphi_1) \eqsep \dotsc \eqsep (\sigma_n \propvar{X}_n(\vec{d_n} \oftype D_n) = \varphi_n) \] \end{definition} The semantics and solution of PBESs are described in, e.g., \cite{groote2005:parameterised}. We say that two equation systems $\eqsys_1$ and $\eqsys_2$ are equivalent, written as $\eqsys_1 \equiv \eqsys_2$, if they have the same solution for every variable that occurs in both systems. We adopt the standard limitations: expressions are in positive form (negation occurs only in data expressions) and every predicate variable occurs exactly once as the left hand side of an equation. A PBES that contains no quantifiers and parameters is called a \emph{Boolean Equation System} (BES). A finitary PBES can be \emph{instantiated} to a BES by expanding the quantifiers to finite conjunctions or disjunctions and substituting concrete values for the data parameters. Every instantiated PBES variable $\propvar{X}(\vec{e})$ should then be read as a BES variable ``$\propvar{X}(\vec{e})$''. A one-to-one mapping can be made from a BES to an equivalent \emph{parity game} if the BES has only expressions that are either conjunctive or disjunctive. The parity game is then represented by a game graph with nodes that represent variables with concrete parameters and edges that represent dependencies. Parity games will be further explained in Section~\ref{section:paritygames}. To make instantiation of a PBES to a parity game more directly we will preprocess the PBES to a format that only allows expressions to be either conjunctive or disjunctive. This format is a normal form for PBESs that we call the \emph{Parameterised Parity Game}, defined as follows: \begin{definition} A PBES is a \emph{Parameterised Parity Game} (PPG) if every right hand side of an equation is a formula of the form: \begin{align*} \Land_{i \in I} f_i \land \Land_{j \in J} \forall_{\vec{v} \in D_j} \suchthat \big( g_j \impl \propvar{X}_j(\vec{e_j}) \big) \quad \vert \quad \Lor_{i \in I} f_i \lor \Lor_{j \in J} \exists_{\vec{v} \in D_j} \suchthat \big( g_j \land \propvar{X}_j(\vec{e_j}) \big). \end{align*} where $f_i$ and $g_j$ are simple boolean formulae, and $\vec{e_j}$ is a data expression. $I$ and $J$ are finite (possibly empty) index sets. \end{definition} The expressions range over two index sets $I$ and $J$. The left part is a conjunction (or disjunction) of simple expressions $f_i$ that can be seen as conditions that should hold in the current state. The right part is a conjunction (or disjunction) of a quantified vector of variables for next states $\propvar{X}_j$ with parameters $\vec{e_j}$, guarded by simple expression $g_j$. Before transforming arbitrary PBESs to PPGs we first define another normal form on PBESs to make the transformation easier. This normal form can have an arbitrary sequence of bounded quantifiers as outermost operators and has a conjunctive normal form at the inner. 
We call this the Bounded Quantifier Normal Form (BQNF): \begin{definition} A First-Order Boolean formula is in \emph{Bounded Quantifier Normal Form (BQNF)} if it has the form: \begin{align*} \mathsf{BQNF} \Coloneqq &\quad \forall {\vec{d} \in D} \suchthat b \impl \mathsf{BQNF} \quad \vert \quad \exists {\vec{d} \in D} \suchthat b \land \mathsf{BQNF} \quad \vert \quad \mathsf{CONJ} \\ \mathsf{CONJ} \Coloneqq &\quad \Land_{k \in K} f_k \land \Land_{i \in I} \forall_{\vec{v} \in D_i} \suchthat \big( g_i \impl \mathsf{DISJ}^i \big) \\ \mathsf{DISJ}^i \Coloneqq &\quad \Lor_{\ell \in L_i} f_{i\ell} \lor \Lor_{j \in J_i} \exists_{\vec{w} \in D_{ij}} \suchthat \big( g_{ij} \land \propvar{X}_{ij}(\vec{e_{ij}}) \big) \end{align*} where $b$, $f_k$, $f_{i\ell}$, $g_i$, and $g_{ij}$ are simple boolean formulae, and $\vec{e_{ij}}$ is a data expression. $K$, $I$, $L_i$, and $J_i$ are finite (possibly empty) index sets. \end{definition} This BQNF is similar to \emph{Predicate Formula Normal Form} (PFNF), defined elsewhere\footnote{ A transformation to PFNF is implemented in the \texttt{pbesrewr} tool and documented at \url{http://www.win.tue.nl/mcrl2/wiki/index.php/Parameterised_Boolean_Equation_Systems}.}, in that quantification is outermost and in that the core is a conjunctive normal form. However, unlike PFNF, BQNF allows bounds on the quantified variables (hence bounded quantifiers); furthermore, universal quantification is allowed within the conjunctive part and existential quantification within the disjunctive parts. These bounds are needed to avoid problems when transforming to PPG. Consider the expression $(\forall i \oftype \Nat \suchthat (i < 5) \impl \propvar{Y}(i) ) \lor ( \exists j \oftype \Nat \suchthat (j < 3) \land \propvar{Z}(j) )$. Rewriting to PFNF (moving the quantifiers outward) results in $\exists j \oftype \Nat \suchthat \forall i \oftype \Nat \suchthat ((i < 5) \impl \propvar{Y}(i)) \lor ((j < 3) \land \propvar{Z}(j))$. Rewriting that expression to PPG would split it such that the initial expression is $\exists j: \Nat \suchthat \propvar{X_1}(j)$ ($\propvar{X_1}$ is a newly introduced variable for the equation with the remainder of the expression as right hand side), which would result in an infinite disjunction when instantiating the PPG. BQNF allows the original expression to be rewritten to $\exists j \oftype \Nat \suchthat (j < 3) \land \forall i \oftype \Nat \suchthat (i < 5) \impl (\propvar{Y}(i) \lor \propvar{Z}(j))$ with the bounds close to the quantifiers, which makes it possible to split the expression after the bound, preventing the instantiation from producing an infinite expression. Requiring that a system is specified in BQNF does not limit the expressiveness, as each PBES can be transformed into an equivalent system in PFNF that has the same solution and PFNF is a subset of BQNF. The translation from process algebraic specifications in \MCRLTWO and \MUCALC formulae to PBESs is given in \cite{groote2005:modelchecking} and is illustrated by the following example. Throughout the paper we expect the reader to know process algebras and to be able to read \MCRLTWO specifications\footnote{See \url{http://mcrl2.org} for documentation on the \MCRLTWO language.}. \begin{example}[Buffer]\label{example:buffer} Consider the specification of a simple buffer with a capacity of 2.
\begin{quote} \begin{tabular}{l@{}l@{\hspace{3pt}}l} $\sort\ $ & \multicolumn{2}{@{}l@{}}{$D = \struct\ d_1 \mid d_2;$ \quad $\act\ \action{r_1}, \action{s_4} \oftype D;$} \\ $\proc\ $ & $\procname{Buffer}(q \oftype \container{List}(D)) = $ & $\displaystyle\sum_{d \oftype D} \ (\#q < 2) \guards \action{r_1}(d) \suchthat \procname{Buffer}(q \append d)$ \\ & & $\ \ \, + \ (q \neq []) \guards \action{s_4}(\head(q)) \suchthat \procname{Buffer}(\tail(q));$ \\ $\init\ $ & $\procname{Buffer}([]);$ \end{tabular} \end{quote} The specification consists of sort and action definitions, process specifications where alternatives are in summands and an initial state. On the first line an enumerated data sort $D$ is introduced with data values $d_1$ and $d_2$, and the actions $\action{r_1}$ and $\action{s_4}$ are specified, both having a data parameter of type $D$. A process $\procname{Buffer}$ is specified that has a data parameter $q$, which is a list of elements of type $D$. The process consists of \emph{summands}, separated by the $+$-operator. Each summand may start with a summation over a data set, followed by a guard that is closed with a $\guards$, then an action, followed by a call to the process that describes the behaviour after the action, typically a recursive call to the process itself with different parameters. The first summand specifies that any element $d$ can be added to $q$ by the action $\action{r_1}(d)$ if the size of the internal buffer $q$ is smaller than $2$. The second summand specifies that if $q$ is not empty, elements can be popped by the action $\action{s_4}(\head(q))$. The initial state of the system is the $\procname{Buffer}$ process with an empty list in this case, which models that initially the buffer is empty. We can check the specification for absence of deadlock, which is expressed in \MUCALC as follows: \begin{center} $\always{\true^\ast}\possibly{\true}\true \quad (\text{which is syntactic sugar for: }\; \nu X \suchthat \possibly{\true}\true \land \always{\true}X )$ \end{center} which reads: after any sequence of actions ($\always{\true^\ast}$), always some action is enabled ($\possibly{\true}\true$). Satisfaction of the formula by the specification, translated to a PBES, looks as follows: \begin{quote} \begin{tabular}{l@{}l@{\hspace{3pt}}l} $\sort\ $ & \multicolumn{2}{@{}l@{}}{$D = \struct\ d_1 \mid d_2;$} \\ $\pbes\ $ & $\nu \propvar{X}(q \oftype \container{List}(D)) = $ & $(q \neq []) \lor (\#q < 2)$ \\ & & $\land \ (q \neq []) \impl \propvar{X}(\tail(q))$ \\ & & $\land \ \displaystyle\forall_{d \in D} \suchthat (\#q < 2) \impl \propvar{X}(q \append d);$ \\ $\init\ $ & $\propvar{X}([]);$ \end{tabular} \end{quote}\medskip This PBES is true if from the initial state $\propvar{X}([])$ an element can be added to $q$ if $\#q$ is smaller than $2$, an element can be popped from $q$ if it is not empty and any of these actions is enabled ($q \neq []$ or $\#q < 2$, which is obviously true for any $q$). The same has to hold for the successor states ($\propvar{X}$ with an element added to, respectively popped from $q$ as parameter). The solution of the PBES is $\ttrue$. 
\end{example} \begin{rem*} The equation system in the example above is already a PPG, which is no coincidence as any system when combined with the absence of deadlock property will result in a PBES in PPG form because of the form of the formula: a conjunction of ``we can do an action now'' (a disjunctive expression without recursion) and ``for all possible actions the property holds in all next states'' (universal quantification with recursion). Note that checking the absence of deadlock property is almost the same as standard reachability analysis. \end{rem*} \begin{definition}[Block]\label{def:block} A PBES is divided into \emph{blocks}, which are subsequences of equations with the same fixed point operator such that subsequent equations with the same fixed point operator belong to the same block. \end{definition} \subsection{Parity Games}\label{section:paritygames} A \emph{parity game} is a game between two players, player \Eloise (also called Eloise or player \emph{even}) and player \Abelard (also called Abelard or player \emph{odd}), where each player owns a set of places. On one place a token is placed that can be moved by the owner of the place to an adjacent place. The parity game is represented as a graph. We borrow notation from \cite{bradfield2001:modal} and \cite{mazala2002:infinite}. \begin{definition}[Parity Game] A \emph{parity game} is a graph $\G = \tuple{V, E, \VEloise, \VAbelard, v_I, \Omega}$, with \begin{itemize}\noitemsep \item $V$ the set of vertices (or places or states); \item $E \oftype V \times V$ the set of transitions; \item $\VEloise \subseteq V$ the set of places owned by player \Eloise; \item $\VAbelard \subseteq V$ the set of places owned by player \Abelard; \item $v_I \in V$ the initial state of the game; \item $\Omega \oftype V \to \Nat$ assigns a priority $\Omega(v)$ to each vertex $v \in V$; \end{itemize} where $\VEloise \cup \VAbelard = V$ and $\VEloise \cap \VAbelard = \emptyset$. \end{definition} The nodes in the graph represent the places and correspond to the instantiated variables from the equation system. The edges represent possible moves of the token (initially placed on $v_I$) and encode dependencies between variables. A node does not necessarily have outgoing transitions, i.e., deadlock nodes are allowed. In the parity game, player \Eloise owns the nodes that represent disjunctions, player \Abelard the nodes that represent conjunctions. The node priorities correspond to the number of the block to which the corresponding variable belongs (see Def.~\ref{def:block}), such that variables in earlier blocks have lower priorities, $\nu$-blocks have even priorities, $\mu$-blocks have odd priorities and the earliest $\mu$-block has priority $1$. The following table shows an intuitive overview of the relations between BESs and parity games. \begin{center} \begin{tabular}{c@{\quad}c@{\qquad}l} \toprule &$\nu$ blocks & Even priorities (0, 2, 4, \ldots) \\ &$\mu$ blocks & Odd priorities (1, 3, 5, \ldots) \\ &$\lor$, $\exists$, $\possibly{}$ & Player \Eloise, {$\exists$}loise, Even, Prover \\ &$\land$, $\forall$, $\always{}$ & Player \Abelard, {$\forall$}belard, Odd, Refuter \\ \bottomrule \end{tabular} \end{center} \medskip The values \ttrue ($\true$) and \tfalse ($\false$) are represented as a node with priority 0, player \Abelard and a transition to itself, and a node with priority 1, player \Eloise and a transition to itself, respectively. 
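As a small illustration of this mapping, consider a hypothetical two-block BES (ours, not one of the examples used later) consisting of a $\nu$-block with the equation $\nu \propvar{X} = \propvar{Y} \land \propvar{X}$ followed by a $\mu$-block with the equation $\mu \propvar{Y} = \propvar{X} \lor \false$. A corresponding parity game could be represented as follows (Python-style sketch):
\begin{verbatim}
# Toy parity game for the two-block BES above (names are illustrative).
from dataclasses import dataclass

@dataclass
class Node:
    owner: str      # "even" = player Eloise, "odd" = player Abelard
    priority: int   # from the block number: nu-blocks even, mu-blocks odd

nodes = {
    "X":     Node(owner="odd",  priority=0),  # conjunctive nu-equation
    "Y":     Node(owner="even", priority=1),  # disjunctive mu-equation
    "false": Node(owner="even", priority=1),  # constant false: self-loop
}
edges = {
    "X": ["Y", "X"],         # X depends on Y and on itself
    "Y": ["X", "false"],     # Y depends on X and on the constant false
    "false": ["false"],
}
initial = "X"
\end{verbatim}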
A \emph{play} in the game is a finite path $\pi = v_0 v_1 \cdots v_r \in V^+$ ending in a deadlock state $v_r$ or an infinite path $\pi = v_0 v_1 \cdots \in V^\omega$ such that $(v_i, v_{i+1}) \in E$ for every $v_i \in \pi$. Priority function $\Omega$ extends to plays in the following way: $\Omega(\pi) = \Omega(v_0) \Omega(v_1) \cdots$. $\Inf(\rho)$ returns the set of values that occur infinitely often in a sequence $\rho$. \begin{definition}[Winner of a play] Player \Eloise is the winner of a play $\pi$ if % \begin{itemize}\noitemsep \item $\pi$ is a finite play $v_0 v_1 \cdots v_r \in V^+$ and $v_r \in \VAbelard$ and no move is possible from $v_r$; or \item $\pi$ is an infinite play and $\min(\Inf(\Omega(\pi)))$, the minimum of the priorities that occur infinitely often in $\pi$, is even. This is called the \emph{min-parity condition}. \end{itemize}% \end{definition} \begin{definition}[Strategy] A (memoryless) \emph{strategy} for player $a$ is a function $f_a \oftype V_a \to V$. A play $\pi = v_0 v_1 \cdots$ is \emph{conform} to $f_a$ if for every $v_i \in \pi$, \; $v_i \in V_a \impl v_{i+1} = f_a(v_i)$. \end{definition} \begin{definition}[Winner of the game] Player \Eloise is the \emph{winner} of the game if and only if there exists a winning strategy for player \Eloise, i.e., from the initial state every play conforming to the strategy will be won by player \Eloise. \end{definition} The model checking problem is encoded as a PBES (see \cite{groote2005:modelchecking}) which is instantiated to a parity game (see \cite{ploeger2011:verification}) such that player \Eloise is the winner of the game iff the property holds for the system. \vspace*{-\medskipamount} \paragraph{Solving Parity Games} Solving a parity game means finding a winning strategy for one of the players. Various algorithms exist, such as the recursive algorithm by Zielonka \cite{zielonka1998:infinite} and Small Progress Measures by Jurdzi\'{n}ski \cite{jurdzinski2000:spm}, with a multi-core implementation in \cite{vdpol2008:multicore}. An overview and performance comparison of the algorithms are given in \cite{friedmann2009:solving}. \section{Transformation from BQNF to Parameterised Parity Games}\label{section:transformation} In order to automatically transform a PBES to a PPG, we define a transformation function $s$ from BQNF to PPG. The transformation rewrites expressions that contain both conjunctions and disjunctions to equivalent expressions that are either conjunctive or disjunctive, by introducing new equations for certain subformulae and substituting calls to the new equations for these subformulae in the original expression. The function $t$ below replaces an expression by a call to a new equation if the expression is not already a variable instantiation. The function $t'$ introduces a new equation for an expression if needed. \begin{align*} t(\propvar{X},\vec{d},\varphi) &\eqdef \begin{cases} \varphi & \text{ if } \varphi \text{ is of the form } \propvar{X}'(\vec{e}), \\ \propvar{X}(\vec{d}) & \text{ otherwise; } \end{cases}\\ t'(\sigma,\propvar{X},\vec{d},\varphi) &\eqdef \begin{cases} \emptyseq & \text{ if } \varphi \text{ is of the form } \propvar{X}'(\vec{e}), \\ s( \sigma \propvar{X}(\vec{d}) = \varphi ) & \text{ otherwise. } \end{cases} \end{align*}% For brevity, we leave out the types of the parameters. A tilde is used to introduce a fresh variable: $\fresh{\propvar{X}}$. 
For equation system $ \eqsys = (\sigma \propvar{X}_1(\vec{d}_1) = \xi_1) \eqsep \ldots \eqsep (\sigma \propvar{X}_n(\vec{d}_n) = \xi_n), $ with each $\xi_i$ in BQNF, the translation to PPG is defined as follows: \begin{center} \scalebox{1}{ \begin{tabular}{l@{\hspace{2pt}}l} $s\big(\eqsys \big)$ & $\eqdef s\big( \sigma \propvar{X}_1(\vec{d}_1) = \xi_1 \big) \eqsep \ldots \eqsep s\big( \sigma \propvar{X}_n(\vec{d}_n) = \xi_n \big)$ \defsep $s\big(\sigma \propvar{X}(\vec{d}) = f \big)$ & $\eqdef \sigma \propvar{X}(\vec{d}) = f$ \defsep $s\big(\sigma \propvar{X}(\vec{d}) = \forall \vec{v} \suchthat b \impl \varphi \big)$ & $\eqdef \Big( \sigma \propvar{X}(\vec{d}) = \forall \vec{v} \suchthat b \impl t(\fresh{\propvar{X}},\vec{d}+\vec{v},\varphi) \Big)$ \\& $\qquad t'(\sigma,\fresh{\propvar{X}},\vec{d}+\vec{v},\varphi)$ \defsep $s\big(\sigma \propvar{X}(\vec{d}) = \exists \vec{v} \suchthat b \land \varphi \big)$ & $\eqdef \Big( \sigma \propvar{X}(\vec{d}) = \exists \vec{v} \suchthat b \land t(\fresh{\propvar{X}},\vec{d}+\vec{v},\varphi) \Big)$ \\& $\qquad t'(\sigma,\fresh{\propvar{X}},\vec{d}+\vec{v},\varphi)$ \defsep $s\big(\sigma \propvar{X}(\vec{d}) = \Land_{k \in K} f_k$ \\ \qquad\qquad $\land \Land_{i \in I} (\forall_{\vec{v}_i} \suchthat g_i \impl \varphi_i) \big)$ & $\eqdef \Big( \sigma \propvar{X}(\vec{d}) = \Land_{k \in K} f_k$ \\ & \qquad\qquad\quad $\land \Land_{i \in I} \big(\forall_{\vec{v}_i} \suchthat g_i \impl t(\fresh{\propvar{X}}_i,\vec{d}+\vec{v}_i,\varphi_i)\big) \Big)$ \\ & $\qquad t'(\sigma,\fresh{\propvar{X}}_1,\vec{d}+\vec{v}_1,\varphi_1) \eqsep \ldots \eqsep t'(\sigma,\fresh{\propvar{X}}_m,\vec{d}+\vec{v}_m,\varphi_m)$ \defsep $s\big(\sigma \propvar{X}(\vec{d}) = \Lor_{k \in K} f_k$ \\ \qquad\qquad $\lor \Lor_{i \in I} (\exists_{\vec{v}_i} \suchthat g_i \land \varphi_i) \big)$ & $\eqdef \Big( \sigma \propvar{X}(\vec{d}) = \Lor_{k \in K} f_k$ \\ & \qquad\qquad\quad $\lor \Lor_{i \in I} \big(\exists_{\vec{v}_i} \suchthat g_i \land t(\fresh{\propvar{X}}_i,\vec{d}+\vec{v}_i,\varphi_i)\big) \Big)$ \\& $\qquad t'(\sigma,\fresh{\propvar{X}}_1,\vec{d}+\vec{v}_1,\varphi_1) \eqsep \ldots \eqsep t'(\sigma,\fresh{\propvar{X}}_m,\vec{d}+\vec{v}_m,\varphi_m)$ \end{tabular} } \end{center} with $I = 1 \ldots m$, $\vec{v} \cap \vec{d} = \emptyset$ (variables in $\vec{v}$ do not occur in $\vec{d}$), $b$, $f$, $f_k$, $g_i$ are simple formulae, $\varphi$, $\varphi_i$ are formulae that may contain predicate variables. \begin{prop} The transformation $s$ is solution preserving, i.e., for any $\eqsys$ in BQNF, $s(\eqsys) \equiv \eqsys$: bound variables $\propvar{X}(d)$ have the same solution in $s(\eqsys)$ as in $\eqsys$. \begin{proof} Every change made by $s$ to an equation $\sigma \propvar{X} = \xi$ is a substitution of a subexpression $\varphi$ by a fresh variable $\fresh{\propvar{X}}$, while adding at the same time a new equation $\sigma \fresh{\propvar{X}} = \varphi$ in the same block as $\propvar{X}$. We can apply \emph{backward substitution} (using \cite[Lemma 18]{groote2005:parameterised}) $s(\sigma \propvar{X} = \xi)[\fresh{\propvar{X}} \becomes \varphi]$ for every substitution caused by the transformation to get the original equation system (plus an unused equation $s(\sigma \fresh{\propvar{X}} = \varphi)$ for every fresh variable \fresh{\propvar{X}}). From that we can conclude that $s(\eqsys) \equiv \eqsys$. 
\end{proof} \end{prop} \begin{example}[Example of the transformation] We combine the buffer from Example~\ref{example:buffer} with the property that in every state both $\action{r_1}$ and $\action{s_4}$ actions are enabled: \[ \nu \propvar{X} \suchthat (\exists_{d \oftype D} \suchthat \possibly{\action{r_1}(d)}\propvar{X}) \land (\exists_{d \oftype D} \suchthat \possibly{\action{s_4}(d)}\propvar{X}) \] The resulting PBES has an equation which does not conform to the PPG form, but is in BQNF: \begin{quote} \begin{tabular}{l@{}l@{\hspace{3pt}}l@{\hspace{3pt}}l} $\sort\ $ & \multicolumn{3}{@{}l@{}}{$D = \struct\ d_1 \mid d_2;$} \\ $\pbes\ $ & $\nu \propvar{X}(q \oftype \container{List}(D)) =$ & $\big( \exists_{d \oftype D} \suchthat (\#q < 2) \land \propvar{X}(q \append d) \big)$\\ & & $\land \big( \exists_{d \oftype D} \suchthat (\head(q) = d) \land (q \neq []) \land \propvar{X}(\tail(q)) \big);$\\ $\init\ $ & $\propvar{X}([]);$ \end{tabular} \end{quote}\medskip The transformation $s$ replaces both conjuncts by a fresh variable and adds equations for these variables with the substituted expression as right hand side, resulting in equations: \begin{quote} \begin{tabular}{l@{}l@{\hspace{3pt}}l@{\hspace{3pt}}l@{\hspace{3pt}}l} $\pbes\ $ & $\nu \propvar{X}(q \oftype \container{List}(D))$ & $=$ & $\propvar{X_1}(q) \land \propvar{X_2}(q);$ \\ & $\nu \propvar{X_1}(q \oftype \container{List}(D))$ & $=$ & $\exists_{d \oftype D} \suchthat (\#q < 2) \land \propvar{X}(q \append d);$ \\ & $\nu \propvar{X_2}(q \oftype \container{List}(D))$ & $=$ & $\exists_{d \oftype D} \suchthat (\head(q) = d) \land (q \neq []) \land \propvar{X}(\tail(q));$ \\ \end{tabular} \end{quote} The first equation is purely conjunctive, while that latter two equations are (guarded) disjunctive. \end{example} \section{Instantiation of Parameterised Parity Games}\label{section:instantiation} We view the instantiation of PPGs to Parity Games as generating a transition system, where states are predicate variables with concrete parameters and transitions are dependencies, specified by the right hand side of the corresponding equation in the PPG. \begin{example} Consider the equation: \[ \nu \propvar{X}(d \oftype D) = (d > 0 \land d < 10) \impl \propvar{X}(d-1) \land \propvar{X}(d+1) \] If $\propvar{X}(5)$ is the initial value, its successors are $\propvar{X}(4)$ and $\propvar{X}(6)$, so the graph starts with a node owned by player \Abelard representing $\propvar{X}(5)$ with transitions to nodes $\propvar{X}(4)$ and $\propvar{X}(6)$. \end{example} \subsection{\LTSMIN} We use the tool \LTSMIN to generate a parity game given a PPG. \LTSMIN is a language independent tool for state-space generation \cite{blom2010:ltsmin}. Different language-modules are available, which are connected to different exploration algorithms through the so-called \PINS-interface. This interface allows for certain language-independent optimisations, such as transition caching and distributed generation (see \cite{blom2009:bridging}), and an efficient compressed storage of states in a tree database (see \cite{blom2009:database}). Also symbolic reachability analysis is possible, where the state space is stored as a Binary Decision Diagram (BDD) \cite{blom2008:symbolic}. \subsubsection{Partitioned Interface for the Next State function} \LTSMIN uses a Partitioned Interface for the Next State function (\PINS), where states are represented as a vector $\tuple{x_1, x_2, \dotsc, x_M}$ with size $M$ that is fixed for the whole system (to be determined statically). 
These values are stored in a globally accessible table, so that the states can also be represented as a vector of integer indices $\tuple{i_1, i_2, \dotsc, i_M}$. The \PINS interface functions on this level of integer vectors, so that each tool can really be language-independent. Throughout the text we will often use value vectors instead of index vectors for better readability. For a system with a state vector of $M$ parts, the universe of states is $S = \Nat^M$. For each language module a \emph{transition function} $\nextst \oftype S \to \powerset{S}$ has to be defined that computes the set of successor states for a given state. This transition relation is preferrably split into \emph{transition groups} in order to reflect the compositional structure of the system, by defining a function $\groupnext \oftype S \times \Nat \to \powerset{S}$ that computes successors for state $s$ as defined in group $k$. Suppose we have $K$ transition groups. $\nextst$ can then be defined as \[ \nextst(s) = \bigcup_{k=1}^{K}\groupnext(s,k) \] \subsubsection{Dependence} An important optimisation comes from the observation that not all parts of the state vectors are relevant in every transition group. To indicate the relevant parts of the vector for each of the transition groups, \LTSMIN uses a \emph{dependency matrix}, which has to be computed statically. \begin{definition}[\PINS Matrix: \cite{blom2009:bridging}, Def. 4] A \emph{dependency matrix} $D_{K \times N} = \mathit{DM}(P)$ for system $P$ is a matrix with $K$ rows and $N$ columns containing $\set{0, 1}$ such that if $D_{k,i} = 0$ then group $k$ is independent of element $i$.\\ For any transition group $1 \leq k \leq K$, we define $\pi_k$ as the projection $\pi_k \oftype S \to \Pi_{\set{1 \leq i \leq N \mid D_{k,i} = 1}} S_i$. \end{definition} \emph{Independence} here means that for given transition group $k$ the transitions do not depend on part $i$ of the state vector (\emph{read independence}) and the transitions do not change part $i$ of the successor state vector (\emph{write independence}) or that part $i$ is \emph{irrelevant} in both the current state and all successor states. \emph{Irrelevant} here means that changing the value of that part would still result in a bisimilar state space. For a more precise definition, see \cite[Def. 9]{vdpol2009:state}. This definition of independence is slightly more liberal than the one in \cite{blom2009:bridging} in that we added this notion of relevance. \subsubsection{Transition caching} One way of exploiting the dependency information in the matrix is by using transition caching. 
\begin{algorithm}[tb] \caption{\nextcache($s$, $k$) computes successors of $s$ for group $k$ using a cache.} \label{algorithm:nextcache} \centering \begin{minipage}[t]{.32\linewidth} \nextcache($s$, $k$) \begin{algorithmic}[1] {\footnotesize \STATE $\updatecache(s, k)$ \STATE $S \becomes \emptyset$ \FORALL{$t \in \cache_k[\pi_k(s)]$} \STATE $t' \becomes \nextapply(s, t, k)$ \STATE Add $t'$ to $S$ \ENDFOR \RETURN {$S$}; } \end{algorithmic} \end{minipage} \hspace{.15cm} \begin{minipage}[t]{.3\linewidth} \updatecache($s$, $k$) \begin{algorithmic}[1] {\footnotesize \IF{$\pi_k(s) \notin \dom(\cache_k)$} \STATE $S \becomes \emptyset$ \STATE $S' \becomes \groupnext(s,k)$ \FORALL{$s' \in S'$} \STATE Add $\pi_k(s')$ to $S$ \ENDFOR \STATE $\cache_k[\pi_k(s)] \becomes S$ \ENDIF } \end{algorithmic} \end{minipage} \hspace{.15cm} \begin{minipage}[t]{.25\linewidth} \nextapply($s$, $t$, $k$) \begin{algorithmic}[1] {\footnotesize \STATE $j \becomes 1$ \FOR{$1 \leq i \leq N$} \IF{$D_{k,i} = 0$} \STATE $s'[i] \becomes s[i]$ \ELSE \STATE $s'[i] \becomes t[j]$ \STATE $j \becomes j + 1$ \ENDIF \ENDFOR \RETURN {$s'$}; } \end{algorithmic} \end{minipage} \end{algorithm}% Only the dependent parts of the transition are stored in a cache $\cache_k$ for every group $k$ by using the projection function $\pi_k$, as described in \cite{blom2009:bridging} and shown in Alg.~\ref{algorithm:nextcache}. This way time is saved, because caching of transitions avoids calling $\groupnext$ at every step. The density of the matrix has great influence on the performance of caching and of the symbolic tools. \subsection{PBES Language Module} In this section we describe states, transition groups and the dependency matrix for PPGs. We assume to have a rewriter $\Simplify$ that is powerful enough to evaluate any closed data expression to $\ttrue$ or $\tfalse$ or to a disjunction or conjunction of predicate variables with closed data expressions as parameters. We use the same rewriter by Van Weerdenburg \cite{weerdenburg2009:efficient} as used in \cite{ploeger2011:verification}. \subsubsection{States and transition groups} For PPGs, the state vector is partitioned as follows: $\tuple{\propvar{X}, x_1, x_2, \dotsc, x_M }$, where $\propvar{X}$ is a propositional variable, and for $i \in \set{1 \ldots M}$ each $x_i$ is the value of parameter $i$. $M$ is the total number of parameter signatures in the system (consisting of name and type). We assume the existence of a function $\textit{priority} \oftype S \to \Int$ that assigns a priority to each state (based on the block of the corresponding equation) and a function $\textit{player} \oftype S \to \set{\Eloise, \Abelard}$ that assigns a player to each state (\Eloise if the corresponding expression is a disjunction, \Abelard if it is a conjunction). In particular, the $\ttrue$ state has priority $0$ and is owned by player $\Abelard$ and the $\tfalse$ state has priority $1$ and belongs to player $\Eloise$. The equations in the PPG specify the transitions between states. The right hand side of the equation is split into conjuncts or disjuncts if possible, which form the \emph{transition groups}, which are numbered subsequently. We use a mapping $\var \oftype \Int \to \mathcal{X}$ from group number to variable and a mapping $\expression \oftype \Int \to \PF$ from group number to corresponding conjunct or disjunct. In the following we assume the index sets $I$ and $J$ to be disjoint. 
For a sequence of equations of the form \[ \sigma \propvar{X}(\vec{d} \oftype D) = \Land_{i \in I} f_i \land \Land_{j \in J} \forall {\vec{v} \in D_j} \suchthat \big( g_j(\vec{d},\vec{v}) \impl \propvar{X}_j(e_j(\vec{d},\vec{v})) \big), \] for each $i \in I$ there is a group $k$ with $\expression(k) = f_i$ and for each $j \in J$ there is a group $k$ with \begin{align*} \expression(k) &= \forall {\vec{v} \in D_j} \suchthat \big( g_j(\vec{d},\vec{v}) \impl \propvar{X}_j(e_j(\vec{d},\vec{v})) \big), \end{align*} and $\var(k) = \propvar{X}$. The definition for disjunctive equations is symmetric. \begin{example}\label{example:buffer2} We will explain these concepts using a specification of two sequential buffers (\texttt{buffer.2}): \begin{quote} \begin{tabular}{l@{}l@{\hspace{3pt}}l} $\proc\ $ & $\procname{In}(i: \Pos, q: \container{List}(D)) = $ & $\displaystyle\sum_{d \oftype D}\ (\#q < 2) \guards \action{r_1}(d) \suchthat \procname{In}(i, q \append d)$ \\ & & $\quad + \ (q \neq []) \guards \action{w}(i+1, \head(q)) \suchthat \procname{In}(i, \tail(q));$ \\ $\proc\ $ & $\procname{Out}(i: \Pos, q: \container{List}(D)) = $ & $\displaystyle\sum_{d \oftype D}\ (\#q < 2) \guards \action{r}(i, d) \suchthat \procname{Out}(i, q \append d)$ \\ & & $\quad + \ (q \neq []) \guards \action{s_4}(\head(q)) \suchthat \procname{Out}(i, \tail(q));$ \\ $\init\ $ & \multicolumn{2}{@{}l@{}}{$\hide(\{\action{c}\}, \allow(\{\action{r_1},\action{c},\action{s_4}\}, \comm(\{\action{w} \mid \action{r} \to \action{c}\}, \; \procname{In}(1,[]) \parallel \procname{Out}(2,[]) \; )));$}\\ \end{tabular} \end{quote}\medskip The initial state of the system is specified as the composition of an $\procname{In}$ and an $\procname{Out}$ component, using the \emph{parallel composition} ($\parallel$) operator. \emph{Synchronisation} of $\action{r}$ and $\action{w}$ actions of the two processes proceeds in two steps. The simultaneous occurrence of actions $\action{r}$ and $\action{w}$ (the multi-action $\action{w} \mid \action{r}$) is renamed to $\action{c}$ ($\comm$) and separate occurrences of $\action{r}$ and $\action{w}$ are ruled out by the \emph{restriction} operator ($\allow$). The internal action $\action{c}$ is \emph{hidden} ($\hide$). This specification is translated to a single process by \emph{linearising} it to Linear Process Specification (LPS) format. The result is the following specification: \begin{quote} \begin{tabular}{l@{}l@{\hspace{3pt}}l} $\proc\ $ & $\procname{P}(q_{in},q_{out} \oftype \container{List}(D)) = $ & $\displaystyle\sum_{d \oftype D}\ (\#q_{in} < 2) \guards \action{r_1}(d) \suchthat \procname{P}(q_{in} \append d, q_{out})$ \\ & & $\quad + \ (q_{out} \neq []) \guards \action{s_4}(\head(q_{out})) \suchthat \procname{P}(q_{in}, \tail(q_{out}))$ \\ & & $\quad + \ (q_{in} \neq [] \land \#q_{out} < 2) \guards \ttau \suchthat \procname{P}(\tail(q_{in}), q_{out} \append \head(q_{in}))$ \\ & & $\quad + \ \tdelta;$ \\ $\init\ $ & $\procname{P}([],[]);$ \end{tabular} \end{quote}\medskip The result of hiding the $\action{c}$ action is the internal \ttau transition in the third summand.
Actions that are not in the set $\set{\action{r_1},\action{c},\action{s_4}}$ are replaced by a \tdelta as a result of the restriction operator.\\ For this process specification, we want to verify the property that if a message is read through $\action{r_1}$, it will eventually be sent through $\action{s_4}$: \[ \nu \propvar{Y} \suchthat (\forall d \oftype D \suchthat (\always{\action{r_1}(d)}(\mu \propvar{X} \suchthat (\possibly{\ttrue}\ttrue \land \always{\neg\action{s_4}(d)}\propvar{X})))) \land \always{\ttrue}\propvar{Y} \] Satisfaction of this formula by the LPS translates to the following PBES: \begin{align} \notag \pbes\ & \nu \propvar{Y}(q_{in},q_{out} \oftype \container{List}(D)) = \\ & \qquad\quad\quad (\forall_{d \oftype D} \suchthat (\#q_{in} < 2) \impl \propvar{X}(q_{in} \append d, q_{out}, d)) \label{ex:pbes-groups:first} \\ & \qquad\quad \land (\forall_{d_0 \oftype D} \suchthat (\#q_{in} < 2) \impl \propvar{Y}(q_{in} \append d_0, q_{out})) \label{ex:pbes-groups:yfirst} \\ \label{group:Y3} & \qquad\quad \land ((q_{out} \neq []) \impl \propvar{Y}(q_{in}, \tail(q_{out})))\\ & \qquad\quad \land ((q_{in} \neq [] \land \#q_{out} < 2) \impl \propvar{Y}(\tail(q_{in}), q_{out} \append \head(q_{in}))); \label{ex:pbes-groups:ylast} \\ \notag & \mu \propvar{X}(q_{in},q_{out} \oftype \container{List}(D), d \oftype D) = \\ & \qquad\quad\quad (\#q_{in} < 2) \lor (q_{out} \neq []) \lor (q_{in} \neq [] \land \#q_{out} < 2) \label{ex:pbes-groups:truetrue} \\ & \qquad\quad \land (\forall_{d_0 \oftype D} \suchthat (\#q_{in} < 2) \impl \propvar{X}(q_{in} \append d_0, q_{out}, d)) \label{ex:pbes-groups:xfirst}\\ & \qquad\quad \land ( (\head(q_{out}) \neq d) \land (q_{out} \neq []) \impl \propvar{X}(q_{in}, \tail(q_{out}), d)) \\ & \qquad\quad \land ( (q_{in} \neq [] \land \#q_{out} < 2) \impl \propvar{X}(\tail(q_{in}), q_{out} \append \head(q_{in}), d) ); \label{ex:pbes-groups:last}\\ \notag \init\ & \propvar{Y}([], []); \end{align} For this equation system, the structure of the state vector is $\tuple{\propvar{X}, q_{in}, q_{out}, d}$. The initial state would be encoded as $\tuple{\propvar{Y}, [], [], 0}$. Since the initial state has no parameter $d$, a default value is chosen. The numbers \ref{ex:pbes-groups:first}--\ref{ex:pbes-groups:last} behind the equation parts denote the different transition groups, i.e., each conjunct of a conjunctive expression forms a group. For instance, for group \ref{group:Y3} the associated expression is $\expression(\ref{group:Y3}) = ((q_{out} \neq []) \impl \propvar{Y}(q_{in}, \tail(q_{out})))$ and it is associated with variable $\var(\ref{group:Y3}) = \propvar{Y}$. Group \ref{ex:pbes-groups:first} encodes the $\always{\action{r_1}(d)}\varphi$ part of the formula (where $\varphi$ is the $\mu\propvar{X}$ part of the formula), groups \ref{ex:pbes-groups:yfirst}--\ref{ex:pbes-groups:ylast} encode the $\always{\ttrue}\propvar{Y}$ part, group \ref{ex:pbes-groups:truetrue} encodes that a transition is enabled ($\possibly{\ttrue}\ttrue$), and groups \ref{ex:pbes-groups:xfirst}--\ref{ex:pbes-groups:last} encode the cases in which a transition other than $\action{s_4}(d)$ is taken.
For every $k$ with $\var(k) = \propvar{X}$, \[ \groupnext(\propvar{X}(\vec{e}), k) \eqdef \begin{cases} \set{ \Simplify(f[\params(\propvar{X}) \becomes \vec{e}]) } \\ \qquad \text{ if $f = \expression(k)$ is a simple formula; } \\ \set{ \propvar{X}'(h(\vec{e},\vec{v})) \mid \vec{v} \in D \land g(\vec{e},\vec{v}) } \\ \qquad \text{ if } \expression(k) \text{ is of the form } \mathsf{Q} {\vec{v} \in D} \suchthat \big( g(\vec{e},\vec{v}) \oplus \propvar{X}'(h(\vec{e},\vec{v})) \big) \end{cases} \] Note that if $f$ is a simple expression, $\Simplify(f[\params(\propvar{X}) \becomes \vec{e}])$ will result in either \ttrue or \tfalse. If $\expression(k)$ is not simple, all concrete instantiations of the quantifier variables $\vec{v}$ for which the guard $g$ is satisfied are enumerated. \begin{example} For the example above, $\groupnext(\propvar{Y}([], []), \ref{group:Y3})$ yields the empty set because $q_{out} = []$. $\groupnext(\propvar{Y}([], []), \ref{ex:pbes-groups:yfirst})$ results in $\set{\propvar{Y}([d_1], []), \propvar{Y}([d_2], [])}$ (for $D = \set{d_1, d_2}$). \end{example} \subsubsection{Dependency matrix} Let $\occ(\varphi)$ be the set of propositional variables occurring in a term $\varphi$, let $\free(d)$ be the set of \emph{free data variables} occurring in a data term $d$, and $\used(\varphi)$ the set of free data variables occurring in an expression $\varphi$ such that the variables are not merely passed on to the next state. E.g., with $\propvar{X}(a, b) = \xi$, for the expression $\varphi = a \land \propvar{X}(c, b)$, $\used(\varphi) = \set{a, c}$. Parameter $b$ is not in the set because it does not influence the computation, but is only passed on to the next state. For a formula $\varphi$, the function $\changed(\varphi)$ computes the variable parameters changed in the formula: \[ \changed(\propvar{X}(e_1, \ldots, e_m)) \eqdef\eqsep \set{d_i \mid i \in \set{1 \ldots m} \land d_i = \params(\propvar{X})_i \land e_i \neq d_i} \] The function $\booleanResult(\varphi)$ determines if $\varphi$ contains a branch that directly results in a \ttrue or \tfalse (not a variable). This is needed because the boolean constants are encoded as a vector with variable names ``true'' and ``false'', hence a transition to one of them changes the first part of the state vector. For group $k$ and part $i$, we define read dependence $d_{R}$ and write dependence $d_{W}$: \begin{align*} d_{R}(k, i) &\eqdef \begin{cases} \ttrue & \text{ if } i=1;\\ p_i \in ( \params(\var(k)) \cap \used(\expression(k)) ) & \text{ otherwise. } \end{cases}\\ d_{W}(k, i) &\eqdef \begin{cases} \left( \occ(\expression(k)) \setminus \set{\var(k)} \neq \emptyset \right) \; \lor \; \booleanResult(\expression(k)) & \text{ if } i=1;\\ p_i \in \changed(\expression(k)) & \text{ otherwise. } \end{cases} \end{align*} $d_{R}(k, 1)$ is true for every group $k$, since the variable has to be read to determine if a transition group is applicable. \begin{definition}[PPG Dependency matrix]\label{def:dependencymatrix} For a PPG $P$ the dependency matrix $\mathit{DM}(P)$ is a $K \times M$ matrix defined for $1 \leq k \leq K$ and $1 \leq i \leq M$ as: \begin{align*} \mathit{DM}(P)_{k,i} &= \begin{cases} + & \text{ if } d_{R}(k, i) \land d_{W}(k, i); \\ r & \text{ if } d_{R}(k, i) \land \neg d_{W}(k, i); \\ w & \text{ if } \neg d_{R}(k, i) \land d_{W}(k, i); \\ - & \text{ otherwise.
} \end{cases} \end{align*} \end{definition} \begin{example} For the PBES in Example~\ref{example:buffer2}, the dependency matrix looks like this:\\ \begin{tabular*}{\textwidth}{@{}c@{\extracolsep{\fill}}m{4in}@{}} \begin{tabular}{l|cccc} $k$ & $\propvar{X}$ & $q_{in}$ & $q_{out}$ & $d$\\ \hline $1$&$+$&$+$&$-$&$w$\\ $2$&$+$&$+$&$-$&$-$\\ $3$&$+$&$-$&$+$&$-$\\ $4$&$+$&$+$&$+$&$-$\\ $5$&$+$&$r$&$r$&$-$\\ $6$&$+$&$+$&$-$&$-$\\ $7$&$+$&$-$&$+$&$r$\\ $8$&$+$&$+$&$+$&$-$\\ \end{tabular} & \smallskip The first row lists the state vector parts. The left column lists the group numbers. A `$+$' denotes both read and write dependency, `$w$' denotes write dependency, `$r$' read dependency, and `$-$' no dependency between the group and the state vector part. For group \ref{ex:pbes-groups:first} we can see that the variable is changed from $\propvar{Y}$ to $\propvar{X}$, which results in a `$+$' in the $\propvar{X}$ column. The $q_{in}$ parameter is both read and changed ($d$ is added to it). The $q_{out}$ parameter is not touched, which results in a `$-$'. The parameter $d$ is not in $\params({\propvar{Y}})$ and therefore there is no read dependence. However, the value of $d$ is set for the next state, resulting in a `$w$' in the last column. \end{tabular*} \end{example} \section{Performance Evaluation}\label{section:experiments} In this section we report the performance of our tools compared to existing tools in the \MCRLTWO toolset. \subsection{Experiment setup} As input we used PBESs that are derived from the following \MCRLTWO models: $n$ sequential buffers (\texttt{buffer-*}), the Sliding Window Protocol (SWP), the IEEE 1394 protocol, a Sokoban puzzle, and state machines that are part of the control system for an experiment at CERN (\texttt{wheel\_sector}), described in \cite{hwong2011:analysing}. The models are combined with \MUCALC properties that check absence of deadlock (\texttt{nodeadlock}, see Example~\ref{example:buffer}), if $x$ is read, then eventually $x$ will be written (\texttt{evt\_send}, see Example~\ref{example:buffer2}), or that from the initial state there is a path on which a $\action{push}$ action is possible (\texttt{always\_push}: $\possibly{\ttrue^\ast}\possibly{\action{push}}\ttrue$ -- only applicable to the Sokoban puzzle). As preprocessing steps, we applied \texttt{pbesparelm} and \texttt{pbesrewr -psimplify} to every equation system, which are rewriters that apply some obvious simplifications to the equation systems. In the reported cases no transformation to PPG was needed, as the systems were already in the required form. 
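\medskip Before turning to the individual tools, the following minimal Python sketch may help to make the group-wise successor computation concrete. It is our own illustration, not code taken from \LTSMIN or \MCRLTWO, and it hard-codes two transition groups of Example~\ref{example:buffer2} for $D = \set{d_1, d_2}$; the $d$-slot of the state vector is simply carried along when the target variable has no parameter $d$.
\begin{verbatim}
# Illustrative sketch only; not the actual LTSmin/mCRL2 implementation.
# Per-group successor computation for the running buffer.2 example,
# with D = {d1, d2} and states <var, q_in, q_out, d>.

D = ("d1", "d2")

def group_yfirst(state):
    # forall d0:D . (#q_in < 2) => Y(q_in ++ [d0], q_out)
    var, q_in, q_out, d = state
    return {("Y", q_in + (d0,), q_out, d) for d0 in D if len(q_in) < 2}

def group_y3(state):
    # (q_out != []) => Y(q_in, tail(q_out))
    var, q_in, q_out, d = state
    return {("Y", q_in, q_out[1:], d)} if q_out else set()

init = ("Y", (), (), "d1")    # default value chosen for the d-slot
print(group_y3(init))         # set(): the guard q_out != [] fails
print(group_yfirst(init))     # two successors, one per element of D
\end{verbatim}
Running this from the initial state reproduces the two successor sets computed in the example above, and makes explicit which state-vector parts each group reads and writes.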
The tools that we compared are: \begin{center} \scalebox{0.9}{ \begin{tabular}{l c c c c c l} \toprule Tool & Toolset & \begin{sideways}Groups\end{sideways} & \begin{sideways}Caching\end{sideways} & \begin{sideways}Distributed\end{sideways} &\begin{sideways}Symbolic\end{sideways} & Command \\ \toprule \texttt{pbes2bes} & \MCRLTWO & -- & -- & -- & -- & \texttt{pbes2bes -rjittyc} \\ \midrule \texttt{pbespgsolve} & \MCRLTWO & -- & -- & -- & -- & \texttt{pbespgsolve -rjittyc -g} \\ \midrule \texttt{pbes2lts -black} & \LTSMIN & no & no & no & no & \texttt{pbes2lts-grey {-}-black {-}-always-split} \\ \midrule \texttt{pbes2lts -grey} & \LTSMIN & yes & no & no & no & \texttt{pbes2lts-grey {-}-grey {-}-always-split} \\ \midrule \texttt{pbes2lts -cache} & \LTSMIN & yes & yes & no & no & \texttt{pbes2lts-grey -rgs -c {-}-always-split} \\ \midrule \texttt{pbes2lts-mpi-*} & \LTSMIN & yes & yes & yes & no & \texttt{pbes2lts-mpi -rgs -c {-}-always-split} \\ \midrule \texttt{pbes-reach} & \LTSMIN & yes & no & no & yes & \texttt{pbes-reach {-}-order=chain-prev} \\ & & & & & & \quad \texttt{{-}-saturation=sat-like} \\ & & & & & & \quad \texttt{{-}-save-levels -rgs} \\ & & & & & & \quad \texttt{{-}-always-split} \\ \bottomrule \end{tabular} } \end{center} It is indicated whether transition groups, caching, distributed generation or symbolic generation are available. \texttt{pbes2bes} and \texttt{pbespgsolve} from the \MCRLTWO toolset are similar in functionality, but different in implementation. For \texttt{pbespgsolve} the \texttt{-g} option means only generating the parity game without solving. For the \LTSMIN tools \texttt{pbes2lts-*} and \texttt{pbes-reach} the option \texttt{-rgs} enables regrouping, \texttt{-c} enables caching, and \texttt{{-}-black} disables the use of transition groups. \texttt{pbes-reach} uses the \texttt{sat-like} saturation strategy. The experiments were performed on a cluster of 10 machines with each two quad-core Intel Xeon E5520 CPUs @ 2.27 GHz (with 2 hyperthreads per core) and 24GB memory. Every tool was given a 20 GB memory limit and a 10 ks time limit. Elapsed time and memory usage have been measured by the tool \texttt{memtime}. The experiments were executed using Linux 2.6.34, \MCRLTWO svn rev. 10785 and for \LTSMIN the git rev. after commit 4d11bc20 in the experimental `next' branch. The tools were built using GCC 4.4.1. Open MPI 1.4.3 was used for the distributed tool. \begin{table} \caption{Time performance in seconds. 
`T' indicates a timeout, `M' out of memory.} \label{table:results-time} \begin{center} \scalebox{0.8}{ \begin{tabular}{lrrrrrrrrrr} \toprule Equation system & \# States & \begin{sideways}\texttt{pbes2bes }\end{sideways} & \begin{sideways}\texttt{pbespgsolve }\end{sideways} & \begin{sideways}\texttt{pbes2lts -black }\end{sideways} & \begin{sideways}\texttt{pbes2lts -grey }\end{sideways} & \begin{sideways}\texttt{pbes2lts -cache }\end{sideways} & \begin{sideways}\texttt{pbes2lts-mpi-1 }\end{sideways} & \begin{sideways}\texttt{pbes2lts-mpi-4 }\end{sideways} & \begin{sideways}\texttt{pbes2lts-mpi-8 }\end{sideways} & \begin{sideways}\texttt{pbes-reach }\end{sideways} \\ \toprule \texttt{swp.nodeadlock } & 1,862 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\ \midrule \texttt{swp.evt\_send } & 33,554 & 7 & 7 & 8 & 11 & 5 & 5 & 5 & 8 & 5 \\ \midrule \texttt{1394.nodeadlock } & 173,101 & 199 & 202 & 231 & 1,387 & 120 & 125 & 56 & 73 & 114 \\ \midrule \texttt{sokoban.372.always\_push } & 834,397 & 69 & 78 & 258 & T & 403 & 419 & 182 & 62 & 31 \\ \midrule \texttt{buffer.7.nodeadlock } & 823,545 & 32 & 33 & 48 & 76 & 13 & 16 & 9 & 7 & 9 \\ \midrule \texttt{buffer.7.evt\_send } & 2,466,257 & 111 & 107 & 157 & 266 & 22 & 27 & 13 & 11 & 9 \\ \midrule \texttt{buffer.8.nodeadlock } & 5,764,803 & 235 & 237 & 357 & 594 & 82 & 93 & 31 & 20 & 37 \\ \midrule \texttt{buffer.8.evt\_send } & 17,281,283 & 820 & 859 & 1,256 & 2,171 & 158 & 191 & 71 & 67 & 42 \\ \midrule \texttt{buffer.9.nodeadlock } & 40,353,607 & 1,059 & M & 2,937 & 4,905 & 571 & 686 & 241 & 197 & 274 \\ \midrule \texttt{buffer.9.evt\_send } & 121,021,455 & M & M & T & T & 1,172 & 1,448 & 520 & 306 & 282 \\ \midrule \texttt{wheel\_sector.nodeadlock } & 4,897,760 & T & T & T & T & 2,337 & 2,368 & 828 & 939 & 1,904 \\ \bottomrule \end{tabular} } \end{center} \end{table} \begin{table} \caption{Memory usage in MB. 
`T' indicates a timeout, `M' out of memory.} \label{table:results-mem} \begin{center} \scalebox{0.8}{ \begin{tabular}{lrrrrrrrrrr} \toprule Equation system & \# States & \begin{sideways}\texttt{pbes2bes }\end{sideways} & \begin{sideways}\texttt{pbespgsolve }\end{sideways} & \begin{sideways}\texttt{pbes2lts -black }\end{sideways} & \begin{sideways}\texttt{pbes2lts -grey }\end{sideways} & \begin{sideways}\texttt{pbes2lts -cache }\end{sideways} & \begin{sideways}\texttt{pbes2lts-mpi-1 }\end{sideways} & \begin{sideways}\texttt{pbes2lts-mpi-4 }\end{sideways} & \begin{sideways}\texttt{pbes2lts-mpi-8 }\end{sideways} & \begin{sideways}\texttt{pbes-reach }\end{sideways} \\ \toprule \texttt{swp.nodeadlock } & 1,862 & 12 & 11 & 17 & 17 & 16 & 13 & 15 & 14 & 16 \\ \midrule \texttt{swp.evt\_send } & 33,554 & 58 & 29 & 20 & 20 & 18 & 15 & 15 & 16 & 47 \\ \midrule \texttt{1394.nodeadlock } & 173,101 & 227 & 168 & 31 & 30 & 89 & 86 & 60 & 50 & 57 \\ \midrule \texttt{sokoban.372.always\_push } & 834,397 & 1,187 & 768 & 34 & T & 220 & 217 & 69 & 45 & 47 \\ \midrule \texttt{buffer.7.nodeadlock } & 823,545 & 965 & 354 & 32 & 32 & 91 & 89 & 36 & 27 & 49 \\ \midrule \texttt{buffer.7.evt\_send } & 2,466,257 & 3,340 & 1,215 & 63 & 64 & 181 & 179 & 67 & 43 & 49 \\ \midrule \texttt{buffer.8.nodeadlock } & 5,764,803 & 7,179 & 2,579 & 117 & 117 & 528 & 525 & 145 & 81 & 49 \\ \midrule \texttt{buffer.8.evt\_send } & 17,281,283 & 18,136 & 9,056 & 345 & 345 & 1,155 & 1,152 & 377 & 204 & 49 \\ \midrule \texttt{buffer.9.nodeadlock } & 40,353,607 & 18,451 & M & 737 & 737 & 4,129 & 4,127 & 1,048 & 538 & 49 \\ \midrule \texttt{buffer.9.evt\_send } & 121,021,455 & M & M & T & T & 9,209 & 9,206 & 3,003 & 1,487 & 49 \\ \midrule \texttt{wheel\_sector.nodeadlock } & 4,897,760 & T & T & T & T & 1,288 & 1,285 & 389 & 238 & 90 \\ \bottomrule \end{tabular} } \end{center} \end{table} \subsection{Results} Results are in Tables~\ref{table:results-time} (time performance in seconds) and \ref{table:results-mem} (memory usage in MB). For the MPI tool, the values are the maximum for the workers. The `T' indicates a timeout, the `M' indicates an Out of Memory error. We can make the following observations. From the results we see that \texttt{pbes2bes} and \texttt{pbespgsolve} from the \MCRLTWO toolset perform better than \texttt{pbes2lts -black}, the \LTSMIN based tool without any optimisation. The memory performance of the \LTSMIN tool however is much better, even over 25 times better in the case of \texttt{buffer.8.evt\_send}. Looking at \texttt{pbes2lts -grey} we observe that only splitting into transition groups without any optimisations has a negative impact on the performance, especially in the case of \texttt{1394.nodeadlock}. The \LTSMIN tools have a relatively bad performance for the Sokoban puzzle, because of the structure of \texttt{always\_push}: either ``we can do a push now'' or ``we move and take a recursive step''. If this formula is evaluated as a whole on a state where we can do a push, the first part will immediately evaluate to \ttrue and the formula as well, without taking the recursive step. When the formula is split into transition groups, then both parts may be evaluated independently. Although the second part is not needed, such on-the-fly solving optimisations are not available in the PBES language module yet when transition groups are enabled. This causes \LTSMIN to generate a state space of 10,992,856 states (instead of 834,397), but still the symbolic tool of \LTSMIN, \texttt{pbes-reach}, is the fastest. 
Transition caching pays off for many systems. Compared to the \MCRLTWO tools, the speedup is between 1.8 and 5.1 for the sequential buffers, and for \texttt{wheel\_sector} the instantiation now completes within the time bound. The distributed tool does not scale well. The speedup with 8 workers compared to 1 worker is 6.8 for the Sokoban puzzle, but does not exceed 4.7 for the sequential buffers, and is only 2.5 for the \texttt{wheel\_sector} case. In the \texttt{wheel\_sector} and \texttt{1394} cases the execution time with 8 workers is even worse than with 4 workers, indicating that there is a limit to the number of workers that still yields a further speedup. The symbolic tool performs best of all sequential tools in all cases. The tool is up to 19.5 times faster than the fastest tool from the \MCRLTWO toolset (in the \texttt{buffer.8.evt\_send} case). Moreover, in some cases \LTSMIN could finish within the memory and time bounds, whereas the \MCRLTWO tools could not. Memory usage of \texttt{pbes-reach} is slightly worse in the smallest cases, but up to more than 180 times better than the \MCRLTWO tools in the other cases. \section{Conclusions}\label{section:conclusions} We have defined PPG as a normal form for PBESs, together with a transformation to PPG, making the instantiation to parity games more straightforward. We implemented a PBES language module for \LTSMIN. As a result, the high-performance capabilities for state space generation become available for parity game generation. We demonstrated this for distributed state space generation and for symbolic state space generation. Experimental comparison to existing tools shows good results. The \LTSMIN tools reduce memory usage enormously. Transition caching, distributed computation and the symbolic tool speed up the instantiation in all reported cases. However, the distributed tool does not scale well. For all reported cases, the symbolic \LTSMIN tool performed the best, with up to 19 times speedup and up to more than 180 times lower memory usage compared to the \MCRLTWO tools. We intend to extend the tool with optimisations, such as on-the-fly minimisation and solving, i.e., while generating the parity game (possibly also distributed). Furthermore, the symbolic tool generates a BDD representation of the parity game, which calls for solvers that can deal with such symbolic parity games, similar to the tool of \cite{bakera2009:solving}. \vspace*{-\medskipamount} \paragraph{Acknowledgments.} We are grateful to Tim Willemse, Jeroen Keiren and Wieger Wesselink for their support on the \MCRLTWO toolset.
\section{Introduction} Given graphs $G$ and $H$, and an integer $q\le |E(H)|$, a $(G,H,q)$-coloring is an edge-coloring of $G$ such that the edges of every copy of $H$ in $G$ receive at least $q$ colors. Let $f(G, H, q)$ be the minimum number of colors in a $(G,H,q)$-coloring. This general problem is hopeless in most cases; for example, when $G$ and $H$ are cliques and $q=2$, determining it is equivalent to determining the multicolor Ramsey number $R_k(p)$, which is a longstanding open problem. There has been more success in determining $f(G,H,q)$ when $G$ and $H$ are not cliques or when $q>2$ (or both). Many Ramsey problems have received considerable attention when studied on the $n$-dimensional cube. The papers \cite{ARSV, AHMS} are examples where anti-Ramsey problems for subcubes in cubes and problems about monochromatic cycles in cubes are investigated. In \cite{DO}, Offner found the exact value for the maximum number of colors for which it is possible to edge color the hypercube so that all subcubes of dimension $d$ contain all colors. Related Tur\'an type problems for subcubes in cubes have been studied in \cite{AKS}. Rainbow cycles have also been well studied as subgraphs of $K_n$. Erd\H os, Simonovits and S\'os \cite{ESS} introduced $AR(n,H)$, the maximum number of colors in an edge coloring of $K_n$ such that it contains no rainbow copy of $H$, provided a conjecture when $H$ is a cycle, and showed that their conjecture was true when $H = C_3$. Alon \cite{NA} proved their conjecture for cycles of length four and Montellano-Ballesteros and Neumann-Lara \cite{MBNL} proved the conjecture for all cycles in 2003. More recently, Choi \cite{JC} gave a shorter proof of the conjecture. We continue this theme in the current note and let $G=Q_n$, the $n$-dimensional cube, and $H=C_k$, the cycle of length $k$. Our focus is on $q=|E(H)|$, in which case we will call a $(G,H,q)$-coloring an $H$-rainbow coloring, assuming that $G$ is obvious from context (in this paper $G=Q_n$ always). \begin{defn} For $4\le k \le 2^n$, let $f(n,k)=f(Q_n, C_k, k)$ be the minimum number of colors in a $C_k$-rainbow coloring of $Q_n$. \end{defn} The smallest case $f(n,4)$ was studied by Faudree, Gy\'arf\'as, Lesniak and Schelp~\cite{FGLS}, who proved that the trivial lower bound of $n$ is tight by providing, for all $n \ge 6$, a $C_4$-rainbow coloring with $n$ colors. We consider larger $k$. Our first result determines the order of magnitude of $f(n,k)$ for $k \equiv 0$ (mod 4). \begin{thm} \label{mod4} Fix a positive integer $k \equiv 0$ (mod 4). There are constants $c_1, c_2>0$ depending only on $k$ such that $$c_1n^{k/4} < f(n,k) < c_2 n^{k/4}.$$ \end{thm} The case $k\equiv 2$ (mod 4) seems more complicated. Our results imply that for such fixed $k$ there are positive constants $c_1', c_2'$ with $$c_1' n^{\lfloor k/4 \rfloor} < f(n,k) < c_2' n^{\lceil k/4 \rceil}.$$ We believe that the lower bound is closer to the truth. As evidence for this, we tackle the smallest case in this range, $k=6$. As we will observe later, the lower bound $f(n,6) \ge n$ is trivial for $n \ge 3$, and we obtain the following upper bound. \begin{thm} \label{6} For every $\epsilon > 0$ there exists $n_0$ such that for $n>n_0$ we have~$f(n,6) \leq n^{1 + \epsilon}$. \end{thm} It is rather easy to see that $f(Q_n,Q_3,12) = f(Q_n,C_6,6)$. Indeed, if $Q_n$ is edge-colored so that every $Q_3$ is rainbow, then every $C_6$ is rainbow since each one is contained in a rainbow $Q_3$, and so $f(Q_n,Q_3,12) \geq f(Q_n,C_6,6)$.
On the other hand, it is easy to see that any two edges of a $Q_3$ lie in some $C_6$, and therefore if $Q_n$ is edge-colored so that every $C_6$ is rainbow then every $Q_3$ must also be rainbow, and so $f(Q_n,Q_3,12) \leq f(Q_n,C_6,6)$. Since $C_4=Q_2$, the following corollary can also be considered an extension of the result \cite{FGLS} to subcubes. \begin{cor} As $n \rightarrow \infty$, we have $f(Q_n,Q_3,12) =n^{1+o(1)}$. \end{cor} We will consider the vertices of $Q_n$ as binary vectors of length $n$ or as subsets of $[n]=\{1, \ldots, n\}$, depending on the context (with the natural bijection $\vec{v} \leftrightarrow v$ where $\vec{v}$ is the incidence vector for $v \subset[n]$, i.e. $\vec{v}_i=1$ iff $i \in v$). In particular, whenever we write $v-w$ we mean set-theoretic difference, by $v \cup w$ or $v \cap w$ we mean set union/intersection, and when we write $\vec{v} \pm \vec{w}$ we mean vector addition/subtraction modulo 2. We write $e_i$ for the standard basis vector, so $e_i$ is one in the $i$th coordinate and zero in all other coordinates. Given an edge $f=uv$ of $Q_n$ where $\vec{v}=\vec{u} +e_s$ for some $s$, we say that $v$ is the top vertex of $f$ and $u$ is the bottom vertex. We will say that an edge is on level $i$ of $Q_n$ if its bottom vertex corresponds to a vector with $i-1$ ones and the top vertex to a vector with $i$ ones. \section{Proof of Theorem \ref{mod4}} The lower bound in Theorem \ref{mod4} follows from the easy observation that in a $C_k$-rainbow coloring all edges at level $k/4$ must receive distinct colors. Indeed, given any two such edges $f_1=vw$ and $f_2=xy$, where $\vec{w}=\vec{v}+e_i$ and $\vec{y}=\vec{x}+e_j$, it suffices to find a copy of $C_k$ containing $f_1$ and $f_2$. If $f_1$ and $f_2$ are incident then it is clear that we can find a $C_k$ containing them as long as $n>k$, which we may clearly assume. The two cases are illustrated below where $r = k/2 - 2$ and $s_i \notin w \cup y$ for all $i \in \lbrace 1,...,r \rbrace$.
\begin{center} \begin{tikzpicture}[auto] \tikzstyle{vertex}=[circle,draw = black,fill=black,minimum size=1pt,inner sep=1pt] \node[vertex] (V) at (-3.5,0) {}; \draw (-3.5,-.25) node {$x=v$}; \node[vertex] (VL1) at (-4,1) {}; \draw (-5.1,1)node {$y=v\cup\lbrace j \rbrace$}; \node[vertex] (VR1) at (-3,1) {}; \draw (-1.9,1) node {$w=v\cup\lbrace i \rbrace$}; \node[vertex] (VL2) at (-4,2.5) {}; \draw (-5,2.5) node {$v\cup\lbrace j,s_1 \rbrace$}; \node[vertex] (VR2) at (-3,2.5) {}; \draw (-2,2.5) node {$v\cup\lbrace i,s_1 \rbrace$}; \node[vertex] (VL3) at (-4,5) {}; \draw (-5.5,5) node {$v\cup\lbrace j,s_1,...,s_r \rbrace$}; \node[vertex] (VR3) at (-3,5) {}; \draw (-1.5,5) node {$v\cup\lbrace i,s_1,...,s_r \rbrace$}; \node[vertex] (VT) at (-3.5,6) {}; \draw (-3.5,6.25) node {$v\cup\lbrace i,j,s_1,...,s_r \rbrace$}; \draw (VL1) to node {} (V); \draw (VR1) to node {} (V); \draw (VR1) to node {} (VR2); \draw (VL1) to node {} (V); \draw (VL1) to node {} (VL2); \draw [loosely dashed](VR2) to node {} (VR3); \draw [loosely dashed](VL2) to node {} (VL3); \draw (VR3) to node {} (VT); \draw (VL3) to node {} (VT); \node[vertex] (W) at (5,1) {}; \draw (5,1.25) node {$y=w$}; \node[vertex] (WL1) at (4.5,0) {}; \draw (3.8,-.35) node {$x=y-\lbrace j \rbrace$}; \node[vertex] (WR1) at (5.5,0) {}; \draw (6.2,-.35) node {$v=w-\lbrace i \rbrace$}; \node[vertex] (WL2) at (4,1) {}; \draw (3.15,1) node {$x\cup\lbrace s_1 \rbrace$}; \node[vertex] (WR2) at (6,1) {}; \draw (6.85,1) node {$v\cup\lbrace s_1 \rbrace$}; \node[vertex] (WL3) at (4,5) {}; \draw (2.5,5) node {$x\cup\lbrace s_1,...,s_{r} \rbrace$}; \node[vertex] (WR3) at (6,5) {}; \draw (7.5,5) node {$v\cup\lbrace s_1,...,s_{r} \rbrace$}; \node[vertex] (WT) at (5,6) {}; \draw (5,6.25) node {$w\cup\lbrace s_1,...,s_{r} \rbrace$}; \draw (W) to node {} (WL1); \draw (WL1) to node {} (WL2); \draw [loosely dashed](WL2) to node {} (WL3); \draw (WL3) to node {} (WT); \draw (W) to node {} (WR1); \draw (WR1) to node {} (WR2); \draw [loosely dashed](WR2) to node {} (WR3); \draw (WR3) to node {} (WT); \end{tikzpicture} \end{center} Now, suppose $f_1$ and $f_2$ are not incident. We know that $|x \triangle v| \leq k/2 - 2$ since $x$ and $v$ are each sets of size $k/4 - 1$. By successively deleting elements of $v$ and $x$ in the appropriate order, we can obtain a $v,x$-path of length $k/2 - 2$. Then, since $w$ and $y$ are sets of size $k/4$, we may find a $w,y$-path of length $k/2$ between them by successively adding the elements of $y$ to $w$ and vice versa along with extra elements as needed. The two paths along with the edges $vw$ and $xy$ form a cycle of length $k$. This is shown in the following diagram. Let $y-w = \lbrace y_1,...,y_m \rbrace$, $w-y = \lbrace w_1,...,w_m\rbrace$ and $w\cap y = \lbrace z_1,...,z_l\rbrace$ where $m + l = k/4$. Let $\lbrace s_1,...,s_r \rbrace$ again be a set such that $s_i \notin y \cup w$ with $r = k/4 - m$. 
\begin{center} \begin{tikzpicture}[auto] \tikzstyle{vertex}=[circle,draw = black,fill=black,minimum size=1pt,inner sep=1pt] \node [vertex] (G) at (0,2) {}; \draw (0,1.65) node {$0$}; \node [vertex] (GL1) at (-2,3) {}; \draw (-2.5,3) node {$\lbrace y_1 \rbrace$}; \node [vertex] (GR1) at (2,3) {}; \draw (2.5,3) node {$\lbrace w_1 \rbrace$}; \node [vertex] (GL3) at (-2,5) {}; \draw (-4.7,5) node {$x=\lbrace y_1,...,y_m,z_1,...,z_l \rbrace - \lbrace j \rbrace$}; \node [vertex] (GR3) at (2,5) {}; \draw (4.8,5) node {$v=\lbrace w_1,...,w_m,z_1,...,z_l \rbrace - \lbrace i \rbrace$}; \node [vertex] (GL4) at (-2,6) {}; \draw (-3.1,6) node {$y=x \cup \lbrace j \rbrace$}; \node [vertex] (GR4) at (2,6) {}; \draw (3.1,6) node {$w=v \cup \lbrace i \rbrace$}; \node [vertex] (GL5) at (-2,7.5) {}; \draw (-3.3,7.5) node {$y \cup \lbrace {s_1,...,s_r} \rbrace$}; \node [vertex] (GR5) at (2,7.5) {}; \draw (3.4,7.5) node {$w \cup \lbrace {s_1,...,s_r} \rbrace$}; \node [vertex] (GL6) at (-2,9) {}; \draw (-4.4,9) node {$y \cup \lbrace {s_1,...,s_r,w_1,...,w_{m-1}} \rbrace$}; \node [vertex] (GR6) at (2,9) {}; \draw (4.4,9) node {$w \cup \lbrace {s_1,...,s_r,y_1,...,y_{m-1}} \rbrace$}; \node [vertex] (GT) at (0,10) {}; \draw (0,10.3) node {$w \cup y \cup \lbrace {s_1,...,s_r} \rbrace$}; \draw (GL1) to (G) to (GR1); \draw (GL6) to (GT) to (GR6); \draw (GL3) to (GL4); \draw (GR3) to (GR4); \draw [loosely dashed] (GL1) to (GL3); \draw [loosely dashed] (GR1) to (GR3); \draw [loosely dashed] (GL4) to (GL6); \draw [loosely dashed] (GR4) to (GR6); \end{tikzpicture} \end{center} For the upper bound we need a classical construction of generalized Sidon sets by Bose and Chowla. A $B_t$-set $S=\{s_1, \ldots, s_n\}$ is a set of integers such that if $1\le i_1 \le i_2 \le \cdots \le i_{t}\le n$ and $1\le j_1 \le j_2 \le \cdots \le j_{t}\le n$, then $$ s_{i_1}+\cdots +s_{i_t} \neq s_{j_1}+\cdots +s_{j_t} $$ unless $(i_1, \ldots, i_t)=(j_1, \ldots, j_t)$. A consequence of this is that if $P,Q$ are nonempty disjoint subsets of $[n]$ with $|P|=|Q|\le t$, then \begin{equation} \label{sidon} \sum_{i \in P} s_i \neq \sum_{j \in Q} s_j. \end{equation} The result below is phrased in a form that is suitable for our use later. \begin{thm} {\bf (Bose-Chowla \cite{BC})} \label{bc} For each fixed $t \ge 2$, there is a constant $A>1$ such that for all $n$, there is a $B_t$-set $S=\{s_1, \ldots, s_n\} \subset \{1, 2, \ldots, \lfloor An^t \rfloor\}$. \end{thm} Now we provide the upper bound construction for Theorem \ref{mod4}. {\bf Construction 1.} Let $t=k/4-1$ and $S=\{s_1, \ldots, s_n\} \subset \{1, 2, \ldots, \lfloor An^{t} \rfloor\}$ be a $B_{t}$-set as above. For each $v \in V(Q_n)$, let $$a(v)= \sum_{i=1}^n \vec{v}_i s_i= \sum_{i: \vec{v}_i=1} s_i.$$ Given $vw \in E(Q_n)$ with $\vec{w}=\vec{v}+e_j$, let $M=\lceil kAn^{t}\rceil$, and let $$d(vw)= a(v) + Mj.$$ Suppose further that $vw$ is at level $p$ and $p'$ is the congruence class of $p$ modulo $k/2$. Then the color of the edge $vw$ is $$\chi(vw)= (d(vw), p'). \qed $$ Let us now argue that this construction yields the upper bound in Theorem \ref{mod4}. First, the number of colors is at most $$\max_{vw} d(vw) \times \frac{k}{2}\le (n \cdot \max s_i + Mn)\frac{k}{2}\le \frac{nk}{2}An^t + \frac{nk}{2}M < k^2An^{t+1}= k^2An^{k/4}$$ as desired. Now we show that this is a $C_k$-rainbow coloring. Suppose for contradiction that $H$ is a copy of $C_k$ in $Q_n$ and $f_1=vw, f_2=xy$ are distinct edges of $H$ with $\chi(f_1)=\chi(f_2)$. 
Since $H$ spans at most $k/2$ levels, $f_1$ and $f_2$ cannot lie in levels that differ by more than $k/2$, so $\chi(f_1)\neq \chi(f_2)$ unless $f_1$ and $f_2$ are in the same level which we may henceforth assume. Let $v,x$ be the bottom vertices of $f_1, f_2$, and $\vec{w}=\vec{v}+e_i, \vec{y}=\vec{x}+e_j$. Assume without loss of generality that $i \le j$. If $v=x$, then $$a(v)+Mi=d(vw)=d(xy)=a(x)+Mj=a(v)+Mj.$$ This implies that $i=j$ and contradicts the fact that $f_1 \neq f_2$. We may therefore assume that $v\neq x$. Similarly, if $w=y$, then $i < j$ and $$a(w)-s_i+Mi=a(v)+Mi = d(vw)=d(xy)= a(x)+Mj= a(y)-s_j+Mj=a(w)-s_j+Mj.$$ This implies the contradiction $s_j-s_i=M(j-i)\ge M > An^t > s_j-s_i$. Consequently, we may assume that $vw$ and $xy$ share no vertex. If $|v \triangle x| >k/2$, then any $v,x$-path in $Q_n$ has length more than $k/2$ so there can be no cycle of length $k$ containing both $v$ and $x$, contradiction. So we may assume that $|v \triangle x|\le k/2$. Now $\chi(vw)=\chi(xy)$ implies that $$a(v)+Mi=d(vw)=d(xy)=a(x)+Mj$$ and this yields $$M(j-i)=Mj-Mi= a(v)-a(x)= a(v-x)-a(x-v) \le \frac{|v \triangle x|}{2} An^t\le \frac{k}{4}An^{t}<M.$$ Consequently, we may assume that $i=j$, $a(v)=a(x)$, $a(v-x)=a(x-v)$ and $|v \triangle x|=|w \triangle y|$. If $|v-x|=|x-v|\le k/4-1$, then $$a(v-x) = \sum_{i\in v-x} s_i \neq \sum_{j \in x-v} s_j= a(x-v)$$ due to (\ref{sidon}), the definition of $S$ and $t=k/4-1$. So we may assume that $|v-x|=|x-v|= k/4$ and $|w \triangle y|=|v \triangle x|=k/2$. This implies that dist$_{Q_n}(w,y)=$ dist$_{Q_n}(v,x)=k/2$. Together with edges $f_1, f_2$, we conclude that $C$ must have at least $k+2$ edges, contradiction. \qed \section{Proof of Theorem \ref{6}} We will first show the lower bound $f(n,6) \ge n$ for $n \ge 3$. It is immediate that no two edges incident with 0 receive the same color, for otherwise there would be two edges of the same color on level 1 of $Q_n$ which we could easily extend to a non-rainbow $C_6$. Indeed, let $i,j,k$ be distinct and consider the following $C_6$: $$0 \quad e_i \quad e_i+e_k \quad e_k\quad e_j+e_k\quad e_j \quad 0.$$ To obtain the upper bound, we will give an explicit coloring that makes use of a classical construction of Behrend on sets of integers with no arithmetic progression of size three. Let $r_3(N)$ denote the maximum size of a subset of $\{1,\ldots, N\}$ that contains no 3-term arithmetic progression. \begin{thm} {\bf (Behrend \cite{B})} \label{b} There is a $c>0$ such that if $N$ is sufficiently large, then $$r_3(N) > N^{1 - {c \over {\sqrt{\log N}}}}.$$ \end{thm} Theorem \ref{b} clearly implies that for $\epsilon > 0$ and sufficiently large $N$ we have $r_3(N) > N^{1 - \epsilon}$. The error term $\epsilon$ was improved recently by Elkin~\cite{E} (see \cite{GW} for a simpler proof) and using Elkin's result would give corresponding improvements in our result. {\bf Construction 2.} Let $\epsilon > 0$ and $n$ be sufficiently large. Put $N=\lceil n^{1+\epsilon} \rceil$ and let $S = \{s_1,...,s_n\}\subset \{1, \ldots, N\}$ contain no 3-term arithmetic progression. Such a set exists by Theorem \ref{b} since $$n > n^{1-\epsilon^2} = n^{(1-\epsilon)(1+\epsilon)} > N^{1-2\epsilon}.$$ Let $$a(v) = {\sum\limits_{i = 1}^{n} \vec{v}_i\,s_i}.$$ Consider the edge $vw$, where $\vec{w}=\vec{v}+e_k$. Let $$d(vw)= a(v) +2s_k \in Z_{2N}.$$ We emphasize here that we are computing $d(vw)$ modulo $2N$. Suppose further that $vw$ is at level $p$ and $p'$ is the congruence class of $p$ modulo $3$. 
Then the color of the edge $vw$ is $$\chi(vw)= (d(vw), p'). \qed $$ The number of colors used is at most $6N<n^{1+2\epsilon}$ as required. We will now show that this is a $C_6$-rainbow coloring. Due to the second coordinate, it suffices to show that any two edges $f_1, f_2$ of a $C_6$ which are on the same of level of $Q_n$ receive different colors. If $f_1$ and $f_2$ are incident, then they meet either at their top vertices or bottom vertices. If incident at their bottom vertices, the edges are colored as follows and thus are distinctly colored: \begin{center} \begin{tikzpicture}[auto] \tikzstyle{vertex}=[circle,draw = black,fill=black,minimum size=1pt,inner sep=1pt] \node[vertex] (V) at (0,0) {}; \node[vertex] (VL) at (-2,2) {}; \node[vertex] (VR) at (2,2) {}; \draw(0,-.5) node {$v$}; \draw (V) to node {$a(v) + 2s_i$} (VL); \draw (V) to node [swap] {$a(v) + 2s_j$} (VR); \end{tikzpicture} \end{center} If incident at their top vertices, the edges lie on a $C_4$ and are therefore distinctly colored. \begin{center} \begin{tikzpicture}[auto] \tikzstyle{vertex}=[circle,draw = black,fill=black,minimum size=1pt,inner sep=1pt] \node[vertex] (V) at (0,0) {}; \node[vertex] (VL) at (-2,2) {}; \node[vertex] (VR) at (2,2) {}; \node[vertex] (VT) at (0,4) {}; \draw(0,-.5) node {$v$}; \draw [loosely dashed](V) to node {$a(v) + 2s_i$} (VL); \draw [loosely dashed](V) to node [swap] {$a(v) + 2s_j$} (VR); \draw (VR) to node [swap]{$a(v) + s_j + 2s_i$} (VT); \draw (VL) to node {$a(v) + s_i + 2s_j$} (VT); \end{tikzpicture} \end{center} If $f_1$ and $f_2$ are not incident, then there must be a path of length two between their bottom vertices. For if not, then they could not lie on a $C_6$ as the shortest path between their top vertices has length at least two. Moreover, the top vertices of $f_1$ and $f_2$ have symmetric difference precisely two since there is a path of length two between them. With these conditions, there are three ways the edges may be colored. 
\begin{center} \begin{tikzpicture}[auto] \tikzstyle{vertex}=[circle,draw = black,fill=black,minimum size=1pt,inner sep=1pt] \node[vertex] (V) at (0,0) {}; \node[vertex] (L1) at (-2,2) {}; \node[vertex] (L2) at (-2,4) {}; \node[vertex] (R1) at (2,2) {}; \node[vertex] (R2) at (2,4) {}; \draw [loosely dashed] (V) to node {$a(v) + 2s_i$} (L1); \draw (L1) to node {$a(v)+ s_i + 2s_j$}(L2); \draw [loosely dashed] (V) to node [swap]{$a(v) + 2s_k$} (R1); \draw (R1) to node [swap]{$a(v)+ s_k + 2s_j$} (R2); \draw(0,-.5) node {$v$}; \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[auto] \tikzstyle{vertex}=[circle,draw = black,fill=black,minimum size=1pt,inner sep=1pt] \node[vertex] (V) at (0,0) {}; \node[vertex] (L1) at (-2,2) {}; \node[vertex] (L2) at (-2,4) {}; \node[vertex] (R1) at (2,2) {}; \node[vertex] (R2) at (2,4) {}; \draw [loosely dashed] (V) to node {$a(v) + 2s_i$} (L1); \draw (L1) to node {$a(v)+ s_i + 2s_j$}(L2); \draw [loosely dashed] (V) to node [swap]{$a(v) + 2s_k$} (R1); \draw (R1) to node [swap]{$a(v)+ s_k + 2s_i$} (R2); \draw(0,-.5) node {$v$}; \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[auto] \tikzstyle{vertex}=[circle,draw = black,fill=black,minimum size=1pt,inner sep=1pt] \node[vertex] (V) at (0,0) {}; \node[vertex] (L1) at (-2,2) {}; \node[vertex] (L2) at (-2,4) {}; \node[vertex] (R1) at (2,2) {}; \node[vertex] (R2) at (2,4) {}; \draw [loosely dashed] (V) to node {$a(v) + 2s_i$} (L1); \draw (L1) to node {$a(v)+ s_i + 2s_j$}(L2); \draw [loosely dashed] (V) to node [swap]{$a(v) + 2s_j$} (R1); \draw (R1) to node [swap]{$a(v)+ s_j + 2s_k$} (R2); \draw(0,-.5) node {$v$}; \end{tikzpicture} \end{center} In the first coloring, $s_i + 2s_j \neq s_k + 2s_j$ holds due to $i$ and $k$ being distinct. In the second and third colorings, $s_i + 2s_j \neq s_k + 2s_i$ and $s_i + 2s_j \neq s_j + 2s_k$ hold due to our set $S$ being free of three term arithmetic progressions. \qed \section{Concluding Remark} Our results imply a tight connection between $C_k$-rainbow colorings in the cube and constructions of large generalized Sidon sets. When $k \equiv 0$ (mod 4) Construction 1 gives the correct order of magnitude, however for $k \equiv 2$ (mod 4) the same method does not work. In this case an approach similar to Construction 2 would work provided we can construct large sets that do not contains solutions to certain equations. \begin{conj} \label{conj} Fix $4 \le k \equiv 2$ {\rm (mod 4)}. Then $f(n,k)=n^{\lfloor k/4 \rfloor + o(1)}$. \end{conj} For the first open case $k=10$, we can show that $f(n,10)=n^{2+o(1)}$ provided one can construct a set $S \subset [N]$ with $|S| > N^{1/2-o(1)}$ that contains no nontrivial solution to any of the following equations: $$x_1+ x_2= x_3+x_4$$ $$x_1+x_2+x_3= x_4+2x_5$$ $$x_1 + 2x_2= x_3+2x_4.$$ Ruzsa~\cite{R1, R2} defined the genus $g(E)$ of an equation $$ E: \quad a_1x_1 + \cdots + a_kx_k = 0$$ as the largest $m$ such that there is a partition $S_1\cup \ldots \cup S_m$ of $[k]$ where the $S_i$ are disjoint, non-empty and for all $j$, \begin{equation} \label{re} \sum\limits_{i \in S_j} a_i = 0. \end{equation} A solution $(x_1, \ldots, x_k)$ of $E$ is trivial if there are $l$ distinct numbers among $\{x_1, \ldots, x_k\}$ and (\ref{re}) holds for a partition $S_1 \cup \ldots \cup S_l$ of $[k]$ into disjoint, non-empty parts such that $x_i = x_j$ if and only if $i,j \in S_v$ for some $v$. Ruzsa showed that if $S \subset [n]$ has no nontrivial solutions to $E$ then $|S|\le O(n^{1/g(E)})$. 
The question of whether there exists $S$ with $|S|=n^{1/g(E)-o(1)}$ remains open for most equations $E$. The set of equations above has genus two so it is plausible that one can prove Conjecture \ref{conj} for $k=10$ using this approach. For the general case, we can provide a rainbow coloring if our set $S$ contains no nontrivial solutions to any of the three equations below with $m = \lfloor k/4 \rfloor$. \begin{center} $x_1 + \cdots + x_m = x_{m+1} + \cdots + x_{2m}$ \\ $x_1 + \cdots + x_m + x_{m+1} = x_{m+2} + \cdots + x_{2m} + 2x_{2m+1}$ \\ $x_1 + \cdots + x_{m-1} + 2x_m = x_{m+1} + \cdots + x_{2m-1} + 2x_{2m}$. \end{center} The set of equations above has genus $m = \lfloor k/4 \rfloor$, so if Ruzsa's question has a positive answer, then we would be able to construct a set of the desired size.
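\medskip As a quick verification of the genus claims above (our own check), write each of the three equations in the form $\sum_i a_i x_i = 0$. For the first equation the partition $S_j = \{j, m+j\}$, $j = 1,\ldots,m$, for the second the partition $S_j = \{j, m+1+j\}$, $j = 1,\ldots,m-1$, together with $S_m = \{m, m+1, 2m+1\}$, and for the third the partition $S_j = \{j, m+j\}$, $j = 1,\ldots,m-1$, together with $S_m = \{m, 2m\}$, each consist of $m$ disjoint, non-empty parts satisfying (\ref{re}). Hence $g(E) \ge m$ for all three equations, consistent with the genus stated above.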
\section{Introduction} In a recent paper \cite{Louis:2009xd} we discussed spontaneous ${\cal N}=2 \to {\cal N}=1$ supersymmetry breaking in four-dimensional supergravity and type II string compactifications using the embedding tensor formalism \cite{deWit:2002vt,deWit:2005ub}. We confirmed that the simultaneous appearance of electric and magnetic charges is necessary to circumvent the old no-go theorem forbidding partial ${\cal N}=2 \to {\cal N}=1$ supersymmetry breaking in theories with only electric charges \cite{Cecotti:1984rk,Cecotti:1984wn}, analogous to the case of rigid supersymmetry \cite{Antoniadis:1995vb}. This fact is particularly transparent in the embedding tensor formalism which treats electric gauge bosons and their magnetic duals on the same footing. Specific examples of supergravity theories which display partial supersymmetry breaking have been presented in \cite{Cecotti:1985sf,Ferrara:1995gu,Ferrara:1995xi,Fre:1996js}, generalising the mechanism of adding a magnetic Fayet-Illiopoulos term to a rigid supersymmetric theory \cite{Antoniadis:1995vb}.\footnote{For an analogous discussion in string theory see, for example, \cite{Kiritsis:1997ca}.} In \cite{Louis:2009xd} we adopted a more general approach, in that we analysed arbitrary ${\cal N}=2$ gauged supergravities and showed that the conditions for partial supersymmetry breaking in a maximally symmetric background primarily determine the structure of the embedding tensor, i.e.\ the spectrum of electric and magnetic charges, but do not constrain the scalar field space ${{\bf M}}_{\rm v}$ of the vector multiplets. In the hypermultiplet sector on the other hand, the scalar field space ${{\bf M}}_{\rm h}$ has to admit at least two linearly independent, commuting isometries. It is necessary to gauge these isometries in order to induce masses for the two Abelian gauge bosons which join the heavy gravitino in a massive ${\cal N}=1$ gravitino multiplet. Partial supersymmetry breaking further demands that a specific linear combination of the two Killing vectors generating the isometries is holomorphic with respect to one of the three almost complex structures which exist on ${{\bf M}}_{\rm h}$. In \cite{Louis:2009xd} we explicitly identified two such Killing vectors for the specific class of special quaternionic-K\"ahler manifolds \cite{Cecotti:1988qn}. These manifolds are in the image of the c-map and so arise at tree-level in type II compactifications on Calabi-Yau or generalised manifolds with $SU(3)\times SU(3)$ structure \cite{Cecotti:1988qn,Gurrieri:2002wz,D'Auria:2004tr,Grana:2005ny,Grana:2006hr,Cassani:2007pq,Cassani:2008rb,Grana:2009im}. However, in this paper we shall keep the discussion more general and discuss partial supersymmetry breaking in generic ${\cal N}=2$ supergravities. The special quaternionic-K\"ahler manifolds then serve as a convenient explicit example. The aim of the present paper is to continue the analysis of \cite{Louis:2009xd} and derive the ${\cal N}=1$ low-energy effective action that is valid below the scale of partial supersymmetry breaking $m_{3/2}$ or, in other words, below the scale set by the heavy gravitino. In order to achieve this we integrate out the entire massive ${\cal N}=1$ gravitino multiplet (containing fields with spin $s=(3/2,1,1,1/2)$) together with all other multiplets which, due to the symmetry breaking, acquire masses of ${\cal O}(m_{3/2})$. 
This results in an effective ${\cal N}=1$ theory whose couplings are determined by the couplings of the `parent' ${\cal N}=2$ theory.\footnote{Preliminary aspects of this programme were presented in \cite{Louis:2002vy,Gunara:2003td}.} An interesting aspect of the effective theory is the structure of the scalar field space~${\bf M}$. In ${\cal N}=2$ supergravities~${\bf M}$ is a direct product of the form \cite{Bagger:1983tt,deWit:1984pk,deWit:1984px,Andrianopoli:1996cm,Craps:1997gp} \begin{equation} \label{N=2product} {\bf M} \ =\ {{\bf M}}_{\rm h} \times {{\bf M}}_{\rm v}\ , \end{equation} where ${{\bf M}}_{\rm h}$ is the $4 n_{\rm h}$-dimensional quaternionic-K\"ahler manifold spanned by the scalars of $n_{\rm h}$ hypermultiplets, while ${{\bf M}}_{\rm v}$ is a $2 n_{\rm v}$-dimensional special-K\"ahler manifold spanned by the scalars of $n_{\rm v}$ vector multiplets. Note that ${{\bf M}}_{\rm v}$ is a K\"ahler manifold but ${{\bf M}}_{\rm h}$ is not. We shall see that the process of integrating out the two heavy gauge bosons corresponds to taking the quotient of ${{\bf M}}_{\rm h}$ with respect to the two isometries generating the partial supersymmetry breaking. This leaves a $(4 n_{\rm h}-2)$-dimensional manifold $\hat{\M}_{\rm h}$ where the two `missing' scalar fields are the Goldstone bosons eaten by the heavy gauge bosons. We shall show that $\hat{\M}_{\rm h}$ is equipped with a K\"ahler metric consistent with the ${\cal N}=1$ supersymmetry of the low-energy effective theory.\footnote{A more detailed analysis of the mathematical properties of this construction will be presented in a companion paper \cite{Cortes}.} It is also possible that, apart from the two gauge bosons, other scalar fields (from both vector- and hypermultiplets) acquire a mass of ${\cal O}(m_{3/2})$ and thus have to be integrated out, leading to a further reduction of the scalar field space. However, as such scalars are not Goldstone bosons this process simply amounts to projecting to a K\"ahler submanifold of $\hat{\M}_{\rm h}\times {{\bf M}}_{\rm v}$, rather than taking a quotient. The resulting ${\cal N}=1$ scalar field space is then given by \begin{equation} \label{N=1product} {\bf M}^{{\cal N}=1} = {\hat{\M}}_{\rm h} \times {\hat{\M}}_{\rm v}\ , \end{equation} where ${\hat{\M}}_{\rm v}$ is a submanifold of ${{\bf M}}_{\rm v}$. (For notational simplicity we did not introduce a new symbol for the submanifold of $\hat{\M}_{\rm h}$.) The dimension of ${\bf M}^{{\cal N}=1}$ is model dependent. It can be as large as $2n_{\rm v} + 4 n_{\rm h}-2$ when the only scalars integrated out are the two Goldstone bosons providing the mass degrees of freedom for the heavy gauge bosons. However, the dimension of ${\bf M}^{{\cal N}=1}$ is generically much smaller as most of the scalars are stabilised at $m_{3/2}$. Furthermore, we shall see that the role of the Goldstone bosons is the crucial difference between the ${\cal N}=1$ effective action arising from a spontaneously broken ${\cal N}=2$ theory and that obtained by an ${\cal N}=1$ truncation of the same ${\cal N}=2$ theory \cite{Andrianopoli:2001zh,Andrianopoli:2001gm,Andrianopoli:2002rm,Andrianopoli:2002vq,D'Auria:2005yg} (see \cite{Grimm:2004uq,Grimm:2004ua,Benmachiche:2006df,Koerber:2007xk,Martucci:2009sf} for type II orientifold compactification examples). The field space of the latter always contains a submanifold of the $4 n_{\rm h}$-dimensional manifold ${\bf M}_{\rm h}$ of maximal dimension $2n_{\rm h}$, rather than a quotient of maximal dimension $4 n_{\rm h}-2$. 
It is possible that the original ${\cal N}=2$ supergravity is also gauged with respect to Killing vectors which do not participate in the partial supersymmetry breaking and which induce a separate mass scale ${\tilde m}$. For ${\tilde m} > m_{3/2}$ all heavy multiplets with masses of ${\cal O}({\tilde m})$ should also be integrated out and thus are not visible in the ${\cal N}=1$ low-energy effective action. If ${\tilde m} < m_{3/2}$, on the other hand, then the associated light multiplets are kept in the action and do contribute to the superpotential ${\cal W}$ and possibly also to the D-terms ${\cal D}^{\hat I} $. Due to their ${\cal N}=2$ origin, we will see that both ${\cal W}$ and ${\cal D}^{\hat I} $ take a special form. The remainder of this paper is organised as follows. In Section~\ref{review} we briefly summarise the results of \cite{Louis:2009xd} in order to set the stage for our analysis. However, here we shall use a more geometric formulation of the hyperino supersymmetry conditions compared to \cite{Louis:2009xd}, stating them as a holomorphicity condition on the Killing vectors. In Section~\ref{section:None} we then derive the ${\cal N}=1$ low-energy effective action. We begin with the target space metric of the scalar fields in Section~\ref{section:NoneK}, show that it is K\"ahler and determine its K\"ahler potential $K^{{\cal N}=1}$. In Section~\ref{section:Nonef} we compute the ${\cal N}=1$ gauge kinetic function $f$ and check its holomorphicity with respect to the ${\cal N}=1$ complex structure. Similarly, in Section~\ref{section:NoneW} we derive the superpotential ${\cal W}$ and show its holomorphicity. In Section~\ref{section:NoneD} we determine the $D$-terms and in Section~\ref{section:SQC} we give the ${\cal N}=1$ K\"ahler potential, the superpotential and the $D$-terms for the class of special quaternionic-K\"ahler manifolds. We conclude in Section~\ref{Conc}. In Appendix~\ref{section:massive_multi} we compute the normalised masses of the two heavy gauge bosons and show their consistency with the ${\cal N}=1$ mass relations. In Appendix~\ref{section:holcoords} we show that the coordinates on the K\"ahler space introduced in Section~\ref{section:Min_special} are holomorphic. \section{Partially broken ${\cal N}=2$ supergravities} \label{review} \subsection{Gauged ${\cal N}=2$ supergravities } \label{section:N=2} We shall first briefly recall the spectrum and couplings of four-dimensional ${\cal N}=2$ supergravity (for a review see e.g.\ \cite{Andrianopoli:1996cm}). The theory consists of a gravitational multiplet, $n_{\rm v}$ vector multiplets and $n_{\rm h}$ hypermultiplets. The gravitational multiplet $(g_{\mu\nu},\Psi_{\mu {\cal A}}, A_\mu^0)$ contains the spacetime metric $ g_{\mu\nu}, \mu,\nu =0,\ldots,3$, two gravitini $\Psi_{\mu {\cal A}}, {\cal A}=1,2$, and the graviphoton $A_\mu^0$. A vector multiplet $(A_\mu,\lambda^{\cal A}, t)$ contains a vector $A_\mu$, two gaugini $\lambda^{\cal A}$ and a complex scalar $t$. Finally, a hypermultiplet $(\zeta_{\alpha}, q^u)$ contains two hyperini $\zeta_{\alpha}$ and 4 real scalars $q^u$. For $n_{\rm v}$ vector- and $n_{\rm h}$ hypermultiplets there are a total of $2n_{\rm v} +4n_{\rm h}$ real scalar fields and $2(n_{\rm v}+n_{\rm h})$ spin-$\tfrac12$ fermions in the spectrum. 
For an ungauged theory the bosonic matter Lagrangian is given by \begin{equation}\begin{aligned}\label{sigmaint} {\cal L}\ =\ - \mathrm{i} \mathcal{N}_{IJ}\,F^{I +}_{\mu\nu}F^{\mu\nu\, J+} + \mathrm{i} \overline{\mathcal{N}}_{IJ}\, F^{I-}_{\mu\nu} F^{\mu\nu\, J-} + g_{i\bar \jmath}(t,\bar t)\, \partial_\mu t^i \partial^\mu\bar t^{\bar \jmath} + h_{uv}(q)\, \partial_\mu q^u \partial^\mu q^v \ , \end{aligned}\end{equation} where $h_{uv},\, u,v=1,\ldots,4n_{\rm h},$ is the metric on the $4n_{\rm h}$-dimensional space ${{\bf M}}_{\rm h}$, which ${\cal N}=2$ supersymmetry constrains to be a quaternionic-K\"ahler manifold \cite{Bagger:1983tt,deWit:1984px}. Such manifolds have a holonomy group given by $Sp(1)\times Sp(n_{\rm h})$. In addition, they admit a triplet of complex structures $J^x, x=1,2,3$, which satisfy the quaternionic algebra \begin{equation}\label{jrel} J^x J^y = -\delta^{xy}{\bf 1} + \epsilon^{xyz} J^z ~. \end{equation} The metric $h_{uv}$ is Hermitian with respect to all three complex structures. Correspondingly, a quaternionic-K\"ahler manifold admits a triplet of hyper-K\"ahler two-forms given by $K^x_{uv} = h_{uw} (J^x)^w_v$ that are only covariantly closed with respect to the $Sp(1)$ connection $\omega^x$, i.e. \begin{equation}\label{deriv_Sp(1)_curvature} \nabla K^x \equiv dK^x + \epsilon^{xyz} \omega^y \wedge K^z=0 \ . \end{equation} In other words, $K^x$ is proportional to the $Sp(1)$ field strength of $\omega^x$, thus leading to \begin{equation} \label{def_Sp(1)_curvature} K^x =\diff \omega^x + \tfrac12 \epsilon^{xyz} \omega^y\wedge \omega^z\ . \end{equation} The metric $g_{i\bar \jmath},\, i,\bar\jmath = 1,\ldots,n_{\rm v}$, is defined on the $2n_{\rm v}$-dimensional space ${{\bf M}}_{\rm v}$, which ${\cal N}=2$ supersymmetry constrains to be a special-K\"ahler manifold \cite{deWit:1984pk,Craps:1997gp}. This implies that the metric obeys \begin{equation}\label{gdef} g_{i\bar \jmath} = \partial_i \partial_{\bar \jmath} K^{\rm v}\ , \qquad \textrm{for}\qquad K^{\rm v}= -\ln \iu \left( \bar X^I {\cal F}_I - X^I\bar {\cal F}_I \right)\ . \end{equation} Both $X^I(t)$ and ${\cal F}_I(t)$, $I= 0,1,\ldots,n_{\rm v}$, are holomorphic functions of the scalars $t^i$ and in the ungauged case one can always choose ${\cal F}_I = \partial{\cal F}/\partial{X^I}$, i.e.\ ${\cal F}_I$ is the derivative of a holomorphic prepotential ${\cal F}(X)$ which is homogeneous of degree two. Furthermore, it is possible to go to a system of `special coordinates' where $X^I= (1,t^i)$ (See e.g. \cite{Craps:1997gp} for further details). The $F^{I \pm}_{\mu\nu}$ that appear in the Lagrangian \eqref{sigmaint} are the self-dual and anti-self-dual parts of the usual field strengths. They include the field strengths of the gauge bosons of the vector multiplets and the graviphoton. Their kinetic matrix \mathcal{N}_{IJ}$ is a function of the $t^i $ given by \begin{equation} \label{Ndef} {\cal N}_{IJ} = \bar {\cal F}_{IJ} +2\iu\ \frac{\mbox{Im} {\cal F}_{IK}\mbox{Im} {\cal F}_{JL} X^K X^L}{\mbox{Im} {\cal F}_{LK} X^K X^L} \ , \end{equation} where ${\cal F}_{IJ}=\partial_I {\cal F}_J$. As we shall discuss in Sections \ref{section:Nonef} and \ref{section:NoneD}, the second term in \eqref{Ndef} is due to the inclusion of the graviphoton in $F^{I \pm}_{\mu\nu}$. In the ungauged case the equations of motion derived from $\cal L$ are invariant under $Sp(n_{\rm v}+1)$ electric-magnetic duality rotations which act on the $(2n_{\rm v}+2)$-dimensional symplectic vectors $(F^I, G_I)$ and $(X^I, {\cal F}_I)$. 
The $G_I$ are dual magnetic field strengths that only appear on-shell, in that they are not part of the Lagrangian \eqref{sigmaint} and are defined by \begin{equation}\label{Gdualdef} G_{I}^{\mu\nu\pm} = \pm \frac{\mathrm{i}}{2}\frac{\partial {\cal L}}{\partial F^{I\pm}_{\mu\nu}}~, \end{equation} from which we find (suppressing the spacetime indices) \begin{equation}\label{Gdual} G_I^+ = \mathcal{N}_{IJ}\,F^{J +}\ , \qquad G_I^- = \overline{\mathcal{N}}_{IJ}\,F^{J -}\ . \qquad \end{equation} The symplectic invariance is broken in the presence of charged scalars, i.e.\ in gauged supergravities, and the resulting theory crucially depends on which charges (electric or magnetic) the fermions and scalars carry. In fact, one of the necessary conditions for partial supersymmetry breaking is the appearance of magnetically charged fields \cite{Antoniadis:1995vb,Ferrara:1995gu,Louis:2009xd}. Therefore, the formalism of the embedding tensor introduced in \cite{deWit:2002vt,deWit:2005ub} is ideally suited to discuss the problem of partial supersymmetry breaking, as it treats the electric vectors $A_\mu^{~~I}$ and their magnetic duals $B_{\mu I}$ on the same footing and naturally allows for arbitrary gaugings. As we shall review in the next section, partial supersymmetry breaking needs at least two commuting isometries in the hypermultiplet sector while it is sufficient for the vector multiplets to be Abelian \cite{Ferrara:1995gu,Louis:2009xd}. Therefore, we focus on this situation and introduce covariant derivatives of the following form into the Lagrangian \eqref{sigmaint}: \begin{equation}\label{d2} \partial_\mu q^u\to D_{\mu} q^u = \partial_{\mu} q^u - A^{~I}_{\mu}\, \Theta_I^{~\lambda}\, {k}_{\lambda}^u + B_{\mu I}\, \Theta^{I{\lambda}}\, {k}_{\lambda}^u \ , \end{equation} where $\Theta$ is the embedding tensor and ${k}_\lambda(q)$ are the Killing vectors on ${\bf M}_{\rm h}$. Mutual locality of electric and magnetic charges additionally imposes $\Theta^{I[{\lambda}} \Theta_{I}^{\phantom{I}\kappa]} = 0$. Inserting the replacement \eqref{d2} into the Lagrangian \eqref{sigmaint} introduces both electric and magnetic vector fields. This upsets the counting of degrees of freedom and leads to unwanted equations of motion. Therefore, the Lagrangian has to be carefully augmented by a set of two-form gauge potentials $B_{\mu\nu}^M$ with couplings that keep supersymmetry and gauge invariance intact. As we do not need these couplings in this paper, we refer the interested reader to the literature for further details~\cite{deWit:2002vt,deWit:2005ub,deVroome:2007zd}. An analysis of the symplectic extension of the gauged ${\cal N}=2$ supergravity Lagrangian in $D=4$ to include electric and magnetic charges has been carried out in \cite{Dall'Agata:2003yr,Sommovigo:2004vj,D'Auria:2004yi}. We are specifically interested in the scalar part of supersymmetry variations, i.e.\ \begin{equation}\label{susytrans2}\begin{aligned} \delta_\epsilon \Psi_{\mu {\cal A}} ~=& ~ D_\mu \epsilon_{\cal A} - S_{\cal AB} \gamma_\mu \epsilon^{\cal B} + \ldots \, ,\nonumber\\ \delta_\epsilon \lambda^{i {\cal A}} ~=& ~ W^{i{\cal AB}}\epsilon_{\cal B}+\ldots \, ,\\ \delta_\epsilon \zeta_{\alpha} ~=& ~ N_\alpha^{\cal A} \epsilon_{\cal A}+\ldots \, ,\nonumber \end{aligned} \end{equation} where the ellipses indicate further terms that vanish in a maximally symmetric ground state. 
The $\gamma_\mu$ are Dirac matrices and $\epsilon^{\cal A}$ is the $SU(2)$ doublet of spinors parametrising the ${\cal N}=2$ supersymmetry transformations.\footnote{Note that the $SU(2)$ R-symmetry acts as the $Sp(1)$ introduced above on the quaternionic-K\"ahler manifold.} $S_{\cal AB}$ is the mass matrix of the two gravitini, while $W^{i {\cal AB}}$ and $N_\alpha^{\cal A}$ are related to the mass matrices of the spin-$\tfrac12$ fermions. The symplectic extensions of these expressions in the embedding tensor formalism are given by \begin{eqnarray}\label{susytrans3}\begin{aligned} S_{\cal AB} ~=&~ \tfrac{1}{2} \e^{K^{\rm v}/2} {V}^\Lambda \Theta_\Lambda^{~\lambda} P_{\lambda}^x (\sigma^x)_{\cal AB} \ ,\nonumber\\ W^{i{\cal AB}} ~=&~ \mathrm{i} \e^{K^{\rm v}/2} g^{i\bar \jmath}\, (\nabla_{\bar \jmath}\bar {V}^\Lambda) \Theta_\Lambda^{~\lambda} P_{\lambda}^x (\sigma^x)^{\cal AB} \ ,\\ N_\alpha^{\cal A} ~=&~ 2 \e^{K^{\rm v}/2} \bar {V}^\Lambda \Theta_\Lambda^{~\lambda} U^{\cal A}_{\alpha u} {k}^u_{\lambda} \ ,\nonumber \end{aligned} \end{eqnarray} where the matrices $(\sigma^x)_{\cal AB}$ and $(\sigma^x)^{\cal AB}$ are found by applying the $SU(2)$ metric $\varepsilon_{\cal{AB}}$ (and its inverse) to the standard Pauli matrices $(\sigma^x)_{\cal A}^{~~\cal B}$, $x=1,2,3$. From \eqref{d2} we see that the embedding tensor $\Theta_\Lambda^{~~\lambda}$ has electric and magnetic components, which we combined in \eqref{susytrans3} as $\Theta_\Lambda^{~~\lambda} = (\Theta_I^{~~\lambda},-\Theta^{I\lambda})$. Similarly, ${V}^\Lambda$ is the holomorphic symplectic vector defined by ${V}^\Lambda = (X^I,{\cal F}_I)$ and its K\"ahler covariant derivative reads $\nabla_i V^\Lambda = \partial_i V^\Lambda +K^{\rm v}_i V^\Lambda$, with $ K^{\rm v}_i = \partial_i K^{\rm v}$. ${\mathcal U}^{\mathcal A\alpha}_u $ is the vielbein on the quaternionic-K\"ahler manifold ${{\bf M}}_{\rm h}$ and is related to the metric $h_{uv}$ via \begin{equation}\label{Udef} h_{uv} = {\mathcal U}^{\mathcal A\alpha}_u \varepsilon_\mathcal{AB} \mathcal C_{\alpha \beta} \mathcal U^{\mathcal B\beta}_v \ , \end{equation} where $\mathcal C_{\alpha \beta}$ is the $Sp(n_{\rm h})$ invariant metric. Finally, $P^x_\lambda$ is a triplet of Killing prepotentials defined by \begin{equation}\label{Pdef} - 2 {k}^u_\lambda\,K_{uv}^x = \nabla_v P_\lambda^x = \partial_v P_\lambda^x + \epsilon^{xyz} \omega^y_v P_\lambda^z \ , \end{equation} where $k^u_\lambda$ are the isometries on the quaternionic-K\"ahler manifold and $K_{uv}^x$ is the triplet of hyper-K\"ahler two-forms. \subsection{Partial supersymmetry breaking}\label{section:vectors} Spontaneous ${\cal N}=2\to {\cal N}=1$ supersymmetry breaking in a Minkowski or anti-de Sitter (AdS) ground state requires that for one linear combination of the two spinors $\epsilon^{\cal A}$ parametrising the supersymmetry transformations, say $\epsilon^{\cal A}_1$, the variations of the fermions given in \eqref{susytrans2} vanish, i.e.\ $\delta_{\epsilon_1} \lambda^{i {\cal A}} = \delta_{\epsilon_1} \zeta_\alpha = \delta_{\epsilon_1} \Psi_{\mu {\cal A}} =0$. Using the fact that in a supersymmetric Minkowski or AdS background the supersymmetry parameter obeys the Killing spinor equation\footnote{Note that the index of $\epsilon^*_{1\,\cal A}$ is not lowered with $\varepsilon_{\cal AB}$ but $\epsilon^*_{1\,\cal A}$ is related to $\epsilon^A_1$ just by complex conjugation. 
$|\mu|$ is related to the cosmological constant via $\Lambda = - 3 |\mu|^2$, while the phase of $\mu$ is unphysical.} \begin{equation} D_\nu \epsilon_{1\,\cal A} = \tfrac12 \mu \gamma_\nu \epsilon^*_{1\,\cal A} \ , \end{equation} the supersymmetry variations \eqref{susytrans2} yield \begin{equation} \label{N=1conditions} W_{i\cal AB}\, \epsilon^{\cal B}_1 ~= 0~ =~ N_{\alpha \cal A}\, \epsilon^{\cal A}_1 \qquad \textrm{and}\qquad S_\mathcal{AB}\, \epsilon^{\cal B}_1~ =~ \tfrac12 \mu \epsilon^*_{1\,\cal A} \ . \end{equation} The second, broken generator, denoted by $\epsilon^{\cal A}_2$, should obey \begin{equation} \label{N=1conditions2} W_{i\cal AB}\, \epsilon^{\cal B}_2 \neq 0\qquad \textrm{or}\qquad N_{\alpha \cal A}\, \epsilon^{\cal A}_2\neq 0\qquad \textrm{and}\qquad S_\mathcal{AB}\, \epsilon^{\cal B}_2~ \ne~ \tfrac12 \mu' \epsilon^*_{2\,\cal A} \ , \end{equation} for any $\mu'$ that obeys $|\mu'|=|\mu|$, i.e.\ $\mu'$ only differs from $\mu$ by an unphysical phase. A necessary condition for the existence of an ${\cal N}=1$ ground state is that the two eigenvalues $m_{\Psi_1}$ and $m_{\Psi_2}$ of the gravitino mass matrix $S_{\cal AB}$ are non-degenerate, i.e.\ $m_{\Psi_1}\neq m_{\Psi_2}$. One of the two gravitini has to remain massless, i.e.\ $m_{\Psi_1}=0$ in a Minkowski ground state, while the second one becomes massive. The unbroken ${\cal N}=1$ supersymmetry also implies that the massive gravitino has to be a member of an entire ${\cal N}=1$ massive spin-$3/2$ multiplet, which has the spin content $s=(3/2,1,1,1/2)$. This means that two vectors, say $A_\mu^1, A_\mu^2$, and a spin-$1/2$ fermion $\chi$ have to become massive, in addition to the gravitino.\footnote{In Appendix \ref{section:massive_multi} we explicitly check that the correct ${\cal N}=1$ mass relations are obeyed using the results of Section \ref{section:None}.} Therefore, the would-be Goldstone fermion (the Goldstino), which gets eaten by the gravitino, is accompanied by two would-be Goldstone bosons (the sGoldstinos) that are eaten by the vectors \cite{Ferrara:1983gn}. In the resulting Lagrangian, only ${\cal N}=1$ supersymmetry is linearly realized while the second, spontaneously broken supersymmetry generator acts non-linearly on the fields. When the massive fields are integrated out, the latter symmetry is explicitly broken and we end up with an ${\cal N}=1$ effective action. The sGoldstinos necessarily arise from the hypermultiplets, which means that ${{\bf M}}_{\rm h}$ has to admit at least two commuting isometries, say $k_1$ and $k_2$, and that these isometries have to be gauged \cite{Ferrara:1995gu,Fre:1996js}. The corresponding Goldstone bosons are then charged and generate the masses for the two heavy gauge bosons via the Higgs mechanism. If ${{\bf M}}_{\rm h}$ has further Killing vectors $k_\lambda, \lambda\neq1,2$, which are gauged, then additional charged and possibly massive scalars arise. In fact, in \cite{Louis:2009xd} we showed that only two Killing vectors can participate in the partial supersymmetry breaking. The other, orthogonal Killing vectors either preserve the full ${\cal N}=2$ supersymmetry, as analysed in \cite{Hristov:2009uj}, or break it completely. In the latter case we need to assume that this breaking occurs at a scale far below $m_{3/2}$ and therefore can be neglected in the following discussion. However, we shall return to this issue in Sections \ref{section:NoneW} and \ref{section:NoneD}, where we compute the ${\cal N}=1$ effective potential generated by such additional Killing vectors. 
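As an aside, the massive spin-$3/2$ multiplet quoted above can be checked by simply counting on-shell degrees of freedom in four dimensions:
\begin{equation}
\underbrace{4}_{\textrm{massive gravitino}} + \underbrace{2}_{\chi}\ =\ 6_{\rm F}\ , \qquad \underbrace{3+3}_{A^1_\mu,\,A^2_\mu}\ =\ 6_{\rm B}\ ,
\end{equation}
so that the bosonic and fermionic degrees of freedom indeed balance.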
The definition \eqref{Pdef} implies that the two non-trivial Killing vectors have non-zero Killing prepotentials $P_1^x, P_2^x$ in the ${\cal N}=1$ background. For an ${\cal N}=1$ solution these prepotentials must not be proportional to each other, as this would allow us to take linear combinations of $k_1$ and $k_2$ such that one combination has vanishing prepotentials, leaving effectively only a single gauged isometry relevant for the breaking. However, we can use the local $SU(2)$ invariance of the hypermultiplet sector to rotate into a convenient $SU(2)$-frame where $P_{1,2}^x$ both lie entirely in the $(x=1,2)$-plane. Thus, without loss of generality we can arrange \begin{equation}\label{P3} P^3_1 ~ =~ P^3_2 ~ =~ 0~ =~ \partial_u P^3_1 ~ = ~ \partial_u P^3_2\ . \end{equation} From \eqref{susytrans3} we learn that in such a frame both $S_{\cal AB}$ and $W^{i{\cal AB}}$ are diagonal in $SU(2)$ space and hence one can further choose the parameter of the unbroken ${\cal N}=1$ generator to be $\epsilon_1 = {\epsilon\choose 0}$ or $\epsilon_1 = {0\choose\epsilon}$. This corresponds to the choice of $\Psi_{\mu\, 1}$ or $\Psi_{\mu\, 2}$ as the massless ${\cal N}=1$ gravitino.\footnote{Note that all our expressions can also be written in an $SU(2)$-covariant way by replacing the ``$3$''-direction with $\epsilon_1^{\cal A} \sigma^x_{\cal AB} \epsilon_2^{\cal B}$ and the direction spanned by $(P^1-\iu P^2)$ with $\epsilon_1^{\cal A} \sigma^x_{\cal AB} \epsilon_1^{\cal B}$. So, for instance, \eqref{P3} then reads $\epsilon_1^{\cal A} \sigma^x_{\cal AB} \epsilon_2^{\cal B} P^x_{1,2} = \epsilon_1^{\cal A} \sigma^x_{\cal AB} \epsilon_2^{\cal B} \diff P^x_{1,2}=0$.} After these preliminaries, let us now review the conditions for partial supersymmetry breaking which we derived in \cite{Louis:2009xd}. \subsubsection{Gravitino and gaugino equations} For $\epsilon_1 = {\epsilon\choose 0}$ the ${\cal N}=1$ solution of the gravitino and gaugino variations in a Minkowski vacuum was found to be \cite{Louis:2009xd} \begin{equation}\label{solution_embedding_tensor} \begin{aligned} \Theta_I^{\phantom{I}1} = -\Im\left(P^+_2\, {\cal F}_{IJ}\,\C^J \right) \ ,& \qquad \Theta^{I1} = -\Im \left(P^+_2\,\C^I\right) \ , \\ \Theta_I^{\phantom{I}2} = \quad \Im\left(P^+_1\, {\cal F}_{IJ}\,\C^J \right) \ ,& \qquad \Theta^{I2} = \quad \Im \left(P^+_1\, \C^I\right) \ , \end{aligned} \end{equation} parametrised in terms of a complex vector $\C^I$. The mutual locality constraint then demands \begin{equation} \label{constraint_embedding_tensor} \bar{\C}^{I} (\Im {\cal F})_{IJ}\, \C^{J} = 0 \ , \end{equation} and we have defined \begin{equation}\label{Pc} P_{1,2}^\pm = P_{1,2}^1 \pm \iu P_{1,2}^2 \ . \end{equation} Note that the ${\cal N}=1$ solution \eqref{solution_embedding_tensor} determines the embedding tensor in terms of $\C^I$ but does not constrain the special-K\"ahler manifold ${{\bf M}}_{\rm v}$. 
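As a quick consistency check (not needed in what follows, but straightforward to verify), inserting \eqref{solution_embedding_tensor} into the mutual locality condition and using $\Im z\,\Im w=\tfrac12\big[\Re(\bar z w)-\Re(z w)\big]$ gives
\begin{equation}
\Theta^{I1}\Theta_I^{\phantom{I}2}-\Theta^{I2}\Theta_I^{\phantom{I}1} \ =\ -\,\Im\big(\bar P^+_1 P^+_2\big)\, \bar{\C}^{I} (\Im {\cal F})_{IJ}\, \C^{J}\ .
\end{equation}
Since partial breaking requires $P^x_1$ and $P^x_2$ not to be proportional, i.e.\ $\Im(\bar P^+_1 P^+_2)=P^1_1 P^2_2-P^2_1 P^1_2\neq 0$, mutual locality is indeed equivalent to \eqref{constraint_embedding_tensor}.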
For $\epsilon_1 = {\epsilon\choose 0}$ the ${\cal N}=1$ solution of the gravitino and gaugino variations in an AdS vacuum was found to be \cite{Louis:2009xd} \begin{equation}\label{solution_embedding_tensor_AdS} \begin{aligned} \Theta_I^{\phantom{I}1} = & -\Im\left({\cal F}_{IJ}\,(P^+_2\, \C_{\rm AdS}^J + \e^{K^{\rm v}/2} \tfrac{\bar \mu}{P^+_1} X^J) \right) \ , \\ \Theta^{I1} = & -\Im \left(P^+_2\,\C_{\rm AdS}^I + \e^{K^{\rm v}/2} \tfrac{\bar \mu}{P^+_1} X^I\right) \ , \\ \Theta_I^{\phantom{I}2} = & \quad \Im\left({\cal F}_{IJ}\,(P^+_1\, \C_{\rm AdS}^J - \e^{K^{\rm v}/2} \tfrac{\bar \mu}{P^+_2} X^J) \right) \ , \\ \Theta^{I2} = & \quad \Im \left(P^+_1\, \C_{\rm AdS}^I- \e^{K^{\rm v}/2} \tfrac{\bar \mu}{P^+_2} X^I\right) \ , \end{aligned} \end{equation} where again $\C_{\rm AdS}^I$ is a complex vector. The mutual locality constraint \eqref{constraint_embedding_tensor} now reads \begin{equation}\label{constraint_embedding_tensor_AdS} \bar{\C}_{\rm AdS}^{I} (\Im {\cal F})_{IJ} \C_{\rm AdS}^{J} = - \tfrac{|\mu|^2}{2 |P_1|^2 |P_2|^2} \ . \end{equation} \subsubsection{Hyperino equations}\label{section:hyperino} The solution to the hyperino equations is more model dependent. We already stated that the quaternionic-K\"ahler manifold ${{\bf M}}_{\rm h}$ has to admit two commuting isometries with Killing prepotentials $P^x_1$ and $P^x_2$ that are not proportional to each other in the ${\cal N}=1$ locus. In addition, the ${\cal N}=1$ hyperino supersymmetry conditions \begin{equation} \label{Ncond} N_{\alpha \cal A}\, \epsilon_1^{\cal A}~ =~ N_{\alpha 1}~ =~ 0 \end{equation} have to be satisfied. Before we continue, let us rewrite \eqref{Ncond} in a more convenient form. The insertion of \eqref{susytrans3} into \eqref{Ncond} and subsequent complex conjugation implies \begin{equation}\label{kcond} k^u\, {\mathcal U}_{\alpha u}^{2}\ =\ 0\ , \end{equation} where we have defined \begin{equation}\label{knew} k^u = {V}^\Lambda \big(\Theta_\Lambda^{~~1} {k}_{1}^u +\Theta_\Lambda^{~~2} {k}_{2}^u\big)\ . \end{equation} By contracting the decomposition \cite{Andrianopoli:1996cm,D'Auria:2001kv} \begin{equation}\label{Udecomp} {\mathcal U}_{\alpha u}^{\mathcal A}{\mathcal U}_v^{\mathcal B\alpha} = - \tfrac{\iu}{2} K^x_{uv}\sigma^{x {\mathcal A}{\mathcal B}} - \tfrac12 h_{uv} \epsilon^{{\mathcal A}{\mathcal B}}\ , \end{equation} with $k^v$ and using the explicit expressions \begin{equation} (\sigma^1)^{\cal AB} = \left(\begin{array}{cc}-1 &0\\ 0& 1 \end{array}\right)~, \ (\sigma^2)^{\cal AB} = \left(\begin{array}{cc} -\mathrm{i} &0\\ 0& -\mathrm{i} \end{array}\right)~ , \ (\sigma^3)^{\cal AB} = \left(\begin{array}{cc}0 &1\\ 1& 0 \end{array}\right)~, \end{equation} we see that \eqref{kcond} is equivalent to \begin{equation}\label{jhol} k^u \left(J^{1~v}_{~~u} - \iu J^{2~v}_{~~u}\right) = 0 \ , \qquad k^u J^{3~v}_{~~u} = \iu k^{v}\ . \end{equation} The second condition of \eqref{jhol} simply states that $k$ is holomorphic with respect to the complex structure $J^3$. Furthermore, using the relation between the three $J$'s given in \eqref{jrel}, the first equation in \eqref{jhol} follows from the second one. For our subsequent analysis it is convenient to define a new pair of Killing vectors $k^u_{1,2}$ by using the real and imaginary parts of the $k^u$ defined in \eqref{knew}, such that the following holds\footnote{In order to keep the notation simple we shall use the same letter $k$ to denote the original Killing vectors, as well as the redefined ones. 
The same holds for the respective Killing prepotentials $P^x$.} \begin{equation} \label{Jk} J^{3~v}_{~~u} k_1^u = -k_2^v \ , \qquad J^{3~v}_{~~u} k_2^u = k_1^v \ . \end{equation} Note that this is nothing more than a change of basis in the space spanned by the two Killing vectors. The coefficients in this change of basis do not depend on the coordinates of ${\bf M}_{\rm h}$, as the embedding tensor components are constant. As the related Killing prepotentials $P^x_{1,2}$ will also not be proportional to each other, we can equally use the new Killing vectors to construct a partial supersymmetry breaking solution, instead of the original Killing vectors ${k}_{1,2}$ appearing in \eqref{d2}. The conditions \eqref{jhol}, or equivalently \eqref{Jk}, also constrain the Killing prepotentials. Written in terms of the associated K\"ahler forms the first condition of \eqref{jhol} reads \begin{equation}\label{relationkK} k_1^uK^1_{uv}=-k_2^uK^2_{uv} \ , \qquad k_1^uK^2_{uv}=k_2^uK^1_{uv} \ , \end{equation} which, together with the definition of the prepotentials \eqref{Pdef}, implies \begin{equation} P_1^1=-P_2^2 \ , \qquad P_1^2=P_2^1 \ . \end{equation} This in turn simplifies the embedding tensor solutions \eqref{solution_embedding_tensor}, which after a redefinition of $C^I$ read \begin{equation}\label{solution_embedding_tensor2} \begin{aligned} \Theta_I^{\phantom{I}1} = & \Re\big( {\cal F}_{IJ}\,\C^J \big) \ , \qquad \Theta^{I1} = & \Re \C^I \ , \\ \Theta_I^{\phantom{I}2} = & \Im\big( {\cal F}_{IJ}\,\C^J \big) \ , \qquad \Theta^{I2} = & \Im \C^I \ . \end{aligned} \end{equation} Similarly, the AdS solutions \eqref{solution_embedding_tensor_AdS} become \begin{equation}\label{solution_embedding_tensor_AdS2} \begin{aligned} \Theta_I^{\phantom{I}1} = & \Re\big({\cal F}_{IJ}\,( \C_{\rm AdS}^J - \iu \e^{K^{\rm v}/2} \tfrac{\bar \mu}{P^+_1} X^J) \big) \ , \\ \Theta^{I1} = & \Re \big(\C_{\rm AdS}^I - \iu \e^{K^{\rm v}/2} \tfrac{\bar \mu}{P^+_1} X^I\big) \ , \\ \Theta_I^{\phantom{I}2} = & \Im\big({\cal F}_{IJ}\,(\C_{\rm AdS}^J +\iu \e^{K^{\rm v}/2} \tfrac{\bar \mu}{P^+_1} X^J) \big) \ , \\ \Theta^{I2} = & \Im \big(\C_{\rm AdS}^I + \iu \e^{K^{\rm v}/2} \tfrac{\bar \mu}{P^+_1} X^I\big) \ . \end{aligned} \end{equation} The hyperino conditions \eqref{Jk}, or equivalently \eqref{Ncond}, are difficult to solve in general. In \cite{Louis:2009xd} we showed that for special quaternionic-K\"ahler manifolds, i.e.\ quaternionic-K\"ahler manifolds that are in the image of the c-map \cite{Cecotti:1988qn}, \eqref{Ncond} together with all other constraints can be fulfilled.\footnote{Explicit examples of AdS vacua are constructed in \cite{Lust:2004ig,House:2005yc,Micu:2006ey,Tomasiello:2007eq,KashaniPoor:2007tr,Cassani:2009na,Cassani:2009ck,Lust:2009mb}.} In the following, however, we do not restrict our analysis to this class of manifolds but instead only assume that an ${\cal N}=1$ solution exists, i.e.\ we assume that equations ~\eqref{Jk}, \eqref{solution_embedding_tensor2} and \eqref{solution_embedding_tensor_AdS2} are satisfied without specifying a particular explicit solution. Before we continue let us note that the ${\cal N}=1$ solution we just recalled has both $W_{i\cal AB}\, \epsilon^{\cal B}_2 \neq 0$ \emph{and} $N_{\alpha \cal A}\, \epsilon^{\cal A}_2\neq 0$. In \eqref{N=1conditions2} we allowed for the logical possibility that supersymmetry is only broken in the gaugino or hyperino sector. However, this situation cannot occur for partial supersymmetry breaking. 
The two Killing prepotentials $P^x_{1,2}$ have to be non-zero in order to render the two eigenvalues of the gravitino mass matrix $S_{\cal AB}$ non-degenerate. Using \eqref{Pdef} or the equivariance condition $ 2 k_1^u k_2^v K^x_{uv} + \epsilon^{xyz} P^y_1 P^z_2 = 0$ \cite{Andrianopoli:1996cm}, we can further conclude that the two Killing vectors $k^u_{1,2}$ have to be non-zero which, together with \eqref{susytrans3}, implies $N_{\alpha \cal A}\neq 0$. Finally, one can check that for the charges \eqref{solution_embedding_tensor2} and \eqref{solution_embedding_tensor_AdS2}, $W_{i\cal AB}$ is always non-zero. \subsubsection{Massive, light and massless scalars}\label{section:scales} The Minkowski and AdS ground states described above are local ${\cal N}=1$ minima in the ${\cal N}=2$ field space, i.e.\ the ${\cal N}=2$ supersymmetry variations were solved for an ${\cal N}=1$ vacuum which can be a point in each of ${{\bf M}}_{\rm h}$ and ${{\bf M}}_{\rm v}$ or a higher-dimensional vacuum manifold. In the latter case the minimum has exactly flat directions (moduli) along which ${\cal N}=1$ supersymmetry is preserved. In addition, there can be light scalars in the spectrum (i.e.\ with masses ${\tilde m}$ much smaller than $m_{3/2}$) which either preserve ${\cal N}=1$ supersymmetry or break it at a scale beneath $m_{3/2}$. This breaking is negligible in the limit ${\tilde m}\ll m_{3/2}$ and therefore we also include \emph{all} light scalar fields in the definition of the ${\cal N}=1$ field space. As we will see in Sections \ref{section:NoneW} and \ref{section:NoneD}, the light fields contribute to the superpotential and D-terms in the effective action and any spontaneous ${\cal N}=1$ supersymmetry breaking will be captured by these couplings. In the following we denote the scalars of the ${\cal N}=1$ field space by $\hat{t}$ and $\hat{q}$, where there is a natural split into fields descending from the ${\cal N}=2$ vector- and hypermultiplets, respectively. Let us now give a more precise description of the distinction between scalars with masses of ${\cal O}(m_{3/2})$ and massless (or light) scalar fields. The latter are the deformations which preserve the ${\cal N}=1$ supersymmetry conditions \eqref{N=1conditions} in the limit ${\tilde m}\to0$. Equivalently, \eqref{jhol} holds and the embedding tensor solutions \eqref{solution_embedding_tensor2} or \eqref{solution_embedding_tensor_AdS2} remain constant across the ${\cal N}=1$ field space. On the other hand, any deformation that violates the ${\cal N}=1$ supersymmetry conditions \eqref{N=1conditions} (ignoring any supersymmetry breaking at a lower scale ${\tilde m}$) should have a mass of ${\cal O} (m_{3/2})$. Consistency of the low-energy effective theory implies that all fields with a mass of ${\cal O} (m_{3/2})$ should be integrated out along with the massive gravitino. As an example, let us consider the Minkowski solution \eqref{solution_embedding_tensor2} at a point $t=t_0$ and determine the deformations $t=t_0+\delta t$ which preserve \eqref{solution_embedding_tensor2}. This implies \begin{equation}\label{deformV} {\cal F}_{IJK} C^J \delta X^K = 0 \ . \end{equation} For a generic prepotential ${\cal F}$, \eqref{deformV} gives $n_\textrm{v}$ equations for $n_\textrm{v}$ deformation parameters. This can be seen by noting that the homogeneity of the holomorphic prepotential ${\cal F}$ implies ${\cal F}_{IJK}X^K = 0$. 
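For the reader's convenience, we spell out the standard chain of Euler identities behind this statement; they follow from the degree-two homogeneity ${\cal F}(\lambda X)=\lambda^{2}{\cal F}(X)$ by repeated differentiation with respect to $X$:
\begin{equation}
X^I {\cal F}_I\ =\ 2\,{\cal F}\ ,\qquad X^I {\cal F}_{IJ}\ =\ {\cal F}_J\ ,\qquad X^I {\cal F}_{IJK}\ =\ 0\ .
\end{equation}
In particular, the rescaling direction $\delta X^K\propto X^K$ automatically solves \eqref{deformV}, while the contraction of \eqref{deformV} with $X^I$ vanishes identically, so that only $n_\textrm{v}$ independent conditions on $n_\textrm{v}$ physical deformations remain.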
Thus all $n_\textrm{v}$ scalars in the vector multiplets are generically stabilised with masses of ${\cal O}(m_{3/2})$, and an ${\cal N}=1$ moduli space can only occur for special prepotentials. For example, if the prepotential ${\cal F}$ is purely quadratic, \eqref{deformV} is satisfied on the entire field space and no scalars in the vector multiplets are stabilised. This corresponds to ${\bf M}_{\rm v}= SU(1,n_{\rm v})/(U(1)\times SU(n_{\rm v}))$. In contrast, for a generic cubic prepotential \eqref{deformV} tells us that all scalars are stabilised. This would appear to be in conflict with the existence of the $n_\textrm{v}$ shift isometries on ${\bf M}_{\rm v}$ \cite{deWit:1992wf}. However, these shift isometries induce symplectic rotations on the vectors of the theory. These symplectic rotations are only symmetries of the ungauged theory and can be broken by the charges $\Theta_{\Lambda}^{1,2}$ given in \eqref{solution_embedding_tensor2}. The same conclusion can be reached for isometries on general special-K\"ahler manifolds. A computation analogous to \eqref{deformV} for the AdS solution \eqref{solution_embedding_tensor_AdS2} leads to \begin{equation}\label{deformV_AdS} {\cal F}_{IJK} C^J \delta X^K + 2 \tfrac{\mu}{P^-_1} (\Im {\cal F})_{IJ} \delta (\e^{K^{\rm v}/2} \bar X^J) = 0 \ . \end{equation} In contrast to the Minkowski case, this is not a holomorphic equation. Nevertheless, the number of equations coincides with the number of scalars in the vector multiplets and generically all scalars are stabilised. A corresponding condition arises for the scalars of ${{\bf M}}_{\rm h}$ from \eqref{Ncond} or equivalently \eqref{Jk}. The Killing vector $k=k_1 + \iu k_2$ should stay holomorphic over the entire ${\cal N}=1$ field space or in other words \begin{equation}\label{deformH} \delta \, ( J^{3~v}_{~~u} k_1^u + k_2^{v})\ = 0 \end{equation} should hold. This condition generically stabilises a large number of scalar fields arising from the hypermultiplet sector. In contrast to the vector multiplet sector, a non-trivial ${\cal N}=1$ moduli space necessarily arises whenever ${{\bf M}}_{\rm h}$ has additional isometries which commute with the two isometries responsible for the partial supersymmetry breaking. We will return to this issue in Section~\ref{section:SQC}. \section{The low-energy effective ${\cal N}=1$ theory}\label{section:None} \setcounter{equation}{0} Let us now turn to the main objective of this paper and derive the low-energy effective ${\cal N}=1$ theory that is valid below the scale of supersymmetry breaking set by $m_{3/2}$. We will begin by outlining the procedure employed and briefly summarising the results which we obtain. In the previous section we reviewed the properties of an ${\cal N}=2$ supergravity that admits ${\cal N}=1$ Minkowski or AdS backgrounds. Consistency requires that an ${\cal N}=1$ massive spin-3/2 multiplet with spins $s=(3/2,1,1,1/2)$ and mass $m_{3/2}$ is generated, possibly along with a set of massive ${\cal N}=1$ chiral- and vector multiplets whose masses are also of ${\cal O}(m_{3/2})$. All of these multiplets have to be integrated out to obtain the ${\cal N}=1$ low-energy effective action.\footnote{If the ${\cal N}=2$ theory has a supersymmetric mass scale above $m_{3/2}$ then all multiplets at that scale are also integrated out.} At the two-derivative level this is achieved by using the equations of motion of the massive fields to first non-trivial order in $p/m_{3/2}$, where $p\ll m_{3/2}$ is the characteristic momentum. 
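To illustrate this step schematically (a generic toy sketch, not the actual supergravity computation carried out below), consider a heavy real scalar $H$ with canonical kinetic term, mass $m\sim m_{3/2}$ and a linear coupling to a current $\mathcal{J}$ built out of the light fields,
\begin{equation}
{\cal L}\ \supset\ \tfrac12(\partial H)^2-\tfrac12 m^2 H^2 - H\,\mathcal{J} \qquad\Longrightarrow\qquad H\ =\ -\frac{\mathcal{J}}{m^2}\,\big(1+{\cal O}(p^2/m^2)\big)\ ,\qquad {\cal L}_{\rm eff}\ \supset\ \frac{\mathcal{J}^2}{2m^2}\ .
\end{equation}
The elimination of the two heavy gauge bosons in Section~\ref{section:NoneK} proceeds in precisely this way, and the resulting contact terms reassemble into the ${\cal N}=1$ couplings derived there.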
The low-energy effective theory should then contain the leftover light ${\cal N}=1$ multiplets, i.e.\ the gravity multiplet, $n'_{\rm v}$ vector multiplets and $n_{\rm c}$ chiral multiplets. These multiplets either have a mass below $m_{3/2}$ or are exactly massless. The case when all the multiplets are massless arises when the ${\cal N}=2$ supergravity is gauged with respect to just the two Killing vectors that are responsible for the partial supersymmetry breaking. If, on the other hand, the ${\cal N}=2$ supergravity is gauged with respect to additional Killing vectors, then some of the ${\cal N}=1$ multiplets can have a light mass or, more generally, contribute to the ${\cal N}=1$ effective potential. However, the derivation of the low-energy effective action is insensitive to such additional gaugings. Whether or not such gaugings preserve the ${\cal N}=1$ supersymmetry or spontaneously break it only becomes clear on examining the ground states of the effective potential. Integrating out all fields with masses of ${\cal O}(m_{3/2})$ in the ${\cal N}=2$ gauged supergravity should naturally lead to an ${\cal N}=1$ effective theory. Its bosonic matter Lagrangian therefore has a standard form, given by \cite{Wess:1992cp,Gates:1983nr} \begin{eqnarray}\label{N=1Lagrangian} \hat{\cal L} ~=~ - \ K_{\hat A \hat{\bar B} } D_\mu M^{\hat A} D^{\mu} \bar M^{\hat{\bar B} } - \tfrac{1}{2} f_{\hat I \hat J}\ F^{\hat I -}_{\mu\nu}F^{\mu\nu\, \hat J -} - \tfrac{1}{2}\bar{f}_{\hat I \hat J}\ F^{\hat I +}_{\mu\nu} F^{\mu\nu\, \hat J + } - V \ , \end{eqnarray} where \begin{eqnarray}\label{N=1pot} V ~=~ V_F + V_{\cal D} ~=~ e^K \big( K^{\hat A \hat{\bar B}} D_{\hat A} {\cal W} {D_{\hat{\bar B}} \bar {\cal W}}-3|{\cal W}|^2 \big) +\tfrac{1}{2}\, (\text{Re}\; f)_{\hat I \hat J} {\cal D}^{\hat I} {\cal D}^{\hat J} \ . \end{eqnarray} We use hatted indices to label the fields of the ${\cal N}=1$ effective theory. $M^{\hat A} = M^{\hat A} (\hat{t},\hat{q}) $ collectively denotes all complex scalars in the theory, i.e.\ those descending from both the vector- and hypermultiplet sectors in the original ${\cal N}=2$ theory. $K_{\hat A \hat{\bar B} } $ is a K\"ahler metric satisfying $ K_{\hat A \hat{\bar B} } = \partial_{\hat A} \bar\partial_{\hat{\bar B}} K(M,\bar M)$. $F^{\hat I +}_{\mu\nu}$ and $F^{\hat I -}_{\mu\nu}$ denote the self-dual and anti-self-dual ${\cal N}=1$ gauge field strengths, respectively, and $f_{\hat I \hat J}$ is the holomorphic gauge kinetic function. The scalar potential $V$ is determined in terms of the holomorphic superpotential ${\cal W}$, its K\"ahler-covariant derivative $D_{\hat A} {\cal W}= \partial_{\hat A} {\cal W} + (\partial_{\hat A} K)\, {\cal W}$ and the D-terms ${\cal D}^{\hat I}$, given by \begin{equation}\label{NoneDterm} {\cal D}^{\hat I}~ =~ -2\, (\text{Re}\; f)^{-1 \hat I \hat J}\, {\cal P}_{\hat J}~, \end{equation} where ${\cal P}_{\hat J}$ is the ${\cal N}=1$ Killing prepotential. The objective of this section is to compute the coupling functions $K,{\cal W},f$ and ${\cal P}$ of the effective ${\cal N}=1$ theory in terms of ${\cal N}=2$ `input data'. ${\cal N}=1$ supersymmetry constrains $\cal W$ and $f$ to be holomorphic while the metric $K_{\hat A \hat{\bar B} }$ has to be K\"ahler. Showing that the low-energy effective theory has these properties serves as an important consistency check of our results. Before we turn to the derivation of these couplings, let us briefly anticipate the results. 
One interesting aspect relates to the ${\cal N}=1$ scalar manifold that descends from the ${\cal N}=2$ product space ${\bf M} = {{\bf M}}_{\rm h}\times {{\bf M}}_{\rm v}$, where ${{\bf M}}_{\rm v}$ is already a K\"ahler manifold but ${{\bf M}}_{\rm h}$ is not. In Section~\ref{section:NoneK} we will show that integrating out the two heavy gauge bosons in the gravitino multiplet amounts to taking a quotient of ${{\bf M}}_{\rm h}$ with respect to the two gauged isometries $k_1,k_2$ discussed in the previous section. This quotient, denoted by \begin{equation}\label{quotient} {\hat{\M}}_{\rm h} = {{\bf M}}_{\rm h}/\langle k_1,k_2 \rangle\ , \end{equation} has co-dimension two, corresponding to the fact that the two Goldstone bosons giving mass to the two gauge bosons have been removed. We shall see that the quotient $\hat{\M}_{\rm h}$ is indeed K\"ahler, which establishes the consistency with ${\cal N}=1$ supersymmetry. In order to obtain the final ${\cal N}=1$ scalar field space, we also have to integrate out all additional scalars that gained a mass of ${\cal O} (m_{3/2})$. However, these scalars are not Goldstone bosons and thus integrating them out corresponds to simply projecting ${{\bf M}}_{\rm v}\times {\hat{\M}}_{\rm h}$ to a K\"ahler subspace ${\bf M}^{{\cal N}=1} = {\hat{\M}}_{\rm v} \times {\hat{\M}}_{\rm h}\ ,$ where ${\hat{\M}}_{\rm v}$ coincides with ${{\bf M}}_{\rm v}$ or is a submanifold thereof. ${\hat{\M}}_{\rm h}$ can also be a subspace of \eqref{quotient}, but for notational simplicity we do not introduce a separate symbol for this. Integrating out the two massive gauge bosons projects the ${\cal N}=2$ gauge kinetic function to a submatrix. In Section~\ref{section:Nonef} we will show that one of the two massive gauge bosons is always given by the graviphoton.\footnote{This can also be seen by noting that \eqref{constraint_embedding_tensor} implies that $C^I$ consists of a spacelike and a timelike component with respect to $\Im{\cal F}_{IJ}$, which has signature $(1,n_{\rm v})$. The timelike component corresponds to a gauging with respect to the graviphoton.} Integrating out this vector leads to a holomorphic gauge kinetic function $f$ that is the second derivative of the holomorphic prepotential on ${\hat{\M}}_{\rm v}$, similarly to the case of ${\cal N}=1$ truncations \cite{Andrianopoli:2001zh,Andrianopoli:2001gm}. Finally, as our ${\cal N}=1$ effective theory descends from an ${\cal N}=2$ supergravity, its superpotential ${\cal W}$ and the D-terms can only be non-trivial if there are additional charged scalars present, i.e.\ if there are further gaugings at a scale beneath $m_{3/2}$. As discussed above, this precisely occurs when isometries other than $k_1$ and $k_2$ are gauged in the original ${\cal N}=2$ theory. Since both ${\cal W}$ and ${\cal D}$ appear in the ${\cal N}=1$ supersymmetry transformations of the gravitino and gaugini, we can consider the corresponding ${\cal N}=2$ supersymmetry transformations restricted to ${\cal N}=1$ fields and then read off the appropriate terms. We will carry this out in Sections \ref{section:NoneW} and \ref{section:NoneD}. Using the complex structure of ${\bf M}^{{\cal N}=1}$, we will then also check the holomorphicity of ${\cal W}$ in Section~\ref{section:NoneW}. Let us now turn to the detailed derivation of the ${\cal N}=1$ couplings, starting with the metric on the quotient ${\hat{\M}}_{\rm h}$. 
\subsection{The K\"ahler metric on the quotient ${\hat{\M}}_{\rm h}$} \label{section:NoneK} The first step in determining the sigma-model metric on the quotient ${\hat{\M}}_{\rm h}$ is to eliminate the two massive gauge bosons via their field equations, which are algebraic in the limit $p\llm_{3/2}$. In order to be able to use the constraints \eqref{jhol} and \eqref{Jk} derived from the hyperino conditions, we first have to rewrite the combination $\Theta_\Lambda^\lambda{k}_\lambda,~ \lambda=1,2,$ that appears in \eqref{d2} in terms of the new Killing vectors defined in \eqref{knew}. This change of basis can be compensated by an appropriate change of $\Theta_\Lambda^\lambda$, such that the covariant derivatives given in \eqref{d2} continue to have the same form, albeit with rotated ${k}_\lambda$ and $\Theta_\Lambda^\lambda$ (for simplicity, we shall not introduce new symbols for the rotated quantities). {}From \eqref{sigmaint} we then obtain \begin{equation}\label{eofA} \frac{\partial \cal L}{\partial A_\mu^{\lambda}} = -2 k^v_{\lambda} h_{uv} \partial_{\mu} q^u + m_{\lambda\rho}^2 A_\mu^{\rho} = 0\ ,\qquad \lambda, \rho =1,2\ , \end{equation} where we have defined \begin{equation}\label{Acomb} A_\mu^\lambda \equiv A_\mu^\Lambda\Theta^\lambda_\Lambda = A_\mu^I \Theta^\lambda_I -B_{\mu I}\Theta^{I\lambda}\ , \end{equation} and the mass matrix \begin{equation}\label{mdef} m_{\lambda\rho}^2 = 2 k^u_{\lambda} h_{uv} k^v_{\rho}\ . \end{equation} Using the quaternionic algebra \eqref{jrel} and the hyperino conditions \eqref{Jk} written in terms of the associated K\"ahler forms $K^x$, we see that this mass matrix is diagonal \begin{equation} m_{\lambda\rho}^2 = m^2 \, \delta_{\lambda\rho} \ , \end{equation} where \begin{equation}\label{massrel} m^2 = 2 |k_{1}|^2 = 2 |k_{2}|^2 \ . \end{equation} Inserting the algebraic field equations \eqref{eofA} back into the Lagrangian yields a modified kinetic term for the hypermultiplet scalars, which reads \begin{equation} \hat{\cal L} = \hat{h}_{uv} \partial_{\mu} q^u \partial^{\mu} q^v\ . \end{equation} $\hat{h}_{uv}$ is the metric on the quotient ${\hat{\M}}_{\rm h}$ and is given by \begin{equation}\label{hq} \hat{h}_{uv} = h_{uv} - \frac{2k_{1 u} k_{1 v} + 2k_{2 u} k_{2 v}}{m^{2}} = \tilde \pi^w_u h_{wv} \ , \end{equation} where $k_{\lambda u} = k^w_{\lambda} h_{wu}$ and \begin{equation}\label{mproj} \tilde \pi^u_v=\delta^u_v - \frac{2k_{1}^{u} k_{1 v} + 2k_{2}^{u} k_{2 v}}{m^{2}}~. \end{equation} From \eqref{hq} it is easy to see that $\hat{h}_{uv}$ satisfies \begin{equation} \hat{h}_{uv}k^v_\lambda = 0\ ,\qquad \hat{h}_{uv}h^{vw}\hat{h}_{wr} = \hat{h}_{ur}\ , \label{hid} \end{equation} where $h^{vw}$ is the inverse metric of the original quaternionic manifold ${\bf M}_{\rm h}$, i.e.\ $h^{vw} h_{wu} = \delta_u^v$. We can then use \eqref{mproj} to define the inverse metric on the quotient as $\hat{h}^{uv} = \tilde \pi^u_w h^{wv}$. The first equation in \eqref{hid} states that the rank of $\hat{h}_{uv}$ is reduced by two relative to $h_{uv}$, which precisely corresponds to the two Goldstone bosons that have been integrated out. The second equation in \eqref{hid} tells us that the inverse metric on the quotient $\hat{h}^{uv}$ actually coincides with the inverse of the original metric $h^{vw}$. Consistency with ${\cal N}=1$ supersymmetry requires that $\hat{h}_{uv}$ is a K\"ahler metric. In order to show this we first need to find the integrable complex structure on the K\"ahler manifold. 
It is natural to expect that one of the three almost complex structures of the quaternionic manifold descends to the complex structure on the quotient. Indeed, due to the $SU(2)$ gauge choice \eqref{P3}, $J^3$ plays a preferred role in that it points in the direction (in $SU(2)$-space) normal to the plane spanned by $P_1^x, P_2^x$ and is left invariant by the $U(1)$ rotation in that plane. One way to calculate $J^3$ on the quotient is to employ the same method that we just used for the metric and apply it to the two-form $K^3_{uv}$. This is possible in an (auxiliary) two-dimensional $\sigma$-model of the form\footnote{This Lagrangian has nothing to do with the theory considered so far and is only used to derive the form of the complex structure -- or rather its associated fundamental two-form -- on the quotient. We thank E.\ Zaslow for suggesting this procedure.} \begin{equation}\label{Laux} {\cal L}_{K^3} = K^3_{uv} D_{\alpha} q^u D_{\beta} q^v\epsilon^{\alpha\beta}\ ,\quad \alpha, \beta = 1,2\ , \end{equation} where the covariant derivatives are again given by \eqref{d2}. As above, we derive the algebraic equation of motion for $A_\alpha^{\lambda}$ and insert it back into \eqref{Laux} to arrive at \begin{equation} {\cal L}_{K^3} = \hat{K}_{uv} \epsilon^{\alpha\beta} \partial_{\alpha} q^u \partial_{\beta} q^v\ , \end{equation} where \begin{equation}\begin{aligned}\label{Khatdef} \hat{K}_{uv} = K^3_{uv} - \frac{2k_{2 u} k_{1 v}-2k_{1 u} k_{2 v}}{m^2} = \tilde \pi^w_u K^3_{wv} \ . \end{aligned} \end{equation} Here we have used the relations \eqref{Jk} to conclude that $k^u_\lambda K^3_{uv} k^v_\rho = m^2 \epsilon_{\lambda\rho}$, where $\epsilon_{21}=1$. We find that the rank of $\hat{K}_{uv}$ is reduced by two due to $k^u_{\lambda} \hat{K}_{uv} = 0$, analogous to the result for the metric $\hat{h}_{uv}$. For two commuting isometries $k_1$ and $k_2$ we have the identity \cite{Andrianopoli:1996cm} \begin{equation}\label{equivariance_cond} 2 k_1^u k_2^v K^x_{uv} + \epsilon^{xyz} P^y_1 P^z_2 = 0 \ , \end{equation} which, together with \eqref{jhol}, allows us to simplify the expression for the mass: \begin{equation}\label{mass_prepot} m^2= P^1_1 P^2_2 - P^1_2 P^2_1 \ . \end{equation} On the other hand, from the definition of the prepotentials \eqref{Pdef} we find \begin{equation} \label{Killing_oneform} \begin{aligned} & k_{2v} ~=~ k^u_1 \,K_{uv}^3 ~ = ~ \omega^2_v P^1_1 - \omega^1_v P^2_1 \ , \\ & k_{1v} ~=~ k^u_2 \,K_{uv}^3 ~ = ~ \omega^1_v P^2_2 - \omega^2_v P^1_2 \ , \end{aligned} \end{equation} where we have used \eqref{P3} and \eqref{Jk}. Inserting \eqref{mass_prepot} and \eqref{Killing_oneform} into \eqref{Khatdef} we arrive at \begin{equation}\label{Kform} \hat{K}_{uv} = \partial_u \omega_v^3 - \partial_v \omega_u^3\ . \end{equation} Thus, on ${\hat{\M}}_{\rm h}$ there exists a fundamental two-form $\hat{K}$ which is indeed closed: \begin{equation}\label{juhu} \diff\hat{K} = 0 \ . \end{equation} Furthermore, we find that $\hat{J}$ defined via $\hat{K}_{uv} = \hat{h}_{uw} \hat{J}^w_v$ is the projected complex structure $J^3$, i.e.\ \begin{equation} \hat{J}^u_v = \tilde \pi^u_w J^{3w}_{v}~. \end{equation} As $\tilde \pi$ commutes with $J^3$, due to \eqref{jhol}, $\hat{J}$ is the associated complex structure, i.e.\ it satisfies $\hat{J}^u_v \hat{J}^v_w = - \tilde \pi^u_w$, which on the quotient reads $\hat{J}^2 =-{\bf 1}$. This, together with \eqref{juhu}, implies that the Nijenhuis tensor $N(\hat{J})$ vanishes. 
This completes the proof that ${\hat{\M}}_{\rm h}$ is a K\"ahler manifold, with K\"ahler form $\hat{K}$ and complex structure $\hat{J}$. In order to display the K\"ahler potential on ${\hat{\M}}_{\rm h}$ let us explicitly introduce complex coordinates. Since $\hat{J}$ is an honest complex structure, we can group the $4n_{\rm h}-2$ coordinates $q^u$ into two sets of coordinates $q^{2a-1}$ and $q^{2b}, a,b=1,\ldots,2n_{\rm h}-1$ such that $\hat{J}$ is constant and `block-diagonal' in this basis, taking the form \begin{equation} \hat{J}_u^v = \left(\begin{array}{ccccc} 0 & -1 && \\ 1 &0&&& \\ && \ddots && \\ &&&0 & -1 \\ &&&1 &0 \end{array}\right)\ . \end{equation} We can then define complex coordinates \begin{equation}\label{zdef} z^a := q^{2a-1} +\iu q^{2a}\ , \quad \bar z^{\bar a} := q^{2a-1} - \iu q^{2a}\ , \end{equation} and the associated derivatives \begin{equation}\label{dzdef} \partial_{a} = \tfrac12\big(\partial_{q^{2a-1}} - \iu \partial_{q^{2a}}\big)\ ,\qquad \bar\partial_{\bar a} = \tfrac12\big(\partial_{q^{2a-1}} + \iu \partial_{q^{2a}}\big)\ . \end{equation} From $\hat{J}^w_u \hat{J}^t_v \hat{K}_{wt} = \hat{K}_{uv}$ we see that, in terms of complex coordinates, the two-form $\hat{K}_{uv}$ given in \eqref{Kform} has no $(2,0)$ and $(0,2)$ parts. In other words, $\hat{K}_{ab}= \partial_a\omega_b^3-\partial_b\omega_a^3=0$ and $\hat{K}_{\bar a\bar b}= \bar \partial_{\bar a} \bar \omega_{\bar b}^3-\bar \partial_{\bar b} \bar \omega_{\bar a}^3=0$ . This in turn implies \begin{equation}\label{Kahlerconnection} \omega_a^3 = \tfrac\iu2 \partial_a \hat{K}\ ,\qquad \bar\omega_{\bar a}^3 = -\tfrac\iu2 \bar\partial_{\bar a} \hat{K}\ , \end{equation} where $\hat{K}$ is the (real) ${{\cal N}=1}$ K\"ahler potential.\footnote{Note that one could add a further term in \eqref{Kahlerconnection} that does not contribute in \eqref{Kform} and corresponds to a K\"ahler transformation.} Inserting these expressions into \eqref{Kform} one obtains the K\"ahler-form \begin{equation}\label{Kdef} \hat{K}_{a\bar b} = \partial_a\bar\omega_{\bar b}^3-\bar\partial_{\bar b}\omega_a^3 = -\iu\partial_a\bar\partial_b \hat{K}\ . \end{equation} So far, we have only integrated out the two vector bosons of the massive gravitino multiplet including their Goldstone degrees of freedom. As we have just shown, the removal of the two Goldstone bosons amounts to taking the quotient of the original quaternionic-K\"ahler manifold ${{\bf M}}_{\rm h}$ with respect to the two gauged isometries $k_{1,2}$. This quotient ${\hat{\M}}_{\rm h}= {{\bf M}}_{\rm h}/<k_1,k_2>$ has co-dimension two and is indeed a K\"ahler manifold, consistent with the unbroken ${\cal N}=1$ supersymmetry. However, additional scalars from both vector- and/or hypermultiplets can acquire a mass of ${\cal O}(m_{3/2})$ due to the partial supersymmetry breaking. Integrating out these scalar fields results in a submanifold ${\hat{\M}}_{\rm v}$ of the original ${\cal N}=2$ special-K\"ahler manifold ${{\bf M}}_{\rm v}$ and a submanifold of ${\hat{\M}}_{\rm h}$. Thus, the final ${\cal N}=1$ field space is the K\"ahler manifold \begin{equation}\label{NoneMod} {\bf M}^{{\cal N}=1} = {\hat{\M}}_{\rm v} \times {\hat{\M}}_{\rm h} \end{equation} with K\"ahler potential \begin{equation}\label{Kone} K^{{\cal N}=1} = \hat K^{\rm v} +\hat{K} \ . 
\end{equation} Before we continue, let us note that the quotient construction presented in this section can also be understood in terms of the corresponding superconformal supergravity \cite{deWit:1984px,Kallosh:2000ve} or, equivalently, in terms of the hyper-K\"ahler cone construction \cite{Hitchin:1986ea,Swann:1991,Gibbons:1998xa,deWit:1999fp}. In the ${\cal N}=2$ superconformal theory, the scalar field space of the hypermultiplets is given by a $(4 n_{\rm h}+4)$-dimensional hyper-K\"ahler cone ${{\bf M}}_{\rm HKC}$ over a $(4 n_{\rm h}+3)$-dimensional tri-Sasakian manifold, which itself is an $S^3$-fibration over the quaternionic base ${{\bf M}}_{\rm h}$. Thus, ${{\bf M}}_{\rm h}$ can be viewed as the quotient ${{\bf M}}_{\rm h} = {{\bf M}}_{\rm HKC}/(SU(2)_R\times\mathbb{R}_+)$, where dilatations and $SU(2)_R$ act on the cone and fibre directions, respectively. ${{\bf M}}_{\rm HKC}$ is hyper-K\"ahler and thus has three integrable complex structures which descend to the three almost complex structures $J^x$ on ${{\bf M}}_{\rm h}$. In the superconformal framework partial supersymmetry breaking would correspond to taking a K\"ahler quotient of ${{\bf M}}_{\rm HKC}$ with respect to the holomorphic Killing vector $k_1 + \iu k_2$ to produce an ${\cal N}=1$ superconformal theory. On this K\"ahler quotient only one of the three complex structures should be well-defined and thus $SU(2)_R$ is broken to $U(1)_R$. In other words, the fibre $S^3$ is projected onto an $S^1$ on which the ${\cal N}=1$ $U(1)_R$ acts, while the cone direction $\mathbb{R}_+$ is not affected. Therefore, when ${\cal N}=2$ to ${\cal N}=1$ supersymmetry breaking occurs in superconformal supergravity, a minimum of four scalars should be removed from the spectrum: two are eaten by the gauge bosons in the massive gravitino multiplet and two are eaten by the massive $SU(2)_R$ gauge bosons. The structure of the ${\cal N}=1$ superconformal theory then implies that we have a rigid K\"ahler manifold of dimension $4 n_{\rm h}$ which is an $\mathbb{R}_+$ cone over a $(4 n_{\rm h}-1)$-dimensional Sasakian manifold, which itself is an $S^1$-fibration over a $(4 n_{\rm h}-2)$-dimensional K\"ahler base ${\hat{\M}}_{\rm h}$ \cite{Gibbons:1998xa,Kallosh:2000ve}. Fixing the superconformal symmetry corresponds to taking the standard K\"ahler quotient, i.e.\ gauge fixing the dilatation ($\mathbb{R}_+$) and the $U(1)_R$, to leave ${\hat{\M}}_{\rm h}$ as the ${\cal N}=1$ scalar field space of the effective theory, which is K\"ahler by construction. We will not study the superconformal version of partial supersymmetry breaking in any further detail here. However, in Section \ref{section:SQC} we shall see that a knowledge of the hyper-K\"ahler cone construction proves useful in determining the K\"ahler potential and the holomorphic coordinates on ${\hat{\M}}_{\rm h}$. \subsection{The gauge couplings} \label{section:Nonef} Let us now compute the gauge couplings of the ${\cal N}=1$ theory and check their holomorphicity. In Section~\ref{section:NoneK} we integrated out the two heavy gauge bosons in the low-energy limit by neglecting their kinetic terms and using their algebraic equations of motion. In order to compute the gauge couplings of the light gauge fields that descend to the ${\cal N}=1$ theory we have to explicitly project out the heavy gauge bosons in the coupled kinetic terms in \eqref{sigmaint}. From \eqref{Acomb} we see that the projection is determined by the embedding tensor solutions given in \eqref{solution_embedding_tensor2} and \eqref{solution_embedding_tensor_AdS2}. 
In other words, we should impose the projection \begin{equation}\label{Fcond} \Theta^{\lambda I} G\,_I^\pm + \Theta^{\lambda}_I\, F^{I \pm} = 0\ ,\qquad \lambda=1,2 \end{equation} and then compute the gauge couplings of the remaining gauge fields. Taking complex combinations and inserting the embedding tensor solutions \eqref{solution_embedding_tensor2} yields\footnote{We only discuss the Minkowski case here. The AdS case is completely equivalent, in that \eqref{solution_embedding_tensor_AdS2} only leads to a different prefactor (i.e.\ not $C^I$) but the conclusion remains the same.} \begin{equation}\label{Fint} C^I ({\cal F}_{IJ}(\hat{t}) - \mathcal{N}_{IJ}(\hat{t})) F^{J +} ~=~ 0 ~ =~ \bar{C}^I (\bar{{\cal F}}_{IJ}(\hat{t}) - \mathcal{N}_{IJ}(\hat{t})) F^{J +}\ , \end{equation} and a similar set of equations for $F^{J -}$. Note that ${\cal F}_{IJ}$ and $\mathcal{N}_{IJ}$ are evaluated in the ${\cal N}=1$ background, which means that scalar fields not obeying \eqref{deformV} are fixed at their background values. The scalars $\hat{t}$ of the ${\cal N}=1$ theory, which do obey \eqref{deformV}, can vary arbitrarily. Using the definition of $\mathcal{N}_{IJ}$ \eqref{Ndef} we find that \eqref{Fint} implies \begin{equation}\label{Fproj} X^I \Im\left({\cal F}_{IJ}(\hat{t}) \right) F^{J +} = 0 , \end{equation} where we have dropped a non-vanishing prefactor. This condition projects out one linear combination of the $F^{I}$ that is heavy. For the following analysis it will be useful to define the related projection operator \begin{equation}\label{Pidef} \bar \Pi^I_J \equiv \delta^I_J + 2 \e^{K^{\rm v}} \bar X^I X^K \Im({\cal F})_{KJ} \ , \end{equation} such that $(1-\bar\Pi)$ projects onto the heavy gauge boson while $\bar\Pi$ projects onto the orthogonal gauge bosons. Note that in \eqref{Pidef} (and from now on) we have dropped the explicit $\hat{t}$-dependence for convenience. Before we identify the second heavy gauge boson let us check which physical field is projected out by \eqref{Fproj}. Looking at the full ${\cal N}=2$ gravitino variation \cite{Andrianopoli:1996cm}, we see that it contains the `dressed' graviphoton term \begin{equation} \tilde T_{\mu\nu}^{+} = 2 \iu \bar X^{I} \Im \mathcal{N}_{IJ} F^{J + }_{\mu\nu} + \ldots~. \end{equation} It is straightforward to check that the projection $\bar X^{I} \Im\mathcal{N}_{IJ}$ appearing here coincides with \eqref{Fproj} \cite{Ceresole:1995ca}. Therefore, \eqref{Fproj} can be understood as projecting out the graviphoton. The second projection condition implied by \eqref{Fint} reads \begin{equation} \label{FprojC} C^{(P)\,I} \Im({\cal F})_{JK} F^{K+} = 0 \ , \end{equation} where we have defined $C^{(P)\,I}= \Pi^I_JC^J$. Expressing this in terms of the projection operator \begin{equation}\label{Gammadef} \bar \Gamma_{J}^I \equiv \delta^I_J - \frac{\bar C^{(P)\,I} C^{(P)\,K} \Im({\cal F})_{KJ}}{ C^{(P)\,M} \Im({\cal F})_{MN} \bar C^{(P)\,N}} \ , \end{equation} we see that $(1-\bar\Gamma)$ projects onto the second heavy gauge boson while $\bar\Gamma$ projects to the orthogonal gauge bosons. With the help of the two projection operators, which one can show commute, we are now in the position to define the light vector fields which remain in the ${\cal N}=1$ theory by \begin{equation} \label{FN=1} F^{\hat I +} \equiv F^{I +}\Big|_{{\cal N}=1} = \bar \Pi^I_J \bar \Gamma^{J}_K F^{K +} \ , \end{equation} where \ $\hat I = 1,\dots n'_v = (n_{\mathrm v}-1)$, i.e.\ we have projected out two of the ${\cal N}=2$ vectors. 
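As a brief consistency check, note that $\bar\Gamma$ is a projector by construction, while for $\bar\Pi$ this follows from the special-geometry identity $\e^{-K^{\rm v}}=-2\bar X^I \Im({\cal F})_{IJ} X^J$ (used again below):
\begin{equation}
\bar \Pi^I_J\, \bar \Pi^J_L\ =\ \delta^I_L + 4 \e^{K^{\rm v}} \bar X^I X^K \Im({\cal F})_{KL} + 4 \e^{2K^{\rm v}} \bar X^I \big[X^K \Im({\cal F})_{KJ} \bar X^J\big] X^M \Im({\cal F})_{ML}\ =\ \bar \Pi^I_L\ ,
\end{equation}
since the square bracket equals $-\tfrac12 \e^{-K^{\rm v}}$. Consequently $({\bf 1}-\bar\Pi)$ and $({\bf 1}-\bar\Gamma)$ are rank-one projectors onto the two heavy gauge bosons, in accordance with the counting above.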
In Appendix \ref{section:massive_multi} we further check that the masses of the two heavy gauge bosons obey the ${\cal N}=1$ relations with the gravitino mass. Let us now return to our original task and compute the gauge coupling functions of the ${\cal N}=1$ action. This can be done by imposing the two projections \eqref{Fproj} and \eqref{FprojC} on the gauge kinetic term $\mathcal{N}_{IJ}\,F^{I +}_{\mu\nu}F^{\mu\nu\, J+}$ of \eqref{sigmaint}. In other words, we should compute $\mathcal{N}_{\hat I\hat J}\,F^{\hat I +}_{\mu\nu}F^{\mu\nu\, \hat J+}$ with $F^{\hat I +}$ given by \eqref{FN=1}. Inserting the definition of $\mathcal{N}_{I J}$ \eqref{Ndef} we find that the ${\cal N}=1$ gauge coupling functions appearing in \eqref{N=1Lagrangian} are given by \begin{equation}\label{fN=1} \bar f_{\hat I\hat J}(\hat{t}) = -\mathrm{i} \bar {\cal F}_{\hat I\hat J}\ \ , \end{equation} where the second term in \eqref{Ndef} drops out due to the identity \begin{equation}\label{Fproj2} X^I \Im({\cal F})_{I\hat J} F^{\hat J +} = 0 \ . \end{equation} It is straightforward to see that \eqref{Fproj2} holds by inserting \eqref{FN=1} and using $e^{-K_{\rm v}} = -2 \bar X^I {\rm Im}({\cal F})_{IJ} X^J$. As promised, we see that the gauge couplings are manifestly holomorphic. Furthermore, $f_{\hat I\hat J}(t)$ can only depend on the scalar fields that descend from ${\cal N}=2$ vector multiplets, but not on those descending from hypermultiplets. In fact, this is analogous to the situation in ${\cal N}=2\to {\cal N}=1$ truncations, where the graviphoton also has to be projected out and, as a consequence, the gauge couplings are holomorphic and only depend on the scalars of the vector multiplets \cite{Andrianopoli:2001zh,Andrianopoli:2001gm}. \subsection{The superpotential} \label{section:NoneW} Our next task is to determine the ${\cal N}=1$ superpotential ${\cal W}$. This is most easily done by comparing the supersymmetry transformation of the ${\cal N}=1$ gravitino $\Psi_{\mu\, 1}$ \eqref{susytrans2} with the conventional ${\cal N}=1$ transformation given, for example, in \cite{Wess:1992cp}. (An analogous computation for ${\cal N}=1$ truncations of ${\cal N}=2$ theories can be found in \cite{Grana:2005ny,Andrianopoli:2001zh,Andrianopoli:2001gm}). Focusing on the scalar contribution one has \begin{equation}\label{Nonegravitino} \delta_\epsilon \Psi_{\mu\,1} = D_\mu \epsilon - S_{11} \gamma_\mu \bar \epsilon + \ldots \,\ =\ D_\mu \epsilon - \tfrac12 \e^{\tfrac12K^{{\cal N}=1}} {\cal W} \gamma_\mu \bar \epsilon + \ldots \end{equation} where we have already inserted our choice $\epsilon_1 = {\epsilon\choose 0}$ and the right-hand side is the ${\cal N}=1$ gravitino variation expressed in terms of the superpotential ${\cal W}$. Using the definition of the gravitino mass matrices \eqref{susytrans3} we find that the ${\cal N}=1$ superpotential is given by \begin{equation}\label{Wone} {\cal W} = 2 \e^{-\tfrac12K^{{\cal N}=1}} S_{11}\ =\ \e^{-\hat{K}/2} {V}^\Lambda \Theta_\Lambda^{~~\lambda} P_{\lambda}^- \ . \end{equation} In this expression we have to appropriately project out all scalars with masses of ${\cal O}(m_{3/2})$. In other words, ${\cal W}$ should be expressed in terms of ${\cal N}=2$ input couplings restricted to the light ${\cal N}=1$ modes. As we discussed at the end of section \ref{section:NoneK}, this projection preserves the K\"ahler and complex structure of ${{\bf M}}_{\rm v} \times {\hat{\M}}_{\rm h}$. 
Therefore, we should be able to check the holomorphicity of ${\cal W}$ without knowing the precise ${\cal N}=1$ spectrum. Before continuing, let us discuss the situation where the original ${\cal N}=2$ supergravity is only gauged with respect to the two Killing vectors $k_1,k_2$ that induce the partial breaking. In this case the index $\lambda$ in \eqref{Wone} takes the values $\lambda=1,2$ and all fields in the ${\cal N}=1$ effective theory are exactly massless, i.e.\ they are ${\cal N}=1$ moduli. Their vacuum expectation values are not fixed, or, in other words, they parametrise the entire ${\cal N}=1$ background. As a consequence the superpotential has to be proportional to the cosmological constant. This can be seen explicitly by inserting the gravitino mass matrix \eqref{N=1conditions} into \eqref{Wone} which gives \begin{equation} |{\cal W}|^2= 4 \e^{-K^{{\cal N}=1}} |S_{11}|^2 = 4 \e^{-K^{{\cal N}=1}} |\mu|^2 \ , \end{equation} in agreement with the standard ${\cal N}=1$ relation \cite{Wess:1992cp}. If an additional $m$ Killing vectors are gauged, then their corresponding Killing prepotentials appear in \eqref{Wone} and the index $\lambda$ runs over all $m+2$ values. For this case we will now show that ${\cal W}$ is holomorphic with respect to the ${\cal N}=1$ complex structure determined in the previous section. Inspecting the superpotential ${\cal W}$ given in \eqref{Wone} we see that the scalars of ${{\bf M}}_{\rm v}$ already appear holomorphically via $V^\Lambda$. Therefore, we are left to show that the anti-holomorphic derivative of ${\cal W}$ with respect to the scalars of ${\hat{\M}}_{\rm h}$ vanishes, i.e.\ \begin{equation}\label{der_W} \bar\partial_{\bar a} {\cal W} = \e^{-\hat{K}/2} {V}^\Lambda \Theta_\Lambda^{~~\lambda} (\bar \partial_{\bar a} P_\lambda^- - \tfrac12 (\bar \partial_{\bar a} \hat{K})P_\lambda^-) = 0\ . \end{equation} Let us first note that using \eqref{Kahlerconnection} we can express $\bar \partial_{\bar a} \hat{K}$ in terms of $\omega^3_{\bar a}$. Furthermore, from the definition of Killing prepotentials \eqref{Pdef} we see that \begin{equation} -2 K^-_{uv} k^v_\lambda = \partial_u P^-_\lambda + \iu \omega^-_u P^3_\lambda - \iu \omega^3_u P^-_\lambda \ , \end{equation} which implies \begin{equation}\label{der_W2} \bar\partial_{\bar a} {\cal W} = - \e^{-\hat{K}/2} {V}^\Lambda \Theta_\Lambda^{~~\lambda} (2 K^-_{\bar a v} k^v_\lambda + \iu \bar \omega^-_{\bar a} P_\lambda^3) \ . \end{equation} From the quaternionic algebra \eqref{jrel} and $K^x_{uv} = h_{uw} (J^x)^w_v$ it is easy to see that $K^-$ is actually a $(2,0)$-form and thus only has holomorphic indices. This immediately implies that the first term in the bracket vanishes. From \eqref{Killing_oneform} we can infer that both $\omega^1$ and $\omega^2$ live entirely in the space spanned by $k_{1v}$ and $k_{2v}$, which in fact is divided out. This implies that $\omega^-_{\bar a}$ is zero on ${\hat{\M}}_{\rm h}$ and therefore the second term in \eqref{der_W2} also vanishes. Thus, the superpotential ${\cal W}$ is holomorphic, consistent with ${\cal N}=1$ supersymmetry. \subsection{The D-terms} \label{section:NoneD} Our final task is to explicitly compute the ${\cal N}=1$ D-terms appearing in the effective potential \eqref{N=1pot}. This proceeds analogously to the calculation of the superpotential in Section \ref{section:NoneW}, but by comparing the ${\cal N}=2$ and ${\cal N}=1$ gaugino variations instead of the gravitino variations. 
Once again, this procedure is similar to that used in ${\cal N}=1$ truncations \cite{Andrianopoli:2001zh,Andrianopoli:2001gm,D'Auria:2005yg}, but here we shall more closely follow \cite{Cassani:2007pq}. The ${\cal N}=2$ gaugino variation is given by \cite{Andrianopoli:1996cm} \begin{equation}\label{gauginivar} \delta_\epsilon \lambda^{i {\cal A}} = \gamma^\mu\partial_\mu t^i \epsilon^{\cal A} - \tilde G_{\mu\nu}^{i -} \gamma^{\mu\nu} \varepsilon^{\cal AB}\epsilon_{\cal B} + W^{i{\cal AB}}\epsilon_{\cal B}+\ldots~, \end{equation} where $W^{i {\cal AB}}$ was defined in \eqref{susytrans3} and $\tilde G_{\mu\nu}^{i -} = -g^{i\bar j} \nabla_{\bar j}\bar X^{I} \mathrm{Im}\mathcal{N}_{IJ} F^{J - }_{\mu\nu} + \ldots$ are the `dressed' anti-self-dual field strengths, with the ellipses denoting higher-order fermionic contributions. In order to identify the gaugini of the effective ${\cal N}=1$ theory we evaluate \eqref{gauginivar} for our choice of the preserved supersymmetry parameter $\epsilon_1 = {\epsilon\choose 0}$ and obtain \begin{eqnarray}\begin{aligned} \delta_\epsilon \lambda^{i 1} ~=&~ \gamma^\mu\partial_\mu t^i \bar\epsilon + W^{i{11}} \epsilon~ +\ldots~, \label{trunc-fermion} \\ \delta_\epsilon \lambda^{i 2} ~=&~ - \tilde G_{\mu\nu}^{i -} \gamma^{\mu\nu} \epsilon + W^{i{ 21}} \epsilon~ +\ldots~. \label{trunc-gaugino} \end{aligned} \end{eqnarray} Comparing with the standard ${\cal N}=1$ gaugino variation \cite{Wess:1992cp,Gates:1983nr} \begin{equation} \delta_\epsilon \lambda^{\hat{I}} = F_{\mu\nu}^{\hat{I} -} \gamma^{\mu\nu} \epsilon + \mathrm{i} {\cal D}^{\hat I} \epsilon~ +\ldots~, \label{NoneGauginoVar} \end{equation} we conclude that the $\lambda^{i 2}$ are candidates for ${\cal N}=1$ gaugini. However, not all $\lambda^{i 2}$ descend to the effective ${\cal N}=1$ theory as some of them are massive and have to be integrated out. The ${\cal N}=1$ gaugini should be defined as those combinations whose supersymmetry variations contain the light ${\cal N}=1$ gauge fields \eqref{FN=1}. Using the projection operators $\Pi$ and $\Gamma$, given in \eqref{Pidef} and \eqref{Gammadef} respectively, and the definition \eqref{FN=1} we can restrict the gauge fields appearing in the ${\cal N}=2$ gaugino variation \eqref{trunc-gaugino} to the light ${\cal N}=1$ gauge fields. By comparing the resulting expression with the ${\cal N}=1$ gaugino variation \eqref{NoneGauginoVar}, we can identify the ${\cal N}=1$ gaugini as \begin{equation}\label{NoneGaugino} \lambda^{\hat{I}} = -2e^{K^{\rm v}/2} \nabla_{i} X^{\hat I} \lambda^{i 2} ~, \end{equation} where we have used the same projector \eqref{FN=1} to define \begin{equation}\label{dxproj} \nabla_{i}X^{\hat I} = \Pi^I_J \Gamma^{J}_K \nabla_{i}X^K~. \end{equation} In order to reach the result \eqref{NoneGaugino}, we have first made use of the special-geometry relation \cite{Andrianopoli:1996cm} \begin{equation}\label{special} \nabla_{i} X^{\hat I} g^{i\bar \jmath}\, \nabla_{\bar \jmath}\bar {X}^{\hat J} = -\tfrac{1}{2} e^{-K^{\rm v}}(\mathrm{Im} {\cal N})^{-1 ~\hat I \hat J} - X^{\hat I} \bar {X}^{\hat J}~, \end{equation} which is derived from the standard identity restricted to the light ${\cal N}=1$ fields using the projection operators $\Pi$ and $\Gamma$ and \eqref{dxproj}. 
We can then simplify \eqref{special} by making use of the fact that the projector $\Pi$ given in \eqref{Pidef} is defined such that the following property holds\footnote{Note that \eqref{Xcond} does not fix any scalars, as the projection operators $\Pi^I_J$ and $\Gamma^{J}_K$ are field-dependent quantities which vary over the ${\cal N}=1$ moduli space. This should be compared to ${\cal N}=2 \rightarrow {\cal N}=1$ supergravity truncations\cite{Andrianopoli:2001zh,Andrianopoli:2001gm}, where the equivalent projection operators are constant and, therefore, some scalars are fixed by the condition $\Pi^I_J X^J =0$.} \begin{equation}\label{Xcond} X^{\hat I}= \Pi^I_J \Gamma^{J}_K X^K =0~, \end{equation} We can now take the ${\cal N}=1$ supersymmetry variation of \eqref{NoneGaugino} (to lowest fermionic order), use \eqref{trunc-gaugino}, insert the definition of $W^{i{ 21}}$ \eqref{susytrans3}, and compare the result with the standard ${\cal N}=1$ expression \eqref{NoneGauginoVar} to read off the D-term: \begin{eqnarray}\begin{aligned} {\cal D}^{\hat I} ~=&~ 2\mathrm{i}e^{K^{\rm v}/2} \nabla_{i} X^{\hat I} W^{i{ 21}} \nonumber \\ ~=&~-2 e^{K^{\rm v}} \nabla_{i} X^{\hat I} g^{i\bar \jmath}\, \nabla_{\bar \jmath}\bar {X}^{\hat J}\left( \Theta_{\hat J}^{~~\lambda} - {\cal N}_{\hat J \hat K} \Theta^{\hat K\lambda}\right) P_{\lambda}^3 ~, \label{Dterm} \end{aligned} \end{eqnarray} where we have used $\nabla_{i} F_{\hat J} = {\cal F}_{\hat J \hat K}\nabla_{i} X^{\hat K}$ in the second line. In order to see that this expression agrees with the standard ${\cal N}=1$ D-term \eqref{NoneDterm}, we again make use of \eqref{special} and \eqref{Xcond} to see that it can be written as \begin{equation}\label{D_terms} {\cal D}^{\hat I} = - (\mathrm{Re} f)^{-1 ~\hat I \hat J} \left( \Theta_{\hat J}^{~~\lambda} - \iu \bar{f}_{\hat J \hat K} \Theta^{\hat K\lambda}\right) P_{\lambda}^3~. \end{equation} Therefore we can identify the ${\cal N}=1$ Killing prepotential as follows \begin{equation}\label{N=1prepotential} {\cal P}_{\hat J} = \tfrac{1}{2} \left( \Theta_{\hat J}^{~~\lambda} - \iu \bar{f}_{\hat J \hat K} \Theta^{\hat K\lambda}\right) P_{\lambda}^3~. \end{equation} If we now consider gaugings with respect to just the Killing vectors $k_1$ and $k_2$ responsible for partial supersymmetry breaking, we see that the D-term vanishes by our ${\cal N}=1$ supersymmetry condition \eqref{P3}, as expected for a supersymmetric vacuum. Note that both the D-terms \eqref{D_terms} and the Killing prepotentials \eqref{N=1prepotential} are complex, in agreement with the analogous results from ${\cal N}=1$ truncations \cite{D'Auria:2005yg,Cassani:2007pq}. The reason is that these quantities appear in the supersymmetry variations of the gaugini in \eqref{trunc-gaugino} which are paired with the (complexified) anti-self-dual field strengths $\tilde G_{\mu\nu}^{i -}$. Therefore, \eqref{D_terms} describes a complex linear combination of the electric and the magnetic D-terms. 
More precisely, from \eqref{N=1prepotential} we see that the electric and magnetic Killing prepotentials of the ${\cal N}=1$ theory are given by $\frac{1}{2} \Theta_{\hat J}^{~~\lambda} P_{\lambda}^3$ and $\frac{1}{2} \Theta^{\hat K\lambda} P_{\lambda}^3$.\footnote{We thank D.\ Cassani and G.\ Dall'Agata for useful discussions on this point.} Before we close this section let us note that one can also check that the supersymmetry transformation of the ${\cal N}=1$ fermions in chiral multiplets that descend from the ${\cal N}=2$ gaugini $\lambda^{i1}$ (cf.~\eqref{trunc-fermion}) correctly reproduces the F-terms. Furthermore, one might expect that it is necessary to take field redefinitions of the gaugini and the hyperini with respect to the Goldstino, such that we can rewrite the fermionic Lagrangian in terms of physical fermions, i.e.\ fermions that cannot be gauged away by further field redefinitions of the massive gravitino $\Psi_{\mu 2}$ \cite{Gunara:2003td}. However, it is straightforward to check that any such field redefinitions are projected out when one identifies the ${\cal N}=1$ fields as in \eqref{NoneGaugino}. In other words, the ${\cal N}=1$ fermionic field space is defined by quotienting the ${\cal N}=2$ counterpart by the Goldstino direction. This completes our analysis of the low-energy effective theory in the partial supersymmetry breaking vacua of ${\cal N}=2$ gauged supergravity with electric and magnetic charges. We have proven that this theory enjoys ${\cal N}=1$ supersymmetry, as is required for the consistency of the partial supersymmetry breaking mechanism. We shall now focus on the class of special quaternionic-K\"ahler manifolds. \section{Special quaternionic-K\"ahler manifolds} \label{section:SQC} In this section we will provide an explicit example of the results of Section \ref{section:None} by deriving the ${\cal N}=1$ effective action for the class of supergravities that arise at string tree-level in type II compactifications. In this case the $4n_{\rm h}$-dimensional quaternionic-K\"ahler manifold ${\bf M}_{\rm h}$ takes a special form, in that its metric is entirely determined in terms of the holomorphic prepotential of a $(2n_{\rm h}-2)$--dimensional special-K\"ahler submanifold ${\bf M}_{\rm sk}$. Such a manifold ${\bf M}_{\rm h}$ is called special quaternionic-K\"ahler and the construction of its metric is known as the c-map \cite{Cecotti:1988qn,Ferrara:1989ik}. In \cite{Louis:2009xd} we showed that ${\cal N}=1$ vacua generically exist for this subclass of quaternionic K\"ahler manifolds. In the following we will determine the K\"ahler potential, the superpotential and the D-terms of the corresponding effective action. Let us denote the complex coordinates of ${\bf M}_{\rm sk}$ by $z^a, a=1,\ldots,n_{\rm h}-1,$ its K\"ahler potential by $K^{\rm h}(z,\bar z)$ and the holomorphic prepotential by ${\cal G}(z)$. The remaining scalars in the hypermultiplets are the dilaton $\phi$, the axion $\ax$ and $2n_{\rm h}$ real Ramond-Ramond scalars $\xi^A, \tilde\xi_A, A=1,\ldots,n_{\rm h}$. Together they define a $G$-bundle over ${\bf M}_{\rm sk}$, where $G$ is the semidirect product of a $(2n_{\rm h}+1)$-dimensional Heisenberg group with $\mathbb{R}$. The Killing vectors corresponding to the action of $G$ can be used to construct ${\cal N}=1$ solutions \cite{Louis:2009xd}. 
In \cite{Ferrara:1989ik} it was observed that there is a specific parametrisation of the quaternionic vielbein $\mathcal U^{\mathcal A\alpha}_u$, defined in \eqref{Udef}, which reads (our notation follows \cite{Cassani:2007pq,Louis:2009xd}) \begin{equation} \label{quat_vielbein} \mathcal U^{\mathcal A\alpha}= \mathcal U^{\mathcal A\alpha}_u \diff q^u =\tfrac{1}{\sqrt{2}} \left(\begin{aligned} \bar{u} && \bar{e} && -v && -E \\ \bar{v} && \bar{E} && u && e \end{aligned}\right) \ , \end{equation} where the one-forms are defined by \begin{equation} \label{one-forms_quat} \begin{aligned} u ~= &~ \iu \e^{K^{\rm h}/2+\phi}Z^A(\diff \tilde\xi_A - \mathcal M_{AB} \diff \xi^B) \ , \\ v ~= &~ \tfrac{1}{2} \e^{2\phi}\big[ \diff \e^{-2\phi}-\iu (\diff \ax +\tilde\xi_A \diff \xi^A-\xi^A \diff \tilde \xi_A ) \big] \ , \\ E^{\,\underline{b}} ~= &~ -\tfrac{\iu}{2} \e^{\phi-K^{\rm h}/2} {{\Pi}}_A^{\phantom{A}\underline{b}} (\Im \mathcal G)^{-1\,AB}(\diff \tilde\xi_B - \mathcal M_{BC} \diff \xi^C) \ , \\ e^{\,\underline{b}} ~= &~ {{\Pi}}_A^{\phantom{A}\underline{b}} \diff Z^A \ . \\ \end{aligned} \end{equation} Here $Z^A$ are the homogeneous coordinates of ${\bf M}_{\rm sk}$, ${{\Pi}}_A^{\phantom{A}\underline{b}} =(-e_a^{\phantom{a}\underline{b}}Z^a, e_a^{\phantom{a}\underline{b}})$ is defined using the vielbein $e_a^{\phantom{a}\underline{b}}$ on ${\bf M}_{\rm sk}$ and $\mathcal M_{AB}$ is computed from the prepotential ${\cal G}$ exactly as ${\cal N}_{IJ}$ is determined by ${\cal F}$ in \eqref{Ndef}. The metric $h_{uv}$ on ${\bf M}_{\rm h}$ is \begin{equation} h = \left[ v \otimes \bar v + u \otimes \bar u + E \otimes \bar E + e \otimes \bar e \right]_{\rm sym} \ . \end{equation} Given the explicit form of the vielbein \eqref{quat_vielbein}, the $SU(2)$ connections $\omega^x$ read \cite{Ferrara:1989ik} \begin{equation}\label{quat_connection} \begin{aligned} \omega^1 ~= &~ \iu (\bar u- u) \ , \qquad \omega^2 ~= ~ u + \bar u \ , \\ \omega^3 ~= &~ \tfrac{\iu}{2} (v-\bar v) - \iu \e^{K^{\rm h}} \left(Z^A (\Im \mathcal G)_{AB} \diff \bar Z^B - \bar Z^A (\Im \mathcal G)_{AB} \diff Z^B \right) \ . \end{aligned} \end{equation} As already anticipated, the metric of ${\bf M}_{\rm h}$ has $(2n_{\rm h}+2)$ isometries generated by the Killing vectors \begin{equation}\label{Killing} \begin{aligned} \hat{k}_{\phi} ~= &~ \tfrac{1}{2} \frac{\partial}{\partial \phi} - \ax \frac{\partial}{\partial \ax} - \tfrac{1}{2} \xi^A \frac{\partial}{\partial \xi^A} - \tfrac{1}{2} \tilde \xi_A \frac{\partial}{\partial \tilde \xi_A} \ , \\ k_{\tilde \phi} ~= &~ - 2 \frac{\partial}{\partial \ax} \ , \\ k_A ~= &~ \frac{\partial}{\partial \xi^A} + \tilde \xi_A \frac{\partial}{\partial \ax} \ , \\ \tilde k^A ~= &~ \frac{\partial}{\partial \tilde \xi_A} - \xi^A \frac{\partial}{\partial \ax} \ . \end{aligned} \end{equation} The corresponding Killing prepotentials $P^x_\lambda$, defined in \eqref{Pdef}, take the following simple form \cite{Michelson:1996pn,Cassani:2007pq} \begin{equation} \label{prepotential_no_compensator} P^x_\lambda = \omega^x_u k_\lambda^u \ . \end{equation} After these preliminaries, we can explicitly compute the couplings of the ${\cal N}=1$ effective action. However, it will be necessary to discuss Minkowski and AdS backgrounds separately. Let us start with the Minkowski case. 
\subsection{Minkowski vacua} \label{section:Min_special} In \cite{Louis:2009xd} we showed that the two Killing vectors needed for partial supersymmetry breaking are given by \begin{equation} \label{Killingv} \begin{aligned} k_1 =& \ \Im\big(D^A (k_A + {\cal G}_{AB} \tilde k^B )\big) +\Im\big(D^A ({\tilde \xi}_A - {\cal G}_{AB} \xi^B )\big) k_{\tilde \phi} \ , \\ k_2=&\ \Re\big(D^A (k_A + {\cal G}_{AB} \tilde k^B )\big)+\Re\big(D^A ({\tilde \xi}_A - {\cal G}_{AB} \xi^B )\big) k_{\tilde \phi} \ , \end{aligned}\end{equation} where $k_A, \tilde k^B$ and $k_{\tilde \phi}$ are defined in \eqref{Killing} and $D^A$ is a complex vector obeying \begin{equation} \bar D^A (\Im {\cal G})_{AB} D^B = 0 \ . \end{equation} Furthermore, the prefactors in \eqref{Killingv} have to be constant in order for $k_1$ and $k_2$ to be Killing vectors, i.e.\ \begin{equation} \label{constant_Min} D^A = \textrm{const.} \ , \qquad D^A {\cal G}_{AB} = \textrm{const.} \ , \qquad D^A ({\tilde \xi}_A - {\cal G}_{AB} \xi^B ) = \textrm{const.} \ . \end{equation} The scalars that obey \eqref{constant_Min} define the ${\cal N}=1$ locus, while those violating \eqref{constant_Min} have a mass of ${\cal O}(m_{3/2})$. In the ${\cal N}=1$ effective action all such massive fields are integrated out, which corresponds to setting the variation of the prefactors in \eqref{Killingv} to zero. From the third equation in \eqref{constant_Min} we see that only two coordinates in the fibre are stabilised. For the base coordinates the second equation in \eqref{constant_Min} implies \begin{equation}\label{deformSQ} {\cal G}_{ABC} D^B \delta Z^C = 0 \ . \end{equation} Analogously to \eqref{deformV}, for generic ${\cal G}$ this gives $n_{\rm h}-1$ complex conditions and thus stabilises all coordinates of ${\bf M}_\textrm{sk}$. A special case occurs when the prepotential ${\cal G}$ is quadratic, which corresponds to \begin{equation} {\bf M}_\textrm{sk}=\frac{SU(1,n_{\rm h}-1)}{SU(n_{\rm h}-1)} \ , \qquad {\bf M}_\textrm{h}=\frac{U(n_{\rm h},2)}{U(n_{\rm h})\times U(2)} \ . \end{equation} Then \eqref{deformSQ} is trivially satisfied and all base coordinates together with $2n_{\rm h}-2$ fibre coordinates descend to a total of $4n_{\rm h}-4$ light scalar fields in the ${\cal N}=1$ theory. In contrast, for a generic cubic prepotential the condition \eqref{deformSQ} stabilises all base coordinates, leaving an ${\cal N}=1$ scalar field space of dimension $2n_{\rm h}-2$.\footnote{It is known that ${\bf M}_\textrm{sk}$ admits shift isometries for the imaginary parts of the $z^a$, and therefore one might expect the $z^a$ to also be massless. However, these isometries generically induce symplectic transformations on the vector of fibre coordinates $\xi_{\tilde \lambda} = (\tilde \xi_A,\xi^A)$, see \cite{deWit:1992wf}. If $k_1$ and $k_2$ transform non-trivially under these symplectic transformations, the corresponding symmetries are broken by the gaugings and the related scalars get a mass, as indicated by the condition \eqref{deformSQ}. This is analogous to the discussion of isometries on ${\bf M}_{\rm v}$, see Section \ref{section:scales}.} Let us now determine the couplings of the ${\cal N}=1$ effective theory. In order to apply the procedure developed in the previous section we should first check that the $SU(2)$ gauge choice \eqref{P3} holds. 
By inserting \eqref{Killingv} into \eqref{prepotential_no_compensator} we find \begin{equation}\begin{aligned} P^3_1 ~= &~ \e^{2\phi} \Im D^A\Big( (\tilde \xi_A - \hat {\tilde \xi}_A) - {\cal G}_{AB}(\hat z) (\xi^B - \hat \xi^B) \Big) \ , \\ P^3_2 ~= &~ \e^{2\phi} \Re D^A\Big( (\tilde \xi_A - \hat {\tilde \xi}_A) - {\cal G}_{AB}(\hat z) (\xi^B - \hat \xi^B) \Big) \ , \end{aligned}\end{equation} where the coordinates $(\hat {\tilde \xi}_A,\hat \xi^A)$ and $\hat z$ parametrise the ${\cal N}=1$ locus, while $(\tilde \xi_A,\xi^A)$ also include the massive scalars. We see that in the ${\cal N}=1$ locus $P^3_{1,2} = 0$ indeed holds, but $\diff P^3_{1,2} = 0$ is not fulfilled. More precisely, the one-forms \begin{equation}\label{dP3_Mink}\begin{aligned} \diff P^3_{1} ~= &~ \e^{2\phi}\Im \Big( D^A (\diff \tilde \xi_A - {\cal G}_{AB}(z) \diff \xi^B ) \Big) \ , \\ \diff P^3_{2} ~= &~ \e^{2\phi}\Re \Big( D^A (\diff \tilde \xi_A - {\cal G}_{AB}(z) \diff \xi^B ) \Big) \end{aligned}\end{equation} point in the direction of the massive scalars in the fibre. Integrating out these scalars automatically sets $\diff P^3_{1,2} = 0$ and we recover \eqref{P3}. Let us now compute the K\"ahler two-form and the K\"ahler potential. Using \eqref{Kform}, the $SU(2)$ connections \eqref{quat_connection} and the results for the exterior derivatives of the one-forms \eqref{one-forms_quat} given in \cite{Ferrara:1989ik}, we find \begin{equation}\label{K_Mink}\begin{aligned} \hat{K} = \diff \omega^3 = \iu (v \wedge \bar v + u \wedge \bar u + E \wedge \bar E - e \wedge \bar e) \end{aligned}\end{equation} for the K\"ahler two-form. In order to compute the K\"ahler potential, we use \eqref{quat_connection} to determine the holomorphic component of $\omega^3$ to be $\omega^3_a=\tfrac{\iu}{2} (v_a - \partial_a K^{\rm h}) $. Inserting this into \eqref{Kahlerconnection} and integrating finally yields \begin{equation}\label{special_Kpot} \hat{K} = K^{\rm h}(\hat z, \bar {\hat z}) + 2 \phi \ . \end{equation} The K\"ahler potential $\hat{K}$ given in \eqref{special_Kpot} is still expressed in terms of the original ${\cal N}=2$ field variables. We can find the corresponding holomorphic coordinates by starting from the superconformal theory, modding out $k_1$ and $k_2$ as a K\"ahler quotient and projecting at the same time $SU(2)\to U(1)$ in the fibre, as explained at the end of Section \ref{section:NoneK}. This gives a rigid K\"ahler space that is a $U(1)\times \mathbb{R}_+$ fibration over ${\hat{\M}}_{\rm h}$. Inspired by the holomorphic coordinates on the hyper-K\"ahler cone (and the corresponding twistor space) we make the following ansatz \cite{deWit:1997vg,deWit:1998zg,Rocek:2005ij}: \begin{equation} \label{holcoords_Min}\begin{aligned} w^0 ~= &~\e^{-2\phi} + \iu (\tilde \phi + \xi^A (\tilde \xi_A - {\cal G}_{AB} \xi^B)) \ , \\ w_A ~= &~ -\iu (\tilde \xi_A - {\cal G}_{AB} \xi^B) \ , \end{aligned}\end{equation} together with the manifestly holomorphic base coordinates $\hat z^a$. As discussed in Appendix~\ref{section:holcoords}, the coordinates $(z^a, w^0, w_A)$ form complex coordinates with respect to the integrable complex structure $J^3$ on ${\bf M}_{\rm h}$. The third condition in \eqref{constant_Min} then reads \begin{equation} \label{constant_Min_2} D^A w_A = \textrm{const.} \ , \end{equation} which is a holomorphic equation on ${\bf M}_{\rm h}$. 
On the quotient ${\hat{\M}}_{\rm h}$ the coordinates \eqref{holcoords_Min} form equivalence classes under shifts by $k_1$ and $k_2$, i.e.\ under \begin{equation}\label{equivalence_Min} \begin{aligned} w^0 ~\sim & ~ w^0 - 2 \lambda \bar D^A w_A + 2 \lambda \bar D^A \bar w_A \ , \\ w_A ~\sim & ~ w_A + \iu \lambda {\cal G}_{AB} \bar D^B - \iu \lambda \bar {\cal G}_{AB} \bar D^B \ , \end{aligned}\end{equation} where $\lambda \in \mathbb{C}$ and in both equivalence relations the first shift is holomorphic in the coordinates and the second one is constant due to \eqref{constant_Min_2}. In Appendix \ref{section:holcoords} we show that the coordinates $(w^0,w_A, \hat z^a)$ together with the constraint \eqref{constant_Min} and the identification \eqref{equivalence_Min} give a set of holomorphic coordinates with respect to $\hat J$. Now we can express $\phi$ and the K\"ahler potential $\hat{K}$ in \eqref{special_Kpot} in terms of these holomorphic coordinates via \cite{Rocek:2005ij} \begin{equation} \phi = - \tfrac12 \ln \big( ( w^0 +\bar w^0) + (w_A + \bar w_A) (\Im{\cal G})^{-1\, AB} (w_B + \bar w_B) \big) \ . \end{equation} So far we have only considered the Killing vectors $k_{1,2}$ given in \eqref{Killingv}. Now let us assume that there are additional gaugings for the remaining Killing vectors $k_{\tilde \lambda} = (k_A, \tilde k^A)$ and $k_{\tilde \phi}$, at a scale well below $m_{3/2}$. The superpotential generated by these can be found by inserting the Killing prepotentials \eqref{prepotential_no_compensator} into the general expression \eqref{Wone}, from which we find \begin{equation} \label{superpot_Min} {\cal W} = 2 V^\Lambda \Theta^{\phantom{\Lambda}\tilde\lambda}_\Lambda U_{\tilde\lambda} \ , \end{equation} where $U_{\tilde\lambda} = (Z^A , {\cal G}_A)$. We see that ${\cal W}$ is manifestly holomorphic, consistent with our proof in Section \ref{section:NoneW}. Furthermore, it depends on the scalars from both the vector- and hypermultiplet sectors. The D-terms are obtained by insertion of $P^3$ into \eqref{D_terms}. They read \begin{equation}\label{D_terms_Min} {\cal D}^{\hat I} = - \e^{2\phi} (\mathrm{Re} f)^{-1 ~\hat I \hat J} \left( \left( \Theta_{\hat J}^{~~\bar \lambda} - \iu \bar{f}_{\hat J \hat K} \Theta^{\hat K\bar \lambda}\right) \xi_{\bar \lambda} - \left( \Theta_{\hat J}^{~~\tilde \phi} - \iu \bar{f}_{\hat J \hat K} \Theta^{\hat K \tilde \phi}\right) \right) \ . \end{equation} \subsection{AdS vacua} \label{section:AdS_special} Let us now determine the K\"ahler potential and the superpotential of effective ${\cal N}=1$ theories that have AdS ground states. In this case the ${\cal N}=2$ supersymmetry parameter that preserves the ${\cal N}=1$ supersymmetry has the form \cite{Louis:2009xd,Grana:2006kf} \begin{equation} \label{unbroken_SUSY_AdS} \epsilon^{\cal A}_1 = (\e^{\iu \varphi/2}, \e^{-\iu \varphi/2})\, \epsilon \ , \end{equation} where $\epsilon$ is the ${\cal N}=1$ generator and $\varphi$ is an arbitrary phase. In order to use the expressions of the previous section, we first perform an $SU(2)$-rotation given by \begin{equation} \epsilon^{\cal A}\to {M}^{\cal A}_{\phantom{\cal A} \cal B}\epsilon^{\cal B}\ ,\qquad \textrm{where}\qquad {M}^{\cal A}_{\phantom{\cal A} \cal B}= \tfrac{1}{\sqrt{2}}\left( \begin{aligned} \e^{\iu\varphi/2} && - \e^{\iu\varphi/2} \\ \e^{-\iu\varphi/2} &&\e^{-\iu\varphi/2} \end{aligned}\right) \ . 
\end{equation} This in turn rotates the Killing prepotentials according to \begin{equation} \label{twisted_prepotentials} P^-_{1,2} \to \tilde P^-_{1,2} = \iu \Im(\e^{\iu \varphi} P^-_{1,2}) - P^3_{1,2} \ , \end{equation} and the connection to \begin{equation} \label{vanishingP3} \omega^3 \to \tilde \omega^3 = \Re(\e^{\iu \varphi} \omega^-) = 2 \Im(\e^{\iu \varphi} u) \ . \end{equation} We have shown in \cite{Louis:2009xd} that the conditions for partial supersymmetry breaking in an AdS ground state are solved by the two Killing vectors \begin{equation}\label{kv_AdS} k_1 = \Re \Big( \e^{\iu\varphi} (Z^A k_A + {\cal G}_A \tilde k^A)\Big) \ , \qquad k_2= k_{\tilde \phi} \ . \end{equation} The prefactors of $(k_A,\tilde k^A)$ should again be constant in the ${\cal N}=1$ locus and therefore all coordinates $z^a$ of the base space ${\bf M}_{\rm sk}$ are stabilised. It is straightforward to check that the $SU(2)$ gauge choices $\tilde P^3_{1,2}=0$ and $\diff \tilde P^3_{1,2}=0$ hold. We can now use \eqref{Kdef} and \eqref{vanishingP3} to compute the K\"ahler two-form $\hat{K}$ on ${\hat{\M}}_{\rm h}$, finding \begin{equation} \label{K_2form_AdS}\begin{aligned} \hat{K} =& \ 2\Im\big(\e^{\iu \varphi} u \big) \wedge \Re v - 2 \Im\Big(\e^{\iu \varphi}\bar E \wedge e \Big) \\ &+ 2 \iu \e^{K^{\rm h}} \Re \big(\e^{\iu \varphi} u\big) \wedge \Big(Z^A (\Im \mathcal G_{AB})\diff \bar Z^B - \bar Z^A (\Im \mathcal G_{AB}) \diff Z^B \Big) \ . \end{aligned}\end{equation} With the help of the associated complex structure we then identify the holomorphic part of $\tilde \omega^3$ to be $\tilde \omega^3_a= 2 (\Im(\e^{\iu \varphi} u_a) - \iu (v+\bar v)_a)$. Inserting this into \eqref{Kahlerconnection} leads to the K\"ahler potential \begin{equation}\label{special_Kpot_AdS} \hat{K} = 4 \phi \ . \end{equation} Analogously to the Minkowski case, one can find holomorphic coordinates on ${\hat{\M}}_{\rm h}$ by going to the corresponding superconformal theory. We use the ansatz \cite{Neitzke:2007ke} \begin{equation} \label{holcoords_AdS} w_{\tilde \lambda} ~=~ \xi_{\tilde \lambda} + 2 \iu \Im \Big(\e^{K^{\rm h}/2-\phi+\iu \varphi} U_{\tilde\lambda} \Big) \ , \end{equation} where $\xi_{\tilde \lambda}=(\xi^A,\tilde \xi_A)$ and $U_{\tilde\lambda}=(Z^A,{\cal G}_A)$. We will see below that this leads to holomorphic coordinates with respect to $\hat J$ if one imposes the equivalence relation \begin{equation} \xi_{\tilde \lambda} ~\sim~ \xi_{\tilde \lambda} + \lambda \Re \Big( \e^{\iu\varphi} U_{\tilde\lambda} \Big) \ , \end{equation} for any real number $\lambda$. In terms of $w_{\tilde \lambda}$, the K\"ahler potential \eqref{special_Kpot_AdS} is expressed as \begin{equation}\label{K_AdS} \hat{K} = -2 \ln \left( \tfrac14 \Im w_{\tilde \lambda} G^{\tilde \lambda \tilde \rho} \Im w_{\tilde \rho} \right) \ , \end{equation} where $G^{\tilde \lambda \tilde \rho}$ is the well-known matrix \cite{Ceresole:1995ca} \begin{equation} G^{\tilde \lambda \tilde \rho} = \left( \begin{array}{ccc} (\Im {\cal G})_{AB} + (\Re {\cal G})_{AC}(\Im {\cal G})^{-1\,CD}(\Re {\cal G})_{DB} && -(\Re {\cal G})_{AC} (\Im {\cal G})^{-1\, CB} \\ - (\Im {\cal G})^{-1\, AC} (\Re {\cal G})_{CB} && (\Im {\cal G})^{-1\, AB} \end{array} \right) \ . 
\end{equation} Inserting the Killing prepotentials \eqref{twisted_prepotentials} and \eqref{prepotential_no_compensator} into the general expression for the superpotential \eqref{Wone}, we obtain \begin{equation}\label{superpot_AdS} {\cal W} = {\cal W}_0 + V^\Lambda \Theta^{\tilde \lambda}_\Lambda w_{\tilde \lambda} \ , \end{equation} where ${\cal W}_0$ is constant and related to the cosmological constant via $\mu = \e^{K^{{\cal N}=1}/2} {\cal W}_0 $, with $K^{{\cal N}=1}$ evaluated at the ${\cal N}=1$ point. In Section \ref{section:NoneW} we already showed that the superpotential ${\cal W}$ is holomorphic with respect to $\hat J$. Since all coordinates $w_{\tilde \lambda}$ of $\hat{\M}_{\rm h}$ appear in \eqref{superpot_AdS} as coefficients of $\Theta^{\tilde \lambda}_\Lambda$, we can conclude that these coordinates are indeed holomorphic with respect to $\hat J$. Before we continue, let us consider the case with just $k_{1,2}$ gauged such that ${\cal W} = {\cal W}_0$. Then the ${\cal N}=1$ F-term condition implies that $\hat{K}$ is extremal at the supersymmetric minimum and all scalars appearing in $\hat{K}$ given in \eqref{K_AdS}, i.e.\ all $\Im w_{\tilde \lambda}$, are stabilised. From \eqref{holcoords_AdS} we then see that the dilaton and all base coordinates are stabilised, consistent with the discussion below \eqref{kv_AdS}. The D-terms are obtained by insertion of \eqref{prepotential_no_compensator} and \eqref{vanishingP3} into \eqref{D_terms}, resulting in \begin{equation}\label{D_terms_AdS}\begin{aligned} {\cal D}^{\hat I} =& - \e^{2 \phi} (\mathrm{Re} f)^{-1 ~\hat I \hat J} \left( \Theta_{\hat J}^{~~\tilde \lambda} - \iu \bar{f}_{\hat J \hat K} \Theta^{\hat K\tilde \lambda}\right) \Re(\e^{\iu \varphi} U_{\tilde \lambda}) \\ =& - \e^{2 \phi} (\mathrm{Re} f)^{-1 ~\hat I \hat J} \left( \Theta_{\hat J}^{~~\tilde \lambda} - \iu \bar{f}_{\hat J \hat K} \Theta^{\hat K\tilde \lambda}\right) G_{\tilde \lambda}^{~~ \tilde \rho} \Im(w_{\tilde \rho})\ , \end{aligned}\end{equation} where we lowered one index in $G_{\tilde \lambda}^{~~ \tilde \rho}$ by using the standard symplectic metric. Finally, let us note that the K\"ahler potential $\hat{K}$ \eqref{special_Kpot_AdS} also coincides with the expression obtained in orientifold truncations of the type II compactifications considered in \cite{Benmachiche:2006df} (see also \cite{Cassani:2007pq} and references therein). This is expected from the form of $S_{\cal AB}$ in supergravities with special quaternionic-K\"ahler ${\bf M}_{\rm h}$ when the unbroken ${\cal N}=1$ supersymmetry generator has the form \eqref{unbroken_SUSY_AdS} \cite{Grana:2005ny,Grana:2006hr}. Furthermore, \eqref{superpot_AdS} is similar to the superpotential derived in \cite{Shelton:2006fd,Aldazabal:2006up,Micu:2007rd,Cassani:2007pq} for ${\cal N}=1$ truncations of ${\cal N}=2$ supergravity, up to the directions we have integrated out. \section{Conclusions}\label{Conc} We have derived the ${\cal N}=1$ low-energy effective action of partially broken ${\cal N}=2$ gauged supergravity. We first kept the analysis as general as possible, in that we only assumed the existence of maximally-symmetric ${\cal N}=1$ backgrounds without further specifying any particular supergravity. This implies that the ${\cal N}=2$ spectrum has to contain electrically and magnetically charged hypermultiplets arising from two commuting isometries on the quaternionic-K\"ahler manifold ${\bf M}_{\rm h}$. 
The corresponding Killing vectors can be combined into one complex Killing vector which has to be holomorphic with respect to one of the three almost complex structures of ${\bf M}_{\rm h}$. For this class of supergravities we explicitly computed the couplings of the ${\cal N}=1$ low-energy effective action in terms of the original ${\cal N}=2$ `data' and showed their consistency with the general constraints of ${\cal N}=1$ supersymmetry. The main issue in checking the ${\cal N}=1$ supersymmetry of the low-energy effective theory is related to the necessary K\"ahler property of the scalar field space. Although the component ${\bf M}_{\rm h}$ of the original ${\cal N}=2$ field space is not a K\"ahler manifold, the quotient $\hat{\M}_{\rm h}$ arising from integrating out the two heavy gauge bosons is K\"ahler. The dimension of this quotient depends on the details of the theory, but can be as large as $(4n_{\rm h}-2)$, where only the two Goldstone bosons have been removed. However, generically a large number of moduli are fixed, leaving a low-dimensional ${\cal N}=1$ field space. This differs from truncated theories where the scalar field space is a submanifold of maximal dimension $2n_{\rm h}$, in agreement with the mathematical results of \cite{Alekseevsky}. Thus, our quotient construction is an interesting mathematical result in itself, which we shall further expand on in a companion paper \cite{Cortes}. Once the K\"ahler structure is identified it is relatively straightforward to also check the holomorphicity of the superpotential, which we confirmed in Section~\ref{section:NoneW}. We found that the holomorphicity of the gauge couplings was a consequence of integrating out the graviphoton, which is necessarily one of the heavy gauge bosons. Similarly, in Section \ref{section:NoneD} we saw that the restriction to the light gauge bosons led to the correct form of the ${\cal N}=1$ D-term. Finally, in Section \ref{section:SQC} we gave an example of our construction by deriving the ${\cal N}=1$ K\"ahler potential, the superpotential and the D-terms arising from partial supersymmetry breaking in an ${\cal N}=2$ supergravity with a special quaternionic-K\"ahler manifold for both Minkowski and AdS backgrounds. For this example we argued that a large number of moduli are stabilised. \vskip 1cm \subsection*{Acknowledgements} This work was supported by the German Science Foundation (DFG) under the Collaborative Research Center (SFB) 676. We have greatly benefited from conversations and correspondence with P.\ Berglund, D.\ Cassani, S.\ Cecotti, V.\ Cort\'es, G.\ Dall'Agata, C.\ Scrucca, S.~Vandoren and E.\ Zaslow. \vskip 1cm
\section{Introduction} \label{sec:Intro} A wide range of scientific and engineering problems involve multiple scales due to the heterogeneity of the media properties. Direct numerical simulation for multiscale problems, such as multiscale elliptic equations, is typically computationally demanding due to the finescale fluctuations of the media properties. A major effort has been made in the past decades to approximate a multiscale equation by the corresponding homogenized equation, whose coefficient, known as the homogenized coefficient or G-limit \cite{spagnolo1967sul,spagnolo1976convergence}, does not depend on the fine scale. The resulting solution is referred to as the homogenized solution. However, deriving the homogenized equations requires the computation of the G-limits, which is a difficult task for general problems. For standard periodic or locally periodic problems, there are several homogenization methods to find the G-limits, such as two-scale and multiscale convergence \cite{allaire1992homogenization, allaire1996multiscale}, but they can be computationally demanding as they often involve a large number of local problem computations. Additionally, if the periodicity assumption does not hold, the standard homogenization methods are not directly applicable, and non-trivial extensions are usually needed, if they are possible at all. As a result, deriving the homogenized models from first principles remains challenging for general homogenization problems. Alternatively, there has been a surge of interest in data-driven learning of the effective macroscale model from available measurements or simulated data. In \cite{you2021data}, coarse-grained nonlocal models are learned from synthetic high-fidelity (multiscale) data by recovering the sign-changing kernels. In \cite{chen2020physics}, physics-informed neural networks (PINNs) were employed to retrieve the effective permittivity parameters from scattering data in inverse scattering electromagnetic problems. In \cite{arbabi2020linking}, a neural network algorithm coupled with an equation-free method has been developed to approximate the homogenized solution of a time-dependent multiscale problem using simulated multiscale solution data. Regarding the homogenization of multiscale elliptic equations, several inversion approaches related to homogenization problems have been developed in the past several years. The authors in \cite{frederick2014numerical, abdulle2020bayesian, abdulle2020ensemble} recovered multiscale coefficients from (noisy) multiscale solution data using corresponding homogenized models based on a numerical homogenization technique, the finite element heterogeneous multiscale method (FE-HMM), to reduce the computational cost of their forward problems. A Bayesian estimation approach has been developed to reconstruct the slowly varying parts of the multiscale coefficients from noisy measurements of multiscale solution data in \cite{nolen2009fine}. Nonetheless, the majority of the existing methods assume that the multiscale coefficients are periodic. More general multiscale coefficients, such as non-periodic coefficients, are considered in \cite{gulliksson2016separating}. The authors separated the oscillations of the multiscale coefficients from their weak $L^2$ limits and recovered the part of the G-limits arising from the contributions of the oscillations. However, they required the multiscale coefficients to be known during the inversion stage. 
In addition, the existing inversion methods often require specialist knowledge, such as numerical multiscale methods or homogenization theory, which can be difficult for application practitioners. These limitations motivate the development of simple and flexible algorithms for the homogenization of multiscale elliptic equations with scale separations in more general settings. The goal in this paper is to develop a simple and flexible framework to learn the G-limits and corresponding homogenized solutions simultaneously for multiscale elliptic equations, given multiscale solution data. Unlike other approaches, our approach does not require the periodicity of the multiscale coefficient or a known multiscale coefficient during the learning stage. Instead, we assume that the (simulated or measured) solution data of the multiscale equations are available and that the structure of the corresponding homogenized equations is known. We mainly consider the following two possible scenarios: \begin{center} \begin{itemize} \item {\it Noise-free data}: In this scenario, we assume that the traditional homogenization methods may not be applicable, e.g., in non-periodic cases, but the multiscale solution data can be generated by an existing forward solver of the multiscale problem with a known multiscale coefficient. Our goal is to estimate the corresponding G-limit and the homogenized solution. \item {\it Noisy data:} In this case, we consider that noisy multiscale solution data (from a specific medium with a fixed finescale size $\epsilon$) can be collected by sensors. We aim to learn the G-limit of the unknown multiscale coefficient and the corresponding homogenized solution, as they can serve as good approximations to the effective behavior of the multiscale coefficient when $\epsilon$ is sufficiently small. \end{itemize} \end{center}
Specifically, we adopt an emerging scientific machine learning framework, physics-informed neural networks (PINNs), for our problem. They have been successfully used for approximating solutions to both forward and inverse problems involving PDEs \cite{lu2019deepxde,chen2020physics,raissi2019physics}. One key component of PINNs is to provide neural network approximations to the solutions of forward or inverse problems by incorporating prior physics knowledge into the loss functional. This feature turns out to be beneficial for our current setting. Since the multiscale solution data often contain rapid oscillations or noise, estimating the G-limits and homogenized solutions from the multiscale or random fluctuations is a fundamental challenge. 
To address these issues, we train the neural networks to approximate the G-limit and the corresponding homogenized solution of the elliptic homogenized equations based on the multiscale solution data. By incorporating the corresponding homogenized equation into the loss function, PINNs can encourage the neural network to capture the slowly varying parts of the multiscale solution data. It is worth noting that collecting a large amount of multiscale solution data containing sufficient finescale information is in general difficult, especially when the finescale parameter $\epsilon$ is very small. In addition, the measurements by sensors are often corrupted by noise that dominates the finescale fluctuations. Nonetheless, we found that our approach does not require dense sampling of the multiscale solution data in space in order to retain the detailed finescale information, as we are only interested in the macroscopic (homogenized) behavior of the multiscale solution data. With the prior knowledge of the structure of the homogenized equation, PINNs can provide an effective regularization that can cope with the noise and the multiscale features in the data. We demonstrate the applicability and performance of our approach via several benchmark examples with both noise-free and noisy data. The paper is organized as follows. In Section 2, we introduce the concepts of G-convergence and the G-limit, and the formulation of the inverse problem. In Section 3, we briefly review physics-informed neural networks (PINNs) and adopt them in our context. Finally, we demonstrate the performance of the proposed methods with several numerical examples, including locally periodic, non-periodic, non-standard, and random homogenization cases. \pagestyle{myheadings} \thispagestyle{plain} \section{Background and Problem Setup} In this section, we first introduce the definitions of G-convergence and the G-limit for the homogenized equations associated with multiscale elliptic equations. Then we discuss the convergence of homogenization in the special case where periodic multiscale coefficients are given. Finally, we formulate the inverse problem of learning the G-limits and the corresponding homogenized solutions. \subsection{G-convergence and G-limit} We first briefly review the general theory of homogenization and introduce the notions of G-convergence and the G-limit (homogenized coefficient). 
Let us consider a sequence of the following second order multiscale elliptic equations: \begin{equation} \label{eq:original_gen} \begin{split} -\div \bigg(A^\epsilon (x) \nabla u^\epsilon(x) \bigg) &= f(x) \ \ \textrm{in} \ \ \Omega, \\ u^\epsilon(x) &= 0 \ \ \textrm{on} \ \ \partial \Omega, \end{split} \end{equation} where $\Omega \subset \mathbb{R}^N$ is the domain and $A^\epsilon: \Omega \to \mathbb{R}^{N\times N}$ is a symmetric multiscale coefficient with finescale size $\epsilon$. We consider the sequence of coefficients $A^\epsilon(x)$ and the corresponding solutions $u^\epsilon(x)$ of (\ref{eq:original_gen}). The G-convergence of the sequence $A^\epsilon(x)$ is defined as follows \cite{spagnolo1967sul,spagnolo1976convergence}: \begin{definition} \label{def:glimit} A sequence of coefficients $A^\epsilon(x)$ in (\ref{eq:original_gen}) is said to G-converge to a limit $A^*(x)$ as $\epsilon$ tends to $0$, if the sequence of solutions $u^\epsilon(x)$ converges weakly in $H^1_0(\Omega)$ to $u_0(x)$, the unique solution of the following homogenized equation, \begin{equation} \label{eq:homogenized_gen} \begin{split} -\div \bigg(A^*(x) \nabla u_0(x) \bigg) &= f(x) \ \ \textrm{in} \ \ \Omega, \\ u_0(x) &= 0 \ \ \textrm{on} \ \ \partial \Omega, \end{split} \end{equation} for any source term $f(x)$. The limit matrix $A^*(x)$ is called the G-limit of $A^\epsilon(x)$. \end{definition} We now define the following class of matrices. \begin{definition} A matrix function $A(x)$ is said to belong to $E(\alpha,\beta,\Omega)$ if the following conditions are satisfied for some $\alpha$, $\beta>0$: \begin{equation} \begin{split} &A(x) \in L^\infty(\Omega)^{N\times N}, \\ &A(x)k\cdot k \geq \alpha |k|^2, \ \ \textrm{for all} \ \ k \in\mathbb{R}^N,\ a.e. \ x\in\Omega,\\ &|A(x)k| \leq \beta|k|, \ \ \textrm{for all} \ \ k \in\mathbb{R}^N,\ a.e. \ x\in\Omega. \end{split} \end{equation} \end{definition} We have the following theorem that justifies the definition of G-convergence \cite{defranceschi1993introduction}. \begin{theorem} \label{thm:Gconv} Let $A^\epsilon(x)$ be a sequence of functions that belong to $E(\alpha,\beta,\Omega)$. Then there exists a function $A^*(x) \in E(\alpha,\beta,\Omega)$ such that $A^\epsilon(x)$ G-converges to $A^*(x)$ up to a subsequence. \end{theorem} The following theorem guarantees the uniqueness of the G-limit. \begin{theorem} \label{thm:gconvunique} The G-limit of a G-converging sequence is unique. \end{theorem} \begin{proof} See \cite[Section 7]{defranceschi1993introduction}. \end{proof} The following remark provides important properties of the G-limit and one motivation for recovering it. \begin{remark} The G-limit $A^*(x)$ does not depend on the source term $f(x)$ by definition. It is also known that it does not depend on the boundary conditions \cite[Chapter 1]{allaire2012shape}. Thus, the G-limit recovered with a specific source term $f(x)$ and boundary condition $g(x)$ in (\ref{eq:original_gen}) can be reused with different source terms in the same medium. \end{remark} From Theorem \ref{thm:Gconv}, we know that a well-posed homogenized limit (\ref{eq:homogenized_gen}) exists, but in general, there is no systematic way to find an explicit formula for the G-limit $A^*(x)$. In addition, the G-convergence is only guaranteed up to a subsequence in the theorem. For (locally) periodic media, the G-convergence is well studied and the G-limit can be computed by the periodic homogenization methods \cite{papanicolau1978asymptotic,jikov2012homogenization}. 
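To see that the G-limit genuinely differs from naive averaging, it is instructive to recall the classical one-dimensional example $A^\epsilon(x) = 2+\sin(2\pi x/\epsilon)$ on $\Omega=(0,1)$. The weak-$*$ limit of $A^\epsilon$ in $L^\infty(\Omega)$ is the arithmetic average $2$, whereas the G-limit is the harmonic average
\begin{equation*}
A^* = \left( \int_0^1 \frac{\mathrm{d}y}{2+\sin(2\pi y)} \right)^{-1} = \sqrt{3}.
\end{equation*}
Hence the G-limit cannot be obtained by simply averaging the coefficient; it encodes the effect of the finescale oscillations themselves.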
We remark that even though it might not be clear how to construct the explicit form of the G-limit $A^*(x)$ in general, \eqref{eq:homogenized_gen} does provide the structure of the homogenized equation, which serves as generic prior knowledge for PINNs. \subsection{Homogenization for periodic media} In this section, we present the outline and the convergence results of the standard periodic homogenization. We let $\Omega \subset \mathbb{R}^N$ be a bounded domain and $Y$ be a unit cube in $\mathbb{R}^N$. We consider the homogenization of the following multiscale elliptic equation: \begin{equation} \label{eq:original_per} \begin{split} -\div \bigg(A^\epsilon (x) \nabla u^\epsilon(x) \bigg) &= f(x) \ \ \textrm{in} \ \ \Omega, \\ u^\epsilon(x) &= 0 \ \ \textrm{on} \ \ \partial \Omega. \end{split} \end{equation} Here, $\epsilon$ represents the fine scale of the system. The coefficient has a scale separation and is defined by $A^\epsilon(x) = A(x,{x\over\epsilon})$, where $A(x,y)$ is $Y$-periodic with respect to the fast variable $y$. Thus, we consider the coefficient $A^\epsilon(x)$ with smooth finescale oscillations. We further assume that $A^\epsilon(x)$ is in ${\mathcal C}^\infty(\Omega)$ and uniformly positive, i.e., $A^\epsilon(x) > c >0$ for some constant $c$. We consider the following two-scale asymptotic expansion of the solution $u^\epsilon(x)$: \begin{equation} \label{eq:asympt} u^\epsilon(x) = u_0(x) + \epsilon u_1(x,{x\over\epsilon}) + \epsilon^2 u_2(x,{x\over\epsilon}) + \dots, \end{equation} where $u_i(x,{x\over\epsilon})$, ($i = 1,2,\dots$) are $Y$-periodic with respect to $y = {x\over\epsilon}$. We can derive the following homogenized equation with the G-limit $A^*(x)$ using the above expansion: \begin{equation} \label{eq:homogenized_per} \begin{split} -\div \bigg(A^*(x) \nabla u_0(x) \bigg) &= f(x) \ \ \textrm{in} \ \ \Omega, \\ u_0(x) &= 0 \ \ \textrm{on} \ \ \partial \Omega, \end{split} \end{equation} where the G-limit $A^*(x)$ is defined as follows: \begin{equation} \label{eq:kappastar} A^*_{ij}(x) = \int_Y A(x,y) (\delta_{ij} + {\partial \chi^j(x,y)\over \partial y_i}) \mathrm{d}y, \end{equation} where $\chi^i(x,y)$ is the solution of the following {\it cell problem}: \begin{equation} \label{eq:cell} \div_y\bigg(A(x,y) \nabla_y \chi^i(x,y)\bigg) = -\div_y(A(x,y)e^i), \end{equation} on $Y$ with periodic boundary conditions. Here, $e^i$ is the $i$th standard basis vector in $\mathbb{R}^N$. This homogenized equation does not depend on the fine scale $\epsilon$ and the solution $u_0(x)$ represents the macroscopic behavior of the solution $u^\epsilon(x)$ to the multiscale equation (\ref{eq:original_per}) when $\epsilon$ is sufficiently small. This can be rigorously explained by the following theorem on the convergence of the multiscale solution $u^\epsilon(x)$ to the homogenized solution $u_0(x)$ \cite{papanicolau1978asymptotic}. \begin{theorem} \label{thm:weakconv} Assume $A^\epsilon(x) \in L^\infty(\Omega)$, $f(x)\in L^2(\Omega)$. Let $u^\epsilon(x)$ and $u_0(x)$ be the solutions to (\ref{eq:original_per}) and (\ref{eq:homogenized_per}) respectively. Then as $\epsilon \to 0$, the sequence $u^\epsilon(x)$ converges weakly in $H^1(\Omega)$ to $u_0(x)$. \end{theorem} The above result is obtained under minimal regularity assumptions on the multiscale coefficient and the source term. However, in practice, it is often the case that we can achieve the strong convergence of the multiscale solution to the homogenized solution. 
For example, we have the following convergence estimates \cite[Chapter 6]{papanicolau1978asymptotic}. \begin{remark} \label{rmk:strongconv} Let $u^\epsilon(x)$ and $u_0(x)$ be the solutions to (\ref{eq:original_per}) and (\ref{eq:homogenized_per}) respectively. Assuming $A^\epsilon(x)$ and $f(x)$ are smooth, we have the following convergence rate: \begin{equation} \label{eq:linftyconv} \begin{split} \norm{u^\epsilon(x)- u_0(x)}_{L^\infty(\Omega)} \leq C \epsilon, \end{split} \end{equation} where $C>0$ is independent of $\epsilon$. \end{remark} In (locally) periodic media, we solve the cell problems (\ref{eq:cell}) to compute the G-limit when the explicit form of the multiscale coefficient $A^\epsilon(x)$ is known. Without the periodicity assumption or a known multiscale coefficient $A^\epsilon(x)$, traditional homogenization methods are typically not applicable. Despite the fact that the convergence result (\ref{eq:linftyconv}) holds only for periodic cases, we can still expect the multiscale solution data to be close to the homogenized solution for sufficiently small $\epsilon$ even if the periodicity assumption is violated. This motivates us to utilize multiscale solution data as a surrogate for the corresponding homogenized solution data in more general scenarios beyond the periodicity assumption. \subsection{Inverse problem formulation} Equipped with the background knowledge introduced above, we now consider the multiscale elliptic equations (\ref{eq:original_gen}) and assume that a well-posed homogenized limit (\ref{eq:homogenized_gen}) exists. We also assume the multiscale coefficient is smooth, but no geometric assumptions, such as periodicity, are required. In this work, we consider the following inverse problem setting: given a set of observations/data points, our goal is to learn the G-limit $A^*(x)$ and the homogenized solution $u_0(x)$ of the homogenized limit (\ref{eq:homogenized_gen}). As we mentioned before, the homogenized solution data are often not available. Instead, we utilize the multiscale solution data of the equation (\ref{eq:original_gen}) as a surrogate for the homogenized solution data. We remark that even though the multiscale solution data are close to the homogenized solution in most of the regions in our domain for sufficiently small $\epsilon$, our solution data contain multiscale or noise fluctuations that are not present in the homogenized solution. This introduces additional difficulties because one needs to approximate the slowly varying functions from the multiscale solution data. It is preferable for a method to be less sensitive to these finescale oscillations and the noise in our multiscale solution data. Motivated by recent developments of physics-informed neural networks (PINNs) \cite{lu2019deepxde,raissi2019physics}, we propose to develop PINNs for estimating the G-limits, which not only match the measurements/data while respecting the underlying physics of the problem, but also provide an effective regularization to mitigate the adversarial effects due to the multiscale fluctuations or noise in the data. \section{Method} Next, we will briefly review physics-informed neural networks (PINNs) \cite{lu2019deepxde,chen2020physics} and adopt them to tackle the inverse problem of learning the G-limits in the homogenized equation (\ref{eq:homogenized_gen}), given the corresponding multiscale solution data. 
\subsection{Feed-forward neural network} We shall use feed-forward neural networks to approximate the solution $u_0(x)$ and the effective coefficient $A^*(x)$ in (\ref{eq:homogenized_gen}). The feed-forward neural network with $L$ layers and $N_l$ neurons in the $l$th layer is a function ${\mathcal N}_\theta(x): \mathbb{R}^{N_0} \to \mathbb{R}^{N_L}$ defined by \begin{equation} \begin{split} {\mathcal N}_\theta(x) &= W^L {\mathcal N}^{L-1}(x) + b^L, \\ {\mathcal N}^l(x) &= \sigma (W^l {\mathcal N}^{l-1}(x) + b^l), \\ {\mathcal N}^1(x) &= W^1 x + b^1, \end{split} \end{equation} for $1<l<L$. The matrix $W^l \in \mathbb{R}^{N_{l} \times N_{l-1}}$ and the vector $b^l \in \mathbb{R}^{N_l}$ represent the weight and bias in the $l$-th layer and $\sigma$ is a nonlinear activation function, such as the ReLU function, the hyperbolic tangent function, or the sine function \cite{goodfellow2016deep}. We further define the set of tunable weights and biases of the neural network, $\theta = \{ W^l, b^l\}$ for $1\leq l \leq L$. \subsection{PINNs for inverse problems} For ease of presentation, we consider the following partial differential equation for the solution $u(x)$ with an unknown coefficient $A(x)$: \begin{equation} \label{eq:residual} {\mathcal F}\left[u(x);A(x)\right] = 0, \ \textrm{in} \ \Omega, \end{equation} given a Dirichlet boundary condition, \begin{equation} \label{eq:bc} u(x) = g(x), \ \ \textrm{on} \ \ \partial\Omega. \end{equation} Given that observation data on the solution $u(x)$ are available, we are interested in recovering the unknown coefficient $A(x)$ and the entire solution field $u(x)$. PINNs employ the feed-forward networks ${\mathcal N}_{\theta_u}(x)$ and ${\mathcal N}_{\theta_A}(x)$ to approximate the solution and the unknown coefficient respectively, where $\theta_u$ and $\theta_A$ represent the trainable network parameters for each network. Then we train the networks to obtain the approximations $\hat{u}(x)$ of the solution and $\hat{A}(x)$ of the unknown coefficient by minimizing the following loss functional, including the data misfit, the PDE residual loss (\ref{eq:residual}) and the boundary condition loss (\ref{eq:bc}), over the training set ${\mathcal T}$: \begin{equation} \label{eq:loss_gen} {\mathcal L}(\theta_{u},\theta_{A}; {\mathcal T}) = \lambda_r{\mathcal L_r}(\theta_{u},\theta_{A}; {\mathcal T}_r)+ \lambda_d{\mathcal L_d}(\theta_{u},\theta_{A}; {\mathcal T}_d) + \lambda_b{\mathcal L_b}(\theta_{u},\theta_{A}; {\mathcal T}_b), \end{equation} where \begin{equation} \begin{split} &{\mathcal L}_r(\theta_{u},\theta_{A}; {\mathcal T}_r) = \frac{1}{|{\mathcal T}_r|} \displaystyle\sum_{x_i^r\in{\mathcal T}_r} \left\lvert {\mathcal F}\left(\hat{u}(x_i^r);\hat{A}(x_i^r)\right) \right\rvert^2,\\ &{\mathcal L}_d(\theta_{u},\theta_{A}; {\mathcal T}_d) = \frac{1}{|{\mathcal T}_d|} \displaystyle\sum_{x_i^d\in{\mathcal T}_d} |\hat{u}(x_i^d)-u(x_i^d) |^2, \\ &{\mathcal L}_b(\theta_{u},\theta_{A}; {\mathcal T}_b) = \frac{1}{|{\mathcal T}_b|} \displaystyle\sum_{x_i^b\in{\mathcal T}_b} |\hat{u}(x_i^b)-g(x_i^b) |^2, \end{split} \end{equation} where $\lambda_r$, $\lambda_d$ and $\lambda_b$ denote the weights for each loss term. The training set is ${\mathcal T} = {\mathcal T}_r \cup {\mathcal T}_d \cup {\mathcal T}_b$, where ${\mathcal T}_d$, ${\mathcal T}_r$, and ${\mathcal T}_b$ denote the data/measurement points, the PDE residual points, and the boundary data points, respectively. Both ${\mathcal T}_r \subset \Omega$ and ${\mathcal T}_b \subset \partial \Omega$ are predefined and can be chosen from mesh grid points or randomly. 
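For concreteness, the network component ${\mathcal N}_\theta$ defined above translates into only a few lines of code. The following is an illustrative PyTorch sketch of our own (the experiments reported in Section \ref{sec:NumExample} use the SciANN library instead, and the layer widths shown here are placeholders):
\begin{verbatim}
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Feed-forward network N_theta(x) with tanh activations."""
    def __init__(self, layers=(1, 20, 20, 20, 1)):
        super().__init__()
        self.linears = nn.ModuleList(
            nn.Linear(layers[i], layers[i + 1])
            for i in range(len(layers) - 1))

    def forward(self, x):
        # hidden layers use the activation; the output layer is linear
        for lin in self.linears[:-1]:
            x = torch.tanh(lin(x))
        return self.linears[-1](x)
\end{verbatim}
Two independent copies of such a network play the roles of ${\mathcal N}_{\theta_u}$ and ${\mathcal N}_{\theta_A}$ in the loss \eqref{eq:loss_gen}.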
The parameters $\theta_u$ and $\theta_A$ can be found by minimizing the loss function (\ref{eq:loss_gen}), and the resulting networks $\hat{u}(x)$ and $\hat{A}(x)$ are the approximations to the solution $u(x)$ and the coefficient $A(x)$ of the equation \eqref{eq:residual}. \subsection{Learning the G-limits via PINNs} We now adopt the PINNs framework to tackle the inverse problem of estimating the G-limit $A^*(x)$ for the multiscale elliptic equation (\ref{eq:original_gen}). One issue is that the measurements of the homogenized solution are often not available. Motivated by the convergence results for periodic media in \eqref{eq:linftyconv}, we employ the multiscale solution data $u^{\epsilon}$ of the multiscale equation (\ref{eq:original_gen}) as the training data, which is expected to be a good surrogate for the homogenized solution data when $\epsilon$ is sufficiently small. We construct two feed-forward neural networks $\hat{A}^*(x) = {\mathcal N}_{\theta_{A^*}}(x)$ and $\hat{u}_0(x)={\mathcal N}_{\theta_{u_0}}(x)$ to approximate the G-limit and the solution of the homogenized equation (\ref{eq:homogenized_gen}). Since we consider the Dirichlet boundary condition \eqref{eq:bc} in this work, the boundary condition can be embedded into the neural network exactly. Specifically, we follow the approach suggested in \cite{lu2021physics} by modifying the solution network output ${\mathcal N}_{\theta_{u_0}}$: \begin{equation} \label{eq:hardconst} \hat{u}_0(x) = g(x)+l(x){\mathcal N}_{\theta_{u_0}}, \end{equation} where $u_0(x) = g(x)$ is a Dirichlet boundary condition, and $l(x)$ is a function that satisfies the following conditions: \begin{equation} l(x) = 0 \ \ \textrm{on} \ \ \partial \Omega, \ \ \ \ l(x) >0 \ \ \textrm{in} \ \ \Omega-\partial \Omega. \end{equation} For a simple domain, we can choose $l(x)$ analytically \cite{lagaris1998artificial}. For example, for the domain $[a,b]^2$, we can choose $l(x) = (x_1-a)(b-x_1)(x_2-a)(b-x_2)$, where $x = (x_1,x_2)$. We then seek a set of network parameters $\theta_{u_0}$ and $ \theta_{A^*}$ that minimize the loss function defined as follows: \begin{equation} \label{eq:loss_hom} {\mathcal L}(\theta_{u_0},\theta_{A^*}; {\mathcal T}) = \lambda_r{\mathcal L_r}(\theta_{u_0},\theta_{A^*}; {\mathcal T}_r)+ \lambda_d{\mathcal L_d}(\theta_{u_0},\theta_{A^*}; {\mathcal T}_d), \end{equation} where \begin{equation} \begin{split} &{\mathcal L}_r(\theta_{u_0},\theta_{A^*}; {\mathcal T}_r) = \frac{1}{|{\mathcal T}_r|} \displaystyle\sum_{x_i^r\in{\mathcal T}_r} \left\lvert \div\bigg(\hat{A}^*(x_i^r)\nabla \hat{u}_{0}(x_i^r)\bigg)+f(x_i^r) \right\rvert^2,\\ &{\mathcal L}_d(\theta_{u_0},\theta_{A^*}; {\mathcal T}_d) = \frac{1}{|{\mathcal T}_d|} \displaystyle\sum_{x_i^d\in{\mathcal T}_d} |\hat{u}_{0}(x_i^d)-u^{\epsilon}(x_i^d) |^2. \end{split} \end{equation} Here, $u^{\epsilon}(x_i^d)$ denotes the (noise-free/noisy) multiscale solution data at $x_i^d$. Note that since the neural network $\hat{u}_0(x)$ satisfies the boundary condition exactly, there are only two terms in the loss function. The first term in (\ref{eq:loss_hom}) encourages the neural network to respect the homogenized equation (\ref{eq:homogenized_gen}). The second term makes sure that the approximated homogenized solution is not far from the multiscale solution data $u^\epsilon(x)$. The choice of the regularization parameters $\lambda_r$ and $\lambda_d$ could affect the training performance considerably. We adopt the adaptive weight techniques \cite{wang2020and} in this work. 
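To illustrate how \eqref{eq:hardconst} and \eqref{eq:loss_hom} fit together, the sketch below continues the hypothetical PyTorch code above for a one-dimensional problem on $\Omega = [0,1]$ with $g \equiv 0$, so that one may take $l(x) = x(1-x)$; fixed weights $\lambda_r$, $\lambda_d$ are used in place of the adaptive weighting:
\begin{verbatim}
import torch

u0_net, Astar_net = MLP(), MLP()     # approximate u_0(x) and A^*(x) in 1D

def u0_hat(x):
    # exact Dirichlet constraint on [0, 1]: g = 0 and l(x) = x (1 - x)
    return x * (1.0 - x) * u0_net(x)

def loss(x_r, x_d, u_eps_d, f, lam_r=1.0, lam_d=1.0):
    x_r = x_r.clone().requires_grad_(True)
    u = u0_hat(x_r)
    du = torch.autograd.grad(u, x_r, torch.ones_like(u),
                             create_graph=True)[0]
    flux = Astar_net(x_r) * du
    dflux = torch.autograd.grad(flux, x_r, torch.ones_like(flux),
                                create_graph=True)[0]
    residual = dflux + f(x_r)        # d/dx(A* du0/dx) + f = 0
    misfit = u0_hat(x_d) - u_eps_d   # compare with multiscale data u^eps
    return lam_r * residual.pow(2).mean() + lam_d * misfit.pow(2).mean()
\end{verbatim}
Minimizing this loss over the parameters of the two networks (e.g.\ with ADAM followed by L-BFGS, as in the experiments below) yields the approximations $\hat{u}_0(x)$ and $\hat{A}^*(x)$.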
In summary, Figure \ref{fig:pinns_scheme} presents the schematic design of the PINNs for our problem. \begin{figure}[t] \centering \includegraphics[scale=0.3]{Figure/PINN_Arch_exactBC1} \caption{The schematic architecture of PINNs for learning the G-limit $A^*(x)$ in the homogenized equation (\ref{eq:homogenized_gen}) by the neural network $\hat{A}^*(x)$. The boundary condition $u_0(x) = g(x)$ is strictly imposed using (\ref{eq:hardconst}).} \label{fig:pinns_scheme} \end{figure} \section{Numerical Examples} \label{sec:NumExample} In this section, we present several numerical examples to illustrate the effectiveness and applicability of our method, including elliptic equations with locally periodic, non-periodic, and ergodic random multiscale coefficients. The noise-free measurements are generated from the multiscale solution of the multiscale elliptic equation obtained by a forward FEM simulation. For the noisy scenarios, we corrupt the noise-free data with independent and identically distributed Gaussian noise at different noise levels. To estimate the accuracy of the recovered G-limit $\hat{A}^*(x)$ and the homogenized solutions $\hat{u}_0(x)$, we use the following relative $L^2$-errors computed over a predefined mesh grid in the spatial domain: \begin{equation} \label{eq:kappaerror} e_{\hat{A}^*} = \frac{\norm{\hat{A}^*(x)-A^*(x)}_{{L^2}(\Omega)}}{\norm{A^*(x)}_{{L^2}(\Omega)}}, \ \ e_{\hat{u}_{0}} = \frac{\norm{\hat{u}_{0}(x) -u_{0,h}(x)}_{L^2(\Omega)}}{\norm{ u_{0,h}(x)}_{L^2(\Omega)}}. \end{equation} Here, $A^*(x)$ is the reference G-limit that is either exact or pre-computed by FEM via traditional homogenization methods. The reference homogenized solutions, $u_{0,h}(x)$, are computed by FEM using the reference G-limits. During the training stage, we alternately use ADAM and L-BFGS as suggested in \cite{lu2019deepxde,shin2020convergence}. The hyperbolic tangent function is used as the activation function in all examples. In addition, the architectural parameters of the neural networks are tuned to achieve reasonable results. The architecture parameters and other hyperparameters used for each example are listed in {\it Table \ref{tb:hyperparameters}} in the appendix. Advanced hyperparameter selection techniques could further improve the results, but this is not the focus of this work. In addition, a multiple-restart approach is adopted to prevent the results from being affected by the (random) weight initialization. More specifically, we train the networks with a number of random initializations using the Glorot normal initializer and report the best results for each example. All examples are carried out on Google's Colab \cite{carneiro2018performance} using the library SciANN \cite{haghighat2021sciann}. \subsection{Homogenization of a slowly varying periodic coefficient} To test the basic capability of our proposed method, we first consider the following multiscale elliptic equation with a slowly varying periodic coefficient: \begin{equation} \label{eq:1dlocper} \begin{split} -\frac{d}{d x}\left(\frac{1+x^2}{2+\sin(2\pi{x\over\epsilon})}\frac{d}{d x}u^\epsilon(x)\right) &= \cos(\pi x)\ \ \textrm{in} \ \ \Omega = [0,1], \\ u^\epsilon(0) &= u^\epsilon(1) = 0. \end{split} \end{equation} In this example, the permeability coefficient depends on both $x$ and ${x\over\epsilon}$, and is periodic with respect to ${x\over\epsilon}$. The analytical G-limit is known to be $A^*(x) = \frac{x^2+1}{2}$.
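As a quick consistency check (not part of the proposed method), recall that for a 1D coefficient $A(x,y)$ that is periodic in the fast variable $y$, the G-limit is the harmonic mean $A^*(x)=\big(\int_0^1 A(x,y)^{-1}\,dy\big)^{-1}$; a few lines of Python confirm that this reproduces the quoted $A^*(x)=(x^2+1)/2$ for the coefficient in (\ref{eq:1dlocper}).
\begin{verbatim}
# Verify the quoted analytic G-limit by the 1D harmonic-mean formula.
import numpy as np

def A(x, y):
    return (1.0 + x**2) / (2.0 + np.sin(2.0 * np.pi * y))

x = 0.3
y = (np.arange(200000) + 0.5) / 200000.0         # midpoint rule over one period
harmonic_mean = 1.0 / np.mean(1.0 / A(x, y))
print(harmonic_mean, (x**2 + 1.0) / 2.0)         # both ~0.545
\end{verbatim}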
We compute the reference homogenized solution $u_{0,h}(x)$ using FEM via the exact G-limit. To generate the noise-free data, we compute the multiscale solution to the equation (\ref{eq:1dlocper}) for each finescale parameter value $\epsilon$ by FEM with mesh size $h = 1/10^5$ and sample equally spaced data from the multiscale solution as the training data. For noisy data, we corrupt the measurements with different noise levels. The relative $L^2$ errors for both the G-limit and the homogenized solution are computed on a mesh with size $h=1/10^5$. We first plot the relative $L^2$ errors with respect to the size of the training data with $\epsilon = 2^{-7}$ in Figure \ref{figure:glimitsoltdns1}. For the noise-free case, the proposed method can achieve errors at the level of ${\mathcal O}(10^{-3})$ for the G-limit and ${\mathcal O}(10^{-4})$ for the homogenized solution. As the data set is enriched, the error level saturates. With noisy data, the error increases with the noise level and can be reduced as additional data become available, particularly at high noise levels. With $5\%$ noise, the relative errors for the approximated G-limit and solution are roughly $4\%$ and $1\%$, respectively, given enough data. To further demonstrate the performance of the method, Figure \ref{figure:1dlocper_plots_ns} plots the G-limits and the homogenized solution recovered by PINNs for $\epsilon = 2^{-7}$, where both the G-limit and the homogenized solution are well approximated under different noise levels. As shown in Figures \ref{fig:1dlocper_plot_data_homsols_ums1_ns1_sc} and \ref{fig:1dlocper_plot_data_homsols_ums1_ns3_sc}, even when the multiscale data contain non-negligible random fluctuations, PINNs can still capture the macroscopic variation of the data thanks to the regularization provided by the homogenized equation.
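For concreteness, the data-corruption and evaluation steps used throughout the experiments can be sketched as follows; the exact scaling convention of the noise (here proportional to the local magnitude of the clean data) is our assumption for illustration.
\begin{verbatim}
# Illustrative noise corruption and relative L2 error of eq. (kappaerror).
import numpy as np

rng = np.random.default_rng(0)

def corrupt(u_clean, noise_level):
    # e.g. noise_level = 0.05 for 5% noise; scaling convention is an assumption
    return u_clean + noise_level * np.abs(u_clean) * rng.standard_normal(u_clean.shape)

def relative_l2(approx, reference):
    return np.linalg.norm(approx - reference) / np.linalg.norm(reference)
\end{verbatim}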
\begin{figure}[!hbt] \centering
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_errors_glimit_nd_sc1.png} \caption{G-limits} \label{fig:1dlocper_errors_glimit_nd} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_errors_homsol_nd_Sc1.png} \caption{Homogenized solutions} \label{fig:1dlocper_errors_homsol_nd} \end{subfigure} \vspace*{-4mm}
\caption{Problem (\ref{eq:1dlocper}): the relative $L^2$ errors for the G-limits and the homogenized solutions with different numbers of multiscale data points corrupted by different noise levels for $\epsilon = 2^{-7}$; the number of PDE residual points is $|{\mathcal T}_r| = |{\mathcal T}_d| +30$.} \label{figure:glimitsoltdns1} \end{figure}
\begin{figure}[!hbt] \centering
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_kappa_ums1_sc_label.png} \caption{G-limits (noise-free)} \label{fig:1dlocper_plot_glimit_ums1_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_kappa_ums1_ns1_sc_label.png} \caption{G-limits ($1\%$-noise)} \label{fig:1dlocper_plot_kappa_ums1_ns1_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_kappa_ums1_ns3_sc_label.png} \caption{G-limits ($3\%$-noise)} \label{fig:1dlocper_plot_kappa_ums1_ns3_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_data_homsols_ums1_sc_label.png} \caption{Solutions (noise-free)} \label{fig:1dlocper_plot_data_homsols_ums1_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_data_homsols_ums1_ns1_sc_label.png} \caption{Solutions ($1\%$-noise)} \label{fig:1dlocper_plot_data_homsols_ums1_ns1_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_data_homsols_ums1_ns3_sc_label.png} \caption{Solutions ($3\%$-noise)} \label{fig:1dlocper_plot_data_homsols_ums1_ns3_sc} \end{subfigure} \vspace*{-4mm}
\caption{Error results for problem (\ref{eq:1dlocper}): comparison of the reference solutions ($A^*(x)$ and $u_{0,h}(x)$) and the counterparts ($\hat{A}^*(x)$ and $\hat{u}_{0}(x)$) learned by PINNs with different noise levels in the data. (a. b. c.): the G-limit; (d. e.
f.): the homogenized solution, where $\epsilon = 2^{-7}$ and the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 160$, $|{\mathcal T}_r| = 190$.} \label{figure:1dlocper_plots_ns} \vspace*{-5mm} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_errors_glimit_ep_sc1.png} \caption{G-limits} \label{fig:1dlocper_errors_glimit_ep} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_errors_homsol_ep_sc1.png} \caption{Homogenized solutions} \label{fig:1dlocper_errors_homsol_ep} \end{subfigure} \vspace*{-4mm}
\caption{Error results for problem (\ref{eq:1dlocper}): the relative $L^2$ errors for the G-limits and the homogenized solutions with different finescale parameters $\epsilon$ and noise levels, when the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 160$ and $|{\mathcal T}_r| = 190$.} \label{figure:glimitsolepns1} \vspace*{-3mm} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_glimit_ums1_ep3_label.png} \caption{G-limit ($\epsilon = 2^{-3}$)} \label{fig:1dlocper_plot_glimit_ums1_ep3} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_glimit_ums1_ep5_label.png} \caption{G-limit ($\epsilon = 2^{-5}$)} \label{fig:1dlocper_plot_glimit_ums1_ep5} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_glimit_ums1_ep7_label.png} \caption{G-limit ($\epsilon = 2^{-7}$)} \label{fig:1dlocper_plot_glimit_ums1} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_data_homsols_ums1_ep3_mag_label.png} \caption{Solutions ($\epsilon = 2^{-3}$)} \label{fig:1dlocper_plot_data_homsols_ums1_ep3} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_data_homsols_ums1_ep5_mag_label.png} \caption{Solutions ($\epsilon = 2^{-5}$)} \label{fig:1dlocper_plot_data_homsols_ums1_ep5} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dlocper_plot_data_homsols_ums1_ep7_mag_label.png} \caption{Solutions ($\epsilon = 2^{-7}$)} \label{fig:1dlocper_plot_data_homsols_ums1} \end{subfigure} \vspace*{-4mm}
\caption{Problem (\ref{eq:1dlocper}) with noise-free data: comparison of the reference solutions ($A^*(x)$ and $u_{0,h}(x)$) and the counterparts ($\hat{A}^*(x)$ and $\hat{u}_{0}(x)$) learned by PINNs with different values of $\epsilon$. (a. b. c.): the G-limit; (d. e. f.): the homogenized solution, when the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 160$, $|{\mathcal T}_r| = 190$.} \label{figure:1dlocper_plot_ums1_ep} \vspace*{-4mm} \end{figure}
To investigate the impact of the finescale parameter $\epsilon$ of the multiscale data on the performance of our algorithm, we plot the relative errors for the G-limit and the homogenized solution with different values of $\epsilon$ in Figure \ref{figure:glimitsolepns1}. For each $\epsilon$ value, $|{\mathcal T}_d|=160$ multiscale solution data points collected at fixed spatial locations are used. In the noise-free cases, the errors for the homogenized solutions tend to decrease when $\epsilon$ becomes smaller.
This is expected, as the multiscale solution data are closer to the homogenized solution for smaller $\epsilon$. Figure \ref{figure:1dlocper_plot_ums1_ep} further shows the learned G-limit and homogenized solution with different finescale parameters $\epsilon$. We can observe that the multiscale data converge to the reference homogenized solution as $\epsilon$ becomes smaller. For example, when $\epsilon = 2^{-7}$, our data almost overlap with the reference homogenized solution (Figure \ref{fig:1dlocper_plot_data_homsols_ums1}), and both the G-limit and the homogenized solution learned by PINNs agree very well with their references. Furthermore, despite the presence of noticeable multiscale oscillations in the data, PINNs can still provide reasonably good results for larger values of $\epsilon$ ($\epsilon = 2^{-3}, \ 2^{-5}$). This is because the proposed PINN tends to promote the smooth macroscale behavior of the data rather than their microscale fluctuations, as shown in Figures \ref{fig:1dlocper_plot_data_homsols_ums1_ep3} and \ref{fig:1dlocper_plot_data_homsols_ums1_ep5}. For noisy scenarios, the approximation quality deteriorates with the noise level, as seen in Figure \ref{figure:glimitsolepns1}. We also observe that the impact of the finescale size $\epsilon$ of the medium becomes negligible once the noise level is large enough, suggesting that the magnitude of the noise dominates over the multiscale oscillations in our data. Nonetheless, our approach can still provide reasonably good approximations under a mild noise level. \subsection{Homogenization of a heavily oscillatory coefficient} Next, we consider the following elliptic equation with a heavily oscillatory permeability coefficient introduced in \cite{floden2009g}: \begin{equation} \label{eq:1nonpermulti} \begin{split} -\frac{d}{d x} \bigg(A^\epsilon(x) \frac{d}{d x} u^\epsilon(x)\bigg)&= 3+\sin(x)\ \ \textrm{in} \ \ \Omega = [0,1],\\ u^\epsilon(0)&=0, \ u^\epsilon(1) = 0, \end{split} \end{equation} where $A^\epsilon(x) = \int_Y \left(1 +\frac{1}{2} \sin \left(\left( y + \frac{1}{2\epsilon}\sin\left(\pi \sqrt{\frac{2}{\epsilon}}x\right)\right)^2\right)\right) e^{y(1+\sin x)}dy$. The coefficient $A^\epsilon(x)$ is quite oscillatory. Figure \ref{figure:perm_nonper} illustrates the multiscale coefficients $A^\epsilon(x)$ and the effective coefficients $A^*(x)$ for $\epsilon = 2^{-3}, 2^{-5}$. Due to the strong oscillations in the coefficient, direct numerical simulation of this problem is very expensive even when the formula for $A^\epsilon(x)$ is known. This homogenization problem is in general challenging: (1) The explicit integral of the multiscale coefficient is not available. (2) This problem cannot be handled by traditional homogenization methods, such as the two-scale convergence method, because the oscillations in $A^\epsilon(x)$ cannot be captured by any test functions admissible for the two-scale convergence \cite{allaire1992homogenization}. For this example, it can be shown that the analytical G-limit coincides with the weak $L^2$ limit of $A^\epsilon(x)$ given by $A^*(x) = \frac{e^{(1+\sin x)}-1}{1+\sin x}$ \cite{floden2009g}, but this is not the case in general \cite[Chapter 1]{allaire2012shape}.
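Although the $y$-integral defining $A^\epsilon(x)$ has no closed form, it can be tabulated by simple quadrature. The sketch below (assuming the unit cell $Y=[0,1]$) evaluates $A^\epsilon(x)$ pointwise and the quoted weak limit $A^*(x)$; note that $A^\epsilon(x)$ oscillates around $A^*(x)$, so the two agree only in the weak sense.
\begin{verbatim}
# Tabulate the oscillatory coefficient A_eps(x) by midpoint quadrature in y
# (assuming Y = [0, 1]) and compare with the quoted weak limit A*(x).
import numpy as np

def A_eps(x, eps, n=4000):
    y = (np.arange(n) + 0.5) / n
    phase = (y + np.sin(np.pi * np.sqrt(2.0 / eps) * x) / (2.0 * eps)) ** 2
    integrand = (1.0 + 0.5 * np.sin(phase)) * np.exp(y * (1.0 + np.sin(x)))
    return integrand.mean()

def A_star(x):
    return (np.exp(1.0 + np.sin(x)) - 1.0) / (1.0 + np.sin(x))

x = 0.4
print(A_eps(x, eps=2.0**-3), A_eps(x, eps=2.0**-7), A_star(x))
\end{verbatim}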
\begin{figure}[!htb] \centering
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{Figure/1D_Nonper_perm_ep3_label.png} \caption{$A^\epsilon(x)$ and $A^*(x)$, $\epsilon = 2^{-3}$} \label{perm1_nonper} \end{subfigure} \hfill
\begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{Figure/1D_Nonper_perm_ep5_label.png} \caption{$A^\epsilon(x)$ and $A^*(x)$, $\epsilon = 2^{-5}$} \label{perm2_nonper} \end{subfigure}
\caption{The G-limit $A^*(x)$ and the multiscale coefficients $A^\epsilon(x)$ with $\epsilon = 2^{-3}$ and $\epsilon = 2^{-5}$ for problem (\ref{eq:1nonpermulti}).} \label{figure:perm_nonper} \end{figure}
The synthetic training data are sampled at equally spaced points from the multiscale solution, computed for each finescale parameter value $\epsilon$ by FEM with a mesh size of $h = 1/10^5$. The reference homogenized solution is computed by FEM with the same mesh based on the analytic G-limit. The architecture parameters and other hyperparameters of PINNs are listed in {\it Table \ref{tb:hyperparameters}} in the appendix. The relative $L^2$ errors for both the G-limit and the homogenized solution are computed on the same mesh. We first consider the case with a relatively small finescale size $\epsilon = 2^{-7}$. Figure \ref{figure:glimitsoltdns2} presents the relative $L^2$ errors for the G-limit and homogenized solution with respect to the number of multiscale solution data points. With noise-free data, we can achieve an error level of ${\mathcal O}(10^{-3})$ for both the homogenized coefficient and the homogenized solution. It appears that $80$ multiscale data points are enough to obtain good approximations. With a high noise level, PINNs can still achieve satisfactory approximations when the data set is large enough. This is further supported by the corresponding G-limit and homogenized solution obtained by PINNs under different levels of noise corruption in Figure \ref{figure:1dnonper_plots_ns3}. It is clear that the proposed method can still capture the G-limit and the smooth homogenized solution accurately under mild noise corruption. This is further evidenced by Figures \ref{fig:1dnonper_plot_data_homsols_ums1_ns1} and \ref{fig:1dnonper_plot_data_homsols_ums1_ns3}, where the learned homogenized solutions tend to fit the macroscopic behavior of the noisy data, which is close to the reference solution.
\begin{figure}[!htb] \centering
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_errors_glimit_nd_sc.png} \caption{G-limits} \label{fig:1dnonper_errors_glimit_nd} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_errors_homsol_nd_sc.png} \caption{Homogenized solutions} \label{fig:1dnonper_errors_homsol_nd} \end{subfigure}
\caption{Error results for problem (\ref{eq:1nonpermulti}): the relative $L^2$ errors for the G-limits and the homogenized solutions with different numbers of multiscale data points corrupted by different noise levels, when $\epsilon = 2^{-7}$ and the number of PDE residual points is $|{\mathcal T}_r| = |{\mathcal T}_d| +30$.} \label{figure:glimitsoltdns2} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_kappa_ums1_sc_label.png} \caption{G-limits (noise-free)} \label{fig:1dnonper_plot_kappa_ums11} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_kappa_ums1_ns1_sc_label.png} \caption{G-limits ($1\%$ noise)} \label{fig:1dnonper_plot_kappa_ums1_ns1} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_kappa_ums1_ns3_sc_label.png} \caption{G-limits ($3\%$ noise)} \label{fig:1dnonper_plot_kappa_ums1_ns3} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_data_homsols_ums1_sc_label.png} \caption{Solutions (noise-free)} \label{fig:1dnonper_plot_data_homsols_ums11} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_data_homsols_ums1_ns1_sc_label.png} \caption{Solutions ($1\%$ noise)} \label{fig:1dnonper_plot_data_homsols_ums1_ns1} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_data_homsols_ums1_ns3_sc_label.png} \caption{Solutions ($3\%$ noise)} \label{fig:1dnonper_plot_data_homsols_ums1_ns3} \end{subfigure} \vspace*{-4mm}
\caption{Problem (\ref{eq:1nonpermulti}): comparison of the reference solutions ($A^*(x)$ and $u_{0,h}(x)$) and the counterparts ($\hat{A}^*(x)$ and $\hat{u}_{0}(x)$) learned by PINNs with different noise levels in the data. (a. b. c.): the G-limit; (d. e.
f.): the homogenized solution, when $\epsilon = 2^{-7}$ and the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 160$, $|{\mathcal T}_r| = 190$.} \label{figure:1dnonper_plots_ns3} \vspace*{-4mm} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_errors_glimit_ep_sc.png} \caption{G-limits} \label{fig:1dnonper_errors_glimit_ep} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_errors_homsol_ep_sc.png} \caption{Homogenized solutions} \label{fig:1dnonper_errors_homsol_ep} \end{subfigure} \vspace*{-4mm}
\caption{Error results for problem (\ref{eq:1nonpermulti}): the relative $L^2$ errors for the G-limits and the homogenized solutions with different finescale parameters $\epsilon$ and noise levels of the data, when the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 160$ and $|{\mathcal T}_r| = 190$.} \label{figure:glimitsolepns2} \vspace*{-5mm} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_glimit_ums1_ep3_label.png} \caption{G-limits ($\epsilon = 2^{-3}$)} \label{fig:1dnonper_plot_glimit_ums1_ep3} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_glimit_ums1_ep5_label.png} \caption{G-limits ($\epsilon = 2^{-5}$)} \label{fig:1dnonper_plot_glimit_ums1_ep5} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_glimit_ums1_ep7_label.png} \caption{G-limits ($\epsilon = 2^{-7}$)} \label{fig:1dnonper_plot_kappa_ums1} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_data_homsols_ums1_ep3_label.png} \caption{Solutions ($\epsilon = 2^{-3}$)} \label{fig:1dnonper_plot_data_homsols_ums1_ep3} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_data_homsols_ums1_ep5_label.png} \caption{Solutions ($\epsilon = 2^{-5}$)} \label{fig:1dnonper_plot_data_homsols_ums1_ep5} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1dnonper_plot_data_homsols_ums1_ep7_label.png} \caption{Solutions ($\epsilon = 2^{-7}$)} \label{fig:1dnonper_plot_data_homsols_ums1} \end{subfigure} \vspace*{-3mm}
\caption{Problem (\ref{eq:1nonpermulti}) with noise-free data: comparison of the reference solutions ($A^*(x)$ and $u_{0,h}(x)$) and the counterparts ($\hat{A}^*(x)$ and $\hat{u}_{0}(x)$) learned by PINNs with different values of $\epsilon$. (a. b. c.): the G-limit; (d. e. f.): the homogenized solution, when the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 160$, $|{\mathcal T}_r| = 190$.} \label{figure:1dnonper_plot_ums1_ep} \end{figure}
Figure \ref{figure:glimitsolepns2} presents the errors of the estimated G-limit and the homogenized solution with respect to the finescale parameter $\epsilon$, using $160$ multiscale solution data points collected at fixed spatial locations for all $\epsilon$, i.e., $|{\mathcal T}_d|=160$. As expected, better approximations of the G-limit and the homogenized solution are obtained as $\epsilon$ becomes smaller in the noise-free case. In addition, Figure \ref{figure:1dnonper_plot_ums1_ep} again shows that the learned homogenized solutions tend to fit the multiscale solution data.
We note that even though the finescale oscillations in our data are not visible in the figures, a relatively large $\epsilon$ ($=2^{-3}$, $2^{-5}$) can result in a non-negligible difference between the reference homogenized solution and our data. As a result, the approximations of the G-limits and homogenized solutions are less accurate, but still satisfactory, for larger $\epsilon$. In contrast to the noise-free scenarios, the finescale size $\epsilon$ has much less impact on both the learned G-limit and the homogenized solution as the noise level increases, as shown in Figure \ref{figure:glimitsolepns2}. \subsection{Homogenization of a 2D non-periodic coefficient} We next consider the following 2D multiscale elliptic equation with a non-periodic coefficient introduced in \cite{persson2012selected}: \begin{equation} \label{eq:original_nonper_2D} \begin{split} -\div \bigg( A^\epsilon(x) \nabla u^\epsilon(x) \bigg) &= 1 \ \ \textrm{in} \ \ \Omega = [1,2]^2,\\ u^\epsilon(x) &= 0 \ \ \textrm{on} \ \ \partial\Omega, \end{split} \end{equation} where $A^\epsilon(x) = \left(1+0.9\sin(2\pi\frac{x_1}{\epsilon})\sin(2\pi \frac{x_2^2}{\epsilon})\right)$. Figure \ref{figure:Perm_2Dnonper} illustrates the multiscale coefficient $A^\epsilon(x)$ when $\epsilon = 2^{-3}$. The G-limit $A^*= \big(\begin{smallmatrix} A_{11}(x) & A_{12}(x)\\ A_{21}(x) & A_{22}(x) \end{smallmatrix}\big)$, which is a $2\times2$ matrix-valued function, can be found via the $\lambda$-scale convergence technique \cite{persson2012selected}. Since $A^\epsilon(x)$ is periodic with respect to $x_1$, we know that the G-limit only depends on $x_2$, i.e., $A^*(x) = A^*(x_2)$. We assume a priori that the off-diagonal entries of the G-limit are zero, i.e., $A_{12}(x_2) = A_{21}(x_2) = 0$. Therefore, we shall only approximate the diagonal entries of the G-limit. The reference G-limit is computed by the $\lambda$-scale convergence method. Specifically, we solve local cell problems at $129$ equidistant values of $x_2$, each by FEM with a mesh size of $h = 1/1000$. With the reference G-limit, we compute the reference homogenized solution by FEM with a mesh size of $h=1/128$. \begin{figure}[!t] \centering \includegraphics[scale=0.15]{Figure/Perm_2Dnonper_label.png} \caption{The multiscale permeability coefficient $A^\epsilon(x)$ in (\ref{eq:original_nonper_2D}) when $\epsilon = 2^{-3}$.} \label{figure:Perm_2Dnonper} \end{figure} The training data are sampled at equally spaced points from the multiscale solution, obtained for each finescale parameter value $\epsilon$ by forward FEM simulation of problem (\ref{eq:original_nonper_2D}) with a fine mesh of size $1/8000$. The architecture parameters and other hyperparameters of PINNs are listed in {\it Table \ref{tb:hyperparameters}} in the appendix. We compute the errors on a mesh with size $h=1/128$ in the spatial domain. Figure \ref{figure:glimitsoltdns3_sc} presents the error convergence of the G-limit and the homogenized solutions for $\epsilon = 2^{-7}$ with different numbers of training data points. With noise-free data, $400$ data points appear to be sufficient to obtain good approximations with errors of less than $10^{-3}$ for both the G-limit and the homogenized solution. Increasing the amount of training data helps improve the accuracy for the noisy data cases. Even with $5\%$ noise in the data, our proposed method can still achieve an error of less than ${\mathcal O}(10^{-2})$ for both the coefficient and the solution when the number of available data points is large enough.
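For illustration, the PDE residual used in the loss for this 2D example, with a diagonal $\hat{A}^*$ depending only on $x_2$ as assumed above, can be assembled with automatic differentiation as in the following PyTorch sketch; the framework, the positivity-enforcing softplus output, and the network sizes are our assumptions and differ from the SciANN implementation used for the reported results.
\begin{verbatim}
# Sketch of the residual div(A* grad u0) + f for a diagonal A*(x2) in 2D.
import torch

A_net = torch.nn.Sequential(torch.nn.Linear(1, 40), torch.nn.Tanh(),
                            torch.nn.Linear(40, 2), torch.nn.Softplus())  # (A11, A22) > 0
u_net = torch.nn.Sequential(torch.nn.Linear(2, 40), torch.nn.Tanh(),
                            torch.nn.Linear(40, 1))

def grad(y, x):
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

def residual(x, f=1.0):
    # x: (N, 2) collocation points; the diagonal A* depends on x2 only
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    A = A_net(x[:, 1:2])                       # columns: A11(x2), A22(x2)
    du = grad(u, x)                            # (u_x1, u_x2)
    flux = A * du                              # (A11 u_x1, A22 u_x2)
    div = grad(flux[:, :1], x)[:, :1] + grad(flux[:, 1:2], x)[:, 1:2]
    return div + f
\end{verbatim}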
We also plot the G-limits and the homogenized solutions at $x_2 = 1.25$ in Figure \ref{figure:2dnonper_plots_ns} for $\epsilon = 2^{-7}$. Both diagonal entries of the G-limit and the homogenized solution agree well with their references. Again, we observe that the learned solutions tend to fit the macroscale behavior of the data, even when non-negligible noise is present, as shown in Figures \ref{figure:2dnonper_plot_data_homsols_ums1_ns1_sc} and \ref{figure:2dnonper_plot_data_homsols_ums1_ns3}.
\begin{figure}[!htb] \centering \vspace*{3mm}
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_errors_glimit_nd_sc1.png} \caption{G-limits} \label{fig:2dnonper_errors_glimit_nd_sc} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_errors_homsol_nd_sc1.png} \caption{Homogenized solutions} \label{fig:2dnonper_errors_homsol_nd_sc} \end{subfigure}
\caption{Error results for problem (\ref{eq:original_nonper_2D}): the relative $L^2$ errors for the G-limits and the homogenized solutions with different numbers of multiscale data points corrupted by different noise levels, when $\epsilon = 2^{-7}$ and the number of PDE residual points is $|{\mathcal T}_r| = |{\mathcal T}_d|$.} \label{figure:glimitsoltdns3_sc} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_Glimits_nd40_ep7_mat_sc_label.png} \caption{G-limits (noise-free)} \label{figure:2dnonper_plot_Glimits_nd40_ep7_mat} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_Glimits_nd40_ep7_mat_ns1_sc_label.png} \caption{G-limits ($1\%$ noise)} \label{figure:2dnonper_plot_Glimits_nd40_ep7_mat_ns1_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_Glimits_nd40_ep7_mat_ns3_sc_label.png} \caption{G-limits ($3\%$ noise)} \label{figure:2dnonper_plot_Glimits_nd40_ep7_mat_ns3_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_data_homsols_ums1_sc_label.png} \caption{Solutions (noise-free)} \label{figure:2dnonper_plot_data_homsols_ums1_ep7} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_data_homsols_ums1_ns1_sc_label.png} \caption{Solutions ($1\%$ noise)} \label{figure:2dnonper_plot_data_homsols_ums1_ns1_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_data_homsols_ums1_ns3_sc_label.png} \caption{Solutions ($3\%$ noise)} \label{figure:2dnonper_plot_data_homsols_ums1_ns3} \end{subfigure} \vspace*{-4mm}
\caption{Problem (\ref{eq:original_nonper_2D}): comparison of the reference solutions (the diagonal entries of $A^*(x)$ and $u_{0,h}(x)$) and the counterparts learned by PINNs with different noise levels. (a. b. c.): the G-limit; (d. e.
f.): the homogenized solution at $x_1 = 1.25$, where $\epsilon = 2^{-7}$, the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 1600$, $|{\mathcal T}_r| = 1600$.} \label{figure:2dnonper_plots_ns} \vspace*{-5mm} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_errors_glimit_ep_sc1.png} \caption{G-limits} \label{fig:2dnonper_errors_glimit_ep_sc} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_errors_homsol_ep_sc1.png} \caption{Homogenized solutions} \label{fig:2dnonper_errors_homsol_ep_sc} \end{subfigure} \vspace*{-4mm}
\caption{ Error results for problem (\ref{eq:original_nonper_2D}): the relative $L^2$ errors for the G-limits and the homogenized solutions with different finescale parameter $\epsilon$ and noise levels in the data when the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 1600$ and $|{\mathcal T}_r| = 1600$.} \label{figure:glimitsolepns3_Sc} \vspace*{-7mm} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_Glimits_nd40_ep3_mat_sc_label.png} \caption{G-limits ($\epsilon = 2^{-3}$)} \label{figure:2dnonper_plot_Glimits_nd40_ep3_mat_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_Glimits_nd40_ep5_mat_sc_label.png} \caption{G-limits ($\epsilon = 2^{-5}$)} \label{figure:2dnonper_plot_Glimits_nd40_ep7_mat_ep5_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_Glimits_nd40_ep77_mat_sc_label.png} \caption{G-limits ($\epsilon = 2^{-7}$)} \label{figure:2dnonper_plot_Glimits_nd40_ep7_mat_sc} \end{subfigure}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_data_homsols_ums1_ep3_sc_label.png} \caption{Solutions ($\epsilon = 2^{-3}$)} \label{figure:2dnonper_plot_data_homsols_ums1_ep3_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_data_homsols_ums1_ep5_sc_label.png} \caption{Solutions ($\epsilon = 2^{-5}$)} \label{figure:2dnonper_plot_data_homsols_ums1_ep5_sc} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=1\textwidth]{Figure/2dnonper_plot_data_homsols_ums1_ep7_sc_label.png} \caption{Solutions ($\epsilon = 2^{-7}$)} \label{figure:2dnonper_plot_data_homsols_ums1_ep7_sc} \end{subfigure}
\caption{Problem (\ref{eq:original_nonper_2D}) with noise-free data: comparison of the reference coefficient and homogenized solutions (the diagonal entries of $A^*(x)$ and $u_{0,h}(x)$) and counterparts learned by PINNs with different values of $\epsilon$. (a. b. c.): the G-limit; (d. e.
f.): the homogenized solution at $x_1 = 1.25$, where the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 1600$, $|{\mathcal T}_r| = 1600$.} \label{figure:2dnonper_plot_ums1_ep} \end{figure}
\begin{figure}[!htb] \centering \captionsetup{justification=centering}
\begin{subfigure}{0.25\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_plot_homsol_PINNs_12_label.png} \caption{$\hat{u}_0$, noise-free data \\ \quad} \label{fig:2dnonper_plot_homsol_PINNs_12} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.25\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_plot_homsol_FEM_12_label.png} \caption{$u_{0,h}$ \\ \quad} \label{fig:2dnonper_plot_homsol_FEM_12} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.25\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_plot_solerror_nd40_ep7_label.png} \caption{$\left|u_{0,h}(x)-\hat{u}_0(x)\right|$\\ (noise-free data)} \label{fig:2dnonper_plot_solerror_nd80_ep7} \end{subfigure}
\begin{subfigure}{0.25\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_plot_homsol_PINNs_12_ns3_label.png} \caption{$\hat{u}_0$, $3\%$-noise data\\ \quad} \label{fig:2dnonper_plot_homsol_PINNs_12_ns3} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.25\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_plot_homsol_FEM_12_label.png} \caption{$u_{0,h}$\\ \quad} \label{fig:2dnonper_plot_homsol_FEM_12_ns3} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.25\textwidth} \includegraphics[width=\textwidth]{Figure/2dnonper_plot_solerror_nd40_ep7_ns3_label.png} \caption{$\left|u_{0,h}(x)-\hat{u}_0(x)\right|$\\ ($3\%$-noise data)} \label{fig:2dnonper_plot_solerror_nd80_ep7_ns3} \end{subfigure} \captionsetup{justification=justified}
\caption{The 2D homogenized solutions of problem (\ref{eq:original_nonper_2D}) obtained with noise-free and $3\%$-noise data. (a. d.): the homogenized solution obtained by PINNs; (b. e.): the reference homogenized solution; (c. f.): the absolute error between the two solutions, when $\epsilon=2^{-7}$ and the numbers of multiscale solution data and PDE residual points used are $|{\mathcal T}_d|=1600$, $|{\mathcal T}_r|=1600$. } \label{figure:2dnonper_sol_plot_ns3} \vspace*{-3mm} \end{figure}
Figure \ref{figure:glimitsolepns3_Sc} shows the relative $L^2$ errors for the learned G-limits and the homogenized solutions for different finescale parameters $\epsilon$, using $1600$ multiscale solution data points collected at fixed spatial locations for all $\epsilon$, i.e., $|{\mathcal T}_d|=1600$. For the noise-free scenarios, the error decays as the finescale size $\epsilon$ decreases. To further examine this effect, we plot the corresponding learned G-limits and homogenized solutions in Figure \ref{figure:2dnonper_plot_ums1_ep}. In particular, the approximation quality of the G-limits appears to be more sensitive to the size of $\epsilon$ in the noise-free case. When the noise dominates over the multiscale oscillations in the data, the results are no longer sensitive to the finescale size. Nonetheless, we can still achieve errors of less than $1\%$ for both the G-limit and the homogenized solutions under a $5\%$ noise level. To further highlight the performance of the proposed method, we also show the learned homogenized solution together with the reference solutions in Figure \ref{figure:2dnonper_sol_plot_ns3}. As we can see, our solutions agree very well with the reference solutions.
Overall, our results show that when mild noise and multiscale fluctuations are present in the data, PINNs can provide good estimates of the G-limit for this 2D non-periodic example. \subsection{Homogenization of an ergodic random coefficient} Finally, we consider the following two-scale elliptic equation with an ergodic coefficient, inspired by the example in \cite[Section 4.2]{brown2017hierarchical}: \begin{equation} \label{eq:1Drandom} \begin{split} -\frac{d}{d x}\left(A^\epsilon(x,\omega)\frac{d}{d x}u^\epsilon(x)\right) &= 1\ \ \textrm{in} \ \ \Omega = [0,1], \\ u^\epsilon(0) &= u^\epsilon(1) = 0, \end{split} \end{equation} where $A^\epsilon(x,\omega) = A(x, T_{{x\over\epsilon}}(\omega)) = 3.1 + (x+1)\sin(2\pi(\omega_1+{x\over\epsilon}))+\sin(2\pi(\omega_2 + \sqrt{2}{x\over\epsilon}))$ for $\omega = (\omega_1,\omega_2)$ drawn from a uniform distribution over $[0,1]^2$. Here the ergodic dynamical system $T: \mathbb{R} \times {\mathcal Z} \to {\mathcal Z}$ is given by \begin{equation} T(x)\omega = \omega + (1,\sqrt{2})x. \end{equation}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_errors_glimit_nd_sc1.png} \caption{G-limits} \label{fig:1drandom_errors_glimit_nd} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_errors_homsol_nd_sc1.png} \caption{Homogenized solutions} \label{fig:1drandom_errors_homsol_nd} \end{subfigure} \vspace*{-4mm}
\caption{Error results for problem (\ref{eq:1Drandom}): the relative $L^2$ errors for the G-limits and the homogenized solutions with different numbers of multiscale data points corrupted by different noise levels for $\epsilon = 2^{-10}$; the number of PDE residual points is $|{\mathcal T}_r| = |{\mathcal T}_d| +20$.} \label{figure:glimitsoltdns4} \vspace*{-3mm} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_plot_kappa_ums1_ep10_160_label.png} \caption{G-limits (noise-free)} \label{fig:1drandom_plot_kappa_ums1_ep10_160_1} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_plot_kappa_ums1_ns1_160_label.png} \caption{G-limits ($1\%$ noise)} \label{fig:1drandom_plot_kappa_ums1_ns1_160_sciann} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_plot_kappa_ums1_ns3_160_label.png} \caption{G-limits ($3\%$ noise)} \label{fig:1drandom_plot_kappa_ums1_ns3_160_sciann} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_plot_data_homsols_ums1_ep10_160_label.png} \caption{Solutions (noise-free)} \label{fig:1drandom_plot_data_homsols_ums1_ep10_160_1} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_plot_data_homsols_ums1_ns1_160_label.png} \caption{Solutions ($1\%$ noise)} \label{fig:1drandom_plot_data_homsols_ums1_ns1_160_sciann} \end{subfigure} \hspace{-.25in}
\begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_plot_data_homsols_ums1_ns3_160_label.png} \caption{Solutions ($3\%$ noise)} \label{fig:1drandom_plot_data_homsols_ums1_ns3_160_sciann} \end{subfigure} \vspace*{-4mm}
\caption{Problem (\ref{eq:1Drandom}): comparison of the reference coefficient and solutions ($A^*(x)$ and $u_{0,h}(x)$) and the counterparts learned by PINNs with different noise levels in
the data. (a. b. c.): the G-limit; (d. e. f.): the homogenized solution, where $\epsilon = 2^{-10}$ and the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 160$, $|{\mathcal T}_r| = 180$.} \label{figure:1drandom_plots_ns} \vspace*{-3mm} \end{figure}
\begin{figure}[!htb] \centering
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_errors_glimit_ep_sc1.png} \caption{G-limits} \label{fig:1drandom_errors_glimit_ep} \end{subfigure} \hspace{.3in}
\begin{subfigure}{0.40\textwidth} \includegraphics[width=\textwidth]{Figure/1drandom_errors_homsol_ep_sc1.png} \caption{Homogenized solutions} \label{fig:1drandom_errors_homsol_ep} \end{subfigure} \vspace*{-4mm}
\caption{Error results for problem (\ref{eq:1Drandom}): the relative $L^2$ errors for the G-limits and the homogenized solutions with different finescale parameters $\epsilon$ and noise levels in the data, when the number of multiscale data and PDE residual points are $|{\mathcal T}_d| = 160$ and $|{\mathcal T}_r| = 180$.} \label{figure:glimitsolepns4} \vspace*{-5mm} \end{figure}
Notably, the G-limit $A^*(x)$ of this ergodic homogenization problem is deterministic and independent of the realization of $\omega$ \cite{jikov2012homogenization}. In this example, it is known that the exact G-limit is given by $1/\mathbb{E}\left [1/A(x,\omega)\right]$, where $\mathbb{E}$ denotes the expectation with respect to the realizations of $\omega$ \cite{alexanderian2014primer}. Traditional approaches for ergodic homogenization usually first solve the local cell problems for many different realizations of the coefficient $A^\epsilon(x,\omega)$ to obtain realization-dependent approximations of the G-limit, which is then approximated by taking the expectation. This procedure requires solving a large number of cell problems at many different points $x$ for thousands of realizations of $\omega$. For PINNs, on the other hand, we only need to collect the multiscale solution data based on a single realization of the coefficient. Specifically, the reference G-limit $A^*(x)$ is computed as the expectation using roughly $200,000$ Monte Carlo samples at $2000$ equidistant points in the spatial domain. Based on this G-limit, we compute the reference homogenized solution by FEM with mesh size $h=1/2000$. For PINNs, we learn the G-limit based on only one realization, $\omega = (0.5,0.5)$. The training data are sampled at equally spaced points from the multiscale solution, computed for each finescale parameter value $\epsilon$ by FEM with a mesh size of $h = 1/10^5$. The architecture parameters and other hyperparameters of PINNs are listed in {\it Table \ref{tb:hyperparameters}} in the appendix. The relative $L^2$ errors are computed using a mesh of size $h=1/2000$. Figure \ref{figure:glimitsoltdns4} shows the error convergence of the learned G-limits and homogenized solution for $\epsilon = 2^{-10}$. For the noise-free case, an error level of ${\mathcal O}(10^{-3})$ for both the G-limit and the homogenized solution can be achieved. For noisy data, while the errors for the G-limit and homogenized solution tend to stagnate after more than $80$ multiscale data points are used, we can still achieve errors of ${\mathcal O}(10^{-2})$ for both the G-limit and the homogenized solution with $5\%$-noise corruption in the data.
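For reference, the exact G-limit $1/\mathbb{E}[1/A(x,\omega)]$ can be estimated directly by Monte Carlo, as in the following sketch of the reference computation described above; the sample size and random seed are arbitrary.
\begin{verbatim}
# Monte Carlo estimate of the exact G-limit 1 / E[1 / A(x, omega)].
import numpy as np

rng = np.random.default_rng(1)

def A(x, omega):
    # stationary coefficient A(x, omega) with omega uniform on [0, 1]^2
    return 3.1 + (x + 1.0) * np.sin(2.0 * np.pi * omega[:, 0]) \
               + np.sin(2.0 * np.pi * omega[:, 1])

def G_limit_mc(x, n_samples=200_000):
    omega = rng.uniform(size=(n_samples, 2))
    return 1.0 / np.mean(1.0 / A(x, omega))

xs = np.linspace(0.0, 1.0, 5)
print([round(G_limit_mc(x), 3) for x in xs])
\end{verbatim}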
We also compare the learned G-limits and the homogenized solutions with their references for $\epsilon = 2^{-10}$ and $|{\mathcal T}_d| = 160$ data points with different noise levels in Figure \ref{figure:1drandom_plots_ns}. The learned G-limit is close to the reference coefficient, and the approximated homogenized solution almost overlaps with the data. Again, we observe that PINNs tend to learn the macroscopic behavior of the noisy data, which is close to the reference homogenized solution. The error results with respect to the finescale parameter $\epsilon$ are presented in Figure \ref{figure:glimitsolepns4}. For both noise-free and noisy scenarios, the errors tend to decay as the finescale size $\epsilon$ decreases, particularly for the homogenized solutions. The effect of $\epsilon$ is less pronounced when the noise level is high, because the noise dominates over the finescale oscillations in the data. Notably, with $5\%$-noise corruption, we can still achieve relative errors of less than $5\%$ for both the G-limit and the homogenized solution by incorporating the corresponding homogenized equation. \section{Conclusion} In this paper, we proposed a simple and flexible approach to estimate the G-limit and approximate the homogenized solution for multiscale elliptic equations from data by adopting physics-informed neural networks (PINNs). Due to the lack of homogenized solution data or measurements, we employ the multiscale solution data as a surrogate for the homogenized solution data. Despite the rapid multiscale oscillations and noise present in the data, we demonstrated that PINNs are capable of effectively extracting the macroscopic (homogenized) behavior from the data and providing good approximations to the G-limits and the homogenized solution. The applicability and performance of the method have been demonstrated through a number of different benchmark problems. Finally, we remark that, apart from the assumption of the existence and structure of the homogenized equation, our approach does not rely on the periodicity or an explicit formula of the underlying multiscale coefficient during the learning stage, and is therefore applicable to more general settings beyond the periodic case. \section*{Acknowledgments} XZ was supported by the Simons Foundation.
\section{Introduction} Studying star formation in the Central Molecular Zone (CMZ) of the Galaxy (within a few hundred pc of the Galactic center) can help provide insight into the structure and evolution of our Milky Way Galaxy, as well as provide a template for studying star formation in the nuclei of other galaxies. The CMZ, which contains $\sim$~5~$\times$~10$^{7}$~M$_{\odot}$\, of molecular gas \citep{pier00}, is home to several of the prominent star-forming regions toward the Galactic center, such as the Sgr~A, Sgr~B, and Sgr~C \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, complexes. These regions are evolved enough to contain UV-emitting stars, but the CMZ also shows evidence of earlier stages of star formation \citep{lisz09,ball10}. Among the best tracers of the early stages of star formation are CH$_3$OH\, masers and regions of extended, enhanced 4.5~$\mu$m\, emission. A new method of identifying star formation activity in its early stages is by selecting sources with enhanced 4.5~$\mu$m\, emission. Such sources, commonly called green fuzzies \citep{cham09} or extended green objects \citep[EGOs;][]{cyga08}, are named for how they appear in {\it Spitzer}/IRAC \citep{fazi04} 3-color images (8.0~$\mu$m\, in red, 4.5~$\mu$m\, in green, and 3.6~$\mu$m\, in blue). The enhancement at 4.5~$\mu$m\, likely arises from a shock-excited H$_2$ or CO spectral feature in the 4.5~$\mu$m\, band (Marston et al. 2004, Noriega-Crespo et al. 2004). More recent work by \citet{debu10} and \citet{fost11} has shown that the 4.5~$\mu$m\, emission from some green sources is caused by shock-excited H$_2$ emission, while others show no prominent spectral features. Despite its uncertain origin, enhanced 4.5~$\mu$m\, emission, frequently referred to as `green' emission in this paper, is a reliable tracer of early protostellar activity \citep{cham09,cyga08,cyga09}. Another tracer of star formation activity is one of the brightest known maser lines, the CH$_3$OH\, transition at 6.7~GHz. This Class~II CH$_3$OH\, maser transition, which is thought to be radiatively excited by a central high-mass protostellar object \citep{crag92}, exclusively traces high-mass ($\geq$~8~M$_{\odot}$) star formation \citep{wals01, mini03}. Class~II masers are often found very close to protostars, supporting this idea \citep{casw97, elli05}. Collisionally excited Class~I CH$_3$OH\, masers are also thought to be reliable tracers of star formation. Unlike Class~II masers, however, they are not restricted to high-mass star-forming regions, as evidenced by the detection of Class~I CH$_3$OH\, masers toward low- and intermediate-mass star forming regions \citep{kale06,kale10}. Class I masers tend to be found at larger angular distances from protostars than Class~II masers \citep{kurt04,elli05}, and are well-correlated with the outflows associated with star formation \citep{plam90, kurt04, voro06}. Recent work has shown that Class~I CH$_3$OH\, masers are also associated with regions of shocked gas where expanding \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, regions collide with neighboring molecular clouds \citep{voro10}. Thus, Class~I masers are associated with two evolutionary phases of star formation. Because green sources are known to be in early evolutionary phases, Class~I masers associated with them are likely to be associated with protostellar outflows rather than \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, regions. 
The first to propose a correlation between sources with enhanced 4.5~$\mu$m\, emission and CH$_3$OH\, masers was \citet{yuse07}, who found that at least 1/3 of the green sources in the Galactic center are associated with CH$_3$OH\, maser emission. Subsequent studies \citep{cham09, chen09, cyga09} have shown that CH$_3$OH\, masers are strongly associated with 4.5~$\mu$m\, excess sources in the Galactic disk. Moreover, these green sources are also highly associated with: (1) 24~$\mu$m\, emission, indicative of heated dust around a protostar, and (2) infrared dark clouds (IRDCs), which are known to harbor the early stages of high-mass star formation (see Section~\ref{irdcs}). Because both Class~I and Class~II CH$_3$OH\, maser emission are generated during high-mass star formation, their association with 4.5~$\mu$m\, excess sources helps to confirm that the green sources are indeed tracing the early stages of high-mass star formation. In a detailed examination of the star formation in the Galactic center, \citet{yuse09}\, (Y-Z09 hereafter) identified and examined 34 green sources toward the central region of the Galaxy ($|\ell|<1.05^{\circ}$ and $|b|<0.8^{\circ}$; see Fig.~\ref{galcen_sources}) using {\it {Spitzer}}/IRAC data. Y-Z09 identified these sources, by eye, using the empirical `green' ratio $I(4.5)/[I(3.6)^{1.2} \times I(5.8)]^{0.5}$, which helps select sources that are enhanced at 4.5~$\mu$m\, relative to 3.6~$\mu$m\, and 5.8~$\mu$m. Using available published CH$_3$OH\, maser data, primarily from targeted observations of known star-forming regions and maser sites, Y-Z09 found that $\sim$~40\%\, of these green sources are associated with Class~I and/or Class~II CH$_3$OH\, maser emission. Due to the relatively high detection limit ($\sim$~0.3--5~Jy) of these observations, however, it is possible that some masers have gone undetected. Here, we present our more sensitive ($<$~0.1~Jy), targeted, 6.7 and 44~GHz CH$_3$OH\, maser observations toward these green sources using the National Radio Astronomy Observatory's Expanded Very Large Array (EVLA)\footnote[1]{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.}. In addition to CH$_3$OH\, masers, we also search for other star formation indicators that correlate with our sample of 4.5~$\mu$m\, emission sources, such as 24~$\mu$m\, emission and IRDCs. We investigate whether star formation in the CMZ differs from that in the Galactic disk. \section{Observations} We used the new 4.0~-~8.0~GHz (C-band) capability of the EVLA to obtain spectral line observations of the 6.6669~GHz 5(1,5)-6(0,6)~A$^{++}$ CH$_3$OH\, maser transition \citep{ment91}\, toward a sample of 34 green sources in the central few degrees of the Galaxy (Fig.~\ref{galcen_sources}). The sample selection and individual sources are described in Section~\ref{notes}. To observe all 34 sources, we made 31 snapshot observations (one field contains 3 green sources, and another contains 2 green sources) of $\sim$~80 seconds each in 2007 December as part of Program AY184. In total, 12 EVLA antennas were used in the B configuration. The spectra have 140~km~s$^{-1}$\, (3.125~MHz) bandwidth, 256 channels, and a channel width of 0.54~km~s$^{-1}$\, (12.2~kHz). 
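As a quick check of the quoted spectral setup (our own arithmetic, assuming the radio velocity convention $\Delta v = c\,\Delta\nu/\nu_{0}$), the bandwidth and channel width quoted above follow directly from the correlator settings:
\begin{verbatim}
# Velocity widths implied by the 6.7 GHz correlator setup quoted above,
# using the radio convention dv = c * dnu / nu0 (our assumption).
c = 299792.458             # speed of light, km/s
nu0 = 6.6669e9             # rest frequency, Hz
print(c * 12.2e3 / nu0)    # channel width   -> ~0.55 km/s
print(c * 3.125e6 / nu0)   # total bandwidth -> ~140.5 km/s
\end{verbatim}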
The primary beam of the EVLA at this frequency is $\sim$~7\arcmin, the synthesized beams in the processed images are $\sim$~4\arcsec~$\times$~1\arcsec, and the position angles of the synthesized beams range from $-$14$^{\circ}$\, to 2$^{\circ}$, with a mean of $-$7$^{\circ}$. To calibrate the 6.7~GHz data, we use standard procedures in AIPS along with a few modifications to the standard recipe. The calibrators are 1331+305 for flux calibration and 1730-130 for phase calibration. Doppler tracking was used for these observations. After the data were obtained, however, it was determined that the path lengths of the fibers connecting the LO reference signals to the antennas were not taken into account. After the standard flux and phase calibration, an additional phase modification is applied (using CLCOR in AIPS) to each antenna to correct for this error. Because the sources are within a few degrees on the sky, only one constant offset is necessary, rather than a more time/source-specific offset. This offset modifies the phase only, and thus should not affect the positional or velocity accuracy of the data. For those sources with bright emission, self-calibration is performed on the channel of peak emission (as identified in the AIPS task POSSM), and subsequently applied to all other channels. PBCOR is used to correct the flux for primary beam attenuation. The final sensitivity of these 6.7~GHz data is $\sim$~50~mJy~beam$^{-1}$ in each spectral channel. In order to identify CH$_3$OH\, masers at a wide range of velocities, we made two observations of each field, centered at what were intended to be $\pm$~50~km~s$^{-1}$. Because Doppler tracking was used, however, an error in the online system caused the observed frequencies to be calculated using the submission date of the observe file rather than the actual date of observation. Using the online DOPSET tool and the date of the observations, we calculated what the observing frequency should have been, and found that an offset of 16.3-16.8~km~s$^{-1}$\, was introduced by this error, depending on the source. Thus, the central velocities of the two observations for each field are $\sim$~$-$33~km~s$^{-1}$\, and $\sim$~$+$67~km~s$^{-1}$, and our observations cover the entire velocity range from $-$103~km~s$^{-1}$\, to $+$136~km~s$^{-1}$. To supplement our observations, we also use a portion of the Methanol Multibeam (MMB) survey catalog of 6.7~GHz masers \citep[][C10 hereafter]{casw10}. In their survey of the Galactic plane (345$^{\circ}$~$<~\ell~<$~6$^{\circ}$, $|b|<$2$^{\circ}$), C10 used the Parkes 64-m telescope to search for 6.7~GHz maser emission with a sensitivity of $\sim$~170~mJy (1$\sigma$). In order to pinpoint the location of the maser emission to within 0.4\arcsec, follow-up observations of the Parkes detections were carried out using the Australia Telescope Compact Array (ATCA). We used the EVLA in the C configuration to observe the 44.069~GHz 7(0,7)-6(1,6)~A$^{++}$ CH$_3$OH\, maser emission \citep{mori85}\, toward our sample of 34 green sources in 2008 May as part of Program AY184. As with the 6.7~GHz observations, these 34 sources are observed in 31 fields. A total of 26 antennas were used, including 15 EVLA antennas. At this frequency, the primary beam is $\sim$~1\arcmin, the synthesized beams in our data are $\sim$~1\arcsec~$\times$~0.5\arcsec, and the position angles of the synthesized beams range from $-$8$^{\circ}$\, to 12$^{\circ}$, with a mean of $-$5$^{\circ}$.
The spectra have 84~km~s$^{-1}$\, (12.5~MHz) bandwidth and 64 channels, resulting in a channel width of 1.3~km~s$^{-1}$\, (195~kHz). Each source was observed for $\sim$~2~minutes in each of two velocity ranges, from $\sim$~$-$89 to $-$7~km~s$^{-1}$\, and $\sim$~$-$2 to $+$80~km~s$^{-1}$. Because the velocity range from $-$7~km~s$^{-1}$\, to $-$2~km~s$^{-1}$\, was not covered by these observations, it is possible that masers exist in this range and were undetected by our observations. C10 found that only one of the 29 Class~II masers in the region containing our green sources is in this velocity range. As a result, it seems unlikely that a significant number of masers have gone undetected in our search. Because the possibility remains, however, our 44~GHz maser detection rates should be considered lower limits. Standard flux (1331+305) and phase (17443-31165) calibration are applied to the data in AIPS. Because these are high-frequency observations, fast-switching was used for frequent phase monitoring. Self-calibration is performed on the channel of peak emission (as identified in the AIPS task POSSM), and subsequently applied to all other channels. PBCOR is used to correct the flux for primary beam attenuation. Because these observations combined data from VLA and EVLA antennas, Doppler tracking was not used (as recommended by NRAO). During the data reduction process, Doppler offsets are calculated using the online DOPSET tool. The final sensitivity of these observations is $\sim$~70~mJy~beam$^{-1}$ in each spectral channel. \section{Results} To better understand excess 4.5~$\mu$m\, emission sources and the star formation occurring toward the Galactic center, we compiled several of their characteristics. Three of the 34 green sources (g1, g2, and g4) were found to be positionally coincident with known planetary nebulae \citep[][Y-Z09]{vand01,jaco04}. Because we are interested in studying regions of star formation, we exclude these three sources from subsequent analysis. To examine the remaining 31 sources, we compare their (1) 6.7~GHz Class~II CH$_3$OH\, maser emission, (2) 44~GHz Class~I CH$_3$OH\, maser emission, (3) 24~$\mu$m\, emission using {\it {Spitzer}}/MIPS \citep{riek04}\, data from Y-Z09, (4) Galactic location, (5) mass, as estimated by Y-Z09, and (6) association with IRDCs, using {\it {Spitzer}}/IRAC data \citep{stol06, aren08, rami08}. A summary of these results can be found in Table~\ref{green-summary}. \subsection{Association of 6.7~GHz CH$_3$OH\, Masers with Green Sources} Class~II 6.7~GHz CH$_3$OH\, masers are radiatively excited and are known to trace the early stages of high-mass star formation \citep[e.g.,][]{ment91, crag92, wals01, mini03}. Because of their strong association with high-mass star formation, we searched for 6.7~GHz CH$_3$OH\, masers toward our sample of green sources. We detect a total of 18 maser sites in the 31 observed fields. The masers are identified, by eye, in EVLA data cubes before self-calibration. The typical 1$\sigma$\, sensitivity in these cubes is $\sim$~50~mJy. Spectra of the 18 maser sites are shown in Figures~\ref{g0}-\ref{g32}, and their positions, velocities, and peak intensities can be found in Table~\ref{six-summary}. If a maser site displays more than one velocity feature, the position and velocity of the peak intensity are listed. All of our maser detections, except for one, are present in the C10 catalog. Comparing the positions of the masers, we find a mean offset of 0.9\arcsec.
The maser positions from our data are identified by the position of the peak pixel rather than an elliptical Gaussian fit, possibly resulting in an offset of one or two pixels (our data have a pixel size of 0.3\arcsec). C10 cite a positional uncertainty of 0.4\arcsec\, for their catalog. Together, the positional uncertainties can account for the mean offset of 0.9\arcsec, and we consider the masers that we detect to be positional matches to the C10 masers. Moreover, we find an average offset in peak velocity of only 0.2~km~s$^{-1}$\, (compared to our channel width of 0.54~km~s$^{-1}$) between the two sets of masers. As a result, we are confident that we are detecting the same maser sources as C10. We detect one maser toward green source g31 (G359.199$+$0.041) that is not in the C10 catalog, located at $\alpha, \delta$\, (J2000) $= 17^{h}43^{m}37.4^{s}, -29^{\circ}36'10.3''$\, with a velocity of $-$4.1~km~s$^{-1}$. This maser has a peak flux of $\sim$~1.4~Jy, not much greater than the 1~Jy value at which the C10 survey is close to 100\% complete \citep{gree09}. Because 6.7~GHz masers can be variable by factors of a few over timescales of months \citep{goed04}, it is not surprising that this maser escaped detection by C10. There are four CH$_3$OH\, masers, detected by C10, that are in our fields of view but went undetected in our observations (0.167$-$0.446, 0.376+0.040, 358.980+0.084, and 359.970$-$0.457 in the naming scheme of C10). Each of these masers displayed variability of at least a factor of 2 in the different C10 observations, and their lowest observed intensities range from $<$~0.2~Jy to 1.3~Jy. Their variability makes it plausible that these masers were in a low or dormant state during our observations, resulting in their non-detection. Because of its excitation mechanism, 6.7~GHz maser emission is likely found close to protostars. Recent work \citep[e.g.,][]{cyga09} has shown that 6.7~GHz masers are indeed close (typically within a few arcseconds) to central, star-forming objects within enhanced 4.5~$\mu$m\, sources. To determine which green sources in our sample are associated with 6.7~GHz masers, we select a search radius of 10\arcsec. We find that 12 of 31 green sources are positionally coincident with at least one 6.7~GHz maser (Fig.~\ref{galcen_sources}). \subsection{Association of 44~GHz CH$_3$OH\, Masers with Green Sources} Class~I CH$_3$OH\, masers are collisionally excited, and are thought to form both in the outflows associated with star formation \citep{plam90,kurt04, voro06} and at the intersections of expanding \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, regions and their neighboring molecular clouds \citep{voro10}. Because they are reliable tracers of star formation, we searched for 44~GHz CH$_3$OH\, maser emission toward our enhanced 4.5~$\mu$m\, emission sources. We detect 8 masers in 31 fields. As with the 6.7~GHz maser observations, the 44~GHz masers are identified by eye in EVLA data cubes prior to self-calibration. These data have a typical 1$\sigma$ sensitivity of $\sim$~70~mJy. Spectra of the 8 maser sites are shown in Figures~\ref{g0}-\ref{g32}, and their positions, velocities, and peak intensities can be found in Table~\ref{forty-summary}. Because we are searching for 44~GHz masers toward green sources, which are known to be in the early stages of star formation, we will likely identify masers that are associated with outflows rather than \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, regions. 
The collisionally excited masers formed in outflows are typically found at larger angular separations \citep[up to tens of arcseconds;][]{cyga09} from central protostellar objects than radiatively excited masers. Indeed, \citet{cyga09} find some 44~GHz masers beyond the extent of the 4.5~$\mu$m\, emission used to identify their green sources. We select a radius of 30\arcsec\, as the maximum separation to associate a 44~GHz maser with a green source. We find that 8 of 31 green sources are positionally coincident with at least one 44~GHz maser (Fig.~\ref{galcen_sources}). It is possible that these sources are not physically associated, especially if they are located at the Galactic center distance of $\sim$~8.5~kpc (where a separation of 30\arcsec\, corresponds to a linear size scale of $\sim$~1.2~pc). Because the chance of a random alignment of these two signs of star formation activity is low, however, we use the angular separation of 30\arcsec\, even for the Galactic center sources. We find that some green sources are associated with both 44~GHz and 6.7~GHz masers. Of the 12 green sources with 6.7~GHz CH$_3$OH\, masers, 5 have 44~GHz masers. Of the 8 sources with associated 44~GHz masers, 5 have 6.7~GHz maser counterparts. \subsection{24~$\mu$m\, Emission toward Green Sources} Bright 24~$\mu$m\, emission is often associated with star formation. This emission may be from the heated dust in a protostar/disk system \citep{muze04, whit04, beut07}, or it could arise from the heated dust in an \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, region. \citet{cham09}\, find a high correlation of green sources with 24~$\mu$m\, emission, further supporting the idea that it is a reliable tracer of star formation. To determine if our sample of green sources is coincident with this additional star formation indicator, we visually inspected {\it Spitzer}/MIPS 24~$\mu$m\, data of the Galactic center (Y-Z09). In regions where the MIPS data are saturated (e.g., near Sgr~A$^*$), lower resolution {\it MSX} data at 21.34~$\mu$m\, are used to replace the missing MIPS data (see Y-Z09 for details). We find that 24 of the 31 green sources are coincident with 24~$\mu$m\, emission. Thus, we can be reasonably sure that these sources are indeed protostellar in nature. Of the seven green sources that do not display 24~$\mu$m\, emission, two green sources are close to \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, regions that are very bright at 24~$\mu$m: Sgr~C and Sh2-20 \citep{shar59,dutr03}. The bright emission from these sources may be overwhelming any 24~$\mu$m\, emission toward the green sources. Thus, the 24~$\mu$m\, detection rate toward green sources of 77\% (24 of 31) should be considered a lower limit. \subsection{The Galactic Location of Green Sources} The green sources in our sample are divided into two categories -- one consisting of sources likely to be in the Galactic center region at a distance of $\sim$~8.5~kpc, and the other likely to be foreground to the Galactic center. As described in Y-Z09, the green sources were divided into these two categories based on their Galactic latitude. Sources with $|b|>10$\arcmin\, are assumed to be in the foreground, and sources with $|b|<10$\arcmin\, are assumed to be at the Galactic center distance. Based on this separation method, we find that 16 and 15 green sources in our sample are foreground sources and Galactic center sources, respectively. While this distance estimate may not be exact, it is a reasonable first approximation. 
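For reference, the projected physical scales corresponding to the angular search radii used above follow from the small-angle approximation at the assumed Galactic center distance; a minimal Python sketch (8.5~kpc and the 10\arcsec\, and 30\arcsec\, radii are the values quoted in the text):
\begin{verbatim}
# Projected linear scale of an angular offset at the Galactic center distance.
D_PC = 8500.0              # assumed Galactic center distance [pc]
ARCSEC_PER_RAD = 206265.0  # arcseconds per radian

def arcsec_to_pc(theta_arcsec, d_pc=D_PC):
    """Small-angle conversion of an angular offset into a projected length [pc]."""
    return theta_arcsec / ARCSEC_PER_RAD * d_pc

print(arcsec_to_pc(10.0))  # 6.7 GHz association radius -> ~0.4 pc
print(arcsec_to_pc(30.0))  # 44 GHz association radius  -> ~1.2 pc
\end{verbatim}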
Based on the scale height of 24~$\mu$m\, sources and young stellar objects in the Galactic center ($\sim$~8\arcmin; Y-Z09), it is plausible that the green sources at low Galactic latitudes are at the distance of the Galactic center. With kinematic velocities, one could attempt to derive kinematic distances to these sources. Because the sources are close in projection to the Galactic center, however, the derived kinematic distances would have very large errors. As a result, we have not made distance estimates based on kinematics. \subsection{Masses of Green Sources \label{masses}} The masses of the green sources were determined by Y-Z09, who performed SED fits of the sources using YSO models \citep{robi06, whit03a, whit03b} and a linear regression fitter \citep{robi07}. The fits to individual sources range from well-constrained to poorly constrained, and have typical errors of $\sim$~25\%. The current masses derived from these fits range from 2.1 to 29.9~M$_{\odot}$, with a median mass of 10.3~M$_{\odot}$. Because these green sources are likely still accreting, their final masses may be larger than their current mass, and the masses we list are a lower limit to the final masses of individual stars. Moreover, the green sources may consist of multiple sources at higher resolution, so the derived masses may represent the mass of a cluster of protostars rather than individual protostars. Nevertheless, the derived masses of the green sources, along with their positional coincidence with Class~II CH$_3$OH\, maser emission, make it likely that the green sources harbor high-mass protostars. \subsection{Association of Green Sources with IRDCs \label{irdcs}} IRDCs, which are identified as absorption features against the Galactic IR background, are dense (n~$>$~10$^5$~cm$^{-3}$, N~$\sim$~10$^{24}$~cm$^{-2}$), cold ($<$ 25 K; Egan et al. 1998; Carey et al. 1998, 2000), and have characteristic sizes and masses of $\sim$~5 pc and $\sim$~few 10$^3$~M$_{\odot}$\, (Simon et al. 2006). These large reservoirs of molecular gas harbor the earliest stages of star and cluster formation. Within IRDCs are compact cores with characteristic sizes of $\sim$~0.5~pc and masses of $\sim$~120~M$_{\odot}$, comparable to compact cores associated with high-mass star formation \citep{rath06}. Some IRDC cores contain embedded young stars or protostars, and a few of these embedded young stellar objects will evolve into high-mass stars \citep[e.g.,][]{beut05,rath05}. \citet{cham09}\, found that the cores within IRDCs span a range of evolutionary stages, and can be separated into three broad categories: (1) `quiescent' cores, which display no bright IRAC (3-8~$\mu$m) or 24~$\mu$m\, emission and are in a pre-protostellar state, (2) `active' cores, which contain extended, enhanced 4.5~$\mu$m\, emission (a `green fuzzy') coincident with 24~$\mu$m\, emission, and (3) `red' cores, which display bright 8~$\mu$m\, emission, indicative of polycyclic aromatic hydrocarbon (PAH) emission and an \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, region. We hypothesize that our sample of enhanced 4.5~$\mu$m\, sources are similar in their protostellar nature to the green fuzzies found within IRDCs. To test this possibility, we examined 8~$\mu$m\, IRAC images of the Galactic center region to determine if our green sources are associated with IRDCs. We find that 30 of our 31 green sources are associated with IRDCs, as identified by eye. Images of these sources are shown in Figures~\ref{g0}-\ref{g32}. 
\subsection{Notes on Individual Sources}\label{notes} Our source list comprises the 34 green sources identified by Y-Z09 (named g0 through g32; g21 is split into g21A and g21B). These sources were selected for their proximity, in projection, to the Galactic center (all sources have $|\ell|<1.05^{\circ}$ and $|b|<0.8^{\circ}$). In addition, all 34 sources were selected, by eye, for their enhanced 4.5~$\mu$m\, emission, as identified by the ratio: $I(4.5)/[I(3.6)^{1.2} \times I(5.8)]^{0.5}$, referred to as the `green ratio' hereafter. Y-Z09 found that this empirical green ratio of 4.5~$\mu$m\, intensity to that determined by a power-law interpolation between 3.6~$\mu$m\, and 5.8~$\mu$m\, intensities is successful at identifying sources with enhanced 4.5~$\mu$m\, emission. Here we describe each of the green sources in our sample. {\it Spitzer}/IRAC 3-color images of each source, using data obtained as part of GLIMPSE \citep{benj03} and another IRAC survey of the Galactic center region \citep{stol06, aren08, rami08}, are contained in Figures~\ref{g0}-\ref{g32}. In addition, Figures~\ref{g0}-\ref{g32} also contain {\it Spitzer}/MIPS 24~$\mu$m\, images of each source using data obtained by Y-Z09. For the fields toward which we detect CH$_3$OH\, maser emission, the maser positions are overlaid on the images, and the spectra are included in the figures. The masses of the sources given in the following sections are from Y-Z09, and are calculated as described in Section~\ref{masses}. In general, the 6.7~GHz masers that are associated with our green sources are located on the enhanced 4.5~$\mu$m\, emission that defines the sources. The average separation between the center of the green sources and their associated 6.7~GHz masers is 2.3$''$. Only 2 of the 13 associated 6.7~GHz masers are $>$~5$''$ from the center of the green sources, supporting the idea that these masers are formed close to the central protostellar object. 44~GHz masers are found at an average angular separation of 10.6$''$ from the center of their associated green sources. The 44~GHz masers are farther away from the center of the green sources than the 6.7~GHz masers, near the edges of the enhanced 4.5~$\mu$m\, emission and consistent with their creation in outflows. In 6 fields (g6-g10, g31), we detect 6.7~GHz masers in our observations but do not associate them with green sources. We include the positions of these masers in their appropriate figures to show where they reside in relation to the green sources in our sample. In general, the masers that are not associated with the green sources are not associated with any strong IRAC or MIPS emission. These masers may be associated with star formation along the line of sight to the green sources, but at an evolutionary state during which no significant IR emission is detectable. Alternatively, they may reside on the far side of the Galaxy, making the detection of infrared emission toward them difficult. \subsubsection{g0 (G1.041$-$0.072)} Green excess source g0 consists of two knots of enhanced 4.5~$\mu$m\, emission (Fig.~\ref{g0}), and is located to the west of Sgr~D. The mass of g0 (16.1$\pm$3.5~M$_{\odot}$) indicates that it is a site of high-mass star formation. Source g0 is located within an IRDC that is at the eastern edge of the prominent dust ridge seen toward the Galactic center \citep{lis94,lis98, lis01}. Because it is within 10$'$ of the Galactic plane, we assume that this source is at the distance of the Galactic center. 
We do not detect any CH$_3$OH\, maser emission toward this source. \subsubsection{g1 (G0.955$-$0.786), g2 (G0.868$-$0.697), g4 (G0.955$-$0.786)} Sources g1, g2, and g4 (Figs.~\ref{g1}, \ref{g2}, and \ref{g4}) are all coincident with the positions of known planetary nebulae \citep{vand01,jaco04}, are in close proximity to one another, and are isolated from other star formation regions. All three sources are associated with 24~$\mu$m\, emission, but show no correlation with other observational signatures of star formation, such as IRDCs or CH$_3$OH\, maser emission. Because they are likely to be planetary nebulae rather than protostars, we exclude them from our statistical analysis. \subsubsection{g3 (G0.826$-$0.211)} Similar to g0, green source g3 is embedded within an IRDC that may be part of the prominent dust ridge at positive Galactic longitude near the Galactic center. As seen in Figure~\ref{g3}, g3 appears more orange than green in the IRAC 3-color image, but, as the contours on the image show, it does have an enhancement at 4.5~$\mu$m. Its Galactic latitude places it at the distance of the Galactic center, and its possible association with the dust ridge and its proximity to Sgr~B2 support this claim. Source g3 has a mass of 10.6$\pm$0.0~M$_{\odot}$, and is not associated with any CH$_3$OH\, maser emission. \subsubsection{g5 (G0.708+0.408)} Green source g5 has the highest Galactic latitude of all the sources in our sample, making it likely to be foreground to the Galactic center. It is embedded within a small IRDC, and displays faint 24~$\mu$m\, emission (Fig.~\ref{g5}). This source, which displays no CH$_3$OH\, maser emission, has a mass of 6.0$\pm$3.0~M$_{\odot}$. \subsubsection{g6 (G0.693$-$0.045), g7 (G0.679$-$0.037), g8 (G0.667$-$0.037), g9 (G0.667$-$0.035), g10 (G0.665$-$0.053)} Green excess sources g6 through g10 are located in close proximity to one another in a large IRDC complex. This IRDC is part of the dust ridge, and is adjacent to Sgr~B2, one of the most massive star-forming regions in the Galaxy. The green sources are near the southern edge of the IRDC, just north of the bright Sgr~B2 \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, regions, suggesting that star formation is progressing from south to north in this cloud (Y-Z09). We assign the Galactic center distance to each of these green sources. Source g6, at 29.9$\pm$8.2~M$_{\odot}$\, the most massive of our sources, appears reddish in Figure~\ref{g6}, and is coincident with bright 24~$\mu$m\, emission. We do not associate g6 with any CH$_3$OH\, maser emission, but we do detect 6.7~GHz maser emission to its north. This maser site, which displays multiple velocity features, is not associated with any obvious IRAC emission, but is coincident with faint 24~$\mu$m\, emission. Sources g7, g8, and g9 are shown in Figure~\ref{g789}. Bright green in color in the IRAC image, g7 also displays bright 24~$\mu$m\, emission. It has a mass of 9.4$\pm$1.2~M$_{\odot}$, and is not associated with any CH$_3$OH\, masers. Sources g8 and g9 are reddish in color, and the region immediately surrounding them harbors seven 6.7~GHz masers (within $\sim$~1\arcmin), along with several sites of 44~GHz maser emission (within $\sim$~1\arcmin). The velocities of the 6.7~GHz masers fall in the range of $\sim$~48-73~km~s$^{-1}$, and the 44~GHz masers fall in the range of $\sim$~46-78~km~s$^{-1}$, both of which are consistent with the measured velocity of ionized gas seen toward this region. 
The large number of maser sites, along with their spread in velocity, indicates a high rate of star formation in this area. Source g9 has a mass of 21~$\pm$~7~M$_{\odot}$, but due to a poorly constrained fit, g8 has no derived mass. Both g8 and g9 are located in a region of bright 24~$\mu$m\, emission. Source g10 (9.5$\pm$0.7~M$_{\odot}$) displays extended 4.5~$\mu$m\, emission, and is not coincident with 24~$\mu$m\, emission (Fig.~\ref{g10}). We detect three 6.7~GHz masers in the g10 field, but none are close enough to be associated with the green source. Two of these masers display a single velocity feature, while the other shows two components. None of the maser sites are coincident with bright IRAC or 24~$\mu$m\, emission. \subsubsection{g11 (G0.542$-$0.476), g12 (G0.517$-$0.657), g13 (G0.483$-$0.701), g14 (G0.477$-$0.727), g15 (G0.408$-$0.504)} Foreground green excess sources g11-g15 are associated with the Sharpless 20 \citep[Sh20;][]{shar59,dutr03} star formation region, which is centered at $\ell=0.5$$^{\circ}$, $b=-0.3$$^{\circ}$\, \citep{mars74}. Source g11 (5.2$\pm$2.5~M$_{\odot}$) displays bright, compact 4.5~$\mu$m\, emission, but no correlated 24~$\mu$m\, emission (Fig.~\ref{g11}). It is located within an IRDC, adjacent to a bright loop of 8~$\mu$m\, emission that is likely part of Sh20. We did not detect any CH$_3$OH\, masers toward g11. Green source g12, which has a mass of 2.1~$\pm$~1.2~M$_{\odot}$, shows faint, extended 4.5~$\mu$m\, emission along with coincident 24~$\mu$m\, emission (Fig.~\ref{g12}). The IRDC that harbors g12 is adjacent to Sh20. We detect two 44~GHz CH$_3$OH\, masers associated with g12, with single velocity features at 15 and 17~km~s$^{-1}$. One of the maser sites resides within the extent of the 4.5~$\mu$m\, emission, while the other is $\sim$~10\arcsec\, away. The 4.5~$\mu$m\, and 24~$\mu$m\, emission from g13 are bright and extended (Fig.~\ref{g13}). This source is located just south of Sh20, embedded within an IRDC. It has a mass of 9.9~$\pm$~1.7~M$_{\odot}$, and we detect a single 44~GHz CH$_3$OH\, maser toward it. This maser has a single velocity feature at 13~km~s$^{-1}$\, that is located on a knot of 4.5~$\mu$m\, emission 19\arcsec\, away from the center of the green source. Located just south of g13, g14 displays two knots of 4.5~$\mu$m\, emission, both of which are coincident with 24~$\mu$m\, emission (Fig.~\ref{g14}). This 7.4$\pm$1.7~M$_{\odot}$\, source displays no CH$_3$OH\, maser emission, and is found within an IRDC. Green source g15 (Fig.~\ref{g15}) shows extended 4.5~$\mu$m\, emission coincident with 24~$\mu$m\, emission. This source, with a mass of 14.3~$\pm$~4.2~M$_{\odot}$, is not associated with a 44~GHz maser. The 4.5~$\mu$m\, emission from g15 is positionally coincident with a 6.7~GHz maser that has a single velocity feature at 26~km~s$^{-1}$. It is also surrounded by a bright ring of 8 and 24~$\mu$m\, emission, so it is unclear if g15 is embedded in an IRDC, or if the region in which it sits is only dark at 8~$\mu$m\, relative to the bright ring of emission. \subsubsection{g16 (G0.376+0.040)} Located at the distance of the Galactic center, the extended green source g16 (Fig.~\ref{g16}) coincides with one of the string of submillimeter continuum emitting clouds that comprise the dust ridge. In addition to being located within an IRDC, g16 also displays bright 24~$\mu$m\, emission. This source, with a mass of 10.1~$\pm$~0.6~M$_{\odot}$, is coincident with a 6.7~GHz maser, but no 44~GHz maser. 
The 6.7~GHz maser, which was detected by C10 only, has a single velocity feature at 37~km~s$^{-1}$; it is located 2\arcsec\, from the center of the green source, but is coincident with the enhanced 4.5~$\mu$m\, emission. \subsubsection{g17 (G0.315$-$0.201)} The foreground source g17 (Fig.~\ref{g17}) lies in the vicinity of two stellar cluster candidates that are located within 1\arcmin\, of each other and within the Sharpless 20 region. This region also has variable X-ray emission, indicating the presence of very young stars \citep{law04}. Source g17 is associated with an IRDC, and it displays no 24~$\mu$m\, emission (one of only two with a maser and no 24~$\mu$m\, emission). A possible reason for its lack of 24~$\mu$m\, emission is that it lies next to a region of bright 24~$\mu$m\, emission, which may be overwhelming any emission coincident with the 4.5~$\mu$m\, emission. Green source g17, with a mass of 12.7~$\pm$~2.6~M$_{\odot}$, is associated with a pair of 6.7~GHz masers that are $<$~5\arcsec\, from the source. One of the maser sites displays a single velocity feature at 20~km~s$^{-1}$, and the other displays several features from $\sim$~16--20~km~s$^{-1}$. As seen in Figure~\ref{g17}, both masers lie at an interface where bright 4.5~$\mu$m\, emission transitions into bright 8~$\mu$m\, emission. No 44~GHz maser is detected toward g17. \subsubsection{g18 (G0.167$-$0.445)} Another foreground source, g18, is found toward the \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, region RCW~141. This enhanced 4.5~$\mu$m\, source appears orange in Figure~\ref{g18}, and has a mass of 14.0~$\pm$~4.3~M$_{\odot}$. It is found within a filamentary IRDC and is associated with 24~$\mu$m\, emission. While not detected in our data, a 6.7~GHz maser was detected by C10 at a position coincident with g18. The 6.7~GHz spectrum of this source displays several velocity features from 9--17~km~s$^{-1}$. No 44~GHz maser is detected toward g18. \subsubsection{g19 (G0.091$-$0.663), g20 (G0.084$-$0.642)} Green sources g19 (Fig.~\ref{g19}) and g20 (Fig.~\ref{g20}) are located in close proximity to one another in the same IRDC complex toward the \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, region RCW~141. Both sources are likely foreground to the Galactic center, and are coincident with bright 24~$\mu$m\, emission. While g20 (7.3~$\pm$~1.3~M$_{\odot}$) is not associated with any CH$_3$OH\, masers, g19 (5.5~$\pm$~1.3~M$_{\odot}$) is associated with a 6.7~GHz and a 44~GHz maser. The 6.7~GHz maser spectrum displays two bright features between 20 and 25~km~s$^{-1}$. It is located within the confines of the green source, $\sim$~2\arcsec\, from the center of its extended emission. The 44~GHz maser emission was detected $\sim$~14\arcsec\, from the green source, not directly on the 4.5~$\mu$m\, emission. This 44~GHz maser displays a single emission feature at 17~km~s$^{-1}$. \subsubsection{g21A (G359.972$-$0.459A) and g21B (G359.972$-$0.459B)} Single SED fits were performed for most green sources (Y-Z09), but g21 was best fit by two sources, designated g21A (11.8~$\pm$~2.5~M$_{\odot}$) and g21B (23.8~$\pm$~6.6~M$_{\odot}$). These sources are seen as distinct lobes of 4.5~$\mu$m\, emission in Figure~\ref{g21}, and appear to be in the same star forming region, which is foreground to the Galactic center and in the vicinity of RCW~137. Like many of the other green sources, g21A and B are associated with an IRDC and are coincident with 24~$\mu$m\, emission. 
There is one 6.7~GHz maser that is located $<$~10\arcsec\, from g21A and g21B (7\arcsec\, and 1\arcsec, respectively). Because it is closer to g21B, we associate the maser with that source. This maser consists of a single, bright emission feature at 23~km~s$^{-1}$\, in the C10 data. No 44~GHz masers are detected toward either of the g21 sources. \subsubsection{g22 (G359.939+0.170)} Another example of an extended green source embedded within an IRDC, g22 is likely located foreground to the Galactic center near an \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, complex. The enhanced 4.5~$\mu$m\, emission is coincident with both 24~$\mu$m\, emission and a 6.7~GHz maser (Fig.~\ref{g22}). The maser has a single, bright emission feature at -0.8~km~s$^{-1}$, and is located directly on the green region that defines the source, close to the peak of the 24~$\mu$m\, emission. Source g22 has a mass of 4.9~$\pm$~1.8~M$_{\odot}$, and is not associated with any 44~GHz maser emission. \subsubsection{g23 (G359.932$-$0.063)} Of all our sources, g23 is the closest (in projection) to the Galactic center. It is embedded in an IRDC that runs parallel to the Galactic plane, just south of Sgr~A$^*$. It is a compact 4.5~$\mu$m\, source (Fig.~\ref{g23}) with no 24~$\mu$m\, counterpart. Source g23 has a mass of 12.0~$\pm$~3.2~M$_{\odot}$, is located at the distance of the Galactic center, and is not associated with any CH$_3$OH\, masers. \subsubsection{g24 (G359.907$-$0.303)} Green source g24 shows a clear enhancement at 4.5~$\mu$m, but its emission at this wavelength is relatively faint (Fig.~\ref{g24}). It displays no corresponding 24~$\mu$m\, emission, and no CH$_3$OH\, maser emission. This source is located within an IRDC, adjacent to a region of bright 8~$\mu$m\, emission. It has a mass of 6.3~$\pm$~2.5~M$_{\odot}$, and is likely located foreground to the Galactic center. \subsubsection{g25 (G359.841$-$0.080)} Much like g23, green source g25 is found within an IRDC that is south of Sgr~A$^*$ and parallel to the Galactic plane. This source is comprised of two knots of 4.5~$\mu$m\, emission (Fig.~\ref{g25}), the brighter of which is correlated with 24~$\mu$m\, emission. This source, which displays no CH$_3$OH\, maser emission, is located at the distance of the Galactic center and has a mass of 10.3~$\pm$~1.2~M$_{\odot}$. \subsubsection{g26 (G359.618$-$0.245)} Similarly to sources g21A and g21B, green source g26 is located foreground to the Galactic center, and is in the vicinity of RCW~137. This bright, extended green source, which has a derived mass of 7.1~$\pm$~0.9~M$_{\odot}$, is located within a plume-shaped IRDC and is coincident with 24~$\mu$m\, emission (Fig.~\ref{g26}). 6.7~GHz maser emission is detected toward this source, and its spectrum displays multiple velocity features between 19 and 25~km~s$^{-1}$. The 6.7~GHz maser site is within the confines of the green source. In addition to the 6.7~GHz maser emission, we also detect two sites of 44~GHz maser emission $\sim$~3\arcsec\, away, with velocities of 19 and 20~km~s$^{-1}$. The multiple velocity features of 6.7~GHz maser emission, along with the multiple sites of 44~GHz maser emission, indicate that a cluster of stars may be forming in the IRDC that contains g26. \subsubsection{g27 (G359.599$-$0.032)} What stands out about g27 (Fig.~\ref{g27}) is that it is the only green source not clearly associated with an IRDC. 
Its compact 4.5~$\mu$m\, emission is, however, coincident with 24~$\mu$m\, emission, indicating that it could still be a region of high-mass star formation. Indeed, it has a mass of 17.5~$\pm$~3.0~M$_{\odot}$. This Galactic center green source is not associated with any 6.7~GHz maser emission, but is associated with a 44~GHz maser. This 44~GHz maser is $<$~2\arcsec\, from the center of the source, and has a velocity of 72~km~s$^{-1}$. \subsubsection{g28 (G359.57+0.270)} Source g28 is a faint, slightly extended 4.5~$\mu$m\, source that is not coincident with 24~$\mu$m\, emission (Fig.~\ref{g28}). Its height above the Galactic plane results in a distance assignment that is foreground to the Galactic center. This source (4.0~$\pm$~1.3~M$_{\odot}$) is found within a small IRDC that has a relatively low 8~$\mu$m\, flux decrement relative to its surroundings. We do not detect any CH$_3$OH\, masers toward g28. \subsubsection{g29 (G359.437$-$0.102)} Located in the Sgr~C region at the distance of the Galactic center, green source g29 is embedded within an IRDC and displays extended 4.5~$\mu$m\, emission (Fig.~\ref{g29}). This 14.1~$\pm$~2.4~M$_{\odot}$\, source is not associated with 24~$\mu$m\, emission, making it one of only two sources that have maser emission but no 24~$\mu$m\, emission. Despite the lack of this particular star-forming indicator, the association of g29 with both 6.7 and 44~GHz maser emission indicates that star formation is indeed underway toward this source. The 6.7~GHz maser emission arises in two locations that are near the edge of the 4.5~$\mu$m\, emission and are $\sim$~6\arcsec\, apart. Each of these maser sites displays two velocity components ($-$53 to $-$45~km~s$^{-1}$, and $-$57 to $-$53~km~s$^{-1}$). The single 44~GHz maser is 5\arcsec\, from the center of the green source, but still within the extent of the 4.5~$\mu$m\, emission. This maser has a velocity of $-$66~km~s$^{-1}$. \subsubsection{g30 (G359.30+0.033)} Green excess source g30 displays extended 4.5~$\mu$m\, emission and is found within an IRDC that is long, filamentary, and perpendicular to the Galactic plane (Fig.~\ref{g30}). We find bright 24~$\mu$m\, emission associated with this green source, but no CH$_3$OH\, masers. This source is likely to be at the distance of the Galactic center, and it has a mass of 8.9~$\pm$~0.7~M$_{\odot}$. \subsubsection{g31 (G359.199+0.041)} A compact, bright 4.5~$\mu$m\, emission source, g31 is located at the distance of the Galactic center in an IRDC. It has a mass of 14.4~$\pm$~4.6~M$_{\odot}$, and is coincident with 24~$\mu$m\, emission. We detect a 6.7~GHz maser in the field, but it is $\sim$~70\arcsec\, away from g31, and thus not associated with the green source. The maser site is $\sim$~15\arcsec\, from a small region of bright 24~$\mu$m\, emission, but is not near any bright IRAC emission. This 6.7~GHz maser displays a single velocity feature at $-$4~km~s$^{-1}$. \subsubsection{g32 (G358.980+0.084)} Green source g32 shows faint, extended 4.5~$\mu$m\, emission (Fig.~\ref{g32}), is embedded within an IRDC, and is likely located at the distance of the Galactic center. This source is associated with bright 24~$\mu$m\, emission and 6.7~GHz maser emission, making it very likely that this is a star-forming source. The 6.7~GHz maser emission is positionally coincident with the extended, enhanced 4.5~$\mu$m\, emission, and displays a single velocity feature at $\sim$~6~km~s$^{-1}$\, (in the C10 data only). 
The SED fit for this source indicates that it has a mass of 10.5~$\pm$~6.0~M$_{\odot}$. No 44~GHz maser emission is detected toward g32. \section{Discussion} \subsection{Comparison of Galactic Center and Foreground Sources} The CMZ contains $\sim$~5~$\times$~10$^{7}$~M$_{\odot}$\, of molecular gas \citep{pier00}, including many prominent IRDCs. How exactly star formation proceeds in these clouds, however, remains unclear. Based on molecular line emission seen throughout the CMZ, such as SiO and HCO$^+$ \citep[e.g.,][]{mart00,riqu10}, as well as several H$_3^+$ lines of sight toward the CMZ \citep{oka05}, the chemistry of molecular gas in the Galactic center is likely to be different from the gas in the Galactic disk. In addition, an NH$_3$ study of Galactic center clouds distributed between $\ell=-1$$^{\circ}$\, and 3$^{\circ}$\, \citep{huet93} shows a two-temperature distribution of molecular gas at T$_{kin}\sim$~200~K and T$_{kin}\sim$~25~K, while the dust temperature of the clouds in the CMZ remains low \citep[$\leq$~30~K; e.g.,][]{oden84,cox89,pier00}. Stronger turbulence \citep[cf.][]{morr96} also differentiates CMZ molecular clouds from their counterparts in the Galactic disk. Thus, it is possible that the unique environment in CMZ molecular clouds results in different initial conditions for star formation. The recent results of Y-Z09, however, show that the Kennicutt law \citep{kenn98} holds in the Galactic center, so the relationship between star formation rate per unit area and surface mass density in the Galactic center is similar to that of the Galactic disk, at least to first order. Nevertheless, it is possible that some important differences exist between Galactic center and disk star formation. To test this possibility, we compare the 6.7 (radiatively excited) and 44~GHz (collisionally excited) CH$_3$OH\, maser detection rates toward foreground and Galactic center green sources. The foreground sources have 6.7 and 44~GHz CH$_3$OH\, maser detection rates of 44$\pm$17\% and 25$\pm$13\%, respectively (errors for detection rates are calculated using $\sqrt{N}$ counting statistics). Galactic center green sources have a 6.7~GHz CH$_3$OH\, maser detection rate of 33$\pm$15\%, and a 44~GHz maser detection rate of 27$\pm$13\%. The overall detection rate of 44~GHz masers is roughly the same for foreground and Galactic center sources. The detection rate of 6.7~GHz masers may be higher for foreground sources than Galactic center sources, but taking into account their large error bars, they are also consistent with being the same (Fig.~\ref{venn}; see also Tables~\ref{maser-summary}\, and \ref{fg-gc-summary}). The possible difference in the foreground and Galactic center 6.7~GHz maser detection rates could be due to small-number statistics (i.e., the relatively low number of sources resulting in large errors). If the $\sqrt{N}$ counting-statistics errors are taken as true 1$\sigma$\, errors, then the difference in detection rates is roughly a 1$\sigma$\, result. Thus, it is possible that the detection rates for 6.7~GHz CH$_3$OH\, masers (in addition to the 44~GHz CH$_3$OH\, masers) are the same for both foreground and Galactic center green sources. If this is the case, then we find no obvious difference between star formation in these two regions, as traced by CH$_3$OH\, maser detections. 
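The detection rates and uncertainties quoted above follow directly from the source counts; the Python sketch below is illustrative, using the counts implied by the text (7/16 and 4/16 for the foreground sources, 5/15 and 4/15 for the Galactic center sources) and $\sqrt{N}$ counting statistics:
\begin{verbatim}
# sqrt(N) counting-statistics detection rates, as quoted in the text.
def rate(n_det, n_tot):
    """Return (percentage, percentage error) from sqrt(N) counting statistics."""
    return 100.0 * n_det / n_tot, 100.0 * n_det ** 0.5 / n_tot

print(rate(7, 16))   # foreground, 6.7 GHz       -> ~44 +/- 17 %
print(rate(4, 16))   # foreground, 44 GHz        -> ~25 +/- 13 %
print(rate(5, 15))   # Galactic center, 6.7 GHz  -> ~33 +/- 15 %
print(rate(4, 15))   # Galactic center, 44 GHz   -> ~27 +/- 13 %
\end{verbatim}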
Despite the small-number statistics, however, we do find that 2/7 (29$\pm$20\%) of foreground sources with 6.7~GHz masers have 44~GHz masers and that 3/5 (60$\pm$35\%) of Galactic center sources with 6.7~GHz masers have 44~GHz masers. Thus, the Galactic center sources with 6.7~GHz masers may have a relative overabundance of 44~GHz masers (again, a roughly 1$\sigma$\, result). One possible explanation for this could be the properties of the molecular clouds that harbor the green sources. \citet{prat08}\, show that 44~GHz maser emission is enhanced at high densities (n(H$_{2})$~$\sim$~10$^5-10^6$ cm$^{-3}$) and warm temperatures (80 to 200~K). Thus, the environment in clouds within the Galactic center may give rise to more favorable conditions for the creation of the 44~GHz CH$_3$OH\, masers. Although not firmly established, some recent work \citep{elli07,bree10} has proposed a sequence of CH$_3$OH\, maser evolution in star forming regions. In this evolutionary sequence, Class~I CH$_3$OH\, masers, generated by protostellar outflows, are formed before Class~II CH$_3$OH\, masers, with an overlap period lasting $\sim~1.5~\times~10^{4}$~years. Subsequent work by \citet{voro10}\, shows that Class~I masers are also formed at later evolutionary states, when expanding \mbox{$\mathrm{H\,{\scriptstyle {II}}}$}\, regions collide with neighboring molecular clouds, possibly accounting for the exceptions noted by \citet{bree10} to their proposed sequence. If this sequence is correct, then a relative over-abundance of 44~GHz masers in Galactic center sources, along with a relative under-abundance of 6.7~GHz masers, suggests that the Galactic center sources are, on average, younger than the foreground sources. According to Y-Z09, there was a burst in the Galactic center star formation rate about 10$^5$ years ago (rising to 1.4~M$_{\odot}$~yr$^{-1}$), which may have given rise to an over-abundance of Stage~I young stellar objects in the Galactic center, supporting this result. Other recent work \citep[e.g.,][]{font10}, however, suggests that there is no correlation between evolutionary state and CH$_3$OH\, maser class, so further study is needed to test these results. We also compare our maser detection results with other studies of CH$_3$OH\, maser emission toward 4.5~$\mu$m\, emission sources. In one study, \citet{cham09}\, studied 25~GHz Class~I CH$_3$OH\, maser emission toward a sample of 47 green fuzzies found within IRDCs. These green fuzzies were located using the Green Fuzzy Finder (GFF), which identifies contiguous pixels with an enhancement at 4.5~$\mu$m. In another study, \citet{cyga09} studied 6.7 and 44~GHz CH$_3$OH\, maser emission toward a sample of $\sim$~20 EGOs. \citet{cyga09} identified EGOs by eye using 3-color images created with GLIMPSE data \citep{cyga08}. \citet{chen09} also searched for a correlation between EGOs and Class~I CH$_3$OH\, maser emission, using the results of four previously published CH$_3$OH\, maser searches. Green fuzzies and EGOs have different selection criteria from one another, and from our sample of 31 green sources, but comparisons of the CH$_3$OH\, maser detection rates may provide some insight into how these sources are related. For simplicity, we refer to the three types of enhanced 4.5~$\mu$m\, sources as green sources. \citet{cham09} find that 17\% of 47 green sources in Galactic disk IRDCs are associated with Class~I CH$_3$OH\, maser emission. 
\citet{cyga09} searched for CH$_3$OH\, maser emission toward a sample of $\sim$20 green sources, and report that $\sim$65\% of their sample harbor 6.7~GHz masers, $\sim$90\% of which also display 44~GHz maser emission. \citet{chen09} find that $\sim$~67\% of a sample of 61 sources are associated with Class~I CH$_3$OH\, masers. In our sample of 31 green sources, 12 (39~$\pm$~11\%) are associated with 6.7~GHz masers, 8 (26~$\pm$~9\%) are associated with 44~GHz masers, and 5 (16~$\pm$~7\%) are associated with both maser transitions. Our Class~I maser detection rate (26~$\pm$~9\%) is in rough agreement with that of \citet{cham09}. The similarity of the maser detection rates, along with the association of both sets of sources with IRDCs and 24~$\mu$m\, emission, indicates that our green sources are similar in nature to the green sources studied by \citet{cham09}. The difference in the 6.7~GHz maser detection rate between our green sources (39~$\pm$~11\%) and the \citet{cyga09} sample of green sources (65\%) is fairly large. Moreover, we find that only 5 of the 12 sources with 6.7~GHz masers (42~$\pm$~19\%) also harbor 44~GHz maser emission, while \citet{cyga09}\, and \citet{chen09} both find higher Class~I detection rates. A possible reason for these discrepancies is that the \citet{cyga09} and \citet{chen09} green sources are larger in angular extent than our green sources. The green sources in our sample are roughly 5-10\arcsec\, in size. The typical green source size in the \citet{cyga09} and \citet{chen09} samples, however, is $\sim$~10-20\arcsec, with some as large as 30\arcsec\, or more. If we assume that these larger green sources are located at roughly similar distances to the sources in our sample, then their larger angular sizes would correspond to larger physical sizes. Because the extended 4.5~$\mu$m\, emission is associated with outflows, it stands to reason that the \citet{cyga09} and \citet{chen09}\, green sources are associated with larger outflows, and are therefore more evolved. The larger, more evolved outflows have a greater surface area of interaction with the surrounding medium, thereby possibly explaining the increased 44~GHz detection rate toward the \citet{cyga09} and \citet{chen09} samples. Alternatively, the difference in the maser detection rates could be due to the variable nature of CH$_3$OH\, maser emission \citep{goed04}, combined with the different sensitivities of the surveys. Indeed, the 6$\sigma$\, sensitivity for the \citet{cyga09}\, 6.7~GHz observations is $\sim$~0.16~Jy, and the 3$\sigma$\, sensitivity of our 6.7~GHz observations is 0.15~Jy. \subsection{Green Sources with and without 24~$\mu$m\, Emission} Both enhanced 4.5~$\mu$m\, emission (used to identify green sources) and 24~$\mu$m\, emission are star formation indicators, and when they are coincident with one another, they are a powerful identifier of active star formation \citep{beut07,cyga08,cham09}. We find that 77\%\, (24 of 31) of the green sources in our sample are coincident with 24~$\mu$m\, emission. The detection rate of 6.7~GHz CH$_3$OH\, masers for green sources coincident with 24~$\mu$m\, emission is 46$\pm$14\%, and is 29$\pm$20\% for those without 24~$\mu$m\, emission. The detection rate of 44~GHz CH$_3$OH\, masers for green sources coincident with 24~$\mu$m\, emission is 29$\pm$11\%. We detect only one 44~GHz CH$_3$OH\, maser toward a green source without 24~$\mu$m\, emission (detection rate of 14$\pm$14\%). 
Again, we are limited by small-number statistics and the large errors, and conclusions drawn from this sample should be taken with caution. Nevertheless, it appears that green sources with 24~$\mu$m\, emission are more likely to harbor both 6.7 and 44~GHz CH$_3$OH\, masers than the green sources that are not coincident with 24~$\mu$m\, emission (see Tables~\ref{maser-summary}\, and \ref{fg-gc-summary}). If true, these results once again show that the combination of enhanced 4.5~$\mu$m\, emission, 24~$\mu$m\, emission, and CH$_3$OH\, maser emission is a reliable protostellar locator. Even though the green sources that are coincident with 24~$\mu$m\, emission seem more likely to harbor CH$_3$OH\, masers, we do detect masers toward green sources that display no 24~$\mu$m\, emission. These sources may contain 24~$\mu$m\, emission too faint to detect using our MIPS data. Alternatively, they may be at a different, perhaps earlier, evolutionary state, before the dust around the central protostar has heated sufficiently to emit brightly at 24~$\mu$m. Finally, it is also possible that some (or all) of these green sources are not related to star formation. For example, planetary nebulae may also appear as 4.5~$\mu$m\, enhancement sources--recall that three green sources in our original sample have been excluded from this analysis because their positions coincide with those of known planetary nebulae. \subsection{Masses of Green Sources with and without CH$_3$OH\, Masers} We find that higher-mass sources are more likely to harbor 6.7~GHz CH$_3$OH\, masers than lower-mass sources. The median mass of green sources associated with 6.7~GHz CH$_3$OH\, masers is 12.7~M$_{\odot}$, while the median mass for those not associated with 6.7~GHz CH$_3$OH\, masers is 9.5~M$_{\odot}$\, (see Fig.~\ref{mass_histos}). Using a Kolmogorov-Smirnov (K-S) test, we calculate a 76\% chance that the mass distributions of green sources with and without 6.7~GHz CH$_3$OH\, maser emission are drawn from different parent distributions. The median mass of sources with 44~GHz CH$_3$OH\, masers is 9.9~M$_{\odot}$, and the median mass for those without is 10.3~M$_{\odot}$\, (see Fig.~\ref{mass_histos}). We find only a 5\% chance that the mass distributions of sources with and without 44~GHz CH$_3$OH\, masers are drawn from different parent distributions (according to a K-S test). Because the mass distributions of green sources with and without 6.7~GHz masers are likely to be different, and because the population associated with 6.7~GHz CH$_3$OH\, masers has a larger median mass than those without, we conclude that higher-mass sources are more likely to display 6.7~GHz maser emission than lower-mass sources. The case is less clear for the 44~GHz masers, the detections of which show no clear correlation with mass. The lowest-mass source associated with a 6.7~GHz maser is g22, at 4.9$\pm$1.8~M$_{\odot}$. Because the masses determined by Y-Z09 are the current masses of the sources (not the final masses), it is possible that this source will evolve into a high-mass ($\geq$~8~M$_{\odot}$) star through accretion, consistent with the idea that 6.7~GHz CH$_3$OH\, maser emission exclusively traces high-mass star formation. Moreover, g22 is assumed to be foreground to the Galactic center; if it is instead at the distance of the Galactic center, its derived mass would increase. The lowest-mass source associated with a 44~GHz maser is g12, at 2.1$\pm$1.2~M$_{\odot}$. 
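The two-sample K--S comparison described above can be reproduced with standard tools; the sketch below is illustrative only, with clearly labeled placeholder mass lists standing in for the measured values, and with the quoted percentage read loosely as $100(1-p)$:
\begin{verbatim}
# Illustrative two-sample K-S comparison of mass distributions.
from scipy.stats import ks_2samp

# Placeholder masses [Msun]; these are NOT the measured values.
masses_with_maser = [8.0, 10.0, 12.0, 14.0, 16.0]
masses_without_maser = [4.0, 6.0, 8.0, 10.0, 12.0]

stat, p = ks_2samp(masses_with_maser, masses_without_maser)

# Loose reading used in the text: 100*(1 - p) percent "chance" that the two
# samples are drawn from different parent distributions.
print(stat, p, 100.0 * (1.0 - p))
\end{verbatim}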
Because of its lower mass, the likelihood of this source evolving into a high-mass star is smaller. The association of g12 with a 44~GHz maser supports the hypothesis that this maser transition traces low-mass star formation in addition to high-mass star formation. \section{Conclusions} To study how star formation in the Galactic center's CMZ may differ from that of the Galactic disk, we studied CH$_3$OH\, maser emission toward a sample of 34 green sources. Of the 15 Galactic center green sources, we find that 5 are associated with radiatively excited Class~II 6.7~GHz CH$_3$OH\, maser emission, and 4 are associated with collisionally excited Class~I 44~GHz CH$_3$OH\, maser emission. Of the 16 green sources located foreground to the Galactic center, we find that 7 are associated with 6.7~GHz CH$_3$OH\, maser emission, and 4 are associated with 44~GHz CH$_3$OH\, maser emission. Based on these detections, we find: (1) little difference between the sources in the Galactic center and the foreground sources, (2) that the 34 green sources in our sample are similar to the green sources identified by \citet{cham09} rather than the larger green sources identified by \citet{cyga08}, and (3) that the possible relative overabundance of green sources with both 6.7 and 44~GHz masers in the Galactic center may be consistent with a recent burst of star formation in the Galactic center. \acknowledgments We are grateful to the anonymous referee, whose comments and suggestions greatly improved the paper. We also thank Vivek Dhawan at the NRAO for his help with the 6.7~GHz data reduction.
\section{Introduction} \noindent Our goal is the study of the defining equations of the Rees algebras ${\mathbf R}[It]$ of classes of almost complete intersection ideals when one of their important metrics--especially depth or reduction number--attains an extreme value in the class. We are going to show that such algebras occur frequently and develop novel means to identify them. As a consequence, interesting properties of such algebras have been discovered. We argue that several questions, while often placed in the general context of Rees algebra theory, may be viewed as subproblems in this more narrowly defined environment. \medskip Let ${\mathbf R}$ be a Cohen--Macaulay local ring of dimension $d$, or a polynomial ring ${\mathbf R}=k[x_1, \ldots, x_d]$ for $k$ a field. By an {\em almost complete intersection} we mean an ideal $I=(a_1, \ldots, a_g, a_{g+1})$ of codimension $g$ where the subideal $J=(a_1, \ldots, a_g)$ is a complete intersection and $a_{g+1}\notin J$. By the {\em equations} of $I$ we mean a free presentation of the Rees algebra ${\mathbf R}[It]$ of $I$, \begin{eqnarray}\label{presRees1} 0 \rightarrow {\mathbf L} \longrightarrow {\mathbf B} = {\mathbf R}[{\mathbf T}_1, \ldots, {\mathbf T}_{g+1}] \stackrel{\psi}{\longrightarrow} {\mathbf R}[It] \rightarrow 0, \quad {\mathbf T}_i \mapsto a_it . \end{eqnarray} More precisely, ${\mathbf L}$ is the defining ideal of the Rees algebra of $I$ but we refer to it simply as the {\em ideal of equations} of $I$. We are particularly interested in establishing the properties of ${\mathbf L}$ when ${\mathbf R}[It]$ is Cohen--Macaulay or has almost maximal depth. This broader view requires a change of focus from ${\mathbf L}$ to one of its quotients. We are going to study some classes of ideals whose Rees algebras have these properties. They tend to occur in classes where the reduction number $\mbox{\rm red}_J(I)$ attains an extremal value. \medskip We first set up the framework to deal with properties of ${\mathbf L}$ by a standard decomposition. We keep the notation from above, $I = (J,a)$. The presentation ideal ${\mathbf L}$ of ${\mathbf R}[It]$ is a graded ideal ${\mathbf L} = L_1 + L_2 + \cdots$, where $L_1$ consists of linear forms in the ${\mathbf T}_i$ defined by a matrix $\phi$ of the syzygies of $I$, $ L_1 = [{\mathbf T}_1, \ldots, {\mathbf T}_{g+1}] \cdot \phi $. Our basic prism is given by the exact sequence \[ 0 \rightarrow {\mathbf L}/(L_1) \longrightarrow {\mathbf B}/(L_1) \longrightarrow {\mathbf R}[It] \rightarrow 0.\] Here ${\mathbf B}/(L_1)$ is a presentation of the symmetric algebra of $I$ and ${\mathbf S} = \mbox{\rm Sym}(I)$ is a Cohen--Macaulay ring under very broad conditions, including when $I$ is an ideal of finite colength. The emphasis here will be entirely on $T = {\mathbf L}/(L_1)$, which we call the {\em module of nonlinear relations} of $I$. Its usefulness arises from the fact exhibited in the exact sequence \begin{eqnarray}\label{presRees2} 0 \rightarrow T \longrightarrow {\mathbf S} \longrightarrow {\mathbf R}[It] \rightarrow 0. \end{eqnarray} \begin{itemize} \item[{$\bullet$}] [Proposition~\ref{canoseq}] $T$ is a Cohen--Macaulay ${\mathbf S}$--module if and only if $\mbox{\rm depth } {\mathbf R}[It]\geq d$. \end{itemize} Rees algebras with this property will be called {\em almost Cohen--Macaulay} (aCM for short). We note that ${\mathbf L}$ carries a very different kind of information than $T$ does. 
The advantage lies in the flexibility of treating Cohen--Macaulay modules rather than Cohen--Macaulay ideals: the means to test for Cohen-Macaulayness in modules are more plentiful than in ideals. An elementary example lies in the proof of: \begin{itemize} \item[{$\bullet$}] [Theorem~\ref{reducedsymi}] Suppose that ${\mathbf R}$ is a Cohen--Macaulay local ring and $I$ is an ${\mathfrak m}$--primary almost complete intersection such that ${\mathbf S}=\mbox{\rm Sym}(I)$ is reduced. Then ${\mathbf R}[It]$ is almost Cohen--Macaulay. \end{itemize} \medskip We shall now discuss our more technical results. Throughout $({\mathbf R},{\mathfrak m})$ is a Cohen--Macaulay local ring of dimension $d$ (where we include rings of polynomials and the ideals are homogeneous), and $I$ is an almost complete intersection $I = (J,a)$ of finite colength. \medskip One technique we bring to the treatment of the equations of Rees algebras is the theory of the Sally module. It gives a very direct relationship between the Cohen--Macaulayness of $T$ and of the Sally module $S_J(I)$ of $I$ relative to $J$. $S_J(I)$ also gives a quick connection between the Castelnuovo regularity and the relation type of ${\mathbf R}[It]$ and those of $S_J(I)$. A criterion of Huckaba (\cite{Huc96}) (of which we give a quick proof for completeness) gives a method to test the Cohen--Macaulayness of $T$ in terms of the values of the first Hilbert coefficient $e_1(I)$ of $I$ (Theorem~\ref{Huckaba}). It is particularly well-suited for the case when $I$ is generated by homogeneous polynomials defining a birational mapping, for then the value of $e_1(I)$ is known. \medskip Our approach to the estimation of $\nu(T)$, the minimum number of generators of $T$, passes through the determination of an effective formula for $\deg {\mathbf S}$, the multiplicity of ${\mathbf S}$: \begin{itemize} \item[{$\bullet$}] [Theorem~\ref{degSymibis}] If $I$ is generated by forms of degree $n$, then \[ \deg {\mathbf S} = \sum_{j=0}^{d-1} n^j + \lambda(I/J).\] \end{itemize} The summation accounts for $\deg {\mathbf R}[It]$, according to \cite{HTU}, so $\deg T = \lambda(I/J)$. This is a number that will control the number of generators of $T$, and therefore of ${\mathbf L}$, whenever ${\mathbf R}[It]$ is almost Cohen--Macaulay. It achieves the goal of finding estimates for the number of generators of ${\mathbf L}$ and of its {\em relation type}, that is \[ \mbox{\rm reltype}(I) = \inf\{n \mid {\mathbf L} = (L_1, L_2, \ldots, L_n)\}.\] Two other metrics of interest, widely studied for homogeneous ideals but not limited to them, are the following. One seeks to bound the {\em saturation exponent} of ${\mathbf L}/(L_1)$ (which was introduced in \cite{syl2} and has a simple ring-theoretic explanation as the index of nilpotency of ${\mathbf S}$), \[ \mbox{\rm sdeg}(I) = \inf\{s \mid {\mathfrak m}^s {\mathbf L} \subset (L_1)\},\] and the other is the degree of the special fiber $\mathcal{F}(I)$ of ${\mathbf R}[It]$, also called the {\em elimination degree} of $I$, \[ \mbox{\rm edeg}(I) = \deg \mathcal{F}(I)= \inf\{s \mid L_s \not \subset {\mathfrak m}{\mathbf B}\}.\] While $\mbox{\rm reltype}(I)$ is the most critical of these numbers, the other two are significant because they are often found linked to the syzygies of $I$. Our notion of extremality will cover the supremum or infimum values of these degrees in a given class of ideals but also their relationship to the cohomology of ${\mathbf R}[It]$ as expressed by the depth of the algebra. 
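\medskip A small classical example may help fix these notions; it is included here only for orientation. Let ${\mathbf R}=k[x,y]$ and $I=(J,a)$ with $J=(x^2,y^2)$ and $a=xy$. Writing ${\mathbf T}_1, {\mathbf T}_2, {\mathbf T}_3$ for the variables mapping to $x^2t, y^2t, xyt$, the syzygies of $I$ give $L_1=(y{\mathbf T}_1-x{\mathbf T}_3,\ x{\mathbf T}_2-y{\mathbf T}_3)$, and \[ {\mathbf L} = (y{\mathbf T}_1-x{\mathbf T}_3,\ x{\mathbf T}_2-y{\mathbf T}_3,\ {\mathbf T}_1{\mathbf T}_2-{\mathbf T}_3^2), \] so $T={\mathbf L}/(L_1)$ is cyclic, generated by the class $f$ of ${\mathbf T}_1{\mathbf T}_2-{\mathbf T}_3^2$. Here $\mbox{\rm reltype}(I)=2$, the special fiber $\mathcal{F}(I)$ is the hypersurface ring $k[{\mathbf T}_1,{\mathbf T}_2,{\mathbf T}_3]/({\mathbf T}_1{\mathbf T}_2-{\mathbf T}_3^2)$, so that $\mbox{\rm edeg}(I)=2$, and the identities \[ x\, ({\mathbf T}_1{\mathbf T}_2-{\mathbf T}_3^2) = {\mathbf T}_3(y{\mathbf T}_1-x{\mathbf T}_3) + {\mathbf T}_1(x{\mathbf T}_2-y{\mathbf T}_3), \qquad y\, ({\mathbf T}_1{\mathbf T}_2-{\mathbf T}_3^2) = {\mathbf T}_2(y{\mathbf T}_1-x{\mathbf T}_3) + {\mathbf T}_3(x{\mathbf T}_2-y{\mathbf T}_3) \] show that ${\mathfrak m}{\mathbf L}\subset (L_1)$, hence $\mbox{\rm sdeg}(I)=1$.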
\medskip Two classes of almost Cohen--Macaulay algebras arise from certain homogeneous ideals. First, we show that binary ideals with one linear syzygy have this property. This has been proved by several authors. We offer a very short proof using the technology of the Sally module (Proposition~\ref{aCMofbin}). It runs for a few lines and gives no details of the projective resolution of that algebra besides the fact that it has the appropriate length. We include it because we have found no similar technique in the literature. The proof structure, a simple combinatorial obstruction to the aCM property, is used repeatedly to examine the occurrence of the property amongst ideals generated by quadrics in $4$--space. \medskip A different class of algebras consists of those associated to monomial ideals. These ideals have the form $I=(x^{\alpha}, y^{\beta}, z^{\gamma}, x^ay^bz^c)$. We showed that \begin{itemize} \item[{$\bullet$}] [Proposition~\ref{nnn111}] The ideals $(x^n, y^n, z^n, xyz)$, $n\geq 3$, and $(x^n, y^n, z^n, w^n, xyzw)$, $n\geq 4$, have almost Cohen--Macaulay Rees algebras. \end{itemize} We expect these statements are still valid in higher dimensions. Our proofs were computer-assisted as we used Macaulay2 (\cite{Macaulay2}) to derive deeper heuristics. \section{Approximation complexes and almost complete intersections} A main source of extremal Rees algebras lies in the construction of approximation complexes. We quickly recall them and some of their main properties. \subsection{The $\mathcal{Z}$--complex} These are complexes derived from Koszul complexes, and arise as follows (for details, see \cite{HSV1}, \cite{HSV83}, \cite[Chapter 4]{alt}). Let ${\mathbf R}$ be a commutative ring, $F$ a free ${\mathbf R}$-module of rank $n$, with a basis $\{e_1, \ldots, e_n\}$, and $\varphi: F\rightarrow {\mathbf R}$ a homomorphism. The exterior algebra $\bigwedge F$ of $F$ can be endowed with a differential \[ \partial_{\varphi} : \bigwedge^{r} F {\longrightarrow} \bigwedge^{r-1}F, \] \[ \partial_{\varphi}(v_1\wedge v_2\wedge \cdots \wedge v_{r}) = \sum_{i=1}^{r} (-1)^i\varphi(v_i) (v_1\wedge \cdots \wedge \widehat{v_i} \wedge \cdots \wedge v_{r}). \] The complex $\mathbb{K}(\varphi)=\{ \bigwedge F, \partial_{\varphi}\}$ is called the {\em Koszul complex} of $\varphi$. \index{Koszul complex} Another notation for it: if ${\mathbf x}=\{\varphi(e_1), \ldots, \varphi(e_n)\}$, we also denote the Koszul complex by $\mathbb{K}({\mathbf x})$. \medskip Let ${\mathbf S} = S(F)= \mbox{\rm Sym}(F) = {\mathbf R}[{\mathbf T}_1, \ldots, {\mathbf T}_n]$, and consider the exterior algebra of $F\otimes_{{\mathbf R}} S(F)$. It can be viewed as a Koszul complex in two ways: one obtained from $\{\bigwedge F, \partial_{\varphi}\}$ by change of scalars ${\mathbf R} \rightarrow {\mathbf S}$, and another defined by the ${\mathbf S}$-homomorphism \[ \psi: F \otimes_{{\mathbf R}}S(F) \longrightarrow S(F), \quad \psi(e_i)= {\mathbf T}_i. \] The two differentials $\partial_{\varphi}$ and $\partial_{\psi}$ satisfy \[ \partial_{\varphi}\partial_{\psi}+ \partial_{\psi}\partial_{\varphi}=0,\] which leads directly to the construction of several complexes. \begin{Definition}{\rm Let $\mathbf{Z}$, $\mathbf{B}$ and $\mathbf{H}$ be the modules of cycles, boundaries and the homology of $\mathbb{K}(\varphi)$. 
\begin{itemize} \item The $\mathcal{Z}$-complex of $\varphi$ is $ \mathcal{Z}=\{ \mathbf{Z}\otimes_{{\mathbf R}} {\mathbf S}, \partial\}$ \[ 0 \rightarrow Z_n\otimes {\mathbf S}[-n] \longrightarrow \cdots \longrightarrow Z_1\otimes {\mathbf S}[-1] \longrightarrow {\mathbf S} \rightarrow 0,\] where $\partial$ is the differential induced by $\partial_{\phi}$. \item The $\mathcal{B}$-complex of $\varphi$ is the sub complex of $ \mathcal{Z}$ \[ 0 \rightarrow B_n\otimes {\mathbf S}[-n] \longrightarrow \cdots \longrightarrow B_1\otimes {\mathbf S}[-1] \longrightarrow {\mathbf S} \rightarrow 0.\] \item The $\mathcal{M}$-complex of $\varphi$ is $\mathcal{M}=\{ \mathbf{H}\otimes_{{\mathbf R}} S, \partial\}$ \[ 0 \rightarrow {\mathrm H}_n\otimes {\mathbf S}[-n] \longrightarrow \cdots \longrightarrow {\mathrm H}_1\otimes {\mathbf S}[-1] \longrightarrow {\mathrm H}_0 \otimes {\mathbf S} \rightarrow 0,\] where $\partial$ is the differential induced by $\partial_{\psi}$. \end{itemize} }\end{Definition} These are complexes of graded modules over the polynomial ring ${\mathbf S}={\mathbf R}[{\mathbf T}_1, \ldots, {\mathbf T}_n]$. \begin{Proposition} Let $I = \varphi(F)$. Then \begin{enumerate} \item[{\rm (i)}] The homology of $\mathcal{Z}$ and of $\mathcal{M}$ depend only on $I$; \item[{\rm (ii)}] $ {\mathrm H}_0(\mathcal{Z})= \mbox{\rm Sym}(I)$. \end{enumerate} \end{Proposition} \subsubsection*{Acyclicity} The homology of the Koszul complex $\mathbf{K}(\varphi)$ is not fully independent of $I$, for instance, it depends on the number of generators. An interest here is the ideals whose $\mathcal{Z}$ complexes are acyclic. We recall a broad setting that gives rise to almost complete intersections with Cohen--Macaulay symmetric algebras. For a systematic examination of the notions here we refer to \cite{HSV83}. It is centered on one approximation complex associated to an ideal, the so-called $\mathcal{Z}$--complex. An significant interest for us is the following. \begin{Theorem}[{\cite[Theorem 10.1]{HSV83}}] Let ${\mathbf R}$ be a Cohen-Macaulay local ring and let $I$ be an ideal of positive height. Assume: \begin{itemize} \item[{\rm (a)}] $\nu(I_{\mathfrak{p}})\leq \mbox{\rm height } \mathfrak{p}+1$ for every \ $\mathfrak{p} \supset I$; \item[{\rm (b)}] $\mbox{\rm depth } ({\mathrm H}_i)_{\mathfrak{p}} \geq \mbox{\rm height } \mathfrak{p}-\nu(I_{\mathfrak{p}})+1 $ for every \ $\mathfrak{p}\supset I$ and every $0\leq i \leq \nu(I_{\mathfrak{p}})-\mbox{\rm height } I_{\mathfrak{p}} $. \end{itemize} Then \begin{itemize} \item[{\rm (i)}] The complex $\mathcal{Z}$ is acyclic. \item[{\rm (ii)}] $\mbox{\rm Sym}(I)$ is a Cohen-Macaulay ring. \end{itemize} \end{Theorem} \begin{Corollary}\label{Zaci} Let ${\mathbf R}$ be a Cohen-Macaulay local ring of dimension $d \geq 1$ and let $I$ be an almost complete intersection. The complex $\mathcal{Z}$ is acyclic and $\mbox{\rm Sym}(I)$ is a Cohen-Macaulay algebra in the following cases: \begin{itemize} \item[{\rm (i)}] \mbox{\rm (See also \cite{Rossi83})} $I$ is ${\mathfrak m}$-primary. In this case $\mbox{\rm Sym}(I)$ has Cohen--Macaulay type $d - 1$. \item[{\rm (ii)}] $\mbox{\rm height } I= d-1$. Furthermore if $I$ is generically a complete intersection then $I$ is of linear type. \item[{\rm (iii)}] $\mbox{\rm height } I= d-2$ and $\mbox{\rm depth } {\mathbf R}/I\geq 1$. Furthermore if $\nu(I_{\mathfrak{p}})\leq \mbox{\rm height } \mathfrak{p}$ for $I\subset \mathfrak{p}$ then $I$ is of linear type. 
\end{itemize} \end{Corollary} A different class of ideals with Cohen--Macaulay symmetric algebras is treated in \cite{Johnson}. \subsection{The canonical presentation} Let ${\mathbf R}$ be a Cohen--Macaulay local domain of dimension $d\geq 1$ and let $I$ be an almost complete intersection as in Corollary~\ref{Zaci}. The ideal of equations ${\mathbf L}$ can be studied in two stages: $(L_1)$ and ${\mathbf L}/(L_1)= T$: \begin{eqnarray} \label{canoseque} 0 \rightarrow T \longrightarrow {\mathbf S} = {\mathbf B}/(L_1)= \mbox{\rm Sym}(I) \longrightarrow {\mathbf R}[It] \rightarrow 0.\end{eqnarray} We will argue that this exact sequence is very useful. Note that $\mbox{\rm Sym}(I)$ and ${\mathbf R}[It]$ have dimension $d+1$, and that $T$ is the ${\mathbf R}$--torsion submodule of ${\mathbf S}$. Let us give some of its properties. \begin{Proposition} \label{canoseq} Let $I$ be an ideal as above. \begin{enumerate} \item[{\rm (i)}] $(L_1)$ is a Cohen-Macaulay ideal of ${\mathbf B}$. \item[{\rm (ii)}] $T$ is a Cohen--Macaulay ${\mathbf S}$--module if and only if $\mbox{\rm depth } {\mathbf R}[It]\geq d$. \item[{\rm (iii)}] If $I$ is ${\mathfrak m}$--primary then $\mathcal{N}=T\cap {\mathfrak m}{\mathbf S}$ is the nil radical of ${\mathbf S}$ and $\mathcal{N}^s = 0$ if and only if ${\mathfrak m}^sT =0$. This is equivalent to saying that $\mbox{\rm sdeg}(I)$ is the index of nilpotency of $\mbox{\rm Sym}(I)$. \item[{\rm (iv)}] $T=\mathcal{N} + \mathcal{F}$, where $\mathcal{F}$ is a lift in ${\mathbf S}$ of the relations in ${\mathbf S}/{\mathfrak m} {\mathbf S}$ of the special fiber ring $\mathcal{F}(I)={\mathbf R}[It] \otimes {\mathbf R}/{\mathfrak m}$. In particular if $\mathcal{F}(I)$ is a hyersurface ring, $T = (f, \mathcal{N})$. \end{enumerate} \end{Proposition} \noindent{\bf Proof. } (i) comes from Corollary~\ref{Zaci}. \medskip \noindent (ii) In the defining sequence of $T$, \[ 0 \rightarrow (L_1) \longrightarrow {\mathbf L} \longrightarrow T \rightarrow 0,\] since $(L_1)$ is a Cohen--Macaulay ideal of codimension $g$, as an ${\mathbf B}$--module, we have $\mbox{\rm depth } (L_1) = d+2$, while $\mbox{\rm depth } {\mathbf L} = 1 + \mbox{\rm depth } {\mathbf R}[It]$. It follows that $\mbox{\rm depth } T =\min \{d+1, 1 +\mbox{\rm depth } {\mathbf R}[It]\}$. Since $T$ is a module of Krull dimension $d+1$ it is a Cohen--Macaulay module if and only if $\mbox{\rm depth } {\mathbf R}[It] \geq d$. \medskip \noindent (iii) ${\mathfrak m}{\mathbf S}$ and $T$ are both minimal primes and for large $n$, ${\mathfrak m}^nT=0$. Thus $T$ and ${\mathfrak m}{\mathbf S}$ are the only minimal primes of ${\mathbf S}$, $\mathcal{N} = {\mathfrak m} {\mathbf S} \cap T$. To argue the equality of the two indices of nilpotency, let $n$ be such that ${\mathfrak m}^nT=0$. The ideal ${\mathfrak m}^n{\mathbf S} + T$ has positive codimension, so contains regular elements since ${\mathbf S}$ is Cohen--Macaulay. Therefore to show \[{\mathfrak m}^s T=0 \Longleftrightarrow \mathcal{N}^s =0\] it is enough to multiply both expressions by ${\mathfrak m}^n{\mathbf S} + T$. The verification is immediate. \medskip \noindent (iv) Tensoring the sequence (\ref{canoseque}) by ${\mathbf R}/{\mathfrak m}$ gives the exact sequence \[ 0 \rightarrow {\mathfrak m}{\mathbf S}\cap T/{\mathfrak m} T = \mathcal{N}/{\mathfrak m} T \longrightarrow T/{\mathfrak m} T \longrightarrow {\mathbf S}/{\mathfrak m} {\mathbf S} \longrightarrow \mathcal{F}(I) \rightarrow 0. \] By Nakayama Lemma we may ignore ${\mathfrak m} T$ and recover $T$ as asserted. 
\hfill$\Box$ \bigskip The main intuition derived from these basic observations is (ii): Whenever methods are developed to study the equations of ${\mathbf R}[It]$ when this algebra is Cohen--Macaulay, should apply [hopefully] in case they are almost Cohen--Macaulay. \begin{Remark}{\rm If $I$ is not ${\mathfrak m}$--primary but still satisfies one of the other conditions of Corollary~\ref{Zaci}, the nilradical $\mathcal{N}$ of ${\mathbf S}$ is given by $T\cap N_0{\mathbf S}$, where $N_0$ is the intersection of the minimal primes ${\mathfrak p}$ for which $I_{{\mathfrak p}}$ is not of linear type. } \end{Remark} \subsection{Reduced symmetric algebras} \begin{Proposition}\label{linkofmax} Let ${\mathbf R}$ be a Gorenstein local domain of dimension $d$ and let $J$ be a parameter ideal. If $J$ contains two minimal generators in ${\mathfrak m}^2$, then the Rees algebra of $I=J\colon {\mathfrak m}$ is Cohen--Macaulay and ${\mathbf L} = ( L_1, {\mathbf f})$ for some quadratic form ${\mathbf f}$. \end{Proposition} \noindent{\bf Proof. } The equality $I^2 = JI$ comes from \cite{CPV1}. The Cohen--Macaulayess of ${\mathbf R}[It]$ is a general argument (in \cite{CPV1} and probably elsewhere). Let ${\mathbf f}$ denote the quadratic form \[ {\mathbf f} = {\mathbf T}_{d+1} ^2 + \mbox{\rm lower terms}.\] Let us show that ${\mathfrak m} T=0$. Reduction modulo ${\mathbf f}$ can be used to present any element in ${\mathbf L}$ as \[ F = {\mathbf T}_{d+1} \cdot A + B\in {\mathbf L}, \] where $A$ and $B$ are forms in ${\mathbf T}_1,\ldots, {\mathbf T}_d$. Since $I = J:{\mathfrak m}$, any element in ${\mathfrak m} {\mathbf T}_{d+1}$ is equivalent, modulo $L_1$, to a linear form in the other variables. Consequently \[{\mathfrak m} F \subset (L_1, {\mathbf R}[{\mathbf T}_1, \ldots, {\mathbf T}_d]) \cap {\mathbf L} \subset (L_1),\] as desired. By Proposition~\ref{canoseq}, we have the exact sequence \[ 0\rightarrow T \longrightarrow {\mathbf S}/{\mathfrak m} {\mathbf S} \longrightarrow \mathcal{F}(I) \rightarrow 0.\] But $T$ is a maximal Cohen--Macaulay ${\mathbf S}$--module, and so it is also a maximal Cohen--Macaulay ${\mathbf S}/{\mathfrak m} {\mathbf S}$--module as well. It follows that $T$ is generated by a monic polynomial that divides the image of ${\mathbf f}$ in ${\mathbf S}/{\mathfrak m}{\mathbf S}$. It is now clear that $T=({\mathbf f}){\mathbf S}$. \hfill$\Box$ \begin{Corollary} For the ideals above $\mbox{\rm Sym}(I)$ is reduced. \end{Corollary} We now discuss a generalization, but since we are still developing the examples, we are somewhat informal. \begin{Corollary}\label{sdegs} Suppose the syzygies of $I$ are contained in ${\mathfrak m}^s{\mathbf B}$ and that ${\mathfrak m}^s{\mathbf L} \subset (L_1)$. We have the exact sequence \begin{eqnarray} \label{canoseque2} 0 \rightarrow T \longrightarrow {\mathbf S}/{\mathfrak m}^s{\mathbf S} \longrightarrow {\mathbf R}[It]\otimes {\mathbf R}/{\mathfrak m}^s = \mathcal{F}_s(I) \rightarrow 0. \end{eqnarray} If ${\mathbf R}[It]$ is almost Cohen--Macaulay, $T$ is a Cohen--Macaulay module that is an ideal of the polynomial ring ${\mathbf C} = {\mathbf R}/({\mathfrak m}^s)[{\mathbf T}_1, \ldots, {\mathbf T}_{d+1}]$, a ring of multiplicity ${s+d-1}\choose{d}$. Therefore we have that $\nu(T) \leq {{s+d-1}\choose {d}}$. \end{Corollary} Note that also here $\mathcal{F}_s(I)$ is Cohen--Macaulay. We wonder whether $\mathcal{F}(I)$ is Cohen--Macaulay. 
\bigskip Let $({\mathbf R}, {\mathfrak m})$ be a Cohen--Macaulay local ring and $I$ an almost complete intersection as in Corollary~\ref{Zaci}. We examine the following surprising fact. \begin{Theorem} \label{reducedsymi} Suppose that ${\mathbf R}$ is a Cohen--Macaulay local ring and $I$ is an ${\mathfrak m}$--primary almost complete intersection such that ${\mathbf S}=\mbox{\rm Sym}(I)$ is reduced. Then ${\mathbf R}[It]$ is an almost Cohen-Macaulay algebra. \end{Theorem} \noindent{\bf Proof. } Since $0 =\mathcal{N} = T\cap {\mathfrak m} {\mathbf S}$, on one hand from (\ref{canoseque}) we have that $T$ satisfies the condition $S_2$ of Serre, that is \[ \mbox{\rm depth } T_{P} \geq \inf\{2, \dim T_P\}\] for every prime ideal $P$ of ${\mathbf S}$. On the other hand, from (\ref{canoseque2}) $T$ is an ideal of the polynomial ring ${\mathbf S}/{\mathfrak m} {\mathbf S}$. It follows that $T= ({\mathbf f} ){\mathbf S}$, and consequently $\mbox{\rm depth } \ {\mathbf R}[It] \geq d$. \hfill$\Box$ \begin{Example}{\rm If ${\mathbf R} = \mathbb{Q}[x,y]/(y^4-x^3)$, $J = (x)$ and $I = J: (x,y)= (x, y^3)$, $\mbox{\rm depth } {\mathbf R}[It] = 1$. }\end{Example} There are a number of immediate observations. \begin{Corollary} If $I$ is an ideal as in {\rm Theorem~\ref{reducedsymi}}, then the special fiber ring $\mathcal{F}(I)$ is Cohen-Macaulay. \end{Corollary} \begin{Remark}{\rm If $I$ is an almost complete intersection as in (\ref{Zaci}) and its radical is a regular prime ideal $P$, that is ${\mathbf R}/P$ is regular local ring, the same assertions will apply if $\mbox{\rm Sym}(I)$ is reduced. }\end{Remark} \section{Almost Cohen--Macaulay algebras} We begin our treatment of the properties of an ideal when its Rees algebra ${\mathbf A} ={\mathbf R} [It]$ is almost Cohen-Macaulay. We first describe a large class of examples. \subsection{Direct links of Gorenstein ideals} We briefly outline a broad class of extremal Rees algebras. Let $({\mathbf R}, {\mathfrak m})$ be a Gorenstein local ring of dimension $d\geq 1$. A natural source of almost complete intersections in ${\mathbf R}$ direct links of Gorenstein ideals. That is, let $K$ be a Gorenstein ideal of ${\mathbf R}$ of codimension $s$, that is ${\mathbf R}/K$ is a Gorenstein ring of dimension $d-s$. If $J = (a_1, \ldots, a_s)\subset K$ is a complete intersection of codimension $s$, $J\neq K$, $I = J:K$ is an almost complete intersection, $I = (J, a)$. Depending on $K$, sometimes these ideals come endowed with very good properties. Let us recall one of them. \begin{Proposition}\label{sourceofacis} Let $({\mathbf R}, {\mathfrak m})$ be a Noetherian local ring of dimension $d$. \begin{enumerate} \item[{\rm (i)}] {\rm (\cite[Theorem 2.1]{CPV1})} Suppose ${\mathbf R}$ is a Cohen--Macaulay local ring and let ${\mathfrak p}$ be a prime ideal of codimension $s$ such that ${\mathbf R}_{{\mathfrak p}}$ is a Gorenstein ring and let $J$ be a complete intersection of codimension $s$ contained in ${\mathfrak p}$. Then for $I= J:{\mathfrak p}$ we have $I^2= JI$ in the following two cases: {\rm (a)} ${\mathbf R}_{{\mathfrak p}}$ is not a regular local ring; {\rm (b)} if ${\mathbf R}_{{\mathfrak p}}$ is a regular local ring two of the elements $a_i$ belong to ${\mathfrak p}^{(2)}$. \item[{\rm (ii)}] {\rm (\cite[Theorem 3.7]{CHV})} Suppose $J$ is an irreducible ${\mathfrak m}$--primary ideal. 
Then {\rm (a)} either there exists a minimal set of generators $\{x_1, \ldots, x_d\}$ of ${\mathfrak m}$ such that $J = (x_1,\ldots, x_{d-1}, {x_d}^r)$, or {\rm (b)} $I^2 = JI$ for $I = J:{\mathfrak m}$. \end{enumerate} \end{Proposition} \medskip The following criterion is a global version of Corollary~\ref{F2Cor} \begin{Proposition}\label{rednumone} Let ${\mathbf R}$ be a Gorenstein local ring and $I=(J,a)$ an almost complete intersection {\rm[}when we write $I=(J,a)$ we always mean that $J$ is a reduction{\rm]}. If $I$ is an unmixed ideal {\rm[}height unmixed{\rm]} then $\mbox{\rm red}_J(I) \leq 1$ if and only if $J:a = I_1(\phi)$. \end{Proposition} \noindent{\bf Proof. } Since the ideal $JI$ is also unmixed, to check the equality $J:a= I_1(\phi)$ we only need to check at the minimal primes of $I$ (or, of $J$, as they are the same). Now Corollary~\ref{F2Cor} applies. \hfill$\Box$ \bigskip If in Proposition~\ref{sourceofacis} ${\mathbf R}$ is a Gorenstein local ring and $I$ is a Cohen--Macaulay ideal, their associated graded rings are Cohen--Macaulay, while the Rees algebras are also Cohen--Macaulay if $\dim {\mathbf R}\geq 2$. \begin{Theorem} \label{reesoflink} Let ${\mathbf R}$ be a Gorenstein local ring and $I$ a Cohen--Macaulay ideal that is an almost complete intersection. If $\mbox{\rm red}_J(I)\leq 1$ then in the canonical representation \[ 0 \rightarrow T \longrightarrow {\mathbf S} \longrightarrow {\mathbf R}[It] \rightarrow 0,\] \begin{itemize} \item[{\rm (i)}] If $\dim {\mathbf R} \geq 2$ ${\mathbf R}[It]$ is Cohen--Macaulay. \item[{\rm (ii)}] $T$ is a Cohen--Macaulay module over ${\mathbf S}/(I_1(\phi)){\mathbf S}$, in particular \[ \nu(T) \leq \deg {\mathbf R}/I_1(\phi).\] \end{itemize} \end{Theorem} \begin{Example}{\rm Let ${\mathbf R}=k[x_1, \ldots, x_d]$, $k$ an algebraically closed field, and let ${\mathfrak p}$ be a homogeneous prime ideal of codimension $d-1$. Suppose $J= (a_1, \ldots, a_{d-1})$ is a complete intersection of codimension $d-1$ with at least two generators in ${\mathfrak p}^2$. Since ${\mathbf R}/{\mathfrak p}$ is regular, $I=J:{\mathfrak p}$ is an almost complete intersection and $I^2 = JI$. Since ${\mathfrak p}$ is a complete intersection, say ${\mathfrak p} = (x_1-c_1x_{d} , x_2-c_2x_d, \ldots, x_{d-1} -c_{d-1}x_d)$, $c_i\in k$, we write the matrix equation $J= {\mathbf A} \cdot {\mathfrak p}$, where ${\mathbf A}$ is a square matrix of size $d-1$. This is the setting where the Northcott ideals occur, and therefore $I = (J, \det {\mathbf A})$. \medskip By Theorem~\ref{reesoflink}(ii), $\nu(T) \leq \deg ({\mathbf R}/{\mathfrak p}) = 1$. Thus ${\mathbf L}$ is generated by the syzygies of $I$ (which are well-understood) plus a quadratic equation. }\end{Example} \subsection{Metrics of aCM Rees algebras} Let $({\mathbf R}, {\mathfrak m})$ be a Cohen--Macaulay local ring of dimension $d$ and let $I$ be an almost complete intersection of finite colength. We assume that $I=(J,a)$, where $J$ is a minimal reduction of $I$. These assumptions will hold for the remainder of the section. We emphasize that they apply to the case when ${\mathbf R}$ is a polynomial ring over a field and $I$ is a homogeneous ideal. \medskip In the next statement we highlight the information about the equations of $I$ that is a direct consequences of the aCM hypothesis. In the next segments we begin to obtain the required data in an explicit form. 
As for notation, ${\mathbf B} = {\mathbf R}[{\mathbf T}_1, \ldots, {\mathbf T}_{d+1}]$ and for a graded ${\mathbf B}$-module $A$, $\deg(A)$ denotes the multiplicity relative to the maximal homogeneous ideal $\mathcal{M}$ of ${\mathbf B}$, $\deg(A)= \deg( \mbox{\rm gr}_{\mathcal{M}}(A))$. In actual computations $\mathcal{M}$ can be replaced by a reduction. For instance, if $E$ is a graded ${\mathbf R}$--module and $A=E\otimes_{{\mathbf R}} {\mathbf B}$, picking a reduction $J$ for ${\mathfrak m}$ gives the reduction $\mathcal{N} = (J, {\mathbf T}_1, \ldots, {\mathbf T}_{d+1})$ of $\mathcal{M}$. It will follow that $\deg(A) = \deg(E)$. \begin{Theorem} \label{aCM1} If the algebra ${\mathbf R}[It]$ is almost Cohen--Macaulay, in the canonical sequence \[ 0 \rightarrow T \longrightarrow {\mathbf S} \longrightarrow {\mathbf R}[It] \rightarrow 0\] \begin{itemize} \item[{\rm (i)}] $\mbox{\rm reg}({\mathbf R}[It]) = \mbox{\rm red}_J(I) + 1$. \item[{\rm (ii)}] $\nu(T) \leq \deg({\mathbf S}) - \deg({\mathbf R}[It])$. \end{itemize} \end{Theorem} \noindent{\bf Proof. } (i) follows from Corollary~\ref{Sallyrel}. As for (ii), since $T$ is a Cohen--Macaulay module, $\nu(T) \leq \deg(T)$. \hfill$\Box$ \bigskip The goal is to find $\deg(T)$, $\deg({\mathbf R}[It])$ and $\deg({\mathbf S})$ in terms of more direct metrics of $I$. This will be answered in Theorem~\ref{degSymi}. \subsection*{Cohen--Macaulayness of the Sally module.} Fortunately there is a simple criterion to test whether ${\mathbf R}[It]$ is an aCM algebra: It is so if and only if it satisfies the Huckaba Test: \[ e_1(I) = \sum_{j\geq 1}{\lambda}(I^j/JI^{j-1}). \] Needless to say, this is exceedingly effective if you already know $e_1(I)$, in particular there is no need to determine the equations of ${\mathbf R}[It]$ for the purpose. \medskip Let ${\mathbf R}$ be a Noetherian ring, $I$ an ideal and $J$ a reduction of $I$. The Sally module of $I$ relative to $J$, $S_J(I)$, is defined by the exact sequence of ${\mathbf R}[Jt]$--modules \[ 0 \rightarrow I {\mathbf R}[Jt] \longrightarrow I {\mathbf R}[It] \longrightarrow S_J(I) = \bigoplus_{j\geq 2} I^j/IJ^{j-1} \rightarrow 0.\] The definition applies more broadly to other filtrations. We refer the reader to \cite[p. 101]{icbook} for a discussion. Of course this module depends on the chosen reduction $J$, but its Hilbert function and its depth are independent of $J$. There are extensions of this construction to more general reductions--and we employ one below. \medskip If ${\mathbf R}$ is a Cohen--Macaulay local ring and $I$ is ${\mathfrak m}$--primary with a minimal reduction, $S_J(I)$ plays a role in mediating among properties of ${\mathbf R}[It]$. \begin{Proposition} \label{Sallyelem} Suppose ${\mathbf R}$ is a Cohen--Macaulay local ring of dimension $d$. Then \begin{enumerate} \item[{\rm (i)}] If $S_J(I) = 0$ then $\mbox{\rm gr}_I({\mathbf R})$ is Cohen-Macaulay. \item[{\rm (ii)}] If $S_J(I)\neq 0$ then $\dim S_J(I) = d$. \end{enumerate} \end{Proposition} Some of the key properties of the Sally module are in display in the next result (\cite[Theorem 3.1]{Huc96}). It converts the property of ${\mathbf R}[It]$ being almost Cohen--Macaulay into the property of $S_J(I)$ being Cohen--Macaulay. \begin{Theorem}[Huckaba Theorem] \label{Huckaba} Let $({\mathbf R},{\mathfrak m})$ be a Cohen--Macaulay local ring of dimension $d \geq 1$ and $J$ a parameter ideal. 
Let $\mathcal{A}=\{I_n, n\geq 0\}$ be an filtration of ${\mathfrak m}$-primary ideals such that $J\subset I_1$ and ${\mathbf B}={\mathbf R}[I_nt^n, n\geq 1]$ is ${\mathbf A}={\mathbf R}[Jt]$-finite. Define the Sally module $S_{{\mathbf B}/{\mathbf A}}$ of ${\mathbf B}$ relative to ${\mathbf A}$ by the exact sequence \[ 0 \rightarrow I_1 {\mathbf A} \longrightarrow I_1{\mathbf B} \longrightarrow S_{{\mathbf B}/{\mathbf A}}\rightarrow 0.\] Suppose $S_{{\mathbf B}/{\mathbf A}}\neq 0$. Then \begin{enumerate} \item[{\rm (i)}] $e_0(S_{{\mathbf B}/{\mathbf A}})=e_1({\mathbf B})-\lambda(I_1/J) \leq \sum_{j\geq 2}\lambda(I_j/JI_{j-1})$. \item[{\rm (ii)}] The following conditions are equivalent: \begin{itemize} \item[{\rm (a)}] $S_{{\mathbf B}/{\mathbf A}}$ is Cohen-Macaulay; \item[{\rm (b)}] $\mbox{\rm depth } \mbox{\rm gr}_{\mathcal{A}}({\mathbf R})\geq d-1$; \item[{\rm (c)}] $e_1({\mathbf B})= \sum_{j \geq 1}\lambda(I_j/JI_{j-1})$; \item[{\rm (d)}] ${\mathbf R}[It]$ is almost Cohen--Macaulay. \end{itemize} \end{enumerate} \end{Theorem} \noindent{\bf Proof. } If $J=({\mathbf x})=(x_1,\ldots, x_d)$, $S_{{\mathbf B}/{\mathbf A}}$ is a finite module over the ring ${\mathbf R}[{\mathbf T}_1, \ldots, {\mathbf T}_d]$, ${\mathbf T}_i \rightarrow x_it$. Note that \[\lambda( S_{{\mathbf B}/{\mathbf A}}/{\mathbf x} S_{{\mathbf B}/{\mathbf A}})= \sum_{j\geq 2}\lambda(I_j/JI_{j-1}), \] which shows the first assertion. \medskip For the equivalencies, first note that equality means that the first Euler characteristic $\chi_1({\mathbf x};S_{{\mathbf B}/{\mathbf A}})$ vanishes, which by Serre's theorem (\cite[Theorem 4.7.10]{BH}) says that $S_{{\mathbf B}/{\mathbf A}}$ is Cohen--Macaulay. The final assertion comes from the formula for the multiplicity of $S_{{\mathbf B}/{\mathbf A}}$ in terms of $e_1({\mathbf B})$ (\cite[Theorem 2.5]{icbook}). \hfill$\Box$ \subsection*{Castelnuovo regularity} The Sally module encodes also information about the Castelnuovo regularity $\mbox{\rm reg}({\mathbf R}[It])$ of the Rees algebra. The following Proposition and its Corollary are extracted from the literature (\cite{Huck87}, \cite{Trung98}), or proved directly by adding the exact sequence that defines $S_J(I)$ (note that $I{\mathbf R}[Jt]$ is a maximal Cohen--Macaulay module) to the canonical sequences relating ${\mathbf R}[It])$ to $\mbox{\rm gr}_I({\mathbf R})$ and ${\mathbf R}$ via $I{\mathbf R}[It]$ (see \cite[Section 3]{Trung98}). \begin{Proposition} \label{Sallyreg} Let ${\mathbf R}$ be a Cohen--Macaulay local ring, $I$ an ${\mathfrak m}$--primary ideal and $J$ a minimal reduction. Then \[ \mbox{\rm reg}({\mathbf R}[It]) = \mbox{\rm reg}(S_J(I)). \] In particular \[ \mbox{\rm reltype}(I) \leq \mbox{\rm reg}(S_J(I)).\] \end{Proposition} \begin{Corollary} \label{Sallyrel} If $I$ is an almost complete intersection and ${\mathbf R}[It]$ is almost Cohen--Macaulay, then \[ \mbox{\rm reltype}(I) = \mbox{\rm red}_J(I) + 1.\] \end{Corollary} \subsection*{The Sally fiber of an ideal} To help analyze the problem, we single out an extra structure. Let $({\mathbf R}, {\mathfrak m})$ be a Cohen--Macaulay local ring of dimension $d>0$, $I$ an ${\mathfrak m}$--primary ideal and $J$ one of its minimal reductions. \begin{Definition}[Sally fiber] The Sally fiber of $I$ is the graded module \[ F(I) = \bigoplus_{j\geq 1} I^j/JI^{j-1}. \] \end{Definition} $F(I)$ is an Artinian ${\mathbf R}[Jt]$--module whose last non-vanishing component is $I^r/JI^r$, $r=\mbox{\rm red}_J(I)$. 
The equality $e_1(I) = {\lambda}(F(I))$ is the condition for the almost Cohen--Macaulayness of ${\mathbf R}[It]$. We note that $F(I)$ is the fiber of $S_J(I)$ extended by the term $I/J$. To obtain additional control over $F(I)$ we are going to endow it with additional structures in cases of interest. \bigskip Suppose ${\mathbf R}$ is a Gorenstein local ring, $I=(J,a)$. The modules $F_j = I^j/JI^{j-1}$ are cyclic modules over the Artinian Gorenstein ring $ {\mathbf A} = {\mathbf R}/J:a$. We turn $F(I)$ into a graded module over the polynomial ring ${\mathbf A}[s]$ by defining \[ a^j \in F_j \mapsto s\cdot a^j = a^{j+1} \in F_{j+1}.\] This is clearly well-defined and has $s^r\cdot F(I)=0$. Several of the properties of the $F_n$'s arise from this representation, for instance the length of $F_j$ are non-increasing. Thus $F(I)$ is a graded module over the Artinian Gorenstein ring ${\mathbf B}= {\mathbf A}[s, s^r=0]$. \medskip \begin{Remark}\label{Fvasneweq}{\rm The variation of the values of the $F_j$ is connected to the degrees of the generators of ${\mathbf L}$. For convenience we set $I=(J,a)$ and ${\mathbf B} = {\mathbf R}[u, {\mathbf T}_1, \ldots, {\mathbf T}_d]$, with $u$ corresponding to $a$. For example: \medskip \begin{itemize} \item[{\rm (i)}] Suppose that for some $s$, ${\mathbf f}_s = \lambda(F_s) = 1$. This means that we have $d$ equations of the form \[{\mathbf h}_i= x_i u^s + {\mathbf g}_i \in {\mathbf L}_s\] where ${\mathbf g}_i\in ({\mathbf T}_1, \ldots, {\mathbf T}_{d}){\mathbf B}_{s-1}$. Eliminating the $x_i$, we derive a nonvanishing monic equation in ${\mathbf L}$ of degree $d\cdot s$. Thus $\mbox{\rm red}_J(I) \leq ds -1$. \medskip \item[{\rm (ii)}] A more delicate observation, is that whenever ${\mathbf f}_s > {\mathbf f}_{s+1}$ then there are {\bf fresh} equations in ${\mathbf L}_{s+1}$. Let us explain why this happens: ${\mathbf f}_s = \lambda(JI^{s-1}: I^s)$, that is the ideal $L_s$ contains elements of the form \[ c\cdot u^s + \mathbf{g}, \quad c\in JI^{s-1}:I^s, \quad \mathbf{g} \in ({\mathbf T}_1, \ldots, {\mathbf T}_d){\mathbf B}_{s-1}.\] Since ${\mathbf f}_{s+1} < {\mathbf f}_s$, $JI^s: I^{s+1}$ contains properly $JI^{s-1}:I^s$, which means that we must have elements in $L_{s+1}$ \[ d\cdot u^{s+1} + \mathbf{g}, \quad\] with $d \notin JI^{s-1}: I^s$ and $\mathbf{g} \in ({\mathbf T}_1, \ldots, {\mathbf T}_d){\mathbf B}_{s}$. Such elements cannot belong to $L_s\cdot {\mathbf B}_1$, so they are fresh generators. The converse also holds. \end{itemize} }\end{Remark} \subsubsection*{A toolbox} We first give a simplified version of \cite[Proposition 2.2]{CPV1}. Suppose ${\mathbf R}$ is a Gorenstein local ring of dimension $d$. Consider the two exact sequences. \[0 \rightarrow J/JI= ({\mathbf R}/I)^d \longrightarrow {\mathbf R}/JI \longrightarrow {\mathbf R}/J\rightarrow 0\] and the syzygetic sequence \[ 0 \rightarrow \delta(I) \longrightarrow H_1(I) \longrightarrow ({\mathbf R}/I)^{d+1} \longrightarrow I/I^2 \rightarrow 0. \] The first gives \[{\lambda}({\mathbf R}/JI)= d\cdot {\lambda}({\mathbf R}/I) +{\lambda}({\mathbf R}/J),\] the other \[{\lambda}({\mathbf R}/I^2) =(d+2){\lambda}({\mathbf R}/I)-{\lambda}({\mathrm H}_1(I)) + {\lambda} (\delta(I)).\] Thus \[{\lambda}(I^2/JI)= {\lambda}(I/J) -{\lambda}(\delta(I))\] since ${\mathrm H}_1(I)$ is the canonical module of ${\mathbf R}/I$. 
Taking into account the syzygetic formula in \cite{syl2} we finally have: \begin{Proposition} \label{F2} Let $({\mathbf R}, {\mathfrak m})$ be a Gorenstein local ring of dimension $d>0$, $J= (a_1, \ldots, a_d)$ a parameter ideal and $I=(J,a)$ and $a\in {\mathfrak m}$. Then \begin{eqnarray*} {\lambda}(I^2/JI) & = &{\lambda}(I/J) -{\lambda}({\mathbf R}/I_1(\phi))\\ &=& {\lambda}({\mathbf R}/J:a) -{\lambda}({\mathbf R}/I_1(\phi))\\ & = & {\lambda}({\mathbf R}/J:a) - {\lambda}(\mbox{\rm Hom}({\mathbf R}/I_1(\phi), {\mathbf R}/J:a) \\ & = & {\lambda}({\mathbf R}/J:a) - {\lambda}((J:a): I_1(\phi))/J:a) \\ & = & {\lambda} ({\mathbf R}/(J:a):I_1(\phi)). \end{eqnarray*} \end{Proposition} Note that in dualizing ${\mathbf R}/I_1(\phi)$ we made use of the fact that ${\mathbf R}/J:a$ is a Gorenstein ring. \begin{Corollary} \label{F2Cor} $I^2= JI$ if and only if $J:a= I_1(\phi)$. In this case, if $d>1$ the algebra ${\mathbf R}[It]$ is Cohen--Macaulay. \end{Corollary} \begin{Corollary}\label{F3Cor} If ${\mathbf R}[It]$ is an aCM algebra and $\mbox{\rm red}_J(I) = 2$, then $e_1(I) = 2\cdot \lambda(I/J) - \lambda({\mathbf R}/I_1(\phi))$. \end{Corollary} \begin{Remark}{\rm We could enhance these observations considerably if formulas for $\lambda(JI^2:I^3)$ were to be developed. More precisely, how do the syzygies of $I$ affect $JI^2:I^3$? } \end{Remark} \subsection{Multiplicities and number of relations} To benefit from Theorem~\ref{aCM1}, we need to have effective formulas for $\deg({\mathbf S})$ and $\deg({\mathbf R}[It])$. We are going to develop them now. \begin{Proposition}\label{multirees} Let ${\mathbf R}=k[x_1, \ldots, x_d]$ and $I$ an almost complete intersection as above, $I=(f_1, \ldots, f_d, f_{d+1})=(J, f_{d+1})$ generated by forms of degree $n$. Then $ \deg({\mathbf R}[It]) = \sum_{j=0}^{d-1} n^j.$ \end{Proposition} \noindent{\bf Proof. } After an elementary observation, we make use of one of the beautiful multiplicity formulas of \cite{HTU}. Set $A={\mathbf R}[It]$, $A_0 = {\mathbf R}[Jt]$, $\mathcal{M} = ({\mathfrak m}, It)A$ and $\mathcal{M}_0 = ({\mathfrak m}, Jt)A_0$. Then \[ \deg(\mbox{\rm gr}_{\mathcal{M}_0}(A_0))= \deg(\mbox{\rm gr}_{\mathcal{M}_0}(A))= \deg(\mbox{\rm gr}_{\mathcal{M}}(A)), \] the first equality because $A_0 \rightarrow A$ is a finite rational extension, the second is because $({\mathfrak m}, Jt)A$ is a reduction of $({\mathfrak m}, It)A$. Now we use \cite[Corollary 1.5]{HTU} that gives $\deg(A_0)$. \hfill$\Box$ \subsubsection*{The multiplicity of the symmetric algebra} We shall now prove one of our main results, a formula for $\deg S(I)$ for ideals generated by forms of the same degree. Let ${\mathbf R}=k[x_1, \ldots, x_d]$, $I = ({\mathbf f}) = (f_1, \ldots, f_d, f_{d+1})$ an almost complete intersection generated by forms of degree $n$. At some point we assume, harmlessdly, that $J = (f_1, \ldots, f_d)$ is a complete intersection. There will be a slight change of notation in the rest of this section. We set ${\mathbf B} = {\mathbf R}[{\mathbf T}_1, \ldots, {\mathbf T}_{d+1}]$ and ${\mathbf S} = \mbox{\rm Sym}(I)$. \begin{Theorem}[{\bf Degree Formula}] \label{degSymi} $\deg {\mathbf S} = \sum_{j=0}^{d} n^j - \lambda({\mathbf R}/I)$. \end{Theorem} \noindent{\bf Proof. 
} Let $\mathbb{K}({\mathbf f}) = \bigwedge^{d+1} {\mathbf R}^{d+1}(-n)$ be the Koszul complex associated to ${\mathbf f}$, \[ 0 \rightarrow {K}_{d+1} \rightarrow K_{d} \rightarrow \cdots \rightarrow K_2 \rightarrow K_1 \rightarrow K_0 \rightarrow 0,\] and consider the associated $\mathcal{Z}$--complex \[ 0 \rightarrow Z_d\otimes {\mathbf B}(-d) \rightarrow Z_{d-1} \otimes {\mathbf B}(-d+1) \rightarrow \cdots \rightarrow Z_2 \otimes {\mathbf B}(-2) \rightarrow Z_1 \otimes {\mathbf B}(-1) \stackrel{\psi}{\rightarrow} {\mathbf B} \rightarrow 0.\] This complex is acyclic with ${\mathrm H}_0(\mathcal{Z}) = {\mathbf S} = \mbox{\rm Sym}(I)$. Now we introduce another complex obtained by replacing $Z_1 \otimes {\mathbf B}(-1) $ by $B_1 \otimes {\mathbf B}(-1)$, where $B_1$ is the module of $1$--boundaries of $\mathbb{K}({\mathbf f})$, followed by the restriction of $\psi$ to $B_1 \otimes {\mathbf B}(-1)$. \medskip This defines another acyclic complex, $\mathcal{Z}^*$, actually the $\mathcal{B}$--complex of ${\mathbf f}$, and we set ${\mathrm H}_0(\mathcal{Z}^*) = {\mathbf S}^*$. The relationship between ${\mathbf S}$ and ${\mathbf S}^*$ is given in the following observation: \begin{Lemma} $\deg {\mathbf S}^* = \deg {\mathbf S} + \lambda({\mathbf R}/I)$. \end{Lemma} \noindent{\bf Proof. } Consider the natural mapping between $\mathcal{Z} $ and $\mathcal{Z}^*$: \[ \diagram 0 \rto & Z_{d}\otimes {\mathbf B}(-d) \rto \dto_{\phi_d} & \cdots \rto & Z_2\otimes {\mathbf B}(-2) \rto \dto_{\phi_2} & B_1 \otimes {\mathbf B}(-1) \rto \dto & {\mathbf B} \rto \dto & {\mathbf S}^* \rto \dto & 0 \\ 0 \rto & Z_{d}\otimes {\mathbf B}(-d) \rto & \cdots \rto & Z_2\otimes {\mathbf B}(-2) \rto & Z_1 \otimes {\mathbf B}(-1) \rto & {\mathbf B} \rto & {\mathbf S} \rto & 0. \enddiagram \] The maps $\phi_2, \ldots, \phi_2 $ are isomorphisms while the other maps are defined above. They induce the short exact sequence of modules of dimension $d+1$, \[ 0 \rightarrow (Z_1/B_1)\otimes {\mathbf B}(-1) \longrightarrow {\mathbf S}^* \longrightarrow {\mathbf S} \rightarrow 0.\] Note that $Z_1/B_1= {\mathrm H}_1(\mathbb{K}({\mathbf f}))$ is the canonical module of ${\mathbf R}/I$, and therefore has the same length as ${\mathbf R}/I$. Finally, by the additivity formula for the multiplicities (\cite[Lemma 13.2]{Eisenbudbook}), \[ \deg {\mathbf S}^* = \deg {\mathbf S} + \lambda(Z_1/B_1),\] as desired. \hfill$\Box$ \bigskip We are now give our main calculation of multiplicities. \begin{Lemma} $\deg {\mathbf S}^* = \sum_{j=0}^{d} n^j.$ \end{Lemma} \noindent{\bf Proof. } We note that the $\mathcal{Z}^*$--complex is homogeneous for the total degree [as required for the computation of multiplicities] provided the $Z_i$'s and $B_1$ have the same degree. We can conveniently write $B_i$ for $Z_i$, $i\geq 2$. This is clearly the case since they are generated in degree $n$. This is not the case for $Z_1$. However when ${\mathbf f}$ is a regular sequence, all the $Z_i$ are equigenerated, an observation we shall make use of below. \medskip Since the modules of $\mathcal{Z}^*$ are homogeneous we have that the Hilbert series of ${\mathbf S}^*$ is given as \[ H_{{\mathbf S}^*}({\mathbf t}) = {\frac{\sum_{i=0}^d (-1)^{i} h_{B_i}({\mathbf t}) {\mathbf t}^i}{(1-{\mathbf t})^{2d+1}}} = {\frac{h({\mathbf t})}{(1-{\mathbf t})^{2d+1}}}, \] where $h_{B_i}({\mathbf t})$ are the $h$--polynomials of the $B_i$. 
More precisely, each of the terms of $\mathcal{Z}^*$ is a ${\mathbf B}$--module of the form $A\otimes {\mathbf B}(-r)$ where $A$ is generated in a same degree. Such modules are isomorphic to their associated graded modules. \medskip The multiplicity of ${\mathbf S}^*$ is given by the standard formula \[\deg {\mathbf S}^* = (-1)^d {\frac{h^{(d)}(1)}{ d!}}.\] We now indicate how the $h_{B_i}({\mathbf t})$ are assembled. Let us illustrate the case when $d =4$ and $i=1$. $B_1$ has a free resolution of the strand of the Koszul complex \[ 0 \rightarrow {\mathbf R}(-3n) \rightarrow {\mathbf R}^{5}(-2n) \longrightarrow {\mathbf R}^{10}(-n) \longrightarrow {\mathbf R}^{10} \longrightarrow B_1 \rightarrow 0,\] so that \[ h_{B_1}({\mathbf t}) = 10 - 10{\mathbf t}^{n} + 5{\mathbf t}^{2n}-{\mathbf t}^{3n},\] and similarly for all $B_i$. \medskip We are now ready to make our key observation. Consider a complete intersection $P$ generated by $d+1$ forms of degree $n$ in a polynomial ring of dimension $d+1$ and set ${\mathbf S}^{**} = \mbox{\rm Sym}(P)$. The corresponding approximation complex now has $B_1=Z_1$. The approach above would for the new $Z_i$ give the same $h$--polynomials of the $B_i$ in the case of an almost complete intersection (but in dimension $d$). This means that the Hilbert series of ${\mathbf S}^{**}$ is given by \[ H_{{\mathbf S}^{**}}({\mathbf t}) = {\frac{h({\mathbf t})}{(1-{\mathbf t})^{2d+2}}}.\] It follows that $\deg {\mathbf S}^*$ can be computed as the degree of the symmetric algebra generated by a regular sequence of $d+1$ forms of degree $n$, a result that is given in \cite{HTU}. Thus, \[ \deg {\mathbf S}^* = \deg {\mathbf S}^{**} = \sum_{j=0}^d n^j,\] and the calculation of $\deg {\mathbf S}$ is complete. \hfill$\Box$ \bigskip We will now write Theorem~\ref{degSymi} in a more convenient formulation for applications. \begin{Theorem}\label{degSymibis} Let ${\mathbf R} =k[x_1, \ldots, x_d]$ and $I = (f_1, \ldots, f_d, f_{d+1})$ is an ideal of forms of degree $n$. If $J = (f_1, \ldots, f_d)$ is a complete intersection, then \[ \deg {\mathbf S} = \sum_{j=0}^{d-1} n^j + \lambda({\mathbf R}/J:I).\] \end{Theorem} \noindent{\bf Proof. } The degree formula gives \begin{eqnarray*} \deg {\mathbf S} & = & \sum_{j=0}^{d-1}n^j + [n^d - \lambda(R/I)] = \sum_{j=0}^{d-1}n^j + [\lambda({\mathbf R}/J) - \lambda({\mathbf R}/I)]\\ &=& \sum_{j=0}^{d-1} n^j +\lambda(I/J) = \sum_{j=0}^{d-1}n^j + \lambda({\mathbf R}/J:I). \end{eqnarray*} \begin{Corollary} \label{degT} Let $I=(J,a)$ be an ideal of finite colength as above. Then the module of linear relations satisfies $\deg T = {\lambda} (I/J)$. In particular if ${\mathbf R}[It]$ is almost Cohen--Macaulay, $T$ can be generated by $\lambda(I/J)$ elements. \end{Corollary} \noindent{\bf Proof. } From the sequence of modules of the same dimension \[ 0 \rightarrow T \longrightarrow {\mathbf S} \longrightarrow {\mathbf R}[It]\rightarrow 0\] we have \[ \deg T = \deg {\mathbf S} - \deg {\mathbf R}[It] = \lambda(I/J).\] \hfill$\Box$ The last assertion of this Corollary can also be obtained from \cite[Theorem 4.1]{MPV12}. \subsubsection*{The Cohen--Macaulay type of the module of nonlinear relations} We recall the terminology of Cohen--Macaulay type of a module. Set ${\mathbf B}={\mathbf R}[{\mathbf T}_1, \ldots, {\mathbf T}_{d+1}]$. If $E$ is a finitely generated ${\mathbf B}$--module of codimension $r$, we say that $\mbox{\rm Ext}_{{\mathbf B}}^r(E, {\mathbf B})$ is its canonical module. 
It is the first non vanishing $\mbox{\rm Ext}_{{\mathbf B}}^i(E, {\mathbf B})$ module denoted by $\omega_E$. The minimal number of the generators of $\omega_{E}$ is called the {\em Cohen--Macaulay type} of $E$ and is denoted by $\mbox{\rm type}(E)$. When $E$ is graded and Cohen--Macaulay, it gives the last Betti number of a projective resolution of $E$. It can be expressed in different ways, for example for the module of linear relations $\omega_T=\mbox{\rm Ext}_{{\mathbf B}}^d(T, {\mathbf B}) = \mbox{\rm Hom}_{{\mathbf S}}(T,\omega_{{\mathbf S}})$. \begin{Proposition}\label{typeofT} Let ${\mathbf R}$ be a Gorenstein local ring of dimension $d \geq 2$ and $I=(J,a)$ an ideal of finite colength as above. If ${\mathbf R}[It]$ is an aCM algebra and $\omega_{R[It]}$ is Cohen--Macaulay, then the type of the module $T$ of nonlinear relations satisfies \[ \mbox{\rm type}(T) \leq \mbox{\rm type }(S_J(I)) + d-1,\] where $S_J(I)$ is the Sally module. \end{Proposition} \noindent{\bf Proof. } We set $\mbox{$\mathcal{R}$} = {\mathbf R}[It]$ and $\mbox{$\mathcal{R}$}_0 = {\mathbf R}[Jt]$. First apply $\mbox{\rm Hom}_{{\mathbf B}}(\cdot, {\mathbf B}) $ to the basic presentation \[ 0 \rightarrow T \longrightarrow {\mathbf S} \longrightarrow \mbox{$\mathcal{R}$} \rightarrow 0,\] to obtain the cohomology sequence \begin{eqnarray}\label{type1} 0 \rightarrow \omega_{\mbox{$\mathcal{R}$}} \longrightarrow \omega_{{\mathbf S}} \longrightarrow \omega_T \longrightarrow \mbox{\rm Ext}_{{\mathbf B}}^{d+1}(\mbox{$\mathcal{R}$}, {\mathbf B}) \rightarrow 0. \end{eqnarray} Now apply the same functor to the exact sequence of ${\mathbf B}$--modules \[ 0 \rightarrow I\cdot \mbox{$\mathcal{R}$}[-1] \longrightarrow \mbox{$\mathcal{R}$} \longrightarrow {\mathbf R} \rightarrow 0\] to obtain the exact sequence \[ 0 \rightarrow \omega_{\mbox{$\mathcal{R}$}} \stackrel{\theta}{\longrightarrow} \omega_{I\mbox{$\mathcal{R}$}[-1]} \longrightarrow \mbox{\rm Ext}_{{\mathbf B}}^{d+1}({\mathbf R}, {\mathbf B}) = {\mathbf R} \longrightarrow \mbox{\rm Ext}_{{\mathbf B}}^{d+1}(\mbox{$\mathcal{R}$}, {\mathbf B}) \longrightarrow \mbox{\rm Ext}_{{\mathbf B}}^{d+1}(I\mbox{$\mathcal{R}$}[-1], {\mathbf B}) \rightarrow 0.\] Since $\omega_{\mbox{$\mathcal{R}$}}$ is Cohen--Macaulay and $\dim {\mathbf R} \geq 2$, the cokernel of $\theta$ is either ${\mathbf R}$ or an ${\mathfrak m}$-primary ideal that satisfies the condition $S_2$ of Serre. The only choice is $\mbox{\rm coker}(\theta)={\mathbf R}$. Therefore \[\mbox{\rm Ext}_{{\mathbf B}}^{d+1}(\mbox{$\mathcal{R}$}, {\mathbf B})\simeq \mbox{\rm Ext}_{{\mathbf B}}^{d+1}(I\mbox{$\mathcal{R}$}[-1], {\mathbf B}).\] Now we approach the module $ \mbox{\rm Ext}_{{\mathbf B}}^{d+1}(I\mbox{$\mathcal{R}$}, {\mathbf B})$ from a different direction. We note that $\mbox{$\mathcal{R}$}$---but not ${\mathbf S}$ and $T$---is also a finitely generated ${\mathbf B}_0={\mathbf R}[{\mathbf T}_1, \ldots, {\mathbf T}_d]$--module as it is annihilated by a monic polynomial ${\mathbf f}$ in ${\mathbf T}_{d+1}$ with coefficients in ${\mathbf B}_0$. By Rees Theorem we have that for all $i$, $\mbox{\rm Ext}_{{\mathbf B}}^i(\mbox{$\mathcal{R}$}, {\mathbf B}) =\mbox{\rm Ext}_{{\mathbf B}/({\mathbf f})}^{i-1}(\mbox{$\mathcal{R}$}, {\mathbf B}/({\mathbf f}))$, and a similar observation applies to $I\cdot \mbox{$\mathcal{R}$}$. \medskip Next consider the finite, flat morphism ${\mathbf B}_0 \rightarrow {\mathbf B}/({\mathbf f})$. 
For any ${\mathbf B}/({\mathbf f})$--module $E$ with a projective resolution $\mathbb{P}$, we have that $\mathbb{P}$ is a projective ${\mathbf B}_0$--resolution of $E$. This means that the isomorphism of complexes \[ \mbox{\rm Hom}_{{\mathbf B}_0}(\mathbb{P}, {\mathbf B}_0) \simeq \mbox{\rm Hom}_{{\mathbf B}/({\mathbf f})}(\mathbb{P}, \mbox{\rm Hom}_{{\mathbf B}_0}({\mathbf B}/({\mathbf f}), {\mathbf B}_0))= \mbox{\rm Hom}_{{\mathbf B}/({\mathbf f})}(\mathbb{P}, {\mathbf B}/({\mathbf f})) \] gives isomorphisms for all $i$ \[ \mbox{\rm Ext}_{{\mathbf B}_0}^i(E, {\mathbf B}_0) \simeq \mbox{\rm Ext}_{{\mathbf B}/({\mathbf f})}^i(E, {\mathbf B}/({\mathbf f})). \] Thus \[ \mbox{\rm Ext}_{{\mathbf B}}^i(\mbox{$\mathcal{R}$}, {\mathbf B}) \simeq \mbox{\rm Ext}_{{\mathbf B}_0}^{i-1}(\mbox{$\mathcal{R}$}, {\mathbf B}_0).\] In particular, $\omega_{\mbox{$\mathcal{R}$}} = \mbox{\rm Ext}_{{\mathbf B}_0}^{d-1}(\mbox{$\mathcal{R}$}, {\mathbf B}_0)$. \medskip Finally apply $\mbox{\rm Hom}_{{\mathbf B}_0}(\cdot, {\mathbf B}_0)$ to the exact sequence of ${\mathbf B}_0$--modules and examine its cohomology sequence. \[ 0 \rightarrow I\cdot \mbox{$\mathcal{R}$}_0 \longrightarrow I\cdot \mbox{$\mathcal{R}$} \longrightarrow S_J(I) \rightarrow 0\] is then \[ 0 \rightarrow \omega_{I\mbox{$\mathcal{R}$}} \longrightarrow \omega_{I\mbox{$\mathcal{R}$}_0} \longrightarrow \omega_{S_J(I)}\longrightarrow \mbox{\rm Ext}_{{\mathbf B}_0}^d(\mbox{$\mathcal{R}$}, {\mathbf B}_0)= \mbox{\rm Ext}_{{\mathbf B}}^{d+1}(\mbox{$\mathcal{R}$}, {\mathbf B}) \rightarrow 0.\] Taking this into (\ref{type1}) and the that $\mbox{\rm type}({\mathbf S})=d-1$ gives the desired estimate. \hfill$\Box$ \medskip \begin{Remark}{\rm A class of ideals with $\omega_{\mbox{$\mathcal{R}$}}$ Cohen--Macaulay is discussed in Corollary~\ref{canofRees}(b). }\end{Remark} \section{Distinguished aCM algebras} This section treats several classes of Rees algebras which are almost Cohen--Macaulay. \subsection{Equi-homogeneous acis} We shall now treat an important class of extremal Rees algebras. Let ${\mathbf R} = k[x_1, \ldots, x_d]$ and let $I=(a_1, \ldots, a_d, a_{d+1})$ be an ideal of finite colength, that is, ${\mathfrak m}$--primary. We further assume that the first $d$ generators form a regular sequence and $a_{d+1} \notin (a_1, \ldots, a_d)$. If $\deg a_i=n$, the integral closure of $J=(a_1, \ldots, a_d)$ is the ideal ${\mathfrak m}^n$, in particular $J$ is a minimal reduction of $I$. The integer $\mbox{\rm edeg}(I) = \mbox{\rm red}_J(I) +1$ is called the {\em elimination degree} of $I$. The study of the equations of $I$, that is, of ${\mathbf R}[It]$, depends on a comparison between the metrics of ${\mathbf R}[It]$ to those of ${\mathbf R}[{\mathfrak m}^n t]$, which are well known. \begin{Proposition} {\rm (\cite{syl2})} \label{birideal} The following conditions are equivalent: \begin{itemize} \item[{\rm (i)}] $\Phi$ is a birational mapping, that is the natural embedding $\mathcal{F}(I) \hookrightarrow \mathcal{F}({\mathfrak m}^n)$ is an isomorphism of quotient fields; \item[{\rm (ii)}] $\mbox{\rm red}_J(I) = n^{d-1}-1$; \item[{\rm (iii)}] $e_1(I) = {\frac{d-1}{2}}(n^d - n^{d-1})$; \item[{\rm (iv)}] ${\mathbf R}[It]$ satisfies the condition $R_1$ of Serre. \end{itemize} \end{Proposition} For lack of a standardized terminology, we say that these ideals are {\em birational}\label{birational ideal}. 
\begin{Corollary}\label{canofRees} For an ideal $I$ as above, the following hold: \begin{itemize} \item[{\rm (i)}] The algebra ${\mathbf R}[It]$ is not Cohen--Macaulay except when $I = (x_1, x_2)^2$. \item[{\rm (ii)}] The canonical module of ${\mathbf R}[It]$ is Cohen--Macaulay. \end{itemize} \end{Corollary} \noindent{\bf Proof. } (i) follows from the condition of Goto--Shimoda (\cite{GS82}) that the reduction number of a Cohen--Macaulay Rees algebra ${\mathbf R}[It]$ must satisfy $\mbox{\rm red}_J(I) \leq \dim {\mathbf R} -1$, which in the case $n^{d-1}- 1 \leq d-1$ is only met if $d =n =2$ \medskip \noindent (ii) The embedding ${\mathbf R}[It] \hookrightarrow {\mathbf R}[{\mathfrak m}^n t]$ being an isomorphism in codimension one, their canonical modules are isomorphic. The canonical module of a Veronese subring such as ${\mathbf R}[{\mathfrak m}^n t]$ is well-known (see \cite[p. 187]{HV85}, \cite{HSV87}; see also \cite[Proposition 2.2]{BR}). \subsection*{Binary ideals} These are the ideals of ${\mathbf R}=k[x,y]$ generated by $3$ forms of degree $n$. Many of their Rees algebras are almost Cohen--Macaulay. We will showcase the technology of the Sally module in treating a much studied class of ideals. First we discuss a simple case (see also \cite{syl1})). \begin{Proposition}\label{22} Let $\phi$ be a $3\times 2$ matrix of quadratic forms in ${\mathbf R}$ and $I$ the ideal given by its $2\times 2$ minors. Then ${\mathbf R}[It]$ is almost Cohen--Macaulay. \end{Proposition} \noindent{\bf Proof. } These ideals have reduction number $1$ or $3$. In the first case all of its Sally modules vanish and ${\mathbf R}[It]$ is Cohen--Macaulay. \medskip In the other case $I$ is a birational ideal and $e_1(I) = {4\choose 2} = 6$. A simple calculation shows that $\lambda({\mathbf R}/I) = 12$, so that $\lambda(I/J) = 16-12 = 4$. To apply Theorem~\ref{Huckaba}, we need to verify the equation \begin{eqnarray}\label{Sally22} f_1 + f_2 + f_3 = 6. \end{eqnarray} We already have $f_1=4$. To calculate $f_2$ we need to take $\lambda(R/I_1(\phi))$ in Corollary~\ref{F2}. $I_1(\phi)$ is an ideal generated by $2$ generators or $I_1(\phi) = (x,y)^2$. But in the first case the Sylvester resultant of the linear equations of ${\mathbf R}[It]$ would be a quadratic polynomial, that is $I$ would have reduction number $1$, which would contradict the assumption. Thus by Corollary~\ref{F2}, $f_2 = 4-\lambda({\mathbf R}/I_1(\phi)) = 1$. Since $f_2\geq f_3>0$ we have $f_3=1$ and the equation (\ref{Sally22}) is satisfied. \hfill$\Box$ \bigskip We have examined higher degrees examples of birational ideals of this type which are/are not almost Cohen--Macaulay. Quite a lot is known about the following ideals. ${\mathbf R} = k[x,y]$ and $I$ is a codimension $2$ ideal given by that $2\times 2$ minors of a $3\times 2$ matrix with homogeneous entries of degrees $1$ and $n-1$. \begin{Theorem} \label{2birideal} If $I_1(\phi) = (x, y)$ then: \begin{enumerate} \item[{\rm (i)}] $\deg \mathcal{F}(I) = n$, that is $I$ is birational. \medskip \item[{\rm (ii)}] ${\mathbf R}[It]$ is almost Cohen--Macaulay. \medskip \item[{\rm (iii)}] The equations of ${\mathbf L}$ are given by a straightforward algorithm. \end{enumerate} \end{Theorem} \noindent{\bf Proof. 
} The proof of (i) is in \cite{CHW}, and in other sources (\cite{CdA}, \cite{KPU}; see also \cite[Theorem 2.2]{DHS} for a broader statement in any characteristic and \cite[Theorem 4.1]{Simis04} in characteristic zero), and of (ii) in \cite[Theorem 4.4]{KPU}, while (iii) was conjecturally given in \cite[Conjecture 4.8]{syl1} and proved in \cite{CHW}. We give a combinatorial proof of (ii) below (Proposition~\ref{aCMofbin}). \hfill$\Box$ \medskip We note that $\deg({\mathbf S}) = 2n$, since ${\mathbf S}$ is a complete intersection defined by two forms of [total] degrees $2$ and $n$, while ${\mathbf R}[Jt]$ is defined by one equation of degree $n+1$. Thus $\nu(T) \leq 2n-(n+1)= n-1$, which is the number of generators given in the algorithm. \bigskip We point out a property of the module $T$. We recall that an ${\mathbf A}$--module is an {\em Ulrich} module if it is a maximal Cohen--Macaulay module with $\deg M=\nu(M)$ (\cite{HK}). \begin{Corollary} $T$ is an Ulrich ${\mathbf S}$--module. \end{Corollary} Considerable numerical information in the Theorem~\ref{2birideal} will follow from: \begin{Proposition}\label{aCMofbin} If $\deg \alpha =1$ and $\deg \beta = n-1$, then ${\lambda}(F_j) = n -j$. In particular, ${\mathbf R}[It]$ is almost Cohen--Macaulay. \end{Proposition} \noindent{\bf Proof. } Note that the ideal is birational, $F_{n-1}\neq 0$. On the other hand, ${\mathbf L}$ contains fresh generators in all degrees $j\leq n$. This means that for $f_j = \lambda(F_j)$, \[ f_j> f_{j+1}>0, \quad j<n.\] Since $f_1 = n-1$, the decreasing sequence of integers \[ n-1 = f_1 > f_2 > \cdots > f_{n-2} > f_{n-1}> 0\] implies that $f_j = n-j$. Finally, applying Theorem~\ref{Huckaba} we have that ${\mathbf R}[It]$ is an aCM algebra since $\sum_{j}f_j = e_1(I)= {n\choose 2}$. \hfill$\Box$ \subsection*{Quadrics} Here we explore sporadic classes of aCM algebras defined by quadrics in $k[x_1, x_2, x_3, x_4]$. \medskip First we use Proposition~\ref{F2} to look at other cases of quadrics. For $d=3$, $n=2$, $\mbox{\rm edeg}(I)=2$ or $4$. In the first case $J:a= I_1(\phi)$. In addition $J:a\neq {\mathfrak m}$ since the socle degree of ${\mathbf R}/J$ is $3$. Then ${\mathbf R}[It]$ is Cohen--Macaulay. If $\mbox{\rm edeg}(I)=4$ we must have [and conversely!] ${\lambda}({\mathbf R}/J:a)=2$ and $I_1(\phi)={\mathfrak m}$. Then ${\mathbf R}[It]$ is almost Cohen--Macaulay. \medskip Next we treat almost complete intersections of finite colength generated by quadrics of ${\mathbf R}= k[x_1, x_2, x_3, x_4]$. Sometimes we denote the variables by $x,y,\ldots$, or use these symbols to denote [independent] linear forms. For notation we use $J= (a_1, a_2, a_3, a_4)$, and $I = (J, a)$. \bigskip Our goal is to address the following: \begin{Question}{\rm Let $I$ be an almost complete intersection generated by $5$ quadrics of $x_1, x_2, x_3, x_4$. If $I$ is a birational ideal, in which cases is ${\mathbf R}[It]$ is an almost Cohen--Macaulay algebra? In this case, what are the generators of the its module of nonlinear relations? }\end{Question} In order to make use of Theorem~\ref{Huckaba}, our main tools are Corollary~\ref{F2} and \cite[Theorem 2.2]{syl2}. They make extensive use of the syzygies of $I$. The question forks into three cases, but our analysis is complete in only one of them. \subsubsection*{The Hilbert functions of quaternary quadrics} We make a quick classification of the Hilbert functions of the ideals $I=(J,a)$. 
Since $I/J \simeq {\mathbf R}/J:a$ and $J$ is a complete intersection, the problem is equivalent to determine the Hilbert functions of ${\mathbf R}/J:a$, with $J:a$ a Gorenstein ideal. The Hilbert function $H({\mathbf R}/I)$ of ${\mathbf R}/I$ is $H({\mathbf R}/J)-H({\mathbf R}/J:a)$. We will need the Hilbert function of the corresponding canonical module in order to make use of \cite[Proposition 3.7]{syl2} giving information about $L_2/{\mathbf B}_1L_1$. \medskip We shall refers to the sequence $(f_1, f_2, f_3, \ldots, )$, $f_i = \lambda(I^i/JI^{i-1})$, as the ${\mathbf f}$--sequence of $(I,J)$. We recall that these sequences are monotonic and that if $I$ is birational, $\sum_{i\geq 1} f_i=e_1(I)=12$. \begin{Proposition} Let ${\mathbf R}= k[x_1, x_2, x_3, x_4]$ and $I=(J,a)$ an almost complete intersection generated by $5$ quadrics, where $J$ is a complete intersection,. Then $L=\lambda({\mathbf R}/J:a)\leq 6$ and the possible Hilbert functions of ${\mathbf R}/J:a)$ are: \begin{eqnarray*} L = 6 & : & (1,4,1), \quad (1,2,2,1)^{*}, \quad (1,1,1,1,1,1)^{*} \\ L = 5 & : & (1,3,1), \quad (1,1,1,1,1)^{*} \\ L = 4 & : & (1,2,1), \ \quad (1,1,1,1)^{*} \\ L = 3 & : & (1,1,1)^{*} \\ L = 2 & : & (1,1)^{**} \\ L = 1 & : & (1)^{**} \end{eqnarray*} If $I$ is a birational ideal, the corresponding Hilbert function is one of the unmarked sequences above. \end{Proposition} \noindent{\bf Proof. } Since $\lambda({\mathfrak m}^2/I)\geq 5$ and $\lambda({\mathfrak m}^2/J)=11$, $L=\lambda({\mathbf R}/J:a)\leq 6$. Because the Hilbert function of $R/(J:a)$ is symmetric and $L \leq 6$, the list includes all the viable Hilbert functions. \medskip Let us first rule out those marked with ${\mbox{\rm a}}^{*}$, while those marked with ${\mbox{\rm a}}^{**}$ cannot be birational. In each of these $J:a$ contains at least $2$ linearly independent linear forms, which we denote by $x,y$, so that $J:a/(x,y)$ is a Gorenstein ideal of the regular ring ${\mathbf R}/(x,y)$. It follow that $(J:a)/(x,y)$ is a complete intersection. In the case of $(1,2,2,1)$, $J:a = (x,y, \alpha, \beta)$, where $\alpha$ is a form of degree $2$ and $\beta$ a form of degree $3$, since $\lambda({\mathbf R}/J:a)=6$. Since $J\subset J:a$, all the generators of $J$ must be contained in $(x,y,\alpha)$, which is impossible by Krull theorem. Those strings with at least $3$ $1$'s are also excluded since $J:a$ would have the form $(x,y,z,w^s)$, $s\geq 3$, and the argument above applies. The case $(1,1)$, $J:a = (x,y,z,w^2)$. This means that $I_1(\phi)=J:a$, or $J:a = {\mathfrak m}$. In the first case, by Corollary~\ref{F2Cor}, $I^2 = JI$. In the second case, $I_1(\phi) = {\mathfrak m}$. This will imply that $ \lambda(I^2/JI) = 2 - 1 = 1$, and therefore $I$ will not be birational (need the summation to add to $12$). \hfill$\Box$ \subsubsection*{Hilbert function $(1,4,1)$} If $R/J:a$ has Hilbert function $(1,4,1)$, $J:a \subset {\mathfrak m}^2$ but we cannot have equality since ${\mathfrak m}^2$ is not a Gorenstein ideal. We also have $I_1(\phi) \subset {\mathfrak m}^2$. If they are not equal, $I_1(\phi) = J:a$, which by Corollary~\ref{F2Cor} would mean that $\mbox{\rm red}_J(I) = 1$. \begin{Theorem} \label{141} Suppose $I$ that is birational and $I_1(\phi)\subset {\mathfrak m}^2$. Then ${\mathbf R}[It]$ is almost Cohen--Macaulay. \end{Theorem} \noindent{\bf Proof. } The assumption $I_1(\phi)\subset {\mathfrak m}^2$ means that the Hilbert function of $J:a$ is $(1,4,1)$ and vice-versa. Note also that by assumption ${\lambda}(I^7/JI^6)\neq 0$. 
Since $\lambda(I/J) = \lambda({\mathbf R}/J:a) = 6$, it suffices to show that $\lambda(I^2/JI)=1$. From $\lambda({\mathfrak m}^2/I) = 5$, the module ${\mathfrak m}^2/I$ is of length $5$ minimally generated by $5$ elements. Therefore ${\mathfrak m}^3\subset I$, actually ${\mathfrak m}^3={\mathfrak m} I$. \medskip There is an isomorphism ${\mathbf h}: {\mathbf R}/J:a \simeq I/J$, $r\mapsto ra$. It moves the socle of ${\mathbf R}/J:a$ into the socle of $I/J$. If $a\notin J:a$, then ${\mathfrak m}^2 = (J:a, a)$ and $a$ gives the socle of ${\mathbf R}/J:a$, thus it is mapped by ${\mathbf h}$ into the socle of $I/J$, that is ${\mathfrak m}\cdot a^2\in J$. Thus, $ {\mathfrak m}\cdot a^2\in {\mathfrak m}^3 J \subset JI$. On the other hand, if $a\in J:a$, then, since $a^2\in J$, we have $a^2\in {\mathfrak m}^2 J$ and ${\mathfrak m}\cdot a^2\in {\mathfrak m}^3 J\subset JI$. \medskip An example is $J= (x^2, y^2, z^2, w^2)$, $a = xy + xz + xw + yz$. \hfill$\Box$ \subsubsection*{Hilbert function $(1,3,1)$} Our discussion about this case is very spare. \begin{itemize} \item For these Hilbert functions, $ J:a = (x, P)$, where $P$ is a Gorenstein ideal in a regular local ring of dimension $3$--and therefore is given by the Pfaffians of a skew-symmetric matrix, necessarily $5\times 5$. Since $J \subset J:a$, $L$ must contain forms of degree $2$. In addition, $P$ is given by $5$ $2$--forms (and $(x,{\mathfrak m}^2)/(x,P)$ is the socle of ${\mathbf R}/J:a$). \item If $I$ is birational, ${\mathbf R}[It]$ is almost Cohen--Macaulay if and only if $\lambda(I^2/JI) = 2$ and $\lambda(I^3/JI^2)=1$. The first equality, by Proposition~\ref{F2}, requires $\lambda({\mathbf R}/I_1(\phi)) = 3$ which gives that $I_1(\phi)$ contains the socle of $J:a$ and another independent linear form. In all it means that $I_1(\phi) = (x,y, (z,w)^2)$. On the other hand $\lambda(I^2/JI)=2$ means that $JI:I^2 = (x,y,z,w^2)$ (after more label changes). \item An example is $J=(x^2, y^2, z^2, w^2)$ with $a= xy + yz + zw + wx + yw$. The ideal $I= (J,a)$ is birational. \end{itemize} \subsubsection*{Hilbert function $(1,2,1)$} We do not have the full analysis of this case either. \begin{itemize} \item An example is $J=(x^2, y^2, z^2, w^2)$ and $a = xy+yz+xw+zw$. The ideal $I=(J,a)$ is birational. The expected ${\mathbf f}$--sequence of such ideals is $(4,3,1,1,1,1,1)$. \item If $I$ is birational then $I_1(\phi)={\mathfrak m}$. We know that $I_1(\phi)\neq J:a$, so $I_1(\phi) \supset {\mathfrak m}^2$, that is $I_1(\phi)= (x,y,{\mathfrak m}^2)$, $(x,y,z,{\mathfrak m}^2)$, or ${\mathfrak m}$. Let us exclude the first two cases. $(x,y,{\mathfrak m}^2)$: This leads to two equations \begin{eqnarray*} xa &=& xb + yc\\ ya &=& xd + ye, \end{eqnarray*} with $b,c,d,e\in J$. But this gives the equation $(a-b)(a-e) -dc=0$, and $\mbox{\rm red}_J(I)\leq 1$. \medskip $(x,y,z,{\mathfrak m}^2)$: Then the Hilbert function of ${\mathbf R}/I_1(\phi)$ is $(1,1)$. According to \cite[Proposition 3.7]{syl2}, $L_2$ has a form of bidegree $(1,2)$, with coefficients in $I_1(\phi)$, that is, in $(x,y,z)$. This gives $3$ forms with coefficients in this ideal, two in degree $1$, so by elimination we get a monic equation of degree $4$. \end{itemize} We summarize the main points of these observations into a normal form assertion. \begin{Proposition}\label{121} Let $I$ be a birational ideal and the Hilbert function of ${\mathbf R}/J:a$ is $(1,2,1)$. 
Then up to a change of variables to $\{ x,y,z,w\}$, $I$ is a Northcott ideal, that is there is a $4\times 4$ matrix ${\mathbf A}$, \[ {\mathbf A} = \left[ \begin{array}{ccc} & {\mathbf B} & \\ \hline & \mathbf{C} & \\ \end{array} \right] . \] where ${\mathbf B}$ is a $2\times 4$ matrix whose entries are linear forms and $\mathbf{C}$ is a matrix with scalar entries and $\mathbf{V} = [x,y, \alpha, \beta]$, where $\alpha, \beta$ are quadratic forms in $z,w$ such that \[ I = (\mathbf{V}\cdot {\mathbf A}, \det {\mathbf A}).\] \end{Proposition} \noindent{\bf Proof. } There are two independent linear forms in $J:a$ which we denote by $x,y$. We observe that $(J:a)/(x,y)$ is a Gorenstein ideal in a polynomial ring of dimension two, so it is a complete intersection: $J:a = (x,y, \alpha,\beta)$, with $\alpha$ and $\beta$ forms of degree $2$ (as $\lambda({\mathbf R}/J:a)=4$), from which we remove the terms in $x,y$, that is we may assume $\alpha, \beta\in (z,w)^2$. \medskip Since $J \subset J:a$, we have a matrix ${\mathbf A}$, \[ J = [x,y,\alpha, \beta] \cdot {\mathbf A} =\mathbf{V} \cdot {\mathbf A}.\] By duality $I=J:(J:a)$, which by Northcott theorem (\cite{Northcott}) gives \[ I= (J, \det {\mathbf A}).\] Note that $a$ gets, possibly, replaced by $\det {\mathbf A}$. The statement about the degrees of the entries of ${\mathbf A}$ is clear. \hfill$\Box$ \begin{Example}{\rm Let \[ {\mathbf A} = \left[ \begin{array}{rrrr} x+y & z + w & x-w & z \\ z & y+w & x - z & y \\ 1 & 0 & 2 & 3 \\ 0 & 1 & 1 & 2 \\ \end{array} \right], \quad {\mathbf v} = \left[ x, y, z^2 + zw + w^2, z^2-w^2\right] .\] This ideal is birational but ${\mathbf R}[It]$ is not aCM. This is unfortunate but opens the question of when such ideals are birational. The ${\mathbf f}$--sequence here is $(4,3,3,1,1,1,1)$. }\end{Example} \subsubsection*{The degrees of ${\mathbf L}$} We examine how the Hilbert function of ${\mathbf R}/J:a$ organizes the generators of ${\mathbf L}$. We denote the presentations variables by $u, {\mathbf T}_1, {\mathbf T}_2, {\mathbf T}_3, {\mathbf T}_4$, with $u$ corresponding to $a$. \begin{itemize} \item $(1,4,1)$: We know (Theorem~\ref{141}) that $JI:I^2= {\mathfrak m}$. This means that we have forms \begin{eqnarray*} {\mathbf h}_1 & = & xu^2 + \cdots \\ {\mathbf h}_2 & = & yu^2 + \cdots \\ {\mathbf h}_3 & = & zu^2 + \cdots \\ {\mathbf h}_4 & = & w u^2 + \cdots \end{eqnarray*} with the $(\cdots)$ in $({\mathbf T}_1, {\mathbf T}_2, {\mathbf T}_3, {\mathbf T}_4){\mathbf B}_1$. The corresponding resultant, of degree $8$, is nonzero. \item $(1,2,1)$: There are two forms of degree $1$ in ${\mathbf L}$, \begin{eqnarray*} {\mathbf f}_1 & = & x u + \cdots\\ {\mathbf f}_2 & = & y u + \cdots \end{eqnarray*} The forms in $L_2/{\mathbf B}_1L_1$ have coefficients in ${\mathfrak m}^2$. This will follow from $I_1(\phi)= {\mathfrak m}$. We need a way to generate two forms of degree $3$. Since we expect $JI:I^2 = {\mathfrak m}$, this would mean the presence of two forms in $L_3$, \begin{eqnarray*} {\mathbf h}_1^{*} & = & z u^3 + \cdots\\ {\mathbf h}_2^{*} & = & w u^3 + \cdots , \end{eqnarray*} which together with ${\mathbf f}_1$ and ${\mathbf f}_2$ would give the nonzero degree $8$ resultant.% \item $(1,3,1)$: There is a form ${\mathbf f}_1 = xu + \cdots \in L_1$ and two forms in $L_2$ \begin{eqnarray*} {\mathbf h}_1 & = & y u^2 + \cdots \\ {\mathbf h}_2 & = & zu^2+ \cdots \end{eqnarray*} predicted by \cite[Prop 3.7]{syl2} if $I_1(\phi) = (x,y,z,w^2)$. (There are indications that this is always the case.) 
We need a cubic equation ${\mathbf h}_3^{*} = wu^3+ \cdots$ to launch the nonzero resultant of degree $8$. \end{itemize} For all quaternary quadrics with ${\mathbf R}[It]$ almost Cohen--Macaulay, Corollary~\ref{degT} says that $\nu(T) \leq \lambda({\mathbf R}/J:a)$. Let us compare to the actual number of generators in the examples discussed above: \[ \left[ \begin{array}{ccc} \nu(T) & & \lambda(I/J)\\ 5 & (1,4,1) & 6\\ 4 & (1,3,1) & 5\\ 4 & (1,2,1)& 4 \end{array} \right] \] We note that in the last case, $T$ is an Ulrich module. \subsection{Monomial ideals} Monomial ideals of finite colength which are almost complete intersections have a very simple description. We examine a narrow class of them. Let ${\mathbf R}=k[x,y,z]$ be a polynomial ring over and infinite field and let $J$ and $I$ be ${\mathbf R}$--ideals such that \[{\displaystyle J=(x^{a},\; y^{b},\; z^{c}) \subset (J,\; x^{\alpha} y^{\beta} z^{\gamma})=I. }\] This is the general form of almost complete intersections of ${\mathbf R}$ generated by monomials. Perhaps the most interesting cases are those where ${\displaystyle \frac{\alpha}{a} + \frac{\beta}{b} + \frac{\gamma}{c} <1}$. This inequality ensures that $J$ is not a reduction of $I$. Let \[{\displaystyle Q =(x^{a}-z^{c},\; y^{b}-z^{c},\;x^{\alpha} y^{\beta} z^{\gamma} ) }\] and suppose that ${\displaystyle a > 3 \alpha,\; b > 3 \beta,\; c > 3 \gamma }$. Note that $I=(Q,\; z^c)$. Then $Q$ is a minimal reduction of $I$ and the reduction number $\mbox{\rm red}_{Q}(I) \leq 2$. We label these ideals $I(a,b,c,\alpha, \beta, \gamma)$. \medskip We will examine in detail the case $a=b=c=n\geq 3$ and $\alpha=\beta = \gamma =1$. We want to argue that ${\mathbf R}[It]$ is almost Cohen--Macaulay. To benefit from the monomial generators in using {\em Macaulay2} we set $I = (xyz, x^n, y^n,z^n)$. Setting ${\mathbf B}={\mathbf R}[u, {\mathbf T}_1, {\mathbf T}_2, {\mathbf T}_3]$, we claim that \[ {\mathbf L}= (z^{n-1}u - xy{\mathbf T}_3, y^{n-1}u - xz{\mathbf T}_2, x^{n-1}u - yz{\mathbf T}_1, z^n{\mathbf T}_2-y^n{\mathbf T}_3, z^n{\mathbf T}_1-x^n{\mathbf T}_3, y^n{\mathbf T}_1-x^n{\mathbf T}_2, \] \[ y^{n-2}z^{n-2}u^2 - x^2{\mathbf T}_2{\mathbf T}_3, x^{n-2} z^{n-2} u^2 -y^2 {\mathbf T}_1{\mathbf T}_3, x^{n-2}y^{n-2} u^2 - z^2{\mathbf T}_1{\mathbf T}_2, x^{n-3}y^{n-3}z^{n-3}u^3 - {\mathbf T}_1{\mathbf T}_2{\mathbf T}_3). \] We also want to show that these ideals define an almost Cohen--Macaulay Rees algebra. \bigskip There is a natural specialization argument. Let $X$, $Y$ and $Z$ be new indeterminates and let ${\mathbf B}_0 = {\mathbf B}[X,Y,Z]$. In this ring define the ideal ${\mathbf L}_0$ obtained by replacing in the list above of generators of ${\mathbf L}$, $x^{n-3}$ by $X$ and accordingly $x^{n-2}$ by $xX$, and so on; carry out similar actions on the other variables: \[ {\mathbf L}_0= (z^2 Zu - xy{\mathbf T}_3, y^{2}Yu - xz{\mathbf T}_2, x^{2}Xu - yz{\mathbf T}_1, z^3Z{\mathbf T}_2-y^3Y{\mathbf T}_3, z^3Z{\mathbf T}_1-x^3X{\mathbf T}_3, y^3 Y{\mathbf T}_1-x^3X{\mathbf T}_2, \] \[ yzYZu^2 - x^2{\mathbf T}_2{\mathbf T}_3, x z XZ u^2 -y^2 {\mathbf T}_1{\mathbf T}_3, xyXY u^2 - z^2{\mathbf T}_1{\mathbf T}_2, XYZu^3 - {\mathbf T}_1{\mathbf T}_2{\mathbf T}_3). 
\] Invoking {\em Macaulay2} gives a (non-minimal) projective resolution \[ 0 \rightarrow {\mathbf B}_0^4 \stackrel{\phi_4}{\longrightarrow} {\mathbf B}_0^{17} \stackrel{\phi_3}{\longrightarrow} {\mathbf B}_0^{22} \stackrel{\phi_2}{\longrightarrow} {\mathbf B}_0^{10} \stackrel{\phi_1}{\longrightarrow} {\mathbf B}_0 \longrightarrow {\mathbf B}_0/{\mathbf L}_0 \rightarrow 0. \] We claim that the specialization $X \rightarrow x^{n-3}$, $Y \rightarrow y^{n-3}$, $Z \rightarrow z^{n-3}$ gives a projective resolution of ${\mathbf L}$. \begin{itemize} \item Call ${\mathbf L}'$ the result of the specialization in ${\mathbf B}$. We argue that ${\mathbf L}' = {\mathbf L}$. \medskip \item Inspection of the Fitting ideal $F$ of $\phi_4$ shows that it contains $(x^3, y^3,z^3, u^3, {\mathbf T}_1{\mathbf T}_2{\mathbf T}_3)$. From standard theory, the radicals of the Fitting ideals of $\phi_2$ and $\phi_2$ contain ${\mathbf L}_0$, and therefore the radicals of the Fitting ideals of these mappings after specialization will contain the ideal $(L_1)$ of ${\mathbf B}$, as $L_1 \subset {\mathbf L}'$. \medskip \item Because $(L_1)$ has codimension $3$, by the acyclicity theorem (\cite[1.4.13]{BH}) the complex gives a projective resolution of ${\mathbf L}'$. Furthermore, as $\mbox{\rm proj. dim }{\mathbf B}/{\mathbf L}' \leq 4$, ${\mathbf L}'$ has no associated primes of codimension $\geq 5$. Meanwhile the Fitting ideal of $\phi_4$ having codimension $\geq 5$, forbids the existence of associated primes of codimension $4$. Thus ${\mathbf L}'$ is unmixed. \medskip \item Finally, in $(L_1) \subset {\mathbf L}' $, as ${\mathbf L}'$ is unmixed its associated primes are minimal primes of $(L_1)$, but by Proposition~\ref{canoseq}(iii), there are just two such, ${\mathfrak m}{\mathbf B}$ and ${\mathbf L}$. Since ${\mathbf L}' \not \subset {\mathfrak m}{\mathbf B}$, ${\mathbf L}$ is its unique associated prime. Localizing at ${\mathbf L}$ gives the equality of ${\mathbf L}' $ and ${\mathbf L}$ since ${\mathbf L}$ is a primary component of $(L_1)$. \end{itemize} Let us sum up this discussion: \begin{Proposition} \label{nnn111} The Rees algebra of $I(n, n, n, 1, 1, 1)$, $n\geq 3$, is almost Cohen--Macaulay. \end{Proposition} \begin{Corollary} $e_1(I(n,n,n,1,1,1)) = 3(n+1)$. \end{Corollary} \noindent{\bf Proof. } Follows easily since $e_0(I) = 3n^2$, the colengths of the monomial ideals $I$ and $I_1(\phi)$ directly calculated and $\mbox{\rm red}_J(I) = 2$ so that \[ e_1(I) = \lambda(I/J) + \lambda(I^2/JI) = \lambda(I/J) + [\lambda(I/J) - \lambda({\mathbf R}/I_1(\phi))]= (3n-1) + 4.\] \hfill$\Box$ \begin{Remark}{\rm We have also experimented with other cases beyond those with $xyz$ and in higher dimension as well. \begin{itemize} \item In $\dim {\mathbf R}=4$, the ideal $I= I(n,n,n,n,1,1,1,1)= (x_1^n, x_2^n, x_3^n, x_4^n, x_1x_2x_3x_4)$, $n\geq 4$, has a Rees algebra ${\mathbf R}[It]$ which is almost Cohen--Macaulay. \item The argument used was a copy of the previous case, but we needed to make an adjustment in the last step to estimate the codimension of the last Fitting ideal $F$ of the corresponding mapping $\phi_5$. This is a large matrix, so it would not be possible to find the codimension of $F$ by looking at all its maximal minors. Instead, one argues as follows. 
Because $I$ is ${\mathfrak m}$--primary, ${\mathfrak m}\subset \sqrt{F}$, so we can drop the entries of $\phi_5$ that lie in ${\mathfrak m}$. Inspection then gives $u^{16} \in F$, so dropping all $u$'s gives additional minors in ${\mathbf T}_1, \ldots, {\mathbf T}_4$, so that $\mbox{\rm height } (F)\geq 6$. This suffices to show that ${\mathbf L} = {\mathbf L}'$. \end{itemize} }\end{Remark} \begin{Conjecture} \label{monoaciRacm} {\rm Let $I$ be a monomial ideal of $k[x_1, \ldots, x_n]$. If $I$ is an almost complete intersection of finite colength, its Rees algebra ${\mathbf R}[It]$ is almost Cohen--Macaulay. }\end{Conjecture}
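\medskip As a closing remark, the generators of ${\mathbf L}$ displayed above for $I=(xyz, x^n, y^n, z^n)$ admit a quick independent sanity check. Each candidate is homogeneous in the presentation variables $(u,{\mathbf T}_1, {\mathbf T}_2, {\mathbf T}_3)$, so it lies in the kernel of the Rees map if and only if it vanishes under the substitution $u\mapsto xyz$, ${\mathbf T}_1\mapsto x^n$, ${\mathbf T}_2\mapsto y^n$, ${\mathbf T}_3\mapsto z^n$. The following sketch (ours, written for {\em sympy} rather than {\em Macaulay2}, and run for a sample value of $n$) verifies this necessary condition; it does not, of course, show that these elements generate ${\mathbf L}$.
\begin{verbatim}
# Consistency check (not part of the argument above): every listed candidate
# generator of L must vanish under u -> xyz, T1 -> x^n, T2 -> y^n, T3 -> z^n.
import sympy as sp

x, y, z, u, T1, T2, T3 = sp.symbols('x y z u T1 T2 T3')
n = 4  # any n >= 3 works the same way

gens = [
    z**(n-1)*u - x*y*T3,  y**(n-1)*u - x*z*T2,  x**(n-1)*u - y*z*T1,
    z**n*T2 - y**n*T3,    z**n*T1 - x**n*T3,    y**n*T1 - x**n*T2,
    y**(n-2)*z**(n-2)*u**2 - x**2*T2*T3,
    x**(n-2)*z**(n-2)*u**2 - y**2*T1*T3,
    x**(n-2)*y**(n-2)*u**2 - z**2*T1*T2,
    x**(n-3)*y**(n-3)*z**(n-3)*u**3 - T1*T2*T3,
]

rees_map = {u: x*y*z, T1: x**n, T2: y**n, T3: z**n}
assert all(sp.expand(g.subs(rees_map)) == 0 for g in gens)
print("all candidates lie in the kernel of the Rees map")
\end{verbatim}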
\section{Introduction} \label{intro} It has been stated in most papers on this subject that the correspondence between a field theory on anti-de Sitter space (AdS) and a conformal field theory (CFT) on its horizon is formally described by the formula \cite{Gubser,Witten} \begin{equation} \label{form} \int_{\phi_0} \mathcal{D}\phi\, \e^{-I_{AdS}[\phi]} = \left\langle \exp \int d^dx\; \phi_0(x) \mathcal{O}(x) \right\rangle, \end{equation} where the functional integral on the left hand side is over all fields $\phi$, which asymptotically approach $\phi_0$ on the AdS horizon. On the right hand side, $\phi_0$ couples as a current to some boundary conformal field $\mathcal{O}$. In the classical approximation the left hand side is identical to $\exp(-I[\phi_0])$, where $I[\phi_0]$ is the on-shell action evaluated as a functional of the boundary value. Thus, the formula \eqref{form} enables one to calculate correlation functions of the field $\mathcal{O}$ in the boundary conformal field theory. This rather formal identification of partition functions needs refinement due to the fact that $I[\phi_0]$ is divergent as a result of the divergence of the AdS metric on the horizon. Let us choose the conventional representation of anti de-Sitter space, namely the upper half space $x^0>0$, $\bvec{x}\in\mathbb{R}^d$ with the metric \begin{equation} \label{metric} ds^2 = \frac1{(x^0)^2} \left[(dx^0)^2 +(d\bvec{x})^2 \right]. \end{equation} The horizon is given by $x^0=0$, but in order to regularize the action one considers the space restricted to $x^0>\epsilon$. The regularized on-shell action will be a function of $\epsilon$. Moreover, the terms which diverge in the limit $\epsilon\to0$ can be isolated and cancelled with counterterms. The remaining finite result is identified with the right hand side of eqn.\ \eqref{form}. There is a subtlety concerning the proper choice of boundary values, but consistency forces us to use the boundary values at $x_0=\epsilon$ (We call this the Dirichlet prescription). This subtlety and its resolution is illustrated for the example of the massive scalar field in section~\ref{prod}. The Dirichlet prescription of the AdS/CFT correspondence has been used to successfully calculate the two-point functions of scalar fields \cite{Gubser,Mueck1,Freedman1}, spinors \cite{Mueck2}, vector fields \cite{Mueck2}, Rarita Schwinger fields \cite{Corely} and gravitons \cite{Mueck3}. It must be noted that the subtlety mentioned above affects neither the finite terms in the two-point functions for massless scalar and vector fields, gravitons, spinors and Rarita Schwinger fields \cite{Witten,Henningson1,Liu,AVolovich,Koshelev}, nor higher point correlators (cf.\ \cite{Freedman2} and references therein). More recently, attention has been brought to the divergent contributions, which have to be cancelled by counterterms \cite{Henningson2,Henningson3,Nojiri1,Hyun,Balasu,Nojiri2,Gonzalez,Emparan,Nishi}. Of particular importance are terms, which are logarithmically divergent, since those counterterms are not invariant under conformal or Weyl scaling transformations. Hence, the presence of a logarithmic divergence leads to a conformal or Weyl anomaly in the finite part of the action. The Weyl anomaly has recently been calculated for the cases $d=2,4,6$ \cite{Henningson2,Henningson3}. However, the authors of these papers used a regularization, which does not consistently address the subtlety mentioned above. 
Therefore, we present in section~\ref{anomaly} the calculation of the divergent terms for free gravity using the Dirichlet prescription. Our results for the terms relevant to the Weyl anomaly in $d=2,4,6$ agree with those of \cite{Henningson2,Henningson3}, but we regard this as a coincidence particular to gravitons. Finally, we urge the reader to consult the appendix for our notation and for a review of the time slicing formalism, which is used in section~\ref{anomaly}. \section{The Regularization Procedure} \label{prod} We illustrate the regularization procedure with the example of the free massive scalar field, whose action is given by \begin{equation} \label{action} I = \frac12 \int d^{d+1}x \sqrt{g} \left( D_\mu \phi D^\mu \phi + m^2 \phi^2 \right), \end{equation} and whose equation of motion with the metric \eqref{metric} is \begin{equation} \label{eqnmot1} \left[ x_0^2 \partial_\mu \partial_\mu - x_0(d-1)\partial_0 -m^2 \right] \phi=0. \end{equation} The solution of eqn.\ \eqref{eqnmot1}, which does not diverge for $x_0\to\infty$ is given in terms of the mode \[ x_0^\frac{d}2 \eikx K_\alpha(kx_0), \qquad \text{where} \quad \alpha= \sqrt{\frac{d^2}4+m^2} \] and $K_\alpha$ is a modified Bessel function. Let us isolate the leading behaviour for small $x_0$ by defining \begin{equation} \label{phihatdef} \phi(x) = x_0^{\frac{d}2-\alpha} \hat\phi(x). \end{equation} Then, $\hat\phi$ has a finite limit as $x_0$ goes to zero. However, one must take care to express the regularized on-shell action in terms of the boundary value at $x_0=\epsilon$. This is easiest done by using \begin{equation} \label{phihat} \hat\phi(x) = \left(\frac{x_0}\epsilon\right)^\alpha \kint \eikx \frac{K_\alpha(kx_0)}{K_\alpha(k\epsilon)} \phi_\epsilon(\bvec{k}), \end{equation} which satisfies $\hat\phi(\bvec{x},\epsilon)=\phi_\epsilon(\bvec{x})$. Consider the regularized on-shell action, which is \cite{Mueck1} \begin{equation} \label{onshell} I(\epsilon) = -\frac12 \int d^dx\, \epsilon^{-2\alpha} \left[ \left( \frac{d}2 -\alpha \right) \phi_\epsilon^2 + \epsilon \phi_\epsilon \left. \partial_0 \hat\phi\right|_\epsilon \right] \end{equation} The first term on the right hand side is divergent and must be cancelled with a counterterm. The second term might contain other divergent terms, but also gives rise to the finite term \cite{Mueck1} \begin{equation} \label{Ifin} I_{fin} = -\alpha c_\alpha \int d^dx d^dy\, \frac{\phi_\epsilon(\bvec{x}) \phi_\epsilon(\bvec{y})}{|\bvec{x-y}|^{d+2\alpha}}, \end{equation} where $c_\alpha = \Gamma(d/2+\alpha)/[\pi^\frac{d}2 \Gamma(\alpha)].$ On the other hand, there appears to be a slightly different, and in our view not entirely consistent, prescription. Essentially, it expresses $\hat\phi$ in terms of the boundary value $\phi_0$ at $x_0=0$, which can be done by writing \begin{equation} \label{phihat2} \hat\phi(x) = \frac{2^{1-\alpha}}{\Gamma(\alpha)} \kint \eikx (kx_0)^\alpha K_\alpha(kx_0) \phi_0(\bvec{k}). \end{equation} For small $x_0$ this can be expanded as \begin{equation} \label{phihat3} \hat\phi(x) = \phi_0(\bvec{x}) +x_0^{2\alpha} c_\alpha \int d^dy\, \frac{\phi_0(\bvec{y})}{|\bvec{x-y}|^{d+2\alpha}} + \mathcal{O}\left(x_0^{2n},x_0^{2(\alpha+n)}\right). 
\end{equation} Substituting eqn.\ \eqref{phihat3} into eqn.\ \eqref{onshell} one obtains \begin{equation} \label{Iwrong} I = -\frac12 \int d^dx\, \epsilon^{-2\alpha} \left(\frac{d}2-\alpha\right) \phi_0^2 - \frac{d}2 c_\alpha \int d^dx d^dy\,\frac{\phi_0(\bvec{x}) \phi_0(\bvec{y})}{|\bvec{x-y}|^{d+2\alpha}} +\mathcal{O}\left(\epsilon^{2(n-\alpha)},\epsilon^{2n}\right). \end{equation} Obviously, the finite term in eqn.\ \eqref{Iwrong} does not agree with eqn.\ \eqref{Ifin}, except for $d=2\alpha$, i.e.\ for $m=0$. The reason for the discrepancy is that the first term on the right hand side of eqn.\ \eqref{onshell}, which is purely divergent in the Dirichlet prescription, contributes to the finite term, if eqn.\ \eqref{phihat2} is used. Ignoring this spurious contribution (by including it into the counterterm), the finite terms coincide. Thus, one must accept that counterterms are to be expressed in terms of $\phi_\epsilon$, not $\phi_0$, which is the essence of the Dirichlet prescription. \section{Divergent Terms for Gravity} \label{anomaly} \subsection{General Formalism} The gravity action is given by \cite{Liu,Arutyunov} \begin{equation} \label{gen:action} I = - \int_\epsilon d^{d+1}x \sqrt{\tilde g} \left[\tilde R +\frac{d(d-1)}{l^2} \right] + 2 \int d^dx \sqrt{g}\, \left[H+\frac{d-1}l\right], \end{equation} where the cosmological constant has been set equal to $2\Lambda =-d(d-1)/l^2$. The last term in the boundary integral can be considered as the first counterterm. As for our calculation of the finite part of the action \cite{Mueck3} we use the time slicing formalism, which is summarized in the appendix. Let us choose $\rho=X^0$ as time coordinate and use the gauge \begin{equation} \label{gen:gauge} n =\frac{l}{2\rho}, \qquad n^i=0. \end{equation} After isolating the leading behaviour of $g_{ij}$ for small $\rho$ (which can be found from the equation of motion) by defining \begin{equation} \label{gen:ghatdef} g_{ij} = \frac1\rho \hat g_{ij}, \end{equation} the equation of motion \eqref{sli:eqnmot} becomes \begin{equation} \label{gen:eqnmot1} l^2 \hR_{ij} +(d-2) \hat g_{ij}' - 2\rho \hat g_{ij}'' + 2 \rho \hat g^{kl} \hat g_{ik}' \hat g_{lj}' - \hat g^{kl} \hat g_{kl}' \left(\rho \hat g_{ij}' -\hat g_{ij}\right) = 0. \end{equation} Here, $\hR_{ij}=R_{ij}$ is the Ricci tensor of the time slice hypersurface. Raising an index with the metric $\hat g^{ij}$ we realize that it is handy to define the quantity \begin{equation} \label{gen:hdef} h^i_j = \hat g^{ik} \hat g_{kj}'. \end{equation} In fact, eqn.\ \eqref{gen:eqnmot1} becomes \begin{equation} \label{gen:eqnmot} l^2 \hR^i_j +(d-2) h^i_j +h \delta^i_j - \rho \left(2{h^i_j}' + h h^i_j \right) =0. \end{equation} Similarly, rewriting the constraints \eqref{sli:con1} and \eqref{sli:con2} using eqns.\ \eqref{gen:gauge}, \eqref{gen:ghatdef} and \eqref{gen:hdef} one obtains \begin{equation} \label{gen:con1} l^2 \hR + 2 (d-1) h +\rho \left( h^i_j h^j_i -h^2 \right) =0 \end{equation} and \begin{equation} \label{gen:con2} D_i h -D_j h^j_i = 0, \end{equation} respectively. In the AdS/CFT correspondence we have to calculate the on-shell value of the action \eqref{gen:action} as a functional of prescribed boundary values $\hat g_{ij}$, where the boundary is given by $\rho=\epsilon$. First, the on-shell action is easily found to be \begin{equation} \label{gen:onsh} I(\epsilon) = \frac{d}l \int_\epsilon d\rho\, d^dx\, \sqrt{\hat g}\, \rho^{-1-\frac{d}2} +\frac2l \int d^dx \sqrt{\hat g}\, \epsilon^{-\frac{d}2} (\epsilon h -1). 
\end{equation} In order to find the singular terms in the limit $\epsilon\to0$, we differentiate eqn.\ \eqref{gen:onsh} with respect to $\epsilon$, leading to \begin{equation} \label{gen:delI} \frac{\partial I}{\partial \epsilon} = \int d^d x \sqrt{\hat g}\, \epsilon^{-\frac{d}2} \left[ l \hR +\frac{d-1}l h \right]. \end{equation} We have made use of the trace of the equation of motion \eqref{gen:eqnmot} in order to simplify this expression. One can find the singular terms by calculating $h$ from eqns.\ \eqref{gen:eqnmot}, \eqref{gen:con1} and \eqref{gen:con2} as a power series in $\epsilon$, keeping only terms of order smaller than $\epsilon^\frac{d}2$. Thus, for odd $d$, eqn.\ \eqref{gen:delI} contains only singular terms proportional to powers $\epsilon^{-n+\frac12}$. On the other hand, for even $d$, eqn.\ \eqref{gen:delI} contains a term proportional to $1/\epsilon$, which yields a corresponding term proportional to $\ln \epsilon$ in $I$. This logarithmic divergence is the source of the Weyl anomaly in the regularized finite action. \subsection{$d=2$} There is not really much to do for $d=2$. In fact, the divergent term in eqn.\ \eqref{gen:delI} is obtained from the leading order solution for $h$. Using the constraint \eqref{gen:con1} one finds \begin{equation} \label{d2:h} h = -\frac{l^2}2 \hR +\mathcal{O}(\epsilon). \end{equation} Hence, the divergent term in the action is \begin{equation} \label{d2:Idiv} I_{div} = \ln\epsilon\; \frac{l}2 \int d^dx \sqrt{\hat g}\, \hR. \end{equation} \subsection{$d=4$} Starting from the constraint \eqref{gen:con1} one finds \begin{equation} \label{d4:h1} h = -\frac16 \left[l^2 \hR+\epsilon \left(h^i_j h^j_i -h^2 \right)\right]. \end{equation} Here, the leading order behaviour of the term in parentheses is sufficcient. The equation of motion \eqref{gen:eqnmot} gives \[ h^i_j = -\frac{l^2}2 \left(\hR^i_j - \frac16 \delta^i_j \hR \right) +\mathcal{O}(\epsilon), \] which in turn yields \[ h^i_j h^j_i -h^2 = \frac{l^4}4 \left(\hR^i_j \hR^j_i - \frac13 \hR^2 \right) +\mathcal{O}(\epsilon). \] Hence, one finds \begin{equation} \label{d4:Idiv} I_{div} = - \int d^dx \sqrt{\hat g}\left[ \frac{l}{2\epsilon}\hR + \ln\epsilon\; \frac{l^3}8 \left(\hR^i_j \hR^j_i - \frac13 \hR^2 \right)\right]. \end{equation} \subsection{$d=6$} The constraint \eqref{gen:con1} yields \begin{equation} \label{d6:h1} h =-\frac1{10}\left[ l^2 \hR +\epsilon \left(h^i_j h^j_i-h^2 \right)\right]. \end{equation} We have to calculate the term in parentheses up to order $\epsilon$. Starting from the equation of motion \eqref{gen:eqnmot} we obtain \[ h^i_j = -\frac{l^2}4 \left(\hR^i_j -\delta^i_j \frac{\hR}{10} \right) + \frac\epsilon4 \left[2{h^i_j}' +\frac{l^4\hR}{40} \left(\hR^i_j -\delta^i_j\frac{\hR}{10} \right) + \delta^i_j \frac1{10} \left( h^k_l h^l_k -h^2 \right) \right] + \mathcal{O}(\epsilon^2), \] which in turn yields \begin{equation} \label{d6:h2} h^i_j h^j_i -h^2 = \frac{l^4}{16} \left( \hR^i_j \hR^j_i -\frac3{10} \hR^2 \right) - \frac\epsilon8 \left( 2l^2 \hR^j_i {h^i_j}' - \frac{l^2}5 \hR h' + \frac{15 l^6}{400} \hR \hR^i_j \hR^j_i - \frac{29 l^6}{4000} \hR^3 \right) +\mathcal{O}(\epsilon^2). 
\end{equation} The quantities $h'$ and ${h^i_j}'$ can be found by differentiating the equation of motion \eqref{gen:eqnmot} with respect to $\rho$, leading to \begin{align} \label{d6:hprime} h' &= -\frac18 \left(l^2 {\hR}' -h^2\right)+ \mathcal{O}(\epsilon),\\ \label{d6:hijprime} {h^i_j}' &= -\frac12 \left(l^2 {\hat {R^i_j}}' -\frac{l^2}8 {\hR}' \delta^i_j + \frac18 \delta^i_j h^2 - h h^i_j \right) +\mathcal{O}(\epsilon). \end{align} The missing quantity ${\hat {R^i_j}}'$ is given by \begin{equation} \label{d6:Rijprime} {\hat {R^i_j}}' = \frac12 \left( \hR^i_k h^k_j - \hR^k_j h^i_k \right) - \hR^{ik}_{\;\;jl} h^l_k +\frac12 D^i D_j h - \frac12 D^k D_k h^i_j, \end{equation} where we have used the constraint \eqref{gen:con2}. Taking the trace of eqn.\ \eqref{d6:Rijprime} yields \begin{equation} \label{d6:Rprime} \hR' = - \hR^i_j h^j_i. \end{equation} Thus, substituting everything back into eqn.\ \eqref{d6:h2} we find \begin{equation} \label{d6:h3} \begin{split} h^i_j h^j_i -h^2 &= \frac{l^4}{16} \left( \hR^i_j \hR^j_i -\frac3{10}\hR^2 \right) -\frac{\epsilon l^6}{32} \left( \frac1{20} \hR D_k D^k \hR + \frac15 \hR^i_j D^j D_i \hR -\frac12 \hR^i_j D_k D^k \hR^j_i\right.\\ &\quad \left. - \hR^{ik}_{\;\;jl} \hR^j_i \hR^l_k +\frac12 \hR \hR^i_j \hR^j_i - \frac3{50} \hR^3 \right) +\mathcal{O}(\epsilon^2). \end{split} \end{equation} Finally, substituting eqns.\ \eqref{d6:h1} and \eqref{d6:h3} into eqn.\ \eqref{gen:delI} we obtain the result \begin{equation} \label{d6:Idiv} \begin{split} I_{div} &= \int d^dx \sqrt{\hat g} \left[ -\frac{l}{4\epsilon^2} \hR + \frac{l^3}{32 \epsilon} \left( \hR^i_j \hR^j_i -\frac3{10}\hR^2 \right) + \ln\epsilon\; \frac{l^5}{64} \left( \frac1{20} \hR D_k D^k \hR \right. \right. \\ &\quad\left.\left. + \frac15 \hR^i_j D^j D_i \hR -\frac12 \hR^i_j D_k D^k \hR^j_i -\hR^{ik}_{\;\;jl} \hR^j_i \hR^l_k + \frac12 \hR \hR^i_j \hR^j_i - \frac3{50} \hR^3 \right) \right]. \end{split} \end{equation} \section{Conclusions} In this paper, we first explained the regularization procedure for the AdS/CFT correspondence. This was done using the example of a massive scalar field. The regularization procedure involves considering a family of surfaces as space time boundary, which tend towards the AdS horizon for some limit $\epsilon\to0$. When using the cut-off, one must express all counterterms in terms of the boundary values of the AdS fields at the cut-off boundary, not the asymptotic horizon value. Our example demonstrates the importance of this step and thus shows that the ``Dirichlet prescription'' is the only entirely consistent one known so far. Then, we calculated the divergent terms for AdS gravity for $d=2,4,6$ using the Dirichlet prescription. We found agreement with earlier results, whose derivation did not properly address the boundary value subtlety \cite{Henningson2}, or used different techniques \cite{Hyun,Balasu,Emparan}. The fact that the subtlety does not affect the result should be regarded as a coincidence, as in the other cases mentioned in the introduction. In fact, we calculated some divergent terms for the scalar field and found that they generically disagree for the correct and asymptotic boundary values -- even in the massless case, where the finite terms coincide. As in our calculation of the finite term \cite{Mueck3}, which yields the two-point function of CFT energy momentum tensors, the time slicing formalism proves a valuable tool for the gravity part of the AdS/CFT correspondence. 
Moreover, we found that the calculation was greatly simplified by considering the derivative of the action, eqn.\ \eqref{gen:delI}. \section*{Acknowledgments} This work was supported in part by a grant from NSERC. W.~M.\ is very grateful to Simon Fraser University for financial support. \numberwithin{equation}{section} \begin{appendix} \section{Time Slicing Formalism} \label{slicing} Let us begin with a review of basic geometric relations for immersed hypersurfaces \cite{Eisenhart}. Let a hypersurface be defined by the functions $X^\mu(x^i)$, ($\mu=0,\ldots d$, $i=1,\ldots d$) and let $\tilde g_{\mu\nu}$ and $g_{ij}$ be the metric tensors of the imbedding manifold and the hypersurface, respectively. The tangents $\partial_i X^\mu$ and the normal $N^\mu$ of the hypersurface satisfy the following orthogonality relations: \begin{align} \label{sli:induced} \tilde g_{\mu\nu} \partial_i X^\mu \partial_j X^\nu &= g_{ij},\\ \label{sli:orth} \partial_i X^\mu N_\mu &= 0,\\ \label{sli:norm} N_\mu N^\mu &=1. \end{align} We shall in the sequel use a tilde to label quantities relating to the $d+1$ dimensional space time manifold and leave those relating to the hypersurface unadorned. Moreover, we use the symbol $D$ to denote a covariant derivative with respect to whatever indices may follow. Then, there are the equations of Gauss and Weingarten, which define the second fundamental form $H_{ij}$ of the hypersurface, \begin{align} \label{sli:gauss1} D_i\partial_j X^\mu &\equiv \partial_i\partial_j X^\mu + \tilde \Gamma^\mu_{\lambda\nu} \partial_i X^\lambda \partial_j X^\nu -\Gamma^k_{ij} \partial_k X^\mu = H_{ij} N^\mu,\\ \label{sli:weingarten} D_i N^\mu &\equiv \partial_i N^\mu +\tilde \Gamma^\mu_{\lambda\nu} \partial_i X^\lambda N^\nu = -H^j_i \partial_j X^\mu. \end{align} The second fundamental form describes the extrinsic curvature of the hypersurface and is related to the intrinsic curvature by another equation of Gauss, \begin{align} \label{sli:gauss2} \tilde R_{\mu\nu\lambda\rho} \partial_i X^\mu \partial_j X^\nu \partial_k X^\lambda \partial_l X^\rho &=R_{ijkl} + H_{il} H_{jk} - H_{ik} H_{jl}.\\ \intertext{Furthermore, it satisfies the equation of Codazzi,} \label{sli:codazzi} \tilde R_{\mu\nu\lambda\rho} \partial_i X^\mu \partial_j X^\nu N^\lambda \partial_k X^\rho &= D_i H_{jk} -D_j H_{ik}. \end{align} In the time slicing formalism \cite{Wald,MTW} we consider the bundle of immersed hypersurfaces defined by $X^0=const.$, whose tangent vectors are given by $\partial_i X^0=0$ and $\partial_i X^\mu =\delta_i^\mu$ ($\mu=1,\ldots d$). One conveniently splits up the metric as (shown here for Euclidean signature) \begin{align} \label{sli:split} \tilde g_{\mu\nu} &= \begin{pmatrix} n_i n^i +n^2 & n_j \\ n_i & g_{ij} \end{pmatrix},\\ \intertext{whose inverse is given by} \label{sli:splitinv} \tilde g^{\mu\nu} &= \frac1{n^2} \begin{pmatrix} 1& -n^j \\ -n^i & n^2 g^{ij} +n^i n^j \end{pmatrix} \end{align} and whose determinant is $\tilde g=n^2 g$. The quantities $n$ and $n^i$ are called the lapse function and shift vector, respectively. The normal vector $N^\mu$ satisfying eqns.\ \eqref{sli:orth} and \eqref{sli:norm} is given by \begin{equation} \label{sli:normal} N_\mu = (-n,\bvec{0}), \qquad N^\mu= \frac1{n}(-1,n^i), \end{equation} where the sign has been chosen such that the normal points outwards on the boundary ($n>0$ without loss of generality). 
Then, one obtains the second fundamental form from the equation of Gauss \eqref{sli:gauss1} as \begin{equation} \label{sli:Hij} H_{ij} = \frac1{2n} (g_{ij}' -D_i n_j -D_j n_i), \end{equation} where the prime denotes a derivative with respect to the time coordinate ($X^0$). The advantage of the time slicing formalism is that one removes the diffeomorphism invariance in Einstein's equation by specifying the lapse function $n$ and shift vector $n^i$ and thus obtains an equation of motion as well as constraints for the physical degrees of freedom $g_{ij}$. Consider Einstein's equation without matter fields, \begin{equation} \label{sli:einstein} \tilde R_{\mu\nu} -\frac12 \tilde g_{\mu\nu} \tilde R = - \tilde g_{\mu\nu} \Lambda. \end{equation} Multiplying it with $N^\mu N^\nu$ and using the equation of Gauss \eqref{sli:gauss2} as well as the relation \eqref{sli:norm} one obtains the first constraint, \begin{equation} \label{sli:con1} R+H_{ij}H^{ij} -H^2 = 2\Lambda, \end{equation} where $H=H^i_i$. Similarly, multiplying with $N^\mu \partial_i X^\nu$, using the equation of Codazzi \eqref{sli:codazzi} and the relation \eqref{sli:orth} yields the second constraint, \begin{equation} \label{sli:con2} D_i H - D_j H^j_i =0. \end{equation} Finally, rewriting eqn.\ \eqref{sli:einstein} in the form \begin{equation} \label{sli:einstein2} \tilde R_{\mu\nu} = \frac{2}{d-1}\tilde g_{\mu\nu} \Lambda \end{equation} and projecting out its tangential components we obtain the equation of motion \begin{equation} \label{sli:eqnmot} \tilde R_{ij} = \frac2{d-1} g_{ij} \Lambda. \end{equation} \end{appendix}
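As a numerical footnote to the scalar-field example of section~\ref{prod}: the expansion \eqref{phihat3} rests on the small-argument behaviour of the modified Bessel function, namely that $2^{1-\alpha}\Gamma(\alpha)^{-1}(kx_0)^\alpha K_\alpha(kx_0)\to 1$ for $x_0\to0$, with a leading correction of order $(kx_0)^{2\alpha}$ when $\alpha<1$ is non-integer. A short check of this behaviour (ours, not part of the calculation above) using standard Bessel routines reads:
\begin{verbatim}
# Numerical check: f(z) = 2^(1-a)/Gamma(a) * z^a * K_a(z) tends to 1 as z -> 0,
# with a correction scaling like z^(2a) for non-integer a < 1.
import numpy as np
from scipy.special import kv, gamma

a = 0.75                        # sample alpha = sqrt(d^2/4 + m^2)
z = np.array([1e-1, 1e-2, 1e-3])
f = 2.0**(1 - a) / gamma(a) * z**a * kv(a, z)

print("f(z):", f)               # approaches 1
# effective exponent of the correction term; approaches 2*a = 1.5
print("exponent:", np.diff(np.log(np.abs(f - 1))) / np.diff(np.log(z)))
\end{verbatim}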
\section{Production of Light Scalar Mesons at COSY} The COoler SYnchrotron COSY-J\"ulich provides proton and deuteron beams --- phase-space cooled and polarized if desired --- with momenta up to 3.7 GeV/c. It is thus well suited to produce the light scalars $a_0/f_0$(980) since masses up to 1.1 (1.5, 1.03) GeV/c$^2$ can be produced in $pp$ ($pd$, $dd$) collisions. Such hadronic interactions offer a few particular advantages: \begin{itemize} \item Due to large production cross sections rare processes, like radiative decays into vector mesons $a_0/f_0\to \gamma V$~\cite{sraddec,Nagahiro:2008mn,radiativeproposal} or the isospin-violating $a_0$-$f_0$ mixing~\cite{Hanhart:2003sk,ivproposal}, are expected to occur at reasonable count rates. \item The isospin of the initial state and of the produced scalar meson can be selected. $pp\to dX^+$ or $pd\to tX^+$ reactions must lead to $a_0^+$ ($I{=}1$) production, a $pp\to ppX^0$, $pn\to dX^0$ (using a deuterium target) or a $pd\to {}^3\mathrm{He}\, X^0$ reaction can produce both the $a_0$ and the $f_0$, whereas the $dd\to{}^4\mathrm{He}\, X$ process is a filter for the $f_0$ ($I{=}0$) resonance, because the initial deuterons and the $\alpha$ particle in the final state both have isospin $I{=}0$. \item For the $a_0/f_0\to K\bar K$ decays the maximum accessible excess energy $Q$ is rather small. Thus with a forward magnetic spectrometer like ANKE (see below), large acceptances and an unprecedented mass resolution $\delta_{m_{K\bar K}}$ can be reached. This is important to unravel effects induced by the opening of the $K^+K^-$ and $K^0 \bar K^0$ thresholds, which are separated by only 8~MeV/c$^2$. \end{itemize} At the same time the following drawbacks should be mentioned: \begin{itemize} \item Also the cross sections for background processes like multi-pion production are large. \item The final states contain at least two baryons. Therefore, the scalar meson signal can be distorted by final-state interactions (FSI) between these baryons and/or between one or more baryons and the mesons from $a_0/f_0$ decays. This effect has, {\em e.g.\/} been seen for the $pp\to pp K^+K^-$ reaction~\cite{Maeda:2007cy} and also in $pd\to {}^3\mathrm{He}\,K^+K^-$ data~\cite{Grishina:2006cm}. On the other hand such experiments can be exploited for the investigation of the low energy $\bar{K}N$ and $\bar{K}A$ interactions, see {\em e.g.\/} the analyses in Refs.~\cite{Sibirtsev:2004kk,Grishina:2005tu}. \end{itemize} $a_0/f_0$ production has been or will be studied at COSY in $pp$, $pn$, $pd$ and $dd$ interactions for the strong decays into $K\bar K$ and $\pi\eta$/$\pi\pi$ as well as radiative decays $\gamma V$ into vector mesons. While near-threshold decay channels with at least one charged kaon can well be investigated with the magnetic spectrometer ANKE (``Apparatus for the detection of Nucleonic and Kaon Ejectiles''), the $\pi\eta$, $\pi\pi$ and $\gamma V$ final states will be measured with WASA (``Wide Angle Shower Apparatus'') which is available for measurements at COSY since 2007. \subsection{ANKE spectrometer} The magnetic ANKE spectrometer~\cite{Barsov:2001xj} consists of three dipoles and detection systems for identification of charged particles emitted under forward angles. For our measurements an H$_{2}$/D$_{2}$ cluster-jet target, which can provide areal densities of up to $5\cdot 10^{14}$ cm$^{-2}$s$^{-1}$, has been used. Together with $10^{11}$ particles in the COSY ring, this corresponds to luminosities up to a few times $10^{31}$ cm$^{-2}$s$^{-1}$. 
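The maximum producible masses quoted at the beginning of this section follow from elementary fixed-target kinematics, $m_X^{\rm max}=\sqrt{s}-m_{\rm recoil}$ with $s=m_b^2+m_t^2+2E_b m_t$. The following back-of-the-envelope sketch (ours, with rounded particle masses) reproduces the 1.1, 1.5 and 1.03 GeV/c$^2$ limits for $pp\to dX^+$, $pd\to tX^+$ and $dd\to{}^4\mathrm{He}\,X$ at a beam momentum of 3.7 GeV/c:
\begin{verbatim}
# Maximum producible mass m_X = sqrt(s) - m_recoil for a 3.7 GeV/c beam on a
# fixed target; all masses in GeV/c^2 (rounded values).
from math import sqrt

m_p, m_d, m_t, m_he4 = 0.9383, 1.8756, 2.8089, 3.7274
p_beam = 3.7  # GeV/c

def m_x_max(m_beam, m_target, m_recoil, p=p_beam):
    e_beam = sqrt(p**2 + m_beam**2)
    s = m_beam**2 + m_target**2 + 2.0 * e_beam * m_target
    return sqrt(s) - m_recoil

print("pp -> d X+  :", round(m_x_max(m_p, m_p, m_d), 2))   # ~1.11
print("pd -> t X+  :", round(m_x_max(m_p, m_d, m_t), 2))   # ~1.52
print("dd -> 4He X :", round(m_x_max(m_d, m_d, m_he4), 2)) # ~1.03
\end{verbatim}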
$K^{+}$-mesons are detected in a positive side detection system~\cite{Buescher:2002zc}, using time-of-flight (TOF) measurement between 23 scintillation start counters, which are placed near a side exit window of the spectrometer magnet, and the range telescopes system or a wall of scintillation counters. The momentum reconstruction algorithm uses the track information provided by two multiwire proportional chambers (MWPCs). This information as well as the kaon energy losses in the scintillators are used in order to suppress background. High momentum particles ($p$, $d$, $t$ or He) produced in coincidence with the kaons are detected by a forward detection system which consists of three MWPCs (used for momentum reconstruction) and two layers of scintillation counters (particle ID). As a selection criteria, the energy loss of the particles and time difference between the hits in the side and forward systems are used. $K^{-}$-mesons are observed in a negative side detection system containing layers of scintillation counters and two MWPCs, which also provide the possibility to use the time difference between negative and positive detection systems, and $\Delta E$ techniques and to reconstruct $K^{-}$ momenta~\cite{Hartmann:2007ks}. \subsection{WASA spectrometer} \label{wasa} The 4$\pi$ detector facility WASA~\cite{Adam:2004ch} has been designed for studies of production and decays of light mesons. WASA makes use of a hydrogen and deuterium pellet target. The pellet concept is crucial to achieve a close to 4$\pi$ detection acceptance in this internal target storage ring experiment. The target system provides small spheres of frozen hydrogen or deuterium and allows for high luminosities of up to $10^{32}$ cm$^{-2}$s$^{-1}$. The WASA detectors comprise a forward part for measurements of charged target-recoil particles and scattered projectiles ($p$, $d$, $t$ or He), and a central part for measurements of the scalar meson decay products. The forward part consists of eleven planes of plastic scintillators and of proportional counter drift tubes. The central part comprises an electromagnetic calorimeter with $\sim 1000$ CsI(Na) crystals surrounding a superconducting solenoid. Inside the solenoid a cylindrical chamber with drift tubes and a plastic scintillator barrel are placed. \section{The $\mathbf{K\bar K}$ Final State} Close-to-threshold data on kaon pair production in nucleon-nucleon scattering, like in the reactions $pp \to d K^+\bar K^0$ or $pp \to ppK^+K^-$, allow to study the $K\bar K$ and $\bar KN$ subsystems in the final state. The strength of the $K\bar K$ interaction is of relevance for a possible $K\bar K$ molecule interpretation of the scalar resonances $a_0$(980) and $f_0$(980). Similarly, a better understanding of the $\bar KN$ system is one prerequisite to infer the nature of the $\Lambda$(1405). The $pp \to d K^+\bar K^0$ reaction has been measured with ANKE at two proton kinetic energies $T_p = 2.65$ GeV, and 2.83 GeV, corresponding to excess energies of $Q = 48$ MeV and 105 MeV with respect to the $dK^+\bar K^0$ threshold~\cite{Kleber:2003kx,Dzyuba:2006bj}. Figure~\ref{fig:K+K0} shows the invariant $K^+\bar K^0$ mass distribution (normalized to the phase-space volume) for the higher beam energy. The data seem to indicate a two-peak structure which clearly deviates from the expected Flatt\'e-like behaviour for the $a_0^+$ resonance (indicated by the lines). 
While the enhancement at higher $K\bar K$ masses is probably a reflection of the $\bar Kd$ FSI~\cite{ad_tbp}, there is no obvious interpretation of the lower one. Interestingly, the same peak structure (although, due to the lower mass resolution, only supported by the lowest-mass data point) is also observed for data from $\bar pp$ annihilations. A possible explanation of the unexpected shape is that interference effects by chance lead to the same structure in both data sets.
\begin{figure} \includegraphics[width=0.8\textwidth]{ANKE-vs-CB.eps} \caption{\label{fig:K+K0}Invariant $K^+\bar K^0$ mass distributions normalized to the phase-space volume. The Crystal-Barrel data (open squares) are from the $\bar p p\to K_L K^\pm \pi^\mp$ reaction~\cite{Abele:1998qd}; the ANKE data (full circles) for $pp\to dK^+\bar K^0$ at $Q=105$ MeV~\cite{Dzyuba:2006bj}. The mass resolution of the ANKE data is $\delta_{m_{K\bar K}} =3\ldots 10$ MeV/c$^2$ (FWHM). The lines denote the Flatt\'e fits to Crystal-Barrel data from Refs.~\cite{Abele:1998qd} (solid) and \cite{Bugg:1994mg} (dashed).} \end{figure}
The measurements of the $pp \to pp K^+K^-$ reaction were performed at three energies of $T_p = 2.65$ GeV, 2.70 GeV and 2.83 GeV, {\em i.e.\/} at $Q$ values of 51 MeV, 67 MeV and 108 MeV with respect to the $ppK^+K^-$ threshold. Figure~\ref{fig:K+K-} presents the invariant $K^+K^-$ mass distribution for the lowest beam energy. The lines show the expected shape of the distribution from a Monte-Carlo simulation that takes into account the ANKE acceptance~\cite{Maeda:2007cy}. Three data points at the lowest $K\bar K$ masses lie significantly above the simulation. This behaviour is also visible for the other two beam energies and in DISTO results on the same reaction at $Q=110$~MeV~\cite{Balestra:2000ex} as well as in our data on the $pn \to dK^+K^-$ reaction~\cite{Maeda:2006wv}. This effect is demonstrated in the right part of Fig.~\ref{fig:K+K-} where the ANKE differential cross section is normalized to the simulated spectra. At all three beam energies a significant enhancement between the $K^+K^-$ and $K^0\bar K^0$ thresholds is observed.
\begin{figure} \includegraphics[width=0.35\textwidth,angle=270]{pp_ppKK_waves.eps} \includegraphics[width=0.35\textwidth,angle=270]{KpKm_threshold-enhancement.eps} \caption{\label{fig:K+K-}Left: Differential cross section for the $pp \to ppK^+K^-$ reaction with respect to the $K^+K^-$ invariant mass at $T_p = 2.65$ GeV~\cite{Maeda:2007cy}. The solid (black) line shows the expected shape of the distribution from a Monte-Carlo simulation that takes into account the $\phi(1020)$, non-$\phi$ contributions as well as the $\bar Kp$ and $pp$ FSIs (best fit)~\cite{Maeda:2007cy}. For illustration, the dashed (blue) and dotted (red) lines show the shape of the non-$\phi$ mass distribution with the $K^+K^-$ being in an S(P) wave. Right: Data for all measured energies 2.65 GeV, 2.70 GeV, and 2.83 GeV normalized to the best-fit distributions. The mass resolution is $\delta_{m_{K\bar K}} = 2\ldots 3$ MeV/c$^2$ (FWHM).} \end{figure}
The mass scale of the low-mass variation in Fig.~\ref{fig:K+K-} is not that of the widths of the scalar resonances, which are much larger. It is tempting to suggest that this structure might be due to the opening of the $K^0 \bar K^0$ channel at a mass of 0.995~GeV/c$^2$, which induces some cusp structure that also changes the energy dependence of the total cross section near threshold.
This would require a very strong $K^+K^-\rightleftharpoons K^0 \bar K^0$ channel coupling, which might be driven by the $a_0/f_0$ resonances. Thus, although the $pp \to pp K^+K^-$ reaction may not be ideal for investigating the properties of scalar states, their indirect effects might still be crucial.
\section{Searches for Isospin-Symmetry Violation} An evident test of isospin symmetry is to scrutinize the $a_0^0$ and $a_0^+$ mass distributions. So far, the best knowledge of the $a_0^0$ shape comes from a Flatt\'e fit to the $\pi^0\eta$ mass distribution from $p\bar p$ annihilations, measured with Crystal Barrel at LEAR~\cite{Bugg:1994mg}. The fit yields the coupling constants with small statistical uncertainty; however, the $\pi^0\eta$ channel could, in principle, be distorted by isospin-violating $a_0^0$-$f_0$ mixing effects, {\em e.g.} distortions of a basically symmetric Breit-Wigner type ($I{=}1,\, I_3{=}0$) $a_0$ mass distribution by admixtures of the ($I{=}0,\, I_3{=}0$) $f_0$ resonance. This effect might be sizable if the $\bar p p \to f_0 X$ cross section is significantly larger than that for the $a_0^0$~\cite{Amsler:1995bf,Amsler:1994pz}. It is thus desirable to obtain high-statistics data for the $I_3{=}+1$ $a_0^+$ state, where mixing effects with the $f_0$ are strictly forbidden. The best data so far on the $a_0^+\to \pi^+\eta$ decay, from an experiment at BNL on pion-induced reactions~\cite{Teige:1996fi}, are displayed in Fig.~\ref{fig:a0-vs.-a+} together with the above-mentioned Crystal-Barrel data. The measured $a_0^+$ shape can be fitted equally well by a Flatt\'e distribution and by a Breit-Wigner (the latter corresponds to the case of zero coupling of the $a_0^+$ to kaons).
\begin{figure} \includegraphics[width=0.7\textwidth]{CB-vs-E852.eps} \caption{\label{fig:a0-vs.-a+}Invariant $\pi^0\eta$ mass distribution from $\bar p p\to \pi^0\eta\eta$ Crystal-Barrel data (green squares)~\cite{Amsler:1995bf} in comparison with that for the positive charge state $\pi^+\eta$ from pion-induced reactions (blue circles)~\cite{Teige:1996fi}. The line denotes a Flatt\'e fit from Ref.~\cite{Bugg:1994mg} to the Crystal-Barrel data.} \end{figure}
It has been predicted that $a_0$-$f_0$ mixing can lead to a comparatively large isospin violation in the reactions $pd \to {}^3\mathrm{He}\, a_0^0({\to}\pi^0\eta)/f_0({\to}\pi\pi)$ and $pd \to t\, a_0^+({\to}\pi^+\eta)$ close to the corresponding production thresholds~\cite{Grishina:2001zj}. One may either test whether the cross-section ratios for these reactions follow isospin relations, or whether the $a_0^0$ and $a_0^+$ mass distributions exhibit different shapes. Both require data with high statistical accuracy for the strong scalar decay channels, and background, {\em e.g.\/} from non-resonant $\pi\eta$ production, must be well understood. A first test beam time took place in November 2007 at WASA and the data are currently being analyzed.
\section{Outlook} The $pd \to {}^3\mathrm{He}\, a_0^0/f_0$ and $pd \to t a_0^+$ data that have been taken at WASA in November 2007 will also be used to search for events from radiative decays into vector mesons $a_0/f_0\to \gamma V$. According to our estimates~\cite{radiativeproposal} a few thousand such events could be measured in 10 weeks of beam time; however, scheduling of such a long beam time at COSY has to await the result of the above-mentioned test measurement. Another proposed measurement for WASA aims at data for the isospin-violating $dd\to {}^4\mathrm{He}\, \pi^0\eta$ reaction~\cite{ivproposal}.
That cross section is expected to be dominated by a primary reaction of the type $dd\to {}^4\mathrm{He}\, f_0$, followed by an $f_0\to a_0$ conversion, and an isospin-conserving $a_0\to \pi^0\eta$ decay. In order to extract the $f_0\to a_0$ mixing strength, the $dd\to {}^4\mathrm{He}\, f_0$ production cross section must be determined experimentally. For that purpose, the $dd\to {}^4\mathrm{He}\, K^+K^-$ reaction has been measured at ANKE. A preliminary analysis yields a total cross section in the range 50--100~pb. If that cross section is dominated by the $f_0\to K^+K^-$ decay, then isospin violation in the $dd\to {}^4\mathrm{He}\, \pi^0\eta$ reaction should be measurable at WASA within a few weeks of beam time.
\begin{theacknowledgments} This work has been supported by: Deutscher Akademischer Austauschdienst, Deutsche Forschungsgemeinschaft, China Scholarship Council, COSY-FFE Program, Helmholtz-Gemeinschaft, Russian Academy of Sciences, Russian Fund for Basic Research, European Community. Contributions by the ANKE and WASA collaborations are gratefully acknowledged. \end{theacknowledgments}
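As a closing illustration of the line-shape question raised in the discussion of the $a_0^+$ above (Flatt\'e versus Breit-Wigner fits), a generic two-channel Flatt\'e parameterization can be coded in a few lines. The couplings below are toy values of our own choosing, not the parameters of the fits in Refs.~\cite{Bugg:1994mg,Teige:1996fi}; setting the $K\bar K$ coupling to zero reduces the expression to a Breit-Wigner with an energy-dependent $\pi\eta$ width, the alternative mentioned there.
\begin{verbatim}
# Illustrative two-channel Flatte line shape for a scalar coupling to pi-eta
# and K Kbar (toy couplings, for illustration only).  Below the K Kbar
# threshold the channel momentum is continued analytically, which is what
# distorts the shape relative to a plain Breit-Wigner; g_kk = 0 recovers the
# Breit-Wigner case.
import cmath

M_PI, M_ETA, M_K = 0.1396, 0.5479, 0.4937    # GeV/c^2
m0, g_pieta, g_kk = 0.980, 0.10, 0.10        # toy resonance parameters

def breakup_momentum(m, m1, m2):
    # complex break-up momentum; purely imaginary below threshold
    return cmath.sqrt((m*m - (m1 + m2)**2) * (m*m - (m1 - m2)**2)) / (2*m)

def flatte_intensity(m):
    rho1 = 2 * breakup_momentum(m, M_PI, M_ETA) / m
    rho2 = 2 * breakup_momentum(m, M_K, M_K) / m
    d = m0*m0 - m*m - 1j * (g_pieta * rho1 + g_kk * rho2)
    return abs(1.0 / d)**2

for m in (0.90, 0.95, 0.98, 0.99, 1.00, 1.05):
    print(f"m = {m:.2f} GeV/c^2   |A|^2 = {flatte_intensity(m):.1f}")
\end{verbatim}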
\section{Introduction} A polyomino, in its original definition, is a connected interior-disjoint union of axis-aligned unit squares joined edge-to-edge. In other words, it is an edge-connected union of cells in the planar square lattice. For the origin of polyominoes we quote Klarner \cite{handbook_dcg}: \lq\lq Polyominoes have a long \begin{figure}[ht] \begin{center} \setlength{\unitlength}{0.30cm} \begin{picture}(19,15) \put(0,15){\line(1,0){1}} \put(0,15){\line(0,-1){1}} \put(0,14){\line(1,0){1}} \put(1,14){\line(0,1){1}} \put(6,14){\line(1,0){2}} \put(6,15){\line(1,0){2}} \put(6,14){\line(0,1){1}} \put(8,14){\line(0,1){1}} \put(13,14){\line(1,0){3}} \put(13,15){\line(1,0){3}} \put(13,14){\line(0,1){1}} \put(16,14){\line(0,1){1}} \put(17,13){\line(0,1){2}} \put(17,13){\line(1,0){1}} \put(18,13){\line(0,1){1}} \put(18,14){\line(1,0){1}} \put(19,14){\line(0,1){1}} \put(19,15){\line(-1,0){2}} \put(0,12){\line(0,-1){2}} \put(0,12){\line(1,0){2}} \put(2,10){\line(-1,0){2}} \put(2,10){\line(0,1){2}} \put(3,11){\line(1,0){1}} \put(3,11){\line(0,-1){1}} \put(3,10){\line(1,0){2}} \put(5,10){\line(0,1){1}} \put(5,11){\line(1,0){1}} \put(6,11){\line(0,1){1}} \put(6,12){\line(-1,0){2}} \put(4,12){\line(0,-1){1}} \put(7,12){\line(1,0){4}} \put(7,11){\line(1,0){4}} \put(7,12){\line(0,-1){1}} \put(11,12){\line(0,-1){1}} \put(12,12){\line(1,0){3}} \put(12,12){\line(0,-1){1}} \put(12,11){\line(1,0){2}} \put(14,11){\line(0,-1){1}} \put(14,10){\line(1,0){1}} \put(15,10){\line(0,1){2}} \put(16,12){\line(1,0){3}} \put(16,12){\line(0,-1){1}} \put(16,11){\line(1,0){1}} \put(17,11){\line(0,-1){1}} \put(17,10){\line(1,0){1}} \put(18,10){\line(0,1){1}} \put(18,11){\line(1,0){1}} \put(19,11){\line(0,1){1}} \put(0,9){\line(1,0){3}} \put(0,9){\line(0,-1){1}} \put(0,8){\line(1,0){1}} \put(1,8){\line(0,-1){2}} \put(1,6){\line(1,0){1}} \put(2,6){\line(0,1){2}} \put(2,8){\line(1,0){1}} \put(3,8){\line(0,1){1}} \put(5,9){\line(1,0){2}} \put(7,9){\line(0,-1){1}} \put(7,8){\line(-1,0){1}} \put(6,8){\line(0,-1){2}} \put(6,6){\line(-1,0){1}} \put(5,6){\line(0,1){1}} \put(5,7){\line(-1,0){1}} \put(4,7){\line(0,1){1}} \put(4,8){\line(1,0){1}} \put(5,8){\line(0,1){1}} \put(9,9){\line(1,0){1}} \put(9,9){\line(0,-1){2}} \put(10,9){\line(0,-1){3}} \put(10,6){\line(-1,0){3}} \put(7,6){\line(0,1){1}} \put(7,7){\line(1,0){2}} \put(11,9){\line(1,0){2}} \put(11,9){\line(0,-1){3}} \put(13,9){\line(0,-1){2}} \put(13,7){\line(-1,0){1}} \put(12,7){\line(0,-1){1}} \put(12,6){\line(-1,0){1}} \put(19,9){\line(-1,0){5}} \put(19,8){\line(-1,0){5}} \put(19,9){\line(0,-1){1}} \put(14,9){\line(0,-1){1}} \put(19,7){\line(-1,0){4}} \put(19,7){\line(0,-1){2}} \put(19,5){\line(-1,0){1}} \put(18,5){\line(0,1){1}} \put(18,6){\line(-1,0){3}} \put(15,6){\line(0,1){1}} \put(0,5){\line(1,0){1}} \put(1,5){\line(0,-1){1}} \put(1,4){\line(1,0){1}} \put(2,4){\line(0,-1){1}} \put(2,3){\line(1,0){1}} \put(3,3){\line(0,-1){1}} \put(3,2){\line(-1,0){2}} \put(1,2){\line(0,1){1}} \put(1,3){\line(-1,0){1}} \put(0,3){\line(0,1){2}} \put(3,5){\line(1,0){2}} \put(5,5){\line(0,-1){2}} \put(5,3){\line(1,0){1}} \put(6,3){\line(0,-1){1}} \put(6,2){\line(-1,0){2}} \put(4,2){\line(0,1){2}} \put(4,4){\line(-1,0){1}} \put(3,4){\line(0,1){1}} \put(7,5){\line(1,0){2}} \put(9,5){\line(0,-1){3}} \put(9,2){\line(-1,0){2}} \put(7,2){\line(0,1){1}} \put(7,3){\line(1,0){1}} \put(8,3){\line(0,1){1}} \put(8,4){\line(-1,0){1}} \put(7,4){\line(0,1){1}} \put(11,5){\line(1,0){1}} \put(12,5){\line(0,-1){1}} \put(12,4){\line(1,0){1}} \put(13,4){\line(0,-1){1}} \put(13,3){\line(-1,0){1}} 
\put(12,3){\line(0,-1){1}} \put(12,2){\line(-1,0){1}} \put(11,2){\line(0,1){1}} \put(11,3){\line(-1,0){1}} \put(10,3){\line(0,1){1}} \put(10,4){\line(1,0){1}} \put(11,4){\line(0,1){1}} \put(19,4){\line(0,-1){1}} \put(19,4){\line(-1,0){2}} \put(19,3){\line(-1,0){4}} \put(15,3){\line(0,1){1}} \put(15,4){\line(1,0){1}} \put(16,4){\line(0,1){1}} \put(16,5){\line(1,0){1}} \put(17,5){\line(0,-1){1}} \put(19,2){\line(-1,0){3}} \put(16,2){\line(0,-1){1}} \put(16,1){\line(-1,0){1}} \put(15,1){\line(0,-1){1}} \put(15,0){\line(1,0){2}} \put(17,0){\line(0,1){1}} \put(17,1){\line(1,0){2}} \put(19,1){\line(0,1){1}} \end{picture}\\[-3mm] \end{center} \caption{Polyominoes with at most 5 squares.} \label{fig_polyominoes} \end{figure} \noindent history, going back to the start of the 20th century, but they were popularized in the present era initially by Solomon Golomb i.e. \cite{Golomb1954,golomb_first,Golomb1966}, then by Martin Gardner in his \textit{Scientific American} columns.'' At the present time they are widely known by mathematicians, physicists, chemists and have been considered in many different applications, i.e. in the \textit{Ising Model} \cite{Cube}. To give an illustration of polyominoes Figure \ref{fig_polyominoes} depicts the polyominoes consisting of at most 5 unit squares. One of the first problems for polyominoes was the determination of there number. Altough there has been some progress, a solution to this problem remains outstanding. In the literature one sometimes speaks also of the cell-growth problem and uses the term animal instead of polyomino. Due to its wide area of applications polyominoes were soon generalized to the two other tessellations of the plane, to the eight Archimedean tessellations \cite{archm_tess} and were also considered as unions of $d$-dimensional hypercubes instead of squares. For the known numbers we refer to the \lq\lq Online Encyclopedia of Integer Sequences'' \cite{oeis}. \begin{figure}[!h] \begin{center} \includegraphics{tiling_new.eps} \caption{A nice $5$-polyomino.} \label{fig_tilling} \end{center} \end{figure} \vspace*{-6mm} \noindent In this article we generalize concept of polyominoes to unions of regular nonoverlapping edge-to-edge connected $k$-gons. For short we call them $k$-polyominoes. An example of a $5$-polyo\-mino, which reminds somewhat to Penrose's famous non-periodic tiling of the plane, is depicted in Figure \ref{fig_tilling}. In the next sections we determine exact formulas for the number $a_k(n)$ of nonisomorphic $k$-polyominoes with $k\le 4$ and give some further values for small parameters $k$ and $n$ obtained by computer enumeration. So far edge-to-edge connected unions of regular $k$-gons were only enumerated if overlapping of the $k$-gons is permitted \cite{0301.05119}. We finish with some open problems for $k$-polyominoes. \section{Formulas for the number of $\mathbf{k}$-polyominoes} By $a_k(n)$ we denote the number of nonisomorphic $k$-polyominoes consisting of $n$ regular $k$-gons as cells where $a_k(n)=0$ for $k < 3$. For at most two cells we have $a_k(1)=a_k(2)=1$. 
If $n\ge 3$ we characterize three edge-to-edge connected cells $\mathcal{C}_1$, \begin{figure}[!ht] \begin{center} \vspace*{-5mm} \includegraphics[width=10cm]{Figur_zwei_dick.eps} \vspace*{-13mm} \caption{Angle $\beta=\angle(P_1,P_2,P_3)$ between three neighbored cells.} \label{fig_angles_between_3_cells} \end{center} \end{figure} \noindent $\mathcal{C}_2$ and $\mathcal{C}_3$ of a $k$-polyomino, see Figure \ref{fig_angles_between_3_cells}, by the angle $\beta=\angle(P_1,P_2,P_3)$ between the centers of the cells. Since these angles are multiples of $\frac{2\pi}{k}$ we call the minimum $$\min\left( \angle(P_1,P_2,P_3)\frac{k}{2\pi}, \left(2\pi-\angle(P_1,P_2,P_3)\right) \frac{k}{2\pi}\right)$$ the discrete angle between $\mathcal{C}_1$, $\mathcal{C}_2$, and $\mathcal{C}_3$ and denote it by $\delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)$. \begin{lemma} \label{lemma_overlapping} Two $k$-gons $\mathcal{C}_1$ and $\mathcal{C}_3$ joined via an edge to a $k$-gon $\mathcal{C}_2$ are nonoverlapping if and only if $\delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)\ge\Big\lfloor\frac{k+5}{6}\Big\rfloor$. The three $k$-gons are neighbored pairwise if and only if $k\equiv 0\,\mbox{mod}\,6$. \end{lemma} \textbf{Proof.} We consider Figure \ref{fig_angles_between_3_cells} and set $\beta=\delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)\frac{2\pi}{k}$. If the cells $\mathcal{C}_1$ and $\mathcal{C}_3$ are non-overlapping we have $\overline{P_1P_3}\ge \overline{P_1P_2}$ because the lengths of the lines $\overline{P_1P_2}$ and $\overline{P_2P_3}$ are equal. Thus $\beta\ge \frac{2\pi}{6}$ and $\delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)\ge\Big\lfloor\frac{k+5}{6}\Big\rfloor$ is necessary. Now we consider the circumcircles of the cells $\mathcal{C}_1$ and $\mathcal{C}_3$, see Figure \ref{Proof_lemma_one}. Due to $\beta\ge\frac{2\pi}{6}$ only the circlesegments between points $P_4$, $P_5$ and $P_6$, $P_7$ may intersect. The last step is to check that the corresponding lines $\overline{P_4P_5}$ and $\overline{P_6P_7}$ do not intersect and they touch each other if and only if $k\equiv 0\,\mbox{mod}\,6$.\hfill{$\square$} \begin{figure}[ht] \begin{center} \vspace*{-10mm} \includegraphics{Figur_drei_dick.eps} \vspace*{-16mm} \caption{Nonoverlapping 12-gons.} \label{Proof_lemma_one} \end{center} \end{figure} \begin{corollar} \label{cor_neighbors} The number of neighbors of a cell in a $k$-polyomino is at most $$\min\left(k,\frac{k}{\Big\lfloor\frac{k+5}{6} \Big\rfloor}\right)\le 6\,.$$ \end{corollar} \noindent With the aid of Lemma \ref{lemma_overlapping} we are able to determine the number $a_k(3)$ of $k$-polyominoes consisting of $3$ cells. \begin{satz} \label{theorem_a_k_3} $$a_k(3)=\left\lfloor\frac{k}{2}\right\rfloor-\left\lfloor\frac{k+5}{6}\right\rfloor+1\quad\mbox{for}\,\, k\ge 3\,.$$ \end{satz} \textbf{Proof.} It suffices to determine the possible values for $\delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)$. Due to Lemma \ref{lemma_overlapping} we have $\delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)\ge \left\lfloor\frac{k+5}{6}\right\rfloor$ and due to to symmetry considerations we have $\delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)\le \left\lfloor\frac{k}{2}\right\rfloor$.\hfill{$\square$} In order to determine the number of $k$-polyominoes with more than $3$ cells we describe the classes of $k$-polyominoes by graphs. We represent each $k$-gon by a vertex and join two vertices exactly if they are connected via an edge. 
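The closed form in Theorem~\ref{theorem_a_k_3} can be cross-checked against classically known counts: there are $1$, $2$ and $3$ polyforms with $3$ cells built from triangles, squares and hexagons, respectively (polyiamonds, polyominoes and polyhexes). A small script (ours) performing this comparison:
\begin{verbatim}
# Cross-check of the closed form for a_k(3) against the classically known
# numbers of 3-cell polyiamonds (k=3), polyominoes (k=4) and polyhexes (k=6).
def a_k_3(k):
    return k // 2 - (k + 5) // 6 + 1

for k, known in ((3, 1), (4, 2), (6, 3)):
    assert a_k_3(k) == known
    print("a_%d(3) = %d" % (k, a_k_3(k)))
\end{verbatim}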
\begin{figure}[ht] \begin{center} \setlength{\unitlength}{0.6cm} \begin{picture}(14.5,2) \put(0,1){\line(1,0){1}} \put(0,1){\circle*{0.3}} \put(1,1){\circle*{0.3}} \put(1,1){\line(1,1){0.74}} \put(1,1){\line(1,-1){0.74}} \put(1.74,1.74){\circle*{0.3}} \put(1.74,0.26){\circle*{0.3}} %
\put(3,1){\line(1,0){1}} \put(3,1){\circle*{0.3}} \put(4,1){\circle*{0.3}} \put(4,1){\line(1,1){0.74}} \put(4,1){\line(1,-1){0.74}} \put(4.74,1.74){\circle*{0.3}} \put(4.74,0.26){\circle*{0.3}} \qbezier(3,1)(3.87,1.37)(4.74,1.74) %
\put(6,1){\line(1,0){1}} \put(6,1){\circle*{0.3}} \put(7,1){\circle*{0.3}} \put(7,1){\line(1,1){0.74}} \put(7,1){\line(1,-1){0.74}} \put(7.74,1.74){\circle*{0.3}} \put(7.74,0.26){\circle*{0.3}} \qbezier(6,1)(6.87,1.37)(7.74,1.74) \qbezier(7.74,1.74)(7.74,1)(7.74,0.26) \put(9,1){\line(1,0){3}} \put(9,1){\circle*{0.3}} \put(10,1){\circle*{0.3}} \put(11,1){\circle*{0.3}} \put(12,1){\circle*{0.3}} %
\put(13.5,0.5){\line(1,0){1}} \put(13.5,1.5){\line(1,0){1}} \put(13.5,0.5){\line(0,1){1}} \put(14.5,0.5){\line(0,1){1}} \put(13.5,0.5){\circle*{0.3}} \put(13.5,1.5){\circle*{0.3}} \put(14.5,0.5){\circle*{0.3}} \put(14.5,1.5){\circle*{0.3}} \end{picture} \caption{The possible graphs of $k$-polyominoes with $4$ vertices.} \label{fig_graphs} \end{center} \end{figure} \begin{lemma} \label{lemma_count_4_1} The number of $k$-polyominoes with a graph isomorphic to one of the first three ones in Figure \ref{fig_graphs} is given by $$ \left\lfloor\frac{\left(k-3\left\lfloor\frac{k+5}{6}\right\rfloor\right)^2+ 6\left(k-3\left\lfloor\frac{k+5}{6}\right\rfloor\right)+12}{12}\right\rfloor. $$ \end{lemma} \textbf{Proof.} We denote the cell corresponding to the unique vertex of degree $3$ in the graph by $\mathcal{C}_0$ and the three other cells by $\mathcal{C}_1$, $\mathcal{C}_2$, and $\mathcal{C}_3$. With $\delta_1=\delta(\mathcal{C}_1,\mathcal{C}_0,\mathcal{C}_2)-\left\lfloor\frac{k+5}{6}\right\rfloor$, $\delta_2=\delta(\mathcal{C}_2,\mathcal{C}_0,\mathcal{C}_3)-\left\lfloor\frac{k+5}{6}\right\rfloor$, and $\delta_3=\delta(\mathcal{C}_3,\mathcal{C}_0,\mathcal{C}_1)-\left\lfloor\frac{k+5}{6}\right\rfloor$ we set $m=\delta_1+\delta_2+\delta_3=k-3\left\lfloor\frac{k+5}{6}\right\rfloor$, since the three angles at $\mathcal{C}_0$ between consecutive neighbors, counted in multiples of $\frac{2\pi}{k}$, sum up to $k$. Because the $k$-polyominoes with a graph isomorphic to one of the first three ones in Figure \ref{fig_graphs} are uniquely described by $\delta_1,\delta_2,\delta_3$, their number equals, due to Lemma \ref{lemma_overlapping} and due to symmetry, the number of partitions of $m$ into at most three parts. This number is the coefficient of $x^m$ in the Taylor series of $\frac{1}{(1-x)(1-x^2)(1-x^3)}$ at $x=0$ and can be expressed as $\left\lfloor\frac{m^2+6m+12}{12}\right\rfloor$.
\hfill{$\square$} \begin{figure}[ht] \begin{center} \setlength{\unitlength}{0.6cm} \begin{picture}(7.5,2) \put(0,1){\line(1,-1){1}} \put(1,0){\line(1,0){1.5}} \put(2.5,0){\line(1,1){1}} \put(0,1){\circle*{0.3}} \put(1,0){\circle*{0.3}} \put(2.5,0){\circle*{0.3}} \put(3.5,1){\circle*{0.3}} \qbezier(0.6,0.4)(1.25,0.7)(1.57,0) \qbezier(1.93,0)(2.15,0.7)(2.9,0.4) %
\put(4.5,0){\line(1,1){1}} \put(5.5,1){\line(1,-1){1}} \put(6.5,0){\line(1,1){1}} \put(4.5,0){\circle*{0.3}} \put(5.5,1){\circle*{0.3}} \put(6.5,0){\circle*{0.3}} \put(7.5,1){\circle*{0.3}} \qbezier(5.1,0.6)(5.5,0.3)(5.9,0.6) \qbezier(6.1,0.4)(6.5,0.7)(6.9,0.4) \end{picture} \caption{Paths of length $3$ representing chains of four neighbored cells.} \label{fig_paths} \end{center} \end{figure} \noindent In Lemma \ref{lemma_overlapping} we have given a condition for a chain of three neighbored cells to avoid an overlap. For a chain of four neighbored cells we have to consider the two cases of Figure \ref{fig_paths}. In the second case the two cells corresponding to the vertices of degree one cannot overlap, so we need a lemma in the spirit of Lemma \ref{lemma_overlapping} only for the first case. \begin{lemma} \label{lemma_overlap_2} Four $k$-gons $\mathcal{C}_1$, $\mathcal{C}_2$, $\mathcal{C}_3$, and $\mathcal{C}_4$ arranged as in the first case of Figure \ref{fig_paths} are nonoverlapping if and only if Lemma \ref{lemma_overlapping} is fulfilled for the two subchains of length $3$ and $$ \delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)+\delta(\mathcal{C}_2,\mathcal{C}_3,\mathcal{C}_4)\ge \left\lfloor\frac{k+1}{2}\right\rfloor\,. $$ The chain forms a $4$-cycle if and only if $$ \delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)+\delta(\mathcal{C}_2,\mathcal{C}_3,\mathcal{C}_4) =\frac{k}{2}\,. $$ \end{lemma} \textbf{Proof.} We start with the second statement and consider the quadrangle of the centers of the $4$ cells. Because the angle sum of a quadrangle is $2\pi$ we have $$ \delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)+ \delta(\mathcal{C}_2,\mathcal{C}_3,\mathcal{C}_4)+\delta(\mathcal{C}_3,\mathcal{C}_4,\mathcal{C}_1)+ \delta(\mathcal{C}_4,\mathcal{C}_1,\mathcal{C}_2)=k\,. $$ Due to the fact that the side lengths of the quadrangle are equal we have $$ \delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)+ \delta(\mathcal{C}_2,\mathcal{C}_3,\mathcal{C}_4)= \delta(\mathcal{C}_3,\mathcal{C}_4,\mathcal{C}_1)+ \delta(\mathcal{C}_4,\mathcal{C}_1,\mathcal{C}_2) $$ which is equivalent to the statement. Thus $\delta(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)+\delta(\mathcal{C}_2,\mathcal{C}_3,\mathcal{C}_4)\ge \left\lfloor\frac{k+1}{2}\right\rfloor$ is a necessary condition.
Similar to the proof of Lemma \ref{lemma_overlapping} we consider the circumcircles of the cells $\mathcal{C}_1$, $\mathcal{C}_4$ and check that the cells do not intersect.\hfill{$\square$} \begin{lemma} \label{lemma_count_4_2} For $k\ge 3$ the number of $k$-polyominoes with a graph isomorphic to one of the last two ones in Figure \ref{fig_graphs} is given by\\[-3mm] \begin{eqnarray*} \frac{5k^2+4k}{48}\,\,\,\mbox{for}\,\,\, k\equiv 0\,\mbox{mod}\,12,&\quad& \frac{5k^2+6k-11}{48}\,\,\,\mbox{for}\,\,\, k\equiv 1\,\mbox{mod}\,12,\\ \frac{5k^2+12k+4}{48}\,\,\,\mbox{for}\,\,\, k\equiv 2\,\mbox{mod}\,12,&\quad& \frac{5k^2+14k+9}{48}\,\,\,\mbox{for}\,\,\, k\equiv 3\,\mbox{mod}\,12,\\ \frac{5k^2+20k+32}{48}\,\,\,\mbox{for}\,\,\, k\equiv 4\,\mbox{mod}\,12,&\quad& \frac{5k^2+22k+5}{48}\,\,\,\mbox{for}\,\,\, k\equiv 5\,\mbox{mod}\,12,\\ \frac{5k^2+4k-12}{48}\,\,\,\mbox{for}\,\,\, k\equiv 6\,\mbox{mod}\,12,&\quad& \frac{5k^2+6k+1}{48}\,\,\,\mbox{for}\,\,\, k\equiv 7\,\mbox{mod}\,12,\\ \frac{5k^2+12k+16}{48}\,\,\,\mbox{for}\,\,\, k\equiv 8\,\mbox{mod}\,12,&\quad& \frac{5k^2+14k-3}{48}\,\,\,\mbox{for}\,\,\, k\equiv 9\,\mbox{mod}\,12,\\ \frac{5k^2+20k+20}{48}\,\,\,\mbox{for}\,\,\, k\equiv 10\,\mbox{mod}\,12,&\quad& \frac{5k^2+22k+17}{48}\,\,\,\mbox{for}\,\,\, k\equiv 11\,\mbox{mod}\,12\,. \end{eqnarray*} \end{lemma} \textbf{Proof.} Because each of the last two graphs in Figure \ref{fig_graphs} contains a path of length $3$ as a subgraph we consider the two cases of Figure \ref{fig_paths}. We denote the two interesting discrete angles by $\delta_1$ and $\delta_2$. Due to symmetry we may assume $\delta_1\le \delta_2$ and because the graphs do not contain a triangle we have $\delta_2\ge\delta_1\ge\left\lfloor\frac{k+6}{6}\right\rfloor$ due to Lemma \ref{lemma_overlapping}. From the definition of the discrete angle we have $\delta_1\le\delta_2\le\left\lfloor\frac{k}{2}\right\rfloor$. To avoid double counting we assume $\left\lfloor\frac{k+6}{6}\right\rfloor\le \delta_1\le\delta_2\le\left\lfloor\frac{k-1}{2}\right\rfloor$ in the second case, so that we get a number of $$ {\left\lfloor\frac{k-1}{2}\right\rfloor-\left\lfloor\frac{k+6}{6}\right\rfloor+2\choose 2} $$ $k$-polyominoes. With Lemma \ref{lemma_overlap_2} and a look at the possible symmetries the number of $k$-polyo\-minoes in the first case is given by $$ \sum_{\delta_1=\left\lfloor\frac{k+6}{6}\right\rfloor}^{\left\lfloor\frac{k}{2}\right\rfloor} \sum_{\delta_2=\max(\delta_1,\left\lfloor\frac{k+1}{2}\right\rfloor-\delta_1)}^{\left\lfloor\frac{k}{2}\right\rfloor}1\,. $$ A little calculation yields the proposed formulas. 
\hfill{$\square$} \begin{satz} \label{theorem_a_k_4} For $k\ge 3$ we have $$ a_k(4)=\left\{ \begin{array}{cccccc} \frac{3k^2+8k+24}{24} & \mbox{for} & k\equiv 0\,\,\mbox{mod}\,\,12,& \frac{3k^2+4k-7}{24} & \mbox{for} & k\equiv 1\,\,\mbox{mod}\,\,12, \\ \frac{3k^2+8k-4}{24} & \mbox{for} & k\equiv 2\,\,\mbox{mod}\,\,12, & \frac{3k^2+10k+15}{24} & \mbox{for} & k\equiv 3\,\,\mbox{mod}\,\,12, \\ \frac{3k^2+14k+16}{24} & \mbox{for} & k\equiv 4\,\,\mbox{mod}\,\,12, & \frac{3k^2+16k+13}{24} & \mbox{for} & k\equiv 5\,\,\mbox{mod}\,\,12, \\ \frac{3k^2+8k+12}{24} & \mbox{for} & k\equiv 6\,\,\mbox{mod}\,\,12, & \frac{3k^2+4k-7}{24} & \mbox{for} & k\equiv 7\,\,\mbox{mod}\,\,12, \\ \frac{3k^2+8k+8}{24} & \mbox{for} & k\equiv 8\,\,\mbox{mod}\,\,12, & \frac{3k^2+10k+3}{24} & \mbox{for} & k\equiv 9\,\,\mbox{mod}\,\,12, \\ \frac{3k^2+14k+16}{24} & \mbox{for} & k\equiv 10\,\,\mbox{mod}\,\,12, & \frac{3k^2+16k+13}{24} & \mbox{for} & k\equiv 11\,\,\mbox{mod}\,\,12. \end{array} \right. $$ \end{satz} \textbf{Proof.} The list of graphs in Figure \ref{fig_graphs} is complete because the graphs have to be connected and the complete graph on $4$ vertices is not a unit distance graph. Adding the formulas from Lemma \ref{lemma_count_4_1} and Lemma \ref{lemma_count_4_2} yields the theorem.\hfill{$\square$} \section{Computer enumeration of $\mathbf{k}$-polyominoes} For $n\ge 5$ we have constructed $k$-polyominoes with the aid of a computer and have obtained the following values of $a_k(n)$ given in Table \ref{table_a_k_n} and Table \ref{table_a_k_n_2}. \begin{table}[ht] \begin{center} \begin{tabular}[c]{|r|r|r|r|r|r|r|r|r|r|} \hline $\!\!\:\!\!$ $k\!\!\setminus\!\! n$$\!\!\:\!\!$ & 5$\!\!\:\!\!$ & 6$\!\!\:\!\!$ & 7$\!\!\:\!\!$ & 8$\!\!\:\!\!$ & 9$\!\!\:\!\!$ & 10$\!\!\:\!\!$ & 11$\!\!\:\!\!$ & 12$\!\!\:\!\!$ & 13$\!\!\:\!\!$ \\ \hline $\!\!\:\!\!$ 3$\!\!\:\!\!$ &$\!\!\:\!\!$ 4$\!\!\:\!\!$ &$\!\!\:\!\!$ 12$\!\!\:\!\!$ &$\!\!\:\!\!$ 24$\!\!\:\!\!$ &$\!\!\:\!\!$ 66$\!\!\:\!\!$ &$\!\!\:\!\!$ 160$\!\!\:\!\!$ &$\!\!\:\!\!$ 448$\!\!\:\!\!$ & $\!\!\:\!\!$ 1186$\!\!\:\!\!$ &$\!\!\:\!\!$ 3334$\!\!\:\!\!$ &$\!\!\:\!\!$ 9235$\!\!\:\!\!$ \\ $\!\!\:\!\!$ 4$\!\!\:\!\!$ &$\!\!\:\!\!$ 12$\!\!\:\!\!$ &$\!\!\:\!\!$ 35$\!\!\:\!\!$ &$\!\!\:\!\!$ 108$\!\!\:\!\!$ &$\!\!\:\!\!$ 369$\!\!\:\!\!$ &$\!\!\:\!\!$ 1285$\!\!\:\!\!$ &$\!\!\:\!\!$ 4655$\!\!\:\!\!$ & $\!\!\:\!\!$ 17073$\!\!\:\!\!$ &$\!\!\:\!\!$ 63600$\!\!\:\!\!$ &$\!\!\:\!\!$ 238591$\!\!\:\!\!$ \\ $\!\!\:\!\!$ 5$\!\!\:\!\!$ &$\!\!\:\!\!$ 25$\!\!\:\!\!$ &$\!\!\:\!\!$ 118$\!\!\:\!\!$ &$\!\!\:\!\!$ 551$\!\!\:\!\!$ &$\!\!\:\!\!$ 2812$\!\!\:\!\!$ &$\!\!\:\!\!$ 14445$\!\!\:\!\!$ &$\!\!\:\!\!$ 76092$\!\!\:\!\!$ & $\!\!\:\!\!$ 403976$\!\!\:\!\!$ &$\!\!\:\!\!$ 2167116$\!\!\:\!\!$ &$\!\!\:\!\!$ 11698961$\!\!\:\!\!$ \\ $\!\!\:\!\!$ 6$\!\!\:\!\!$ &$\!\!\:\!\!$ 22$\!\!\:\!\!$ &$\!\!\:\!\!$ 82$\!\!\:\!\!$ &$\!\!\:\!\!$ 333$\!\!\:\!\!$ &$\!\!\:\!\!$ 1448$\!\!\:\!\!$ &$\!\!\:\!\!$ 6572$\!\!\:\!\!$ &$\!\!\:\!\!$ 30490$\!\!\:\!\!$ & $\!\!\:\!\!$ 143552$\!\!\:\!\!$ &$\!\!\:\!\!$ 683101$\!\!\:\!\!$ &$\!\!\:\!\!$ 3274826$\!\!\:\!\!$ \\ $\!\!\:\!\!$ 7$\!\!\:\!\!$ &$\!\!\:\!\!$ 25$\!\!\:\!\!$ &$\!\!\:\!\!$ 118$\!\!\:\!\!$ &$\!\!\:\!\!$ 558$\!\!\:\!\!$ &$\!\!\:\!\!$ 2876$\!\!\:\!\!$ &$\!\!\:\!\!$ 14982$\!\!\:\!\!$ &$\!\!\:\!\!$ 80075$\!\!\:\!\!$ & $\!\!\:\!\!$ 431889$\!\!\:\!\!$ &$\!\!\:\!\!$ 2354991$\!\!\:\!\!$ &$\!\!\:\!\!$ 12930257$\!\!\:\!\!$ \\ $\!\!\:\!\!$ 8$\!\!\:\!\!$ &$\!\!\:\!\!$ 50$\!\!\:\!\!$ &$\!\!\:\!\!$ 269$\!\!\:\!\!$ &$\!\!\:\!\!$ 1605$\!\!\:\!\!$ &$\!\!\:\!\!$ 10102$\!\!\:\!\!$ &$\!\!\:\!\!$ 
65323$\!\!\:\!\!$ &$\!\!\:\!\!$ 430302$\!\!\:\!\!$ & $\!\!\:\!\!$ 2868320$\!\!\:\!\!$ &$\!\!\:\!\!$ 19299334$\!\!\:\!\!$ &$\!\!\:\!\!$ 130807068$\!\!\:\!\!$ \\ $\!\!\:\!\!$ 9$\!\!\:\!\!$ &$\!\!\:\!\!$ 82$\!\!\:\!\!$ &$\!\!\:\!\!$ 585$\!\!\:\!\!$ &$\!\!\:\!\!$ 4418$\!\!\:\!\!$ &$\!\!\:\!\!$ 34838$\!\!\:\!\!$ &$\!\!\:\!\!$ 280014$\!\!\:\!\!$ &$\!\!\:\!\!$ 2285047$\!\!\:\!\!$ & $\!\!\:\!\!$ 18838395$\!\!\:\!\!$ &$\!\!\:\!\!$ 156644526$\!\!\:\!\!$ & $\!\!\:\!\!$ 1311575691$\!\!\:\!\!$ \\ $\!\!\:\!\!$ 10$\!\!\:\!\!$ &$\!\!\:\!\!$ 127$\!\!\:\!\!$ &$\!\!\:\!\!$ 985$\!\!\:\!\!$ &$\!\!\:\!\!$ 8350$\!\!\:\!\!$ &$\!\!\:\!\!$ 73675$\!\!\:\!\!$ &$\!\!\:\!\!$ 664411$\!\!\:\!\!$ &$\!\!\:\!\!$ 6078768$\!\!\:\!\!$ & $\!\!\:\!\!$ 56198759$\!\!\:\!\!$ &$\!\!\:\!\!$ 523924389$\!\!\:\!\!$ & \\ $\!\!\:\!\!$ 11$\!\!\:\!\!$ &$\!\!\:\!\!$ 186$\!\!\:\!\!$ &$\!\!\:\!\!$ 1750$\!\!\:\!\!$ &$\!\!\:\!\!$ 17501$\!\!\:\!\!$ &$\!\!\:\!\!$ 181127$\!\!\:\!\!$ &$\!\!\:\!\!$ 1908239$\!\!\:\!\!$ &$\!\!\:\!\!$ 20376032$\!\!\:\!\!$ & $\!\!\:\!\!$ 219770162$\!\!\:\!\!$ &$\!\!\:\!\!$ 2390025622$\!\!\:\!\!$ & \\ $\!\!\:\!\!$ 12$\!\!\:\!\!$ &$\!\!\:\!\!$ 168$\!\!\:\!\!$ &$\!\!\:\!\!$ 1438$\!\!\:\!\!$ &$\!\!\:\!\!$ 13512$\!\!\:\!\!$ &$\!\!\:\!\!$ 131801$\!\!\:\!\!$ &$\!\!\:\!\!$ 1314914$\!\!\:\!\!$ &$\!\!\:\!\!$ 13303523$\!\!\:\!\!$ & $\!\!\:\!\!$ 136035511$\!\!\:\!\!$ &$\!\!\:\!\!$ 1402844804$\!\!\:\!\!$ & \\ $\!\!\:\!\!$ 13$\!\!\:\!\!$ &$\!\!\:\!\!$ 187$\!\!\:\!\!$ &$\!\!\:\!\!$ 1765$\!\!\:\!\!$ &$\!\!\:\!\!$ 17775$\!\!\:\!\!$ &$\!\!\:\!\!$ 185297$\!\!\:\!\!$ &$\!\!\:\!\!$ 1968684$\!\!\:\!\!$ &$\!\!\:\!\!$ 21208739$\!\!\:\!\!$ & $\!\!\:\!\!$ 230877323$\!\!\:\!\!$ & & \\ $\!\!\:\!\!$ 14$\!\!\:\!\!$ &$\!\!\:\!\!$ 263$\!\!\:\!\!$ &$\!\!\:\!\!$ 2718$\!\!\:\!\!$ &$\!\!\:\!\!$ 30467$\!\!\:\!\!$ &$\!\!\:\!\!$ 352375$\!\!\:\!\!$ &$\!\!\:\!\!$ 4158216$\!\!\:\!\!$ &$\!\!\:\!\!$ 49734303$\!\!\:\!\!$ & $\!\!\:\!\!$ 601094660$\!\!\:\!\!$ & & \\ $\!\!\:\!\!$ 15$\!\!\:\!\!$ &$\!\!\:\!\!$ 362$\!\!\:\!\!$ &$\!\!\:\!\!$ 4336$\!\!\:\!\!$ &$\!\!\:\!\!$ 55264$\!\!\:\!\!$ &$\!\!\:\!\!$ 725869$\!\!\:\!\!$ &$\!\!\:\!\!$ 9707046$\!\!\:\!\!$ &$\!\!\:\!\!$ 131517548$\!\!\:\!\!$ & $\!\!\:\!\!$ 1800038803$\!\!\:\!\!$ & & \\ $\!\!\:\!\!$ 16$\!\!\:\!\!$ &$\!\!\:\!\!$ 472$\!\!\:\!\!$ &$\!\!\:\!\!$ 6040$\!\!\:\!\!$ &$\!\!\:\!\!$ 83252$\!\!\:\!\!$ &$\!\!\:\!\!$ 1180526$\!\!\:\!\!$ &$\!\!\:\!\!$ 17054708$\!\!\:\!\!$ &$\!\!\:\!\!$ 249598727$\!\!\:\!\!$ & $\!\!\:\!\!$ 3690421289$\!\!\:\!\!$ & & \\ $\!\!\:\!\!$ 17$\!\!\:\!\!$ &$\!\!\:\!\!$ 613$\!\!\:\!\!$ &$\!\!\:\!\!$ 8814$\!\!\:\!\!$ &$\!\!\:\!\!$ 134422$\!\!\:\!\!$ &$\!\!\:\!\!$ 2104485$\!\!\:\!\!$ &$\!\!\:\!\!$ 33522023$\!\!\:\!\!$ &$\!\!\:\!\!$ 540742895$\!\!\:\!\!$ & & & \\ $\!\!\:\!\!$ 18$\!\!\:\!\!$ &$\!\!\:\!\!$ 566$\!\!\:\!\!$ &$\!\!\:\!\!$ 7678$\!\!\:\!\!$ &$\!\!\:\!\!$ 112514$\!\!\:\!\!$ &$\!\!\:\!\!$ 1694978$\!\!\:\!\!$ &$\!\!\:\!\!$ 26019735$\!\!\:\!\!$ &$\!\!\:\!\!$ 404616118$\!\!\:\!\!$ & & & \\ $\!\!\:\!\!$ 19$\!\!\:\!\!$ &$\!\!\:\!\!$ 615$\!\!\:\!\!$ &$\!\!\:\!\!$ 8839$\!\!\:\!\!$ &$\!\!\:\!\!$ 135175$\!\!\:\!\!$ &$\!\!\:\!\!$ 2123088$\!\!\:\!\!$ &$\!\!\:\!\!$ 33942901$\!\!\:\!\!$ &$\!\!\:\!\!$ 549711709$\!\!\:\!\!$ & & & \\ $\!\!\:\!\!$ 20$\!\!\:\!\!$ &$\!\!\:\!\!$ 776$\!\!\:\!\!$ &$\!\!\:\!\!$ 11876$\!\!\:\!\!$ &$\!\!\:\!\!$ 195122$\!\!\:\!\!$ &$\!\!\:\!\!$ 3291481$\!\!\:\!\!$ &$\!\!\:\!\!$ 56537856$\!\!\:\!\!$ &$\!\!\:\!\!$ 983715865$\!\!\:\!\!$ & & & \\ \hline \end{tabular} \caption{Number of $k$-polyominoes with $n$ cells for small $k$ and $n$.} \label{table_a_k_n} \end{center} \end{table} 
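\noindent For an independent consistency check the closed formulas of Theorem \ref{theorem_a_k_3} and Theorem \ref{theorem_a_k_4} can be evaluated directly. The following short Python sketch is only an illustration of these two formulas (it is not the enumeration program used for the tables); its output reproduces, e.g., the classical values $a_4(3)=2$ and $a_4(4)=5$ for ordinary polyominoes as well as the values $a_5(4)=7$, $a_{13}(4)=23$, and $a_{17}(4)=48$ appearing in Table \ref{table_compare}.
\begin{verbatim}
# Illustrative evaluation of the two closed formulas above;
# auxiliary example code, not the enumeration program itself.

def a_k_3(k):
    # a_k(3) = floor(k/2) - floor((k+5)/6) + 1
    return k // 2 - (k + 5) // 6 + 1

def a_k_4(k):
    # numerators by residue class of k mod 12, common denominator 24
    numerators = {
        0: 3*k*k + 8*k + 24,   1: 3*k*k + 4*k - 7,
        2: 3*k*k + 8*k - 4,    3: 3*k*k + 10*k + 15,
        4: 3*k*k + 14*k + 16,  5: 3*k*k + 16*k + 13,
        6: 3*k*k + 8*k + 12,   7: 3*k*k + 4*k - 7,
        8: 3*k*k + 8*k + 8,    9: 3*k*k + 10*k + 3,
        10: 3*k*k + 14*k + 16, 11: 3*k*k + 16*k + 13,
    }
    return numerators[k % 12] // 24

for k in (3, 4, 5, 6, 7, 13, 17):
    print(k, a_k_3(k), a_k_4(k))
\end{verbatim}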
\vspace*{-3mm} \noindent Now we go into more detail how the computer enumeration was done. At first we have to represent $k$-polyominoes by a suitable data structure. As in Lemma \ref{lemma_overlapping} a $k$-polyomino can be described by the set of all discrete angles between three neighbored cells. By fixing one direction we can define the discrete angle between this direction and two neighbored cells and so describe a $k$-polyomino by an $n \times n$-matrix with integer entries. Due to Corollary \ref{cor_neighbors} we can also describe it as a $6 \times n$-matrix by listing only the neighbors. To deal with symmetry we define a canonical form for these matrices. Our general construction strategy is orderly generation \cite{winner}, where we use a variant introduced in \cite{phd_kurz,paper_characteristic}. Here a $k$-polyomino consisting of $n$ cells is constructed by glueing two $k$-polyominoes consisting of $n-1$ cells having $n-2$ cells in common. There are two advantages of this approach. In a $k$-polyomino each two cells must be nonoverlapping. If we would add a cell in each generation step we would have to check $n-1$ pairs of cells whether they are nonoverlapping or not. By glueing two $k$-polyominoes we only need two perform one such check. To demonstrate the the second advantage we compare in Table \ref{table_compare} the numbers $c_1(n,k)$ and $c_2(n,k)$ of candidates produced by the original version and the used variant via glueing of orderly generation. To avoid numerical twists in the overlapping check we utilize Gr\"obner bases \cite{groebner}. \begin{table}[!ht] \begin{center} \begin{tabular}{|r|r|r|r|r||r|r|r|r|r|} \hline $\!k\!\!\setminus\!\! n\!$ & 5 & 6 & 7 & 8 & $\!k\!\!\setminus\!\! n\!$ & 5 & 6 & 7\\ \hline 21 & 972 & 16410 & 294091 & 5402087 & 36 & 4575 & 130711 & 3943836 \\ 22 & 1179 & 20970 & 397852 & 7739008 & 37 & 4796 & 140434 & 4326289 \\ 23 & 1437 & 27720 & 566007 & 11832175 & 38 & 5380 & 163027 & 5204536 \\ 24 & 1347 & 24998 & 495773 & 10079003 & 39 & 6089 & 193587 & 6464267 \\ 25 & 1439 & 27787 & 568602 & 11917261 & 40 & 6760 & 221521 & 7634297 \\ 26 & 1711 & 34763 & 751172 & 16624712 & 41 & 7578 & 259396 & 9311913 \\ 27 & 2045 & 44687 & 1031920 & 24389611 & 42 & 7282 & 244564 & 8643473 \\ 28 & 2376 & 54133 & 1307384 & 32317393 & 43 & 7584 & 259838 & 9341040 \\ 29 & 2786 & 67601 & 1729686 & 45260884 & 44 & 8373 & 295558 & 10958872 \\ 30 & 2641 & 62252 & 1557663 & 39891448 & 45 & 9321 & 342841 & 13215115 \\ 31 & 2790 & 67777 & 1737915 & 45587429 & 46 & 10207 & 385546 & 15274792 \\ 32 & 3204 & 81066 & 2169846 & 59424885 & 47 & 11282 & 442543 & 18169170 \\ 33 & 3706 & 99420 & 2808616 & 81124890 & 48 & 10890 & 420154 & 17012270 \\ 34 & 4193 & 116465 & 3413064 & 102292464 & 49 & 11290 & 443178 & 18217475 \\ 35 & 4789 & 140075 & 4306774 & 135337752 & 50 & 12309 & 495988 & 20944951 \\ \hline \end{tabular} \caption{Number of $k$-polyominoes with $n$ cells for small $k$ and $n$.} \label{table_a_k_n_2} \end{center} \end{table} \begin{table}[!ht] \begin{center} \begin{tabular}[c]{|r|r|r|r|r|r|r|r|r|} \hline $n$ & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline $\!\!\:\!\!$ $a_5(n)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 7$\!\!\:\!\!$ &$\!\!\:\!\!$ 25$\!\!\:\!\!$ &$\!\!\:\!\!$ 118$\!\!\:\!\!$ &$\!\!\:\!\!$ 551$\!\!\:\!\!$ &$\!\!\:\!\!$ 2812$\!\!\:\!\!$ &$\!\!\:\!\!$ 14445$\!\!\:\!\!$ &$\!\!\:\!\!$ 76092$\!\!\:\!\!$ &$\!\!\:\!\!$ 403976$\!\!\:\!\!$\\ $\!\!\:\!\!$ $c_1(n,5)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 21$\!\!\:\!\!$ &$\!\!\:\!\!$ 74$\!\!\:\!\!$ &$\!\!\:\!\!$ 242$\!\!\:\!\!$ &$\!\!\:\!\!$ 
1038$\!\!\:\!\!$ &$\!\!\:\!\!$ 4476$\!\!\:\!\!$ &$\!\!\:\!\!$ 21945$\!\!\:\!\!$ &$\!\!\:\!\!$ 111232$\!\!\:\!\!$ &$\!\!\:\!\!$ 580139$\!\!\:\!\!$\\ $\!\!\:\!\!$ $c_2(n,5)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 19$\!\!\:\!\!$ &$\!\!\:\!\!$ 62$\!\!\:\!\!$ &$\!\!\:\!\!$ 192$\!\!\:\!\!$ &$\!\!\:\!\!$ 816$\!\!\:\!\!$ &$\!\!\:\!\!$ 3541$\!\!\:\!\!$ &$\!\!\:\!\!$ 17297$\!\!\:\!\!$ &$\!\!\:\!\!$ 87336$\!\!\:\!\!$ &$\!\!\:\!\!$ 452215$\!\!\:\!\!$\\ \hline $\!\!\:\!\!$ $a_7(n)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 7$\!\!\:\!\!$ &$\!\!\:\!\!$ 25$\!\!\:\!\!$ &$\!\!\:\!\!$ 118$\!\!\:\!\!$ &$\!\!\:\!\!$ 558$\!\!\:\!\!$ &$\!\!\:\!\!$ 2876$\!\!\:\!\!$ &$\!\!\:\!\!$ 14982$\!\!\:\!\!$ &$\!\!\:\!\!$ 80075$\!\!\:\!\!$ &$\!\!\:\!\!$ 431889$\!\!\:\!\!$\\ $\!\!\:\!\!$ $c_1(n,7)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 31$\!\!\:\!\!$ &$\!\!\:\!\!$ 107$\!\!\:\!\!$ &$\!\!\:\!\!$ 356$\!\!\:\!\!$ &$\!\!\:\!\!$ 1530$\!\!\:\!\!$ &$\!\!\:\!\!$ 6682$\!\!\:\!\!$ &$\!\!\:\!\!$ 33057$\!\!\:\!\!$ &$\!\!\:\!\!$ 168881$\!\!\:\!\!$ &$\!\!\:\!\!$ 889721$\!\!\:\!\!$\\ $\!\!\:\!\!$ $c_2(n,7)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 19$\!\!\:\!\!$ &$\!\!\:\!\!$ 62$\!\!\:\!\!$ &$\!\!\:\!\!$ 196$\!\!\:\!\!$ &$\!\!\:\!\!$ 821$\!\!\:\!\!$ &$\!\!\:\!\!$ 3584$\!\!\:\!\!$ &$\!\!\:\!\!$ 17778$\!\!\:\!\!$ &$\!\!\:\!\!$ 91109$\!\!\:\!\!$ &$\!\!\:\!\!$ 479814$\!\!\:\!\!$\\ \hline $\!\!\:\!\!$ $a_{13}(n)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 23$\!\!\:\!\!$ &$\!\!\:\!\!$ 187$\!\!\:\!\!$ &$\!\!\:\!\!$ 1765$\!\!\:\!\!$ &$\!\!\:\!\!$ 17775$\!\!\:\!\!$ &$\!\!\:\!\!$ 185297$\!\!\:\!\!$ &$\!\!\:\!\!$ 1968684$\!\!\:\!\!$ &$\!\!\:\!\!$ 21208739$\!\!\:\!\!$ &$\!\!\:\!\!$ 230877323$\!\!\:\!\!$\\ $\!\!\:\!\!$ $c_1(n,13)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 126$\!\!\:\!\!$ &$\!\!\:\!\!$ 721$\!\!\:\!\!$ &$\!\!\:\!\!$ 5059$\!\!\:\!\!$ &$\!\!\:\!\!$ 43842$\!\!\:\!\!$ &$\!\!\:\!\!$ 420958$\!\!\:\!\!$ &$\!\!\:\!\!$ 4294445$\!\!\:\!\!$ &$\!\!\:\!\!$ 45258582$\!\!\:\!\!$ &$\!\!\:\!\!$ 485481211$\!\!\:\!\!$\\ $\!\!\:\!\!$ $c_2(n,13)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 76$\!\!\:\!\!$ &$\!\!\:\!\!$ 408$\!\!\:\!\!$ &$\!\!\:\!\!$ 2697$\!\!\:\!\!$ &$\!\!\:\!\!$ 23412$\!\!\:\!\!$ &$\!\!\:\!\!$ 223789$\!\!\:\!\!$ &$\!\!\:\!\!$ 2274489$\!\!\:\!\!$ &$\!\!\:\!\!$ 23849241$\!\!\:\!\!$ &$\!\!\:\!\!$ 254712159$\!\!\:\!\!$\\ \hline $\!\!\:\!\!$ $a_{17}(n)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 48$\!\!\:\!\!$ &$\!\!\:\!\!$ 614$\!\!\:\!\!$ &$\!\!\:\!\!$ 8814$\!\!\:\!\!$ &$\!\!\:\!\!$ 134422$\!\!\:\!\!$ &$\!\!\:\!\!$ 2104485$\!\!\:\!\!$ &$\!\!\:\!\!$ 33522023$\!\!\:\!\!$ &$\!\!\:\!\!$ 540742895$\!\!\:\!\!$ &$\!\!\:\!\!$ $\!\!\:\!\!$\\ $\!\!\:\!\!$ $c_1(n,17)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 255$\!\!\:\!\!$ &$\!\!\:\!\!$ 2039$\!\!\:\!\!$ &$\!\!\:\!\!$ 22038$\!\!\:\!\!$ &$\!\!\:\!\!$ 292887$\!\!\:\!\!$ &$\!\!\:\!\!$ 4311681$\!\!\:\!\!$ &$\!\!\:\!\!$ 66600525$\!\!\:\!\!$ &$\!\!\:\!\!$ 1057440375$\!\!\:\!\!$ &$\!\!\:\!\!$ $\!\!\:\!\!$\\ $\!\!\:\!\!$ $c_2(n,17)$$\!\!\:\!\!$ &$\!\!\:\!\!$ 171$\!\!\:\!\!$ &$\!\!\:\!\!$ 1261$\!\!\:\!\!$ &$\!\!\:\!\!$ 12964$\!\!\:\!\!$ &$\!\!\:\!\!$ 173839$\!\!\:\!\!$ &$\!\!\:\!\!$ 2545538$\!\!\:\!\!$ &$\!\!\:\!\!$ 39008006$\!\!\:\!\!$ &$\!\!\:\!\!$ 614066925$\!\!\:\!\!$ &$\!\!\:\!\!$ $\!\!\:\!\!$\\ \hline \end{tabular} \caption{Number of candidates $c_1(n,k)$ and $c_2(n,k)$ for $k$-polyominoes with $n$ cells.} \label{table_compare} \end{center} \end{table} \section{Open problems for $\mathbf{k}$-polyominoes} For $4$-polyominoes the maximum area of the convex hull was considered in \cite{Bezdek1994}. 
If the area of a cell is normalized to $1$ then the maximum area of the convex hull of a $4$-polyomino consisting of $n$ squares is given by $n+\frac{1}{2}\left\lfloor\frac{n-1}{2}\right\rfloor\left\lfloor\frac{n}{2}\right\rfloor$. The second author has proven an analogous result for the maximum content of the convex hull of a union of $d$-dimensional unit hypercubes \cite{dipl_kurz}, which is given by $$ \sum\limits_{I\subseteq\{1,\dots,d\}}\frac{1}{|I|!}\prod\limits_{i\in I}\left\lfloor\frac{n-2+i}{d}\right\rfloor\ $$ for $n$ hypercubes. For $d=2$ this expression reduces, using $\left\lfloor\frac{n-1}{2}\right\rfloor+\left\lfloor\frac{n}{2}\right\rfloor=n-1$, to the formula for squares quoted above. For other values of $k$ the question of the maximum area of the convex hull of $k$-polyominoes is still open. Apart from \cite{0747.52010} no results are known for the question of the minimum area of the convex hull, which is nontrivial for $k\neq 3,4$. Another class of problems concerns the minimum and the maximum number of edges of $k$-polyominoes. The following sharp inequalities for the number $q$ of edges of $k$-polyominoes consisting of $n$ cells were found in \cite{extremal} and are also given in \cite{0828.00001}. \begin{eqnarray*} k=3:&\quad& n+\left\lceil \frac{1}{2}\left(n+\sqrt{6n}\right)\right\rceil\le q\le 2n+1\\ k=4:&\quad& 2n+\left\lceil 2\sqrt{n}\right\rceil\le q\le 3n+1\\ k=6:&\quad& 3n+\left\lceil\sqrt{12n-3}\right\rceil\le q\le 5n+1 \end{eqnarray*} \noindent In general the maximum number of edges is given by $(k-1)n+1$. The numbers of $4$-polyominoes with a minimum number of edges were enumerated in \cite{counting}. Since for $k\neq 3,4,6$ regular $k$-gons do not tile the plane the question of the maximum density $\delta(k)$ of an edge-to-edge connected packing of regular $k$-gons arises. In \cite{phys} $$ \delta(5)=\frac{3\sqrt{5}-5}{2}\approx 0.8541 $$ is conjectured.
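\noindent For orientation, the quantities mentioned in this section are easily evaluated for small parameters. The following short Python sketch is an illustration of the quoted formulas only (it is not taken from the cited works); it prints the maximum convex hull area of a $4$-polyomino and the edge bounds for $k\in\{3,4,6\}$. For example, for $k=4$ and $n=4$ it returns the bounds $12\le q\le 13$, attained by the square tetromino and by the straight tetromino, respectively.
\begin{verbatim}
# Illustrative evaluation of the convex hull formula and the edge
# bounds quoted above; example code only, exact for the small n shown.
from math import ceil, sqrt

def max_hull_area_k4(n):
    # n + (1/2) * floor((n-1)/2) * floor(n/2)
    return n + ((n - 1) // 2) * (n // 2) / 2.0

def edge_bounds(k, n):
    # sharp lower and upper bounds on the number q of edges
    if k == 3:
        return n + ceil((n + sqrt(6 * n)) / 2), 2 * n + 1
    if k == 4:
        return 2 * n + ceil(2 * sqrt(n)), 3 * n + 1
    if k == 6:
        return 3 * n + ceil(sqrt(12 * n - 3)), 5 * n + 1
    raise ValueError("bounds quoted only for k = 3, 4, 6")

for n in range(1, 8):
    print(n, max_hull_area_k4(n),
          edge_bounds(3, n), edge_bounds(4, n), edge_bounds(6, n))
\end{verbatim}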
2,869,038,156,522
arxiv
\section{Introduction} Multiparticle production in hard collision processes is described within QCD by combining the perturbative approach to the parton cascade evolution with a non-perturbative treatment of the transition towards the hadronic final state. The perturbative phase is essentially determined by the QCD scale parameter $\Lambda$ and, possibly, the quark masses. The gluon bremsstrahlung which dominates the partonic cascade process with its collinear and soft singularities leads to the characteristic jet structure of the multiparton final state. The formation of partonic jets can be quantitatively studied by constructing jets explicitly using an algorithm which combines partons into jets at an externally given resolution scale (parameter $Q_c$). In the calculation of jet production phenomena one relates hadronic jets to partons at the same resolution scale, neglecting in general the effects of hadronization. It is an interesting question, which we will address below, down to which scale one can follow this scheme of identifying a parton jet with a hadron jet. In a particularly simple ansatz for hadronization one assumes that observables for the multiparticle final state are proportional to the corresponding quantities for partons at a characteristic resolution scale $Q_0$, an idea which was originally proposed for single inclusive energy spectra and has been called ``Local Parton Hadron Duality'' (LPHD \cite{lphd}). Subsequently, this idea has been applied to a wider range of phenomena including inclusive multiparticle correlations and even quasi-exclusive processes (for reviews, see \cite{dkmt2,ko}). The agreement with data generally increases with the accuracy of the calculation. The question arises whether this kind of agreement can be derived from a general principle or whether it should be considered as largely accidental. While there is no generally accepted answer, it is worthwhile to check this correspondence for more complex observables and at the same time to increase the accuracy of the predictions, aiming at a reliable phenomenology. Here we will consider in particular the moments of multiplicity distributions of hadrons and jets. We investigate to what extent the jet observables can be connected with the corresponding hadron observables in the limit \begin{equation} Q_c\to Q_0. \label{q0limit} \end{equation} Such a connection would suggest a one-to-one correspondence between hadrons and partons. Clearly, such a correspondence cannot exist in general for any exclusive limit but only after a certain averaging in the sense of a dual description. The mean multiplicities and multiplicity moments in quark and gluon jets as functions of primary energy and (sub-)jet resolution have been calculated using the evolution equation in the Modified Leading Logarithmic Approximation (MLLA) \cite{lo,bfo}. This equation was originally presented at the Leningrad Winterschool 1984 \cite{lewi}, is explained in \cite{dkmt2}, and can be considered as an extension of the well-known DGLAP evolution equation towards small particle energies, taking into account soft gluon coherence as realized in a probabilistic way by angular ordering \cite{ao}. Whereas there are various levels of analytical approximation to the solution of the MLLA evolution equation, with an increasing number of subleading logarithmic terms, we find that the numerical solution of this equation yields quantitative agreement with the variety of global observables discussed here.
\section{Moments of multiplicity distribution} Let $P_n$ be the distribution of the particle (parton) multiplicity in a jet. Then we consider the unnormalized and normalized factorial moments $f_q$ and $F_q$ \begin{equation} f_q=\sum_{n=0}^\infty n(n-1)\ldots (n-q+1)P_n, \quad F_q=f_q/N^q, \quad N\equiv f_1 \label{fmom} \end{equation} with mean multiplicity $N$. Furthermore, one introduces the cumulant moments $k_q$ and $K_q$ which are used to measure the genuine correlations without uncorrelated background in a multiparticle sample \begin{equation} k_q=f_q-\sum_{i=1}^{q-1} {q-1 \choose i} k_{q-i} f_i, \qquad K_q=k_q/N^q, \label{kmom} \end{equation} in particular $K_2=F_2-1,\ K_3=F_3-3F_2+2$; for a Poisson distribution $K_1=1,\ K_q=0$ for $q>1$. The moments can be conveniently computed with the help of the generating function \begin{eqnarray} Z(u)&=& \sum_{n=1}^\infty P_n u^n\\ f_q&=& \left.\frac{\partial^q Z(u)}{\partial u^q}\right|_{u=1},\qquad k_q\ =\ \left.\frac{\partial^q \ln Z(u)}{\partial u^q}\right|_{u=1} \label{genfunction} \end{eqnarray} Of special interest are the ratios \begin{equation} H_q=K_q/F_q \end{equation} which have been predicted to show an oscillatory behaviour at high energies \cite{dreminosc} with the first minimum near $q\approx5$ at the mass of the $Z$ boson. Such a minimum has indeed been observed in $e^+e^-$ annihilations at SLC \cite{slacosc} and at LEP \cite{L3osc,mangeol}, but the magnitudes of the moments are found to be much smaller than originally expected in \cite{dreminosc}. \section{Perturbative QCD predictions} Predictions for the global quantities defined above can be obtained from the MLLA evolution equation for the generating function $Z(Y_c,u)$ and the initial condition at threshold, which read in the simplified world of gluodynamics without quarks \begin{eqnarray} \frac{d}{dY_c} Z(Y_c,u)&=& \int_{z_c}^{1-z_c} dz \frac{\alpha_s(\tilde{k_T})}{2\pi}P_{gg}(z)\times \nonumber\\ && \hspace{1.5cm} \{Z(Y_c + \ln z,u)Z(Y_c + \ln (1-z),u) - Z(Y_c,u)\} \label{Zevol}\\ Z(0,u)&=& u. \label{Zbound} \end{eqnarray} In the general case there are two coupled equations for $Z_g$ and $Z_q$. In (\ref{Zevol}) the evolution variable is $Y_c=\ln \frac{E}{Q_c}$ with the jet energy $E$ and the resolution parameter $Q_c$. The parameter $Q_c$ limits the transverse momentum from below, $\tilde k_T = \min (z,1-z)E > Q_c$. This restriction yields a parton cascade with a minimum $k_T$ separation and can be compared with the jet ensemble constructed from hadrons using the so-called $k_T$- or ``Durham''-algorithm. The initial condition (\ref{Zbound}) sets the multiplicity to $N=1$ at threshold $E=Q_c$ and $F_q=0$ for $q>1$. Asymptotic solutions can be obtained in the Double Logarithmic Approximation (DLA) which includes only the dominant contributions from the collinear and soft singularities, i.e. the splitting function $P_{gg}(z)\sim 1/z$ in (\ref{Zevol}); the next-to-leading single-logarithmic terms are included in the MLLA. Up to this order the results are complete; further logarithmic contributions beyond NLLO can be calculated, but they are not complete and neglect in particular process-dependent large-angle emissions. Nevertheless they improve the results considerably as they take into account energy conservation with increasing accuracy. The full solution of Eq. (\ref{Zevol}), corresponding to the summation of all logarithmic orders, can be obtained numerically.
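Given any multiplicity distribution $P_n$ (measured, generated by a Monte Carlo program, or obtained from such a numerical solution) the moments defined in (\ref{fmom}) and (\ref{kmom}) follow directly. The following short Python sketch illustrates these definitions; the truncated Poisson distribution used as input is an arbitrary example and only serves to reproduce the Poissonian limit $K_q=0$ for $q>1$ quoted above.
\begin{verbatim}
# Illustration of the definitions of f_q, F_q, k_q, K_q and H_q given
# above; the Poisson input distribution is an example choice only.
from math import comb, exp, factorial, perm

def factorial_moments(P, qmax):
    # f_q = sum_n n(n-1)...(n-q+1) P_n
    return [sum(perm(n, q) * p for n, p in enumerate(P))
            for q in range(1, qmax + 1)]

def cumulant_moments(f):
    # k_q = f_q - sum_{i=1}^{q-1} binom(q-1,i) k_{q-i} f_i
    k = []
    for q in range(1, len(f) + 1):
        k.append(f[q - 1] - sum(comb(q - 1, i) * k[q - i - 1] * f[i - 1]
                                for i in range(1, q)))
    return k

nbar, qmax = 10.0, 5
P = [exp(-nbar) * nbar**n / factorial(n) for n in range(100)]  # Poisson
f = factorial_moments(P, qmax)
k = cumulant_moments(f)
N = f[0]
F = [fq / N**q for q, fq in enumerate(f, start=1)]
K = [kq / N**q for q, kq in enumerate(k, start=1)]
H = [Kq / Fq for Kq, Fq in zip(K, F)]
print(F)   # close to 1 for all q (Poissonian)
print(K)   # K_1 = 1, K_q close to 0 for q > 1
print(H)
\end{verbatim}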
Alternatively to the numerical solution, one may calculate the results of the QCD cascade with a Monte Carlo generator; here we compare especially with ARIADNE \cite{ARIADNE}, which is based on construction principles similar to Eq. (\ref{Zevol}). \section{Mean particle multiplicity in quark and gluon jets} The multiplicities $N_g,N_q$ in gluon and quark jets can be obtained from the MLLA evolution equations. At high energies one can write \begin{equation} N_g(Y) \sim \exp\left(\int^Y\gamma(y)dy\right) \label{nasy} \end{equation} where the anomalous dimension $\gamma$ has the expansion in $\gamma_0=\sqrt{2C_A\alpha_s/\pi}$ \begin{equation} \gamma=\gamma_0\ (1-a_1 \gamma_0 - a_2 \gamma_0^2 - a_3 \gamma_0^3 \ldots), \label{gamma} \end{equation} and likewise the ratio of the gluon and quark jet multiplicities \begin{equation} r\equiv \frac{N_g}{N_q}=\frac{C_A}{C_F}(1-r_1\gamma_0 - r_2\gamma_0^2-r_3 \gamma_0^3 \ldots). \label{rgq} \end{equation} with the colour factors $C_A=3$ and $C_F=\frac{4}{3}$. The coefficients $a_i$ and $r_i$ can be obtained from the evolution equations. The rise of the parton multiplicity in a quark jet is given in the MLLA by\\ $N\propto \exp{[c_1\sqrt{\ln(E/\Lambda)}+c_2\ln\ln (E/\Lambda)]}$ and this formula describes the data in $e^+e^-$ annihilation at LEP1 and LEP2 well (for reviews, see \cite{dg,ob} and \cite{hamacher}). \begin{figure}[t!] \vspace*{8.5cm} \begin{center} \special{psfile= p-001077-Model.EPS voffset=-60 vscale=40 hscale= 40 hoffset=10 angle=0} \vspace{-1.0cm} \caption[*]{ The ratio of the mean multiplicities in gluon jets and quark jets $N_g$ and $N_q$. Results from evolution equations of different order of approximation in comparison with experimental data obtained in $e^+e^-$-annihilation. } \end{center} \label{fig:rgq} \end{figure} The role of higher logarithmic orders can be studied in the behaviour of the multiplicity ratio $r$ in (\ref{rgq}). The asymptotic limit $r=C_A/C_F$ acquires large finite energy corrections at NLLO \cite{ahm1,mw} and 2NLLO \cite{gm,dreminosc} \begin{eqnarray} r_1 &=& 2\left(h_1+\frac{N_f}{12N_C^3}\right) -\frac{3}{4}\\ r_2 & =& \frac{r_1}{6} \left(\frac{25}{8}- \frac{3N_f}{4N_C} - \frac{C_FN_f}{N_C^2} -\frac{7}{8}-h_2-\frac{C_F}{N_C}h_3+\frac{N_f}{12N_C} h_4\right) \end{eqnarray} with $h_1=\frac{11}{24},\ h_2=\frac{67- 6\pi^2}{36},\ h_3=\frac{4\pi^2-15}{24}$ and $ h_4=\frac{13}{3}$; also 3NLLO results have been derived \cite{cdgnt}. Results from these approximations \cite{dg} are shown in Fig.~1 together with the numerical solution of the MLLA evolution equations obtained in 1998 \cite{lo}, which takes into account all higher-order corrections from this equation and fulfils the (non-perturbative) boundary condition (\ref{Zbound}). All curves are absolute predictions, as the parameter $\Lambda$ (and $Q_0$ in the case of the numerical calculation) is adjusted from the growth of the total particle multiplicity in the $e^+e^-$ jets. The slow convergence of this $\sqrt{\alpha_s}$ expansion can be seen, and there are still considerable effects beyond 3NLLO. The numerical solution is also in close agreement with the parton-level MC result obtained \cite{opalmult} from the HERWIG MC for jet energies $E_{\rm jet}>15$ GeV ($E_{\rm jet}=Q/2$ in $e^+e^-$ annihilation) and is $\sim$ 20\% larger at $E_{\rm jet}\sim 5$ GeV. This overall agreement suggests that the effects not included in the MLLA evolution equation, such as large-angle emission, are small.
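The size of the finite energy corrections in (\ref{rgq}) can be illustrated by evaluating the truncated expansion with the coefficients $r_1$ and $r_2$ quoted above. The following short Python sketch does this; the values $\alpha_s=0.12$ and $N_f=3$ used below are example inputs chosen for illustration only, not parameters fitted in the analyses discussed here, and the 3NLLO term $r_3$ is omitted since it is not quoted above.
\begin{verbatim}
# Numerical illustration of the expansion of r = N_g/N_q given above;
# alpha_s and N_f below are example values, not fit results.
from math import pi, sqrt

NC, CF = 3.0, 4.0 / 3.0

def r_ratio(alpha_s, nf):
    gamma0 = sqrt(2.0 * NC * alpha_s / pi)
    h1 = 11.0 / 24.0
    h2 = (67.0 - 6.0 * pi**2) / 36.0
    h3 = (4.0 * pi**2 - 15.0) / 24.0
    h4 = 13.0 / 3.0
    r1 = 2.0 * (h1 + nf / (12.0 * NC**3)) - 3.0 / 4.0
    r2 = (r1 / 6.0) * (25.0 / 8.0 - 3.0 * nf / (4.0 * NC)
                       - CF * nf / NC**2 - 7.0 / 8.0 - h2
                       - (CF / NC) * h3 + nf / (12.0 * NC) * h4)
    r_asym = NC / CF                  # asymptotic limit C_A/C_F = 9/4
    return (r_asym,
            r_asym * (1.0 - r1 * gamma0),
            r_asym * (1.0 - r1 * gamma0 - r2 * gamma0**2))

print(r_ratio(0.12, 3))   # (asymptotic, NLLO, 2NLLO) values of N_g/N_q
\end{verbatim}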
The numerical MLLA results are also compared in Fig.~1 with data from OPAL \cite{opalmult}, where the data on gluon jets are derived from 3-jet events in $e^+e^-$-annihilation. Note also that a proportionality constant relating partons and hadrons according to LPHD drops out of the ratio $r$. Recent results from DELPHI \cite{delphimult,hamacher} fall below the curve by about 20\% at the lowest energies but converge towards it at the higher ones; the CDF collaboration, comparing quark and gluon jets at high $p_T$ in $pp$ collisions \cite{pronko}, finds the ratio $r$ in the range $5<E_{\rm jet}<15$ GeV to be a bit larger, closer to the 3NLLO prediction, but with larger errors and therefore still consistent with the LEP results. An alternative calculation is based on the colour dipole model which treats the evolution of dipoles in NLL approximation and includes recoil effects \cite{eden}. It provides a good description of the data but includes an extra (non-perturbative) parameter which allows one to adjust a low-energy point. Whether such a non-perturbative input is definitely required by the DELPHI data depends in particular on the theoretical uncertainty in the extraction of $N_g$ from 3-jet events. \section{Multiplicity moments of hadrons and sub-jets} We now differentiate between sub-jets and hadrons in a jet of hard scale $E=Q/2$ in $e^+e^-$ annihilation. Sub-jets are defined by the resolution scale $Q_c$ ($k_T>Q_c>\Lambda$), while hadrons are related to partons at the scale $Q_0$ ($k_T>Q_0$); experience shows that $Q_0\approx \Lambda$. In the DLA the multiplicity of partons at resolution $Q_c$ in a jet can be obtained analytically from (\ref{Zevol}) with (\ref{Zbound}) in terms of modified Bessel functions \begin{equation} N_g (Y)=\beta \sqrt{Y+\lambda_c} \{ K_0(\beta \sqrt{\lambda_c}) I_1(\beta \sqrt{Y+\lambda_c}) + I_0(\beta \sqrt{\lambda_c}) K_1(\beta \sqrt{Y+\lambda_c}) \}, \label{dlarun} \end{equation} where $\beta^2=\frac{16 N_C}{b}$ with $ b=\frac{11}{3} N_C-\frac{2}{3}n_f$ and $\lambda_c=\ln\frac{Q_c}{\Lambda}$. At high energies $I_n$ rises exponentially while $K_n$ falls off, and one obtains in the two limits for the resolution $Q_c$ \begin{eqnarray} \textrm{at high resolution}& (Q_c\to Q_0): & N\sim (\beta^2Y)^{1/4} \ln\left(\frac{2}{\beta\sqrt{\lambda_c}}\right) \exp\left(\beta\sqrt{Y+\lambda_c}\right) \phantom{abc} \label{hres}\\ \textrm{at low resolution } & (Q_c\to E): & N\to 1, \label{resolvedla} \end{eqnarray} where we also used $K_0(z)\simeq \ln(2/z)$ for small $z$. At high resolution for $Q_c~\to~\Lambda$ ($\lambda_c\to 0$) the parton multiplicity diverges logarithmically because of the Landau pole appearing in the running coupling. The pole is shielded by the cut-off $Q_c=Q_0$, and at this value the parton multiplicity $N$ reaches the hadron multiplicity $N_h$ according to the LPHD prescription (up to an overall constant $K$). \begin{figure}[t!] \begin{center} \hspace{-1cm}\includegraphics[angle=-90,width=12.5cm]{NF-mlla.ps} \end{center} \vspace{-4.5cm} \caption[*]{ Parton multiplicity $N$ in the MLLA in a single gluon jet vs. energy for fixed $Q_c=Q_0$ (representing hadrons, full line) and at fixed energy $Y_0=\ln(E/Q_0)=5$ (LEP-1 energy) for variable jet resolution $Q_c$ (l.h.s.), and the factorial moments $F_q$ vs. energy $Y$ (r.h.s.). Numerical solutions of the evolution equation, taken from \protect\cite{bfo}. } \label{hqmlla-nf} \end{figure} The multiplicity in the MLLA comes out considerably smaller than in the DLA; however, the dependences on $E$ and $Q_c$ are qualitatively the same. It is shown in Fig.
\ref{hqmlla-nf}: for fixed $Q_c=Q_0$ the multiplicity $N$ starts with $N=1$ at threshold and rises $\sim \exp(c\sqrt{\ln E})$; at fixed energy $E$ we find $N=1$ for $Q_c=E$ according to (\ref{resolvedla}), whereas $N$ rises rapidly for $Q_c\to \Lambda$ and ultimately approaches the upper curve at $Q_c=Q_0\gsim \Lambda$. The splitting between the upper and lower curve is entirely due to the running of the coupling $\alpha_s$. \begin{figure}[t!] \begin{center} \hspace{-2cm}\includegraphics[angle=-90,width=11cm]{N.ps} \end{center} \vspace{-0.3cm} \caption[*]{ Multiplicity $N$ of hadrons (taken as $N_{ch}\times1.25$) and multiplicity of jets in $e^+e^-$ annihilation at three energies $Q$ together with parton Monte Carlo results (parameters $\Lambda,Q_0$ fitted) as function of $Y_{cs}=\ln(Q^2/(Q_c^2+Q_0^2))$, Fig. from \protect\cite{bfo}. } \label{multiplicity} \end{figure} The full numerical solution of the MLLA evolution equations has been studied already in \cite{lo}. Similar results are obtained from the ARIADNE MC at the parton level \cite{ARIADNE} with the two parameters $\Lambda,\ Q_0$ readjusted according to the duality approach (``ARIADNE-D''). They are shown in Fig. \ref{multiplicity} in comparison with the experimental hadron multiplicities (upper curve) and the jet multiplicities in $e^+e^-$ annihilation, represented as a superposition of two single jets ($N=2$ at threshold). The normalization of the hadron data is adjusted to the calculation; taking into account that the data do not include neutrals, one finds that the proportionality factor $K$ between the multiplicities of hadrons and partons at the scale $Q_0$ is close to $K=1$, in agreement with previous findings \cite{lo}. In the comparison between data and calculations we used the variable $Y_{cs}$; it takes into account that in the experimental (and MC) jet algorithm the hadrons are resolved at $Q_c=0$ but in the analytical MLLA calculation at $Q_c=Q_0$. A satisfactory overall description of the data can be obtained, especially in the transition region between jets and hadrons, with parameters $\Lambda=400$ MeV and $Q_0=404$ MeV. Some disagreement between the jet curves and the data occurs for $Q_c>10$ GeV, which may be related to $b\bar b$ production, not included in the calculation. The differences between the various curves are determined by the behaviour of the running coupling: for fixed coupling all curves would coincide and behave like a power $N\propto (Q/Q_c)^\alpha$. The rapid variation of the jet curves at their upper end comes from the closeness of the parameters $Q_0$ and $\Lambda$. \begin{figure}[t!] \begin{center} \hspace{-2cm}\includegraphics[angle=-90,width=8cm]{H2.ps}\\%[0.5cm]
\hspace{-2cm}\includegraphics[angle=-90,width=8cm]{H3.ps}\\% 10cm
\end{center} \vspace{-0.3cm} \caption[*]{ Ratio of moments $H_q$ for hadrons as function of energy $Q=2E$ (fixed $Q_0$) and for jets as function of resolution $Q_c$ at $Q=91$ GeV, using the variable $Y_{cs}$ as in Fig. \ref{multiplicity}, in comparison with the ARIADNE-D MC, from \protect\cite{bfo}.} \label{hq2-3exp} \end{figure} Next we turn to the higher multiplicity moments. The energy dependence of the factorial moments $F_q$ in the MLLA is also shown in Fig. \ref{hqmlla-nf}. The calculation takes into account energy conservation; therefore the threshold for the production of $q$ particles is shifted towards $E_{\rm thr}=qQ_0$. Remarkably, all $F_q$ curves cross to good approximation at $F_q=1$, a Poissonian point.
For a Poissonian, the cumulant moments $K_q$ and therefore the ratios $H_q=K_q/F_q$ vanish for $q>1$. At threshold one finds rapid oscillations of the cumulants with $q$, $K_q=(-1)^{q-1}(q-1)!$; these oscillations continue up to the Poissonian point with decreasing amplitudes. At higher energies, the oscillation length of $K_q$ and $H_q$ increases and should asymptotically approach the DLA limit $H_q\sim 1/q^2$ with all cumulants positive. In Fig. \ref{hq2-3exp} we show as representative examples the moment ratios $H_2$ and $H_3$ for hadrons and jets as functions of energy or resolution, respectively. One observes the approximate coincidence of the zeros of $H_2$ with the minima near zero of $H_3$, corresponding to the ``Poissonian point'', as well as the alternating signs of the moments below and their common sign above this point. Again, the structures in the region of highly resolved jets are well reproduced, and the same is observed for all available higher moments \cite{bfo}. As a function of the order $q$ one obtains for both hadrons and jets an oscillation pattern which depends on energy and resolution. The main characteristics can be derived from the numerical solution of the MLLA evolution equation and are in good overall quantitative agreement with the ARIADNE-D MC. \section{Conclusions} The perturbative approach to multiparticle production based on the MLLA is found to work well, also for the correlation phenomena discussed here, both for hadrons and for jets at variable resolution, and it properly distinguishes between quark and gluon jets. An important condition for this success is the high accuracy of the calculation. The DLA at realistic energies does not always give qualitatively correct results and misses, for example, the Poissonian point and the oscillations of $H_q$ at higher energies. Also, a few additional terms in a $\sqrt{\alpha_s}$ expansion are insufficient for a quantitative description of $N_g/N_q$ and of the higher multiplicity moments. On the other hand, numerical solutions of the MLLA equation and the parton MC considered here provide a rather close description in terms of only two essential parameters, $\Lambda$ and $Q_0$. The normalization parameter is found to be close to $K=1$ \cite{lo}, which means that the global hadronic observables considered here can be described after replacing a hadron by a parton at resolution $Q_0\gsim\Lambda$. This description does not take into account local effects like resonance production, but its success is in support of a dual description of a large class of hadronic and partonic observables. \section*{Acknowledgements} I would like to thank Matt Buican, Clemens F\"orster and Sergio Lupia for the enjoyable collaboration, and Lev Lipatov and Victor Kim for organizing this inspiring meeting and for their kind hospitality during the visit.
2,869,038,156,523
arxiv
\section{Introduction} The All Sky Monitor (ASM) on the {\it Rossi X-ray Timing Explorer} (RXTE) has been regularly observing bright celestial X-ray sources since 1996 February 22. Detector problems had been encountered during the first days of operation (1996 January 5-12), but the instrument has remained stable under an operation plan restricted to low-background regions of the RXTE orbit (580 km altitude). In addition there is an on-board monitor system that switches off high voltage in the event of moderately high rates in any of several different measures of the detector background. The ASM currently operates with 20 of the original 24 detector anodes, and the typical observation duty cycle is 40\%, with the remainder of the time lost to the high-background regions of the orbit, spacecraft slews, and instrument rotation or rewinds. The net yield from the ASM exposures is about 5 celestial scans per day, excluding regions near the Sun. All of the ASM calibrations, data archiving, the derivation of source intensities, and efforts to find new sources are carried out with integrated efforts of the PI team at M.I.T. and the RXTE Science Operations Center at Goddard Space Flight Center. The ASM instrument consists of three scanning shadow cameras (SSC) attached to a rotating pedestal. Each camera contains a position-sensitive proportional counter, mounted below a wide-field collimator that restricts the field of view (FOV) to 6$^{o}$ x 90$^{o}$ FWHM and 12$^{o}$ x 110$^{o}$ FWZI. One camera (SSC3) points in the same direction as the ASM rotation axis. The other two SSCs are pointed perpendicular to SSC3. The latter cameras face the same direction but the long axes of their collimators are tilted by +12$^{o}$ and -12$^{o}$, respectively, relative to the ASM rotation axis. The ASM can be rotated so that the co-pointing SSCs are aligned with the larger instruments of RXTE (i.e. the PCA and HEXTE). The top of each collimator is covered with an aluminum plate perforated by 6 parallel (and different) series of narrow, rectangular slits that function as a coded mask by casting a two-dimensional shadow pattern for each X-ray source in the collimator's field of view. Further information on this instrument is given by Levine et al. (1996). The ASM raw data includes 3 types of data products that are tabulated and formatted for telemetry by the two ASM event analyzers in the RXTE Experiment Data System. In the current observing mode, position histograms are accumulated for 90 s ``dwells'' in which the cameras' FOVs are fixed on the sky. Each dwell is followed by a 6$^{o}$ instrument rotation to observe the adjacent patch of sky. The rotation plans for ASM dwell sequences are chosen to avoid having any portions of the Earth in the FOVs of SSCs 1 and 2. The position histograms are accumulated in three energy channels: 1.5-3.0, 3-5, and 5-12 keV. The second data product consists of various measurements from each camera binned in time. These data are useful in studying bright X-ray pulsars, bursters, gamma ray bursts, and several other categories of rapid variability. The ``good events'' from each camera are recorded for each energy channel in 0.125 s bins, while 6 different types of background measures are recorded in 1 s bins. Finally, 64-channel X-ray spectra from each camera are output every 64 s. 
These data provide a means of monitoring the detector gain, since we may integrate the spectra over long time scales to observe the 5.9 keV emission line from the weak $^{55}$Fe calibration sources mounted in each collimator. In addition, ASM spectra may be useful in investigations of spectral changes in very bright X-ray sources. The ASM data archive is a public resource available for both planning purposes and scientific analysis. The ``realtime'' and archived source histories are available in FITS format from the RXTE guest observer facility: http://heasarc.gsfc.nasa.gov/docs/xte/xte\_1st.html. The ASM archive is also available in ASCII table format from the ASM web site at M.I.T.: http://space.mit.edu/XTE/XTE.html. \section{ASM Performance and Systematic Issues} Since this forum addresses many instrumental topics related to all-sky monitors, we briefly review performance and calibration issues encountered with the RXTE ASM. The static model for the position of the cameras and the ASM rotation axis yields a net uncertainty (rms) of about 45'', as determined from observations of Sco X-1. We minimize the effects of this uncertainty by allowing a camera position to float during data analysis, using a chi-square minimization to obtain the best fit for detected X-ray sources that lie in a camera's FOV. We estimate that the ASM will obtain positions for new X-ray sources with uncertainties of 2' to 12' for sources in the range of 1 Crab to 30 mCrab, respectively. The core of the ASM analysis task is to deconvolve the position histograms into the X-ray shadow patterns for individual X-ray sources. Our geometric model for the mask, collimator, and anode alignments was significantly revised in 1997 Jan, leading to a complete re-analysis of the ASM database. The primary feature of this revision was the introduction of time-dependent calibrations of the physical to ``electronic'' position relationships that govern the locations of mask shadows in the position histograms. Observations of Sco X-1 clearly demonstrated variable degrees of evolution of each anode, appearing as secular shifts in the shadow boundaries in the position histograms. The ASM position calibrations are now periodically revised, and the analysis system computes models for individual mask shadows by interpolating or extrapolating from these calibrations for each detector anode. The observations of the Crab Nebula provide a means of gauging the magnitude of systematic noise in the ASM light curves. The observed variance of the derived intensities is slightly larger than the estimated statistical variance, implying a systematic uncertainty of 1.9\% of the mean flux. For faint X-ray sources, systematic effects can be quantified by investigating the light curves for ``blank field'' positions and quiescent X-ray novae. When the ASM source intensities are binned on very long time scales (e.g. 1 year), a +1 mCrab bias appears in the distribution of mean values. The net effect of systematic and statistical limitations allows the ASM to reach 5 mCrab at 3 $\sigma$ significance in a typical daily exposure (2-12 keV). All of these uncertainty estimates pertain to celestial positions well separated from the Galactic center, where source confusion increases systematic problems. The observations of the ``polar'' AM Her serve to illustrate the accuracy achieved in the ASM instrument model. There are no significant detections of this source with the ASM at 1-day time scales.
The global average for this source (adjusting for the bias mentioned above) is only 2.3 mCrab. However, the analysis of the individual measurements clearly reveals a periodic signal at 3.0943 $\pm$ 0.0001 hr, which is consistent with the known binary period (Ritter and Kolb 1995). \section{Highlights of ASM Results} Many of the results of ASM science investigations fall under four categories which are outlined below. Under each topic, we discuss representative cases that illustrate the value of monitoring observations and the effectiveness of the ASM alarm system in attracting observations at other wavelengths. \subsection{Light Curves of X-ray Transients} There has been a remarkable diversity of active X-ray transients during 1996 and 1997. The prolonged outbursts from the 2 sources of relativistic radio jets, GRS1915+105 (X-ray Nova Aql 1992) and GROJ1655-40 (X-ray Nova Sco 1994), further distinguished these sources from the typical soft X-ray transients. The ASM light curve for GRS1915+105 displays the unique behavior of this source, as it wanders through four emission states that appear to be new variants of the ``very high'' X-ray state of black hole binaries (Morgan, Remillard, and Greiner 1997; MGR97). Many observatories, including the revitalized Greenbank Interferometer (http://info.gb.nrao.edu/gbint/GBINT.html), are now monitoring the behavior of GRS1915+105. Many groups have begun the investigation of the complex interrelations between radio emission, IR variations, soft X-ray instability, and hard X-ray flares in GRS1915+105 as a means to probe the production of relativistic plasma that powers the radio flares and jets in this system. \begin{figure*}[t] \centering \psbox[xsize=0.9#1,ysize=0.6#1] {asm_trans1.ps} \caption{ ASM light curves for six bright X-ray transients over the time period 1996 Jan to 1997 May. The top three sources are accreting neutron stars, while the bottom three are black hole binaries or candidates. The Crab Nebula produces an ASM rate of 75.5 c/s. } \end{figure*} Turning to other transient X-ray sources, most of the two outburst cycles in GROJ1744-28 were covered with RXTE, as was the prolonged outburst in a new transient, GRS1739-278. Major eruptions were also seen in the recurrent transients 4U1608-52, Aql X-1, and 4U1630-47. The ASM light curves for these 6 sources are shown in Figure 1. A rapid rise with a brief maximum and an extended period of low-level emission characterizes both neutron star systems Aql X-1 and 4U1608-52. In contrast, 4U1630-47 rises quickly but then reaches its luminosity peak very slowly, with an increasingly hard spectrum, before it decays rapidly (e-fold time of 14 days) with no further signs of brightness enhancement. In the case of GRS1739-278, complex secondary brightening events are seen and the spectrum becomes increasingly soft with time. All of these ASM results complement earlier investigations from more limited X-ray monitoring missions (see Chen, Shrader, and Livio 1997). The shapes and spectral details of these outbursts provide substantial challenges for the disk instability models being applied to soft X-ray transients (e.g. Cannizzo, Chen, and Livio 1995; Narayan, McClintock, and Yi 1996). Particular support for this model was gained with the detection of an optical precursor to the April 1996 X-ray outburst in GROJ1655-40 (Orosz et al. 1997).
The 6-day delay time in the X-ray rise, relative to optical brightening, has been interpreted in favor of advection-dominated accretion during the brief period of X-ray quiescence prior to this event (Hameury et al. 1997). \begin{figure*}[t] \centering \psbox[xsize=0.9#1,ysize=0.6#1] {asm_trans2.ps} \caption{ ASM light curves (1996 Feb to 1997 May) for six faint X-ray transients. Only the Rapid Burster and RXJ17095-266 were known prior to 1996. } \end{figure*} ASM light curves are also available for a number of fainter X-ray transients, as shown in Figure 2. These results are displayed in 1-day or 2-day time bins. Of these 6 cases, only the Rapid Burster and RXJ1709-266 were known prior to 1996. Again, there is a broad diversity in the time scales for both X-ray decay and recurrence. Further examples are needed in order to determine whether the fainter X-ray transients can be understood as more distant examples of the same parent populations that produce the bright transients. \subsection{Binary Periods and Super-Orbital Periods} The ASM has already accumulated about two dozen detections of orbital or ``superorbital'' periods, ranging from the well-known eclipsing neutron star systems with massive supergiant companions to the more subtle and less regular periodicities frequently associated with geometric effects related to the precession of an accretion disk inclined with respect to the binary plane. The paper by Corbet in these conference proceedings is devoted to the theme of orbital and super-orbital periods detected with the RXTE ASM, and readers are referred to that work for further information on this topic. \subsection{State Changes in X-ray Binaries} The topic of aperiodic variability in X-ray binary systems is very rich and complex, and a thorough description of the applications for ASM data is well beyond the scope of this paper. However, one aspect of this phenomenology, that of state changes in X-ray binaries such as Cyg X-1, Cyg X-3, and the ``microquasars'', may serve to illustrate the productivity to be gained by coordinating the observations of wide-field monitors with those of larger, pointing instruments. \begin{figure*}[t] \centering \psbox[xsize=0.8#1,ysize=0.45#1] {asm_state1.ps} \caption{ ASM light curves and hardness ratios for Cyg X-1 and Cyg X-3 over the time period 1996 Feb to 1997 May. HR2 is the ratio of the flux in the 5-12 keV band to that in the 3-5 keV band. } \end{figure*} Both GRS1915+105 and GROJ1655-40 migrate through different X-ray emission states (e.g. MGR97) on time scales of 3-20 weeks, as evidenced by correlated changes in the characteristics of the ASM light curves, the photon spectrum, the shape of the PCA power spectrum, and the properties of X-ray quasi-periodic oscillations (QPO). The cause of these transitions remains one of the fundamental mysteries associated with black hole accretion, the evolution of X-ray novae, and the mechanism of QPOs. A particularly important aspect of this science concerns the stationary, high-frequency QPOs seen at 67 Hz in GRS1915+105 (MGR97) and at 300 Hz in GROJ1655-40 (Remillard et al. 1997). The origin of these QPOs is likely founded in general relativity, with the QPO frequency dependent on the mass and rotation of the black hole. While it is not yet clear how to interpret these high-frequency QPOs, there is strong evidence that their appearance is correlated with X-ray emission states.
The ASM light curves are particularly valuable in providing context for these investigations and in planning future RXTE observations when the X-ray state implies that these QPOs are likely to recur. The ASM has also captured the more classical state transitions to the ``soft X-ray high state'' in both Cyg X-1 and Cyg X-3. Figure 3 shows the ASM light curve and spectral hardness ratio for these sources. In each case, a 30-day interval with moderate spectral softening precedes the main event. In the case of Cyg X-1, the combined coverage of RXTE and BATSE has demonstrated that there is only a slight (15\%) increase in total luminosity during the soft/high state (Zhang et al. 1997). Detailed temporal and spectral analyses of RXTE observations further indicate that the inner edge of the accretion disk is significantly closer to the black hole event horizon in the soft/high state, while the power-law component (associated with inverse Compton emission from energetic electrons) switches from a flatter spectrum with a thermal cutoff, in the low/hard state, to a steeper spectrum without a thermal cutoff from a small emission cloud in the soft/high state (Cui et al. 1997). The full ramifications of these geometric changes are still under investigation. In the case of Cyg X-3, the soft/high transition has also led to the formation of a radio jet with a velocity that may be as high as 0.9 c (Ghigo et al. 1997). This high-state episode is still in progress, and the full story is yet to be told. Again, the monitoring efforts from the ASM, BATSE, and the GBI telescope are vital elements to be combined with observations from the VLA and RXTE pointing instruments in the effort to understand what has happened to Cyg X-3 during 1997 and what conditions led to the succession of the radio events that are still underway. \subsection{Active Galactic Nuclei} \begin{figure*}[t] \centering \psbox[xsize=0.9#1,ysize=0.45#1] {asm_agn.ps} \caption{ ASM light curves of Cen A, an optically obscured AGN, the Seyfert 1.5 galaxy NGC 4151, the quasar 3C273, and three BL Lac objects. The data are displayed in 1-day or 2-day bins. } \end{figure*} With a 30 mCrab detection threshold per SSC-dwell, it was not expected that the RXTE ASM would have the sensitivity necessary to provide useful information on the different subclasses of active galactic nuclei (AGN). There are only a few AGN expected to exceed 3 mCrab at 2-12 keV (e.g. Wood et al. 1984). However, the ability to combine the individual ASM measurements into 1-day or 2-day time bins without encountering significant fluctuations from systematic problems allows the ASM to monitor many nearby AGN for major changes in X-ray brightness. Six examples are shown in Figure 4. Both Cen A and NGC 4151 vary by a factor of two, and the flux from NGC 4151 reaches 24 mCrab at 2-12 keV. A major outburst in the BL Lac object Mkn 501 began during 1997 January (near MJD 50450), with X-ray intensity rising from 3 to 20 mCrab. This outburst is correlated with strong detections of Mkn 501 in TeV $\gamma$-rays with the HEGRA Cherenkov array (Aharonian et al. 1997). The TeV and X-ray components are very likely to be inverse-Compton and synchrotron photons emitted by the same parent population of relativistic electrons. These exciting results and the opportunities for further detections of AGN outbursts have led us to increase the total number of ASM-monitored emission-line AGNs and BL Lac objects from 10 to 74 during 1997 May. \section*{References} \re Aharonian, F. et al.
1997, A\&A, submitted, astro-ph/9706019 \re Cannizzo, J. K., Chen, W., and Livio, M. 1995, ApJ, 454, 880 \re Chen, W., Shrader, C. R., and Livio, M. 1997, ApJ, in press, astro-ph/9707138 \re Cui, W., Zhang, S. N., Focke, W., and Swank, J. H. 1997, ApJ, 484, 383 \re Ghigo, F. D. et al. 1997, Bull. AAS, 29, 841 \re Hameury, J. M., Lasota, J. P., McClintock, J. E., and Narayan, R. 1997, ApJ, submitted, astro-ph/9703095 \re Levine, A. M. et al. 1996, ApJL, 469, L33 \re Morgan, E. H., Remillard, R. A., and Greiner, J. 1997, ApJ, 482, 993 \re Narayan, R., McClintock, J. E., and Yi, I. 1996, ApJ, 457, 821 \re Orosz, J. A., Remillard, R. A., Bailyn, C., and McClintock, J. E. 1997, ApJL, 478, L83 \re Remillard, R. A. et al. 1997, Proc. Texas Symposium on Relativistic Astrophysics (1996 December), in press, astro-ph/9705064 \re Ritter, H., and Kolb, U. 1995, in X-ray Binaries, eds. Lewin and van Paradijs (Cambridge: Cambridge Univ. Press), 578 \re Wood, K. S. et al. 1984, ApJS, 56, 507 \re Zhang, S. N. et al. 1997, ApJL, 477, L95 \label{last} \end{document}
\section{Introduction} \lettrine[lines=2]{H}{yperspectral} images (HSIs) are 3-D data that record the spatial-spectral information of land covers. HSI classification, as an active research area, has received extensive attention in many fields. Recently, deep learning methods have exhibited good classification performance on HSIs \cite{YC2016,HP2021}. Graph theory provides an effective way to represent the similarity relationships of HSI data and to reveal their intrinsic geometric properties. Specifically, graph embedding methods have been used to learn graph representations in a low-dimensional space and reduce the redundant information of HSI data \cite{FL2016,FL2020}. Inspired by the success of deep learning \cite{AS2022}, Kipf \emph{et al.} \cite{TN2017} proposed the graph convolutional network (GCN) to aggregate and transform node features according to the neighborhood structure of graph data. Moreover, aiming at the case of limited labeled samples, a spectral-spatial GCN \cite{AQ2019} was utilized to construct a semisupervised framework for HSI classification. Subsequently, Wan \emph{et al.} proposed a multiscale dynamic GCN (MDGCN) \cite{SW2020} and a dual interactive GCN (DIGCN) \cite{SS2021} to capture the spatial information at different scales. A minibatch GCN model was developed to reduce the complexity of training \cite{DF2020}. Recently, graph topological consistency was utilized to learn the underlying spatial context information of HSIs \cite{YY2021}. In \cite{JY2020}, a GCN was applied to model the temporal correlation between different frames in video. Similar to this temporal relation, a spectral correlation exists in HSI data, which has been exploited for HSI classification \cite{HC2020}. Moreover, to produce high-resolution HSIs, the spectral structures of low-spatial-resolution HSIs and high-spatial-resolution multispectral images were inherited through the construction of a spectral graph along the spectral dimension \cite{KM2018}. Therefore, graph learning or GCNs are promising tools for capturing the correlation in the spectral domain. In recent years, with the development of attention mechanisms in deep learning, they have gradually been applied to GCNs, making these networks more flexible and improving their generalization capability. Pu \emph{et al.} \cite{CH2021} designed a multiscale attention mechanism by distinguishing the importance of different spectral bands. Moreover, by allocating weights jointly in the horizontal and vertical directions, the effectiveness of cross attention was demonstrated in \cite{WC2020}. Nevertheless, there are still some issues in the GCN-based models for HSI classification. Firstly, the initial adjacency matrix may fail to reveal the effective spatial relationship. Secondly, the relationship between spectral bands is underexploited. Thirdly, existing GCNs usually utilize addition, multiplication, or concatenation operations to fuse features, which fail to exploit the complementarity between features. As such, in this letter, an adaptive cross-attention-driven spatial-spectral graph convolutional network (ACSS-GCN) is proposed to jointly extract the spatial-spectral features for HSI classification. The contributions are summarized as follows. \begin{figure*}[t] \centering \setlength{\abovecaptionskip}{-5pt} \begin{center} \includegraphics[ width=5.2in]{acssgcn.jpg} \end{center} \tiny \centering \caption{The framework of the proposed ACSS-GCN model. (a) Data preprocessing. (b) Spatial graph convolutional network (Sa-GCN).
(c) Spectral graph convolutional network (Se-GCN). (d) Graph-based cross-attention fusion module (GCAFM). (e) Classification result. Particularly, ${\oplus}$ and ${\otimes}$ are addition and multiplication operations, respectively. \textcolor[rgb]{0.75,0.56,0.00}{ {{\boxed{ \quad \quad }}}} represents the spatial graph convolution layer. \textcolor[rgb]{0.19,0.33,0.59}{${\boxed{ \quad \quad }}$} is the spectral graph convolution layer. } \end{figure*} \hangindent 2.5em (1) Considering the spatial-spectral characteristic of HSI data, a dual-branch GCN-based spatial-spectral structure is proposed, containing the spatial GCN (Sa-GCN) and spectral GCN (Se-GCN) subnetworks to extract the spatial and spectral features by exploring correlations along the spatial and spectral dimensions, respectively. \hangindent 2.5em (2) A novel graph-based cross-attention fusion module (GCAFM) with a spatial graph attention block (SAGB) and a spectral graph attention block (SEGB) is developed by integrating the attention mechanism into information aggregation over graph-structured data to explore the complementary information of Sa-GCN and Se-GCN. \hangindent 2.5em (3) With the idea of an adaptive graph, a novel ACSS-GCN framework is constructed for the spatial-spectral feature extraction and classification of HSIs. Experimental results demonstrate the superiority of the proposed method over existing GCNs in HSI classification. \section{Methodology} The overall architecture of our ACSS-GCN framework is depicted in Fig. 1. Fig. 1(a) shows the data preprocessing applied to the original HSI data. The backbone of the proposed framework is composed of Sa-GCN and Se-GCN, as shown in Fig. 1(b)-(c), which capture the spatial and spectral information of HSI data. Moreover, Fig. 1(d) displays the GCAFM, followed by the classification result in Fig. 1(e). \subsection*{A. Data Preprocessing} Simple linear iterative clustering \cite{SW2020} and principal component analysis (PCA) are applied to the original HSI data to reduce the computational complexity, as shown in Fig. 1(a). The feature $x$ of each superpixel is defined as the average of all its pixels, and the features of all superpixels in the HSI data are denoted by $X {=\{x_1,x_2,...,x_n\}\in{\mathbb R}^{n \times s}}$, where $n$ and $s$ are the number of superpixels and spectral bands, respectively. A set of superpixels for the $i$th spectral band is represented as $Z_{Sa_i}=\left\{x_{1i},x_{2i},\ldots,x_{ni}\right\}$ for $i=1,2,\ldots,s$, while a set of spectral bands for the $j$th superpixel is denoted as $Z_{Se_j}=\left\{x_{j1},x_{j2},\ldots,x_{js}\right\}$ for $j=1,2,\ldots,n$. \subsection*{B. Spatial-Spectral Feature Extraction} As shown in Fig. 1, a dual-branch GCN-based spatial-spectral structure forms the backbone of our ACSS-GCN framework; it is composed of Sa-GCN and Se-GCN to model the correlations between spatial pixels and between spectral bands. Specifically, Sa-GCN extracts spatial features by exploring the spatial relationship, while Se-GCN learns spectral information by modeling the spectral correlation. \emph{1) Sa-GCN} The relationship between pixels differs across spectral bands. As shown in Fig. 1(b), a spatial graph corresponding to each spectral band is therefore constructed individually to capture the appropriate spatial relationship. For convenience, the set of spatial graphs is written as $\left\{G_{Sa_1},G_{Sa_2},\ldots,G_{Sa_s}\right\}$.
The spatial graph $G_{Sa_i}$ of the $i$th spectral band is defined as $(V_{Sa_i},E_{Sa_i},A_{Sa_i})$, where each superpixel is treated as a graph node, $V_{Sa_i}$ is the set of graph nodes, and $E_{Sa_i}$ is the set of edges. $A_{Sa_i}$ is the adjacency matrix, indicating whether each pair of superpixels is connected. The Gaussian distance is employed to measure the pairwise similarity between these superpixels. The weight between graph nodes $x_{mi}$ and $x_{hi}$ in $A_{Sa_i}$ is written as \begin{equation} {A}_{Sa_{i_{mh}}} = \begin{cases} e^{-\gamma{\left \|x_{mi}-x_{hi}\right \|}^2}, & \emph{if $x_{mi} \in {\mathcal N}(x_{hi})$} \\ 0, & \emph{otherwise} \end{cases}, \end{equation} where $\mathcal N(x_{hi})$ is the set of neighbors of $x_{hi}$, which includes all superpixels directly connected to $x_{hi}$. The value of $\gamma$ is empirically set to 0.5 in the experiments. After that, Sa-GCN, which consists of two graph convolutional layers with the ${ReLU}$ activation, processes each spectral band individually using the corresponding spatial graph. Thus, for the $i$th band, the spatial information is learned by \begin{equation} {H_{Sa_i}^{l+1}} ={ReLU}(L_{Sa_i}{H_{Sa_i}^{l}}{W_{Sa_i}^l}),\\ \end{equation} where $L_{Sa_i}={\tilde{D}_{Sa_i}^{-\frac{1}{2}}}{\tilde{A}_{Sa_i}}{\tilde{D}_{Sa_i}^{-\frac{1}{2}}}$ with ${\tilde{D}_{Sa_i}}$ being the diagonal degree matrix of $\tilde{A}_{Sa_i}=A_{Sa_i}+I$. $H_{Sa_i}^{l}\in{{\mathbb R}^{N \times \frac{F^l}{s}}}$ denotes the spatial information of the $i$th band in the ${l}$th layer with $H_{Sa_i}^{0}=Z_{Sa_i}$, and $F^l$ is the feature dimension in the $l$th layer. ${W_{Sa_i}^l}$ represents the trainable weight for the $i$th band in the ${l}$th layer, and ${ReLU\left(\cdot\right)}$ is the activation function. Finally, the spatial features of all bands are concatenated as \begin{equation} {H_{Sa}} = [H_{Sa_1},H_{Sa_2},\ldots,H_{Sa_s}].\\ \end{equation} \emph{2) Se-GCN} Considering the spectral characteristic of HSIs, an Se-GCN subnetwork is further designed. Similar to \cite{KM2018}, a set of initial spectral graphs $\left\{G_{Se_1},G_{Se_2},\ldots,G_{Se_n}\right\}$ is built to model the spectral relation by using an \emph{s}-nearest-neighbor strategy, as shown in Fig. 1(c). Different from the spatial graph, $G_{Se_j}=(V_{Se_j},E_{Se_j},A_{Se_j})$ is the spectral graph of the $j$th superpixel. The construction of $A_{Se_j}$ is similar to (1); however, the similarity is computed between each pair of bands in the $j$th superpixel. Then, the Se-GCN is built by stacking two graph convolutional layers with the ${ReLU}$ function, in which each superpixel is processed separately to learn spectral information. The output for the $j$th superpixel is expressed as \begin{equation} {H_{Se_j}^{l+1}} ={ReLU}(L_{Se_j}{H_{Se_j}^{l}}{W_{Se}^{l}}),\\ \end{equation} where $L_{Se_j}={\tilde{D}_{Se_j}^{-\frac{1}{2}}}{\tilde{A}_{Se_j}}{\tilde{D}_{Se_j}^{-\frac{1}{2}}}$, ${H_{Se_j}^l}\in{{\mathbb R}^{1 \times {F^l}}}$ is the spectral information of the $j$th superpixel in the $l$th layer with ${H_{Se_j}^0=Z_{Se_j}}$, and the feature dimension of the $l$th layer is the same in Se-GCN and Sa-GCN. ${W_{Se}^l}$ is the trainable weight in the $l$th layer shared by all superpixels, which alleviates the overfitting of the proposed network to a certain extent. Finally, similar to (3), by concatenating the outputs of all superpixels, spectral features $H_{Se}\in{{\mathbb R}^{N \times {F^2}}}$ are extracted in Se-GCN.
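To make the per-band graph construction in (1) and the propagation rule in (2) concrete, the following minimal NumPy sketch processes a single spectral band on a toy set of superpixels. This is a schematic illustration rather than the authors' implementation: the helper names, the toy neighbor list, and the random weights are our own choices.
\begin{verbatim}
# Schematic sketch of Eqs. (1)-(2) for one spectral band.
import numpy as np

def build_adjacency(z, neighbors, gamma=0.5):
    # z: (n,) features of the band; neighbors: pairs of adjacent superpixels
    n = z.shape[0]
    A = np.zeros((n, n))
    for m, h in neighbors:
        w = np.exp(-gamma * (z[m] - z[h]) ** 2)   # Gaussian similarity, Eq. (1)
        A[m, h] = A[h, m] = w
    return A

def gcn_layer(H, A, W):
    # One layer of Eq. (2): ReLU(D^-1/2 (A+I) D^-1/2 H W)
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    L = A_tilde / np.sqrt(np.outer(d, d))         # symmetric normalization
    return np.maximum(L @ H @ W, 0.0)             # ReLU

rng = np.random.default_rng(0)
z = rng.random(6)                                 # n = 6 toy superpixels
neighbors = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
A = build_adjacency(z, neighbors)
H0 = z[:, None]                                   # H^0 = Z_{Sa_i}
W0 = rng.standard_normal((1, 4))
H1 = gcn_layer(H0, A, W0)                         # (6, 4) spatial features
\end{verbatim}
The per-band outputs obtained in this way would then be concatenated as in (3).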
\subsection*{C. GCAFM} Based on the above analysis, Sa-GCN and Se-GCN can extract different and complementary features by modeling the spatial and spectral correlations of HSIs, respectively. As such, a novel GCAFM is designed by introducing an attention mechanism into the aggregation over graph nodes to fully exploit this complementary information; its backbone contains an SAGB, an SEGB, and a fusion block, as illustrated in Fig. 1(d). \emph{1) SAGB} As shown in Fig. 1(c), Se-GCN only uses the spectral information, so the spatial correlation of HSI data is not effectively explored. Therefore, an SAGB module with a spatial graph convolutional layer and two fully connected (FC) layers is designed in Fig. 1(d). In this block, spatial attention weights are generated from the first spectral graph convolutional layer of Se-GCN to guide the extraction of effective spatial features from Sa-GCN. Specifically, a spatial graph convolutional layer is applied to learn spatial information, and two FC layers are utilized to acquire more representational information. The feature from the second FC layer is denoted by ${I_{Sa}\in{{\mathbb R}^{N \times {F^2}}}}$, which is further fed into a softmax function to generate the normalized spatial attention map. The spatial-enhanced features $H_1\in{{\mathbb R}^{N \times {F^2}}}$ can be computed as \begin{equation} \begin{aligned} &{I}_{Sa}=f_{FC}(f_{FC}({ReLU}(L_{Sa}{H_{Se}^1}{W_{Sa}})))\\ &H_1={H}_{Sa}{\odot}softmax({I}_{Sa}), \end{aligned} \end{equation} where $H_{Se}^1\in{{\mathbb R}^{N \times {F^1}}}$ is the output of the first layer of Se-GCN, $f_{FC}(\cdot)$ is the FC layer, ${\odot}$ is the element-wise multiplication operation, and the $softmax\left(\cdot\right)$ function is used for normalization. \emph{2) SEGB} Owing to its band-wise structure, Sa-GCN already contains the primary spectral relationship. Therefore, an SEGB module is built to generate spectral weight coefficients for adaptively selecting the important spectral features from Se-GCN. As shown in Fig. 1(d), the SEGB also contains a spectral graph convolutional layer, two FC layers, and a softmax function, but it calculates the spectral attention map along the spectral dimension from the output of the first spatial graph convolutional layer in Sa-GCN. In addition, the output of the second FC layer is expressed as $I_{Se}\in{{\mathbb R}^{N \times {F^2}}}$. Therefore, the spectral-enhanced features $H_2\in{{\mathbb R}^{N \times {F^2}}}$ are written as \begin{equation} \begin{aligned} &I_{Se}=f_{FC}(f_{FC}({ReLU}(L_{Se}{H_{Sa}^1}{W_{Se}})))\\ &H_2={H}_{Se}{\odot}softmax({I}_{Se}), \end{aligned} \end{equation} where $H_{Sa}^1 \in{{\mathbb R}^{N \times {F^1}}}$ denotes the features from the first layer of Sa-GCN. Finally, in the fusion block, the addition and concatenation operations are applied to fuse the spatial and spectral attention features of Sa-GCN and Se-GCN. Inspired by residual learning, the two fusion methods can be written as \begin{equation} \begin{aligned} &H_{add}=(H_1{\oplus}{H}_{Sa}){\oplus}(H_2{\oplus}{H}_{Se})\\ &H_{con}=\left[H_1{\oplus}{H}_{Sa},H_2{\oplus}{H}_{Se}\right], \end{aligned} \end{equation} where ${\oplus}$ is the element-wise addition. $H_{add}\in{{\mathbb R}^{N \times {F^2}}}$ and $H_{con}\in{{\mathbb R}^{N \times 2{F^2}}}$ denote the spatial-spectral features extracted by these two fusion strategies.
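As a compact illustration of the cross-attention gating in (5)-(6) and the fusion step in (7), the following schematic NumPy sketch implements one attention branch with random placeholder weights. Biases, dropout, and the actual graph Laplacians are omitted, and the choice of the softmax normalization axis is our assumption rather than a detail stated in the text.
\begin{verbatim}
# Schematic sketch of the GCAFM gating and fusion, Eqs. (5)-(7).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_branch(L, H_other_1, W_g, W_fc1, W_fc2, H_self):
    # graph conv on the other branch's first-layer output, two FC layers,
    # softmax attention map, then element-wise gating of the own features
    I = np.maximum(L @ H_other_1 @ W_g, 0.0) @ W_fc1 @ W_fc2
    return H_self * softmax(I)

rng = np.random.default_rng(1)
N, F1, F2 = 6, 40, 20
H_sa, H_se = rng.random((N, F2)), rng.random((N, F2))    # Sa-GCN / Se-GCN outputs
H_sa1, H_se1 = rng.random((N, F1)), rng.random((N, F1))  # first-layer outputs
L_sa = L_se = np.eye(N)                                  # placeholder normalized graphs
W_g = rng.standard_normal((F1, 40))
W_fc1, W_fc2 = rng.standard_normal((40, 25)), rng.standard_normal((25, F2))

H1 = attention_branch(L_sa, H_se1, W_g, W_fc1, W_fc2, H_sa)   # Eq. (5)
H2 = attention_branch(L_se, H_sa1, W_g, W_fc1, W_fc2, H_se)   # Eq. (6)
H_add = (H1 + H_sa) + (H2 + H_se)                             # Eq. (7), addition
H_con = np.concatenate([H1 + H_sa, H2 + H_se], axis=1)        # Eq. (7), concatenation
\end{verbatim}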
\subsection*{D. Adaptive Graph Refinement} The performance of the graph convolution operation depends mainly on the quality of the graph structure, so it is important to find an optimal graph for HSI classification. Due to the effects of noise and complex scenes in HSI data, the spatial and spectral graphs initially constructed as in (1) are not accurate and are not completely suitable for HSI classification. As such, an adaptive matrix $W_p$ is designed to dynamically refine these two graphs, and the corresponding adjacency matrices are updated as \begin{equation} A_o=A_{in}+{\beta}W_pA_{in}, \end{equation} where $A_{in}=A_{Sa_i}$ or $A_{in}=A_{Se_j}$, and ${\beta}$ is a tuning parameter. Specifically, each spatial graph has its own $W_p$, which makes the model more flexible, while all spectral graphs share the same $W_p$ to avoid overfitting; all of these matrices are randomly initialized. Similar to \cite{SW2020}, by optimizing the cross-entropy loss function with full-batch gradient descent, all $W_p$ together with the other trainable parameters of ACSS-GCN are updated during training, thus producing the refined spatial and spectral graphs and extracting more effective spatial-spectral features for HSI classification. \section{Experimental Results} In this section, the performance of the ACSS-GCN is validated on two public HSI data sets, i.e., Indian Pines and University of Pavia. The Indian Pines data set was recorded over northwestern Indiana with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor in 1992 and is made up of 145${\times}$145 pixels with 200 bands. The University of Pavia data set was captured over the University of Pavia, Italy, by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor in 2001 and consists of 610${\times}$340 pixels and 103 spectral bands. In the experiments, 30 pixels of each class are selected as the training set, while 15 pixels are selected for classes whose number of samples does not reach 30. In addition, overall accuracy (OA), average accuracy (AA), and the kappa coefficient ($\kappa$) are used to study the performance of all methods. To evaluate the performance of our proposed method, seven methods, i.e., SVM \cite{CC2011}, 2-D CNN \cite{YC2016}, GCN \cite{TN2017}, FuNet \cite{DF2020}, AMDPCN \cite{CH2021}, MDGCN \cite{SW2020}, and DIGCN \cite{SS2021}, are utilized for comparison. Moreover, the ACSS-GCN models with the two fusion strategies (i.e., addition and concatenation) are denoted as ACSS-GCN-A and ACSS-GCN-C, respectively. \subsection*{A. Parameter Settings} In the experiments, the spectral dimension after PCA is set to 20. In addition, a dropout rate of 0.5 is used for each graph convolutional layer to alleviate overfitting. For Sa-GCN and Se-GCN, the dimensions of the two graph convolutional layers are set to 40 and 20, respectively. The output dimensions of the graph convolutional layer and the two FC layers in the SEGB and SAGB modules are set to 40, 25, and 20. Moreover, an FC layer and a softmax function are applied to classify the HSI samples. To eliminate the deviation caused by random initialization, the average value over 5 repetitions is reported for each quantitative metric. The number of epochs is set to 3000. The adaptive momentum optimizer with a learning rate of 0.005 is used to optimize the cross-entropy loss of the proposed model.
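For reference, the refinement step in (8), whose tuning parameter ${\beta}$ is examined in Table I below, amounts to a one-line update; the following sketch uses a random placeholder $W_p$, whereas in the actual model $W_p$ is learned jointly with the other network parameters.
\begin{verbatim}
# Schematic sketch of the adaptive refinement of Eq. (8).
import numpy as np

def refine_adjacency(A_in, W_p, beta=0.005):
    return A_in + beta * (W_p @ A_in)     # A_o = A_in + beta * W_p * A_in

rng = np.random.default_rng(2)
A_in = rng.random((6, 6)); A_in = (A_in + A_in.T) / 2   # toy symmetric adjacency
W_p = rng.standard_normal((6, 6))                       # placeholder for the learned matrix
A_o = refine_adjacency(A_in, W_p, beta=0.005)
\end{verbatim}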
\begin{table}[H] \centering \setlength{\abovecaptionskip}{-2pt} \renewcommand\thetable{\Roman{table}} \renewcommand\tabcolsep{3.0pt} \caption{Sensitivity Analysis under Different Values of Parameter ${\beta}$} \scriptsize \begin{tabular}{p{1.95cm}<{\centering}p{0.65cm}<{\centering}p{0.65cm}<{\centering}p{0.65cm}<{\centering}p{0.65cm}<{\centering}p{0.65cm}<{\centering}p{0.65cm}<{\centering} p{0.65cm}<{\centering}p{0.65cm}<{\centering}} \specialrule{0.1em}{0.6pt}{0pt} ${\beta}$ &0 & 0.0001 & 0.0005 & 0.001 & 0.005 & 0.01 & 0.05 & 0.1 \\ \hline Indian Pines &64.40 &85.78 &92.90&94.06&\textbf{95.02}&94.19&94.64&94.59 \\ University of Pavia &69.97& 91.93 & 95.77 & \textbf{96.50} & 96.38 & 94.40 & 93.41 & 93.63\\ \toprule [1 pt] \end{tabular} \vspace{-0.25cm} \end{table} Besides, we further investigate the effect of the tuning parameter ${\beta}$ on the classification performance. Table I reports the OA results for different values of ${\beta}$, from which we can observe that the proposed model obtains its best performance when $\beta$ is set to 0.005 and 0.001 for the two HSI data sets, respectively. \subsection*{B. Classification Performance} The classification results of all methods on the two HSI data sets are reported in Tables II-III. We can see that the proposed method outperforms the other methods. On the one hand, Sa-GCN and Se-GCN effectively learn the spatial and spectral correlations of HSI data. On the other hand, the GCAFM adaptively explores the complementary information of the extracted spatial and spectral features and suppresses noise interference in Sa-GCN and Se-GCN. From Tables II-III, compared with the second-best classification performance, achieved by DIGCN or MDGCN, the OA results of ACSS-GCN-A on the two data sets are increased by $1.00\%$ and $1.28\%$, respectively. In particular, the comparison between the two fusion strategies reveals that ACSS-GCN-A achieves the best classification results and brings an OA gain of at least $0.45\%$ over ACSS-GCN-C. In addition, the total training and testing time (in seconds (s)) of all methods on the two data sets is summarized in the last row of Tables II-III. Compared with the other GCN-based methods, the original GCN \cite{TN2017} consumes more time owing to the participation of all pixels in the training step; our method costs only a little more time to construct the spectral graph and run the GCAFM, but achieves the best performance for HSI classification.
\begin{table}[H] \centering \setlength{\abovecaptionskip}{-2pt} \renewcommand\tabcolsep{3.5pt} \label{tab:default} \caption{Classification Results for the Indian Pines Data Set} \scriptsize \begin{tabular}{p{0.65cm}<{\centering}p{0.60cm}<{\centering}p{0.60cm}<{\centering}p{0.60cm}<{\centering}p{0.60cm}<{\centering}p{0.80cm}<{\centering}p{0.60cm}<{\centering}p{0.60cm}<{\centering}p{0.80cm}<{\centering}p{0.80cm}<{\centering}} \specialrule{0.1em}{0pt}{0pt} \multirow{2}{*}{Class} & \multirow{2}{*}{SVM} & 2-D & \multirow{2}{*}{GCN} & \multirow{2}{*}{FuNet} & AMD-& MD-& DI-& {ACSS-} & ACSS- \\ &&CNN&&&PCN&GCN&GCN&GCN-C&GCN-A\\ \hline 1 &\textbf{100.00} &\textbf{100.00} &97.82 &\textbf{100.00} & \textbf{100.00} & \textbf{100.00} &93.75 &\textbf{100.00} & \textbf{100.00}\\ 2 &55.22 &58.58 &48.67 &60.52 &52.79 &78.68 &89.63 &\textbf{92.03} & 89.49\\ 3 &80.50 &85.00 &64.10 &66.75 &57.87 & 94.88 &94.88 &95.86 &\textbf{96.38}\\ 4 &99.52 &\textbf{100.00} &91.98 &90.34 &95.65 & 96.14 &96.16 &\textbf{100.00} & 99.61\\ 5 &77.04 &85.43 &89.86 &91.83 &88.18 & 90.73 &\textbf{98.45} &92.49 & 96.29\\ 6 &76.86 &71.57 &95.75 &\textbf{98.29 } &96.78 & 97.86 &97.43 &97.57 & 96.11\\ 7 &\textbf{100.00} &\textbf{100.00} &92.85 &\textbf{100.00} &\textbf{100.00} &\textbf{100.00} &69.23 &\textbf{100.00} & 90.76\\ 8 &95.76 &96.42 &89.96 &99.78 &97.66 & \textbf{100.00} &\textbf{100.00} &\textbf{100.00} & 99.77\\ 9 &\textbf{100.00} &\textbf{100.00} &\textbf{100.00} &\textbf{100.00} &\textbf{100.00} & \textbf{100.00} & \textbf{100.00} &\textbf{100.00} & \textbf{100.00}\\ 10 &56.47 &59.76 &69.75 &71.23 &72.13 & \textbf{94.16} &89.92 &83.97 & 90.02\\ 11 &59.09 &64.12 &61.22 &56.53 &72.64 & \textbf{96.58} &92.21 &94.22 &94.49\\ 12 &74.78 &76.38 &53.29 &70.52 &54.71 & 90.76 &84.55 &90.94 & \textbf{91.79}\\ 13 &98.86 &98.86 &97.56 &\textbf{100.00} & 99.42 &97.71 &97.14 &97.14 &\textbf{100.00}\\ 14 &87.61 &86.48 &85.30 &84.21 &86.07 & 99.68 &99.76 &\textbf{99.83} & 99.66\\ 15 &94.94 &93.54 &46.63 &77.81 &75.84& \textbf{99.44} &98.03 &98.60 & 98.82\\ 16 &\textbf{100.00} &\textbf{100.00} &96.77 &\textbf{100.00} &\textbf{100.00} & \textbf{100.00} &98.41 &\textbf{100.00} & 99.68\\ \hline OA &71.57 &73.94 &69.71 &72.93 &74.15 & 93.84 &93.76 &94.39 & \textbf{94.84}\\ AA &84.79 &86.00 &80.10 &85.49 &84.38 & 96.04 &93.72 &96.41 & \textbf{96.43}\\ \(\kappa\) &67.97 &70.50 &65.86 &69.45 &70.56 & 92.94 &92.86 &93.58 & \textbf{94.10}\\ \hline Time(s) & 398.56 & 463.55 & 137.23 & 53.67 & 57.17 &32.38 & 23.17 & 86.69 & 82.14\\ \toprule [1 pt] \end{tabular} \vspace{-0.25cm} \end{table} \begin{table}[H] \centering \setlength{\abovecaptionskip}{-2pt} \renewcommand\thetable{\Roman{table}} \renewcommand\tabcolsep{3.0pt} \caption{Classification Results for the University of Pavia Data Set} \scriptsize \begin{tabular}{p{0.65cm}<{\centering}p{0.65cm}<{\centering}p{0.70cm}<{\centering}p{0.70cm}<{\centering}p{0.65cm}<{\centering}p{0.75cm}<{\centering}p{0.65cm}<{\centering}p{0.65cm}<{\centering}p{0.80cm}<{\centering}p{0.80cm}<{\centering}} \specialrule{0.1em}{0pt}{0pt} \multirow{2}{*}{Class} & \multirow{2}{*}{SVM} & 2-D & \multirow{2}{*}{GCN} & \multirow{2}{*}{FuNet} & AMD-& MD-& DI- & {ACSS-} & ACSS- \\ &&CNN&&&PCN&GCN&GCN&GCN-C&GCN-A\\ \hline 1 &63.29 &58.86 &74.00 &82.85 & 81.30& \textbf{92.27} &88.50 &88.35 & 86.12\\ 2 &62.91 &77.79 &61.95 &93.87 &97.89 &94.44 &91.78 &97.55 & \textbf{98.96}\\ 3 &41.32 &52.58 &62.12 &79.07 &76.94 & 88.88 &94.20 &\textbf{99.85} &99.08\\ 4 &71.65 &79.23 &95.52 &\textbf{95.81} &91.33 & 94.26 &91.76 &92.04 & 92.35\\ 5 
&83.65 &88.06 &97.62 &99.70 &\textbf{99.77} & 99.01 &99.39 &99.01 & 99.01\\ 6 &51.09 &54.51 &44.87 &71.51 &67.87 & \textbf{100.00} &\textbf{100.00} &\textbf{100.00} & \textbf{100.00}\\ 7 &51.53 &61.76 &82.85 &86.77 &84.61 &\textbf{99.69} &\textbf{99.69} &99.63 & 97.83\\ 8 &55.78 &76.59 &85.25 &70.45 &71.93 & 98.49 &\textbf{99.45} &96.82 & 98.14\\ 9 &88.66 &94.33 & 98.83 &99.34 & \textbf{99.89} &96.51 &87.68 &88.85 & 96.07\\ \hline OA &61.39 &71.07 &68.81 &87.02 &87.76 & 95.17 &93.40 &95.99 & \textbf{96.45}\\ AA &63.32 &71.52 &78.11 &86.60 &85.73 & 95.95 &94.72 &95.79 & \textbf{96.40}\\ \(\kappa\) &51.13 &61.22 &58.38 &82.76 &60.77 & 93.65 &91.38 &94.69 & \textbf{95.30}\\ \hline Time(s) & 163.16 & 1849.85 & 2093.78 &61.63 & 62.00 & 164.31 & 49.70 & 271.26 & 266.06\\ \toprule[1pt] \end{tabular} \vspace{-0.25cm} \end{table} \begin{figure}[ht!] \tiny \centering \mbox{ \subfigure[\label{subfig:isp}]{\includegraphics[width=0.15\linewidth]{gtindian.jpg}} \subfigure[\label{subfig:isp}]{\includegraphics[width=0.15\linewidth]{svmindian.jpg}} \subfigure[\label{subfig:isp}]{\includegraphics[width=0.15\linewidth]{dcnnindian.jpg}} \subfigure[\label{subfig:isp}]{\includegraphics[width=0.15\linewidth]{gcnindian.jpg}} \subfigure[\label{subfig:isp}]{\includegraphics[width=0.15\linewidth]{funetmindian.jpg}} }\\ \mbox{ \subfigure[\label{subfig:isp}]{\includegraphics[width=0.16\linewidth]{amdpcnindian.jpg}} \subfigure[\label{subfig:isp}]{\includegraphics[width=0.15\linewidth]{mdgcnindian.jpg}} \subfigure[\label{subfig:isp}]{\includegraphics[width=0.15\linewidth]{digcnindian.jpg}} \subfigure[\label{subfig:isp}]{\includegraphics[width=0.15\linewidth]{ourcindian.jpg}} \subfigure[\label{subfig:isp}]{\includegraphics[width=0.15\linewidth]{ouraindian.jpg}} } \caption{Classification maps for the Indian Pines Data Set. (a) Ground-truth map. (b) SVM. (c) 2-D CNN. (d) GCN. (e) FuNet. (f) AMDPCN. (g) MDGCN. (h) DIGCN. (i) ACSS-GCN-C. (j) ACSS-GCN-A.} \label{fig:hh_are} \end{figure} To analyze the classification performance more intuitively, the visualized results of all methods on the Indian Pines data set are shown in Fig. 2, from which it can be observed that the classification map of our proposed model is closest to the ground-truth map and contains the least noise and the fewest misclassifications. In particular, for the GCN, FuNet, and AMDPCN methods, pepper-noise-like mistakes can be observed in certain regions due to the loss of spatial context information, leading to poor classification results. The above experimental results further illustrate the superiority of ACSS-GCN. \subsection*{C. Sensitivity Comparison and Analysis under Small Samples} To further illustrate the classification performance under a small number of training samples, 5, 10, 15, 20, 25, and 30 samples of each class are randomly selected from these two HSI data sets for training. Fig. 3 shows the OA curves with different numbers of training samples. As expected, the performance of all methods improves as the number of training samples increases. The ACSS-GCN method outperforms the other methods, which verifies its advantages. It is noted that ACSS-GCN-A is superior to ACSS-GCN-C in most cases, which may be because the concatenation operation generates more redundant features than the addition operation, affecting the final classification performance. These experimental results again show that the ACSS-GCN model is superior to the other considered models.
\begin{figure}[h] \centering \renewcommand\tabcolsep{1.0pt} \scriptsize \begin{tabular}{cc} \begin{minipage}[t]{0.35\linewidth} \centering \includegraphics[height=1.1in,width=1.4in]{oaindian.jpg} \end{minipage} &\begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[height=1.1in,width=1.4in]{oapavia.jpg} \end{minipage} \\ (a) Indian Pines& (b) University of Pavia \\ \end{tabular} \caption{OA (\%) of all methods with different numbers of training samples for the two HSI data sets. (a) Indian Pines. (b) University of Pavia.} \end{figure} \subsection*{D. Ablation Study} To highlight the effectiveness of Se-GCN, Sa-GCN, and the GCAFM in our ACSS-GCN-A, detailed ablation studies are conducted to see how they contribute to the classification accuracy, where ASS-GCN-A is a variant of ACSS-GCN-A obtained by replacing the GCAFM with an element-wise addition operation. From Table IV, by directly adding the features of Se-GCN and Sa-GCN, ASS-GCN-A can already extract joint spatial-spectral features and achieve good classification performance on HSIs. Furthermore, by introducing the idea of cross attention, the GCAFM is designed to fully explore the complementary information between Se-GCN and Sa-GCN. Compared with ASS-GCN-A, the GCAFM brings 0.76\% and 0.49\% OA gains to ACSS-GCN-A on the two data sets, respectively. Experimental results show that Se-GCN and Sa-GCN are effective and complementary, and the joint learning of spatial-spectral information can further improve the performance of HSI classification. Besides, the GCAFM is beneficial for improving the classification accuracy of ACSS-GCN. \begin{table}[H] \centering \setlength{\abovecaptionskip}{-2pt} \renewcommand\thetable{\Roman{table}} \renewcommand\tabcolsep{3.0pt} \caption{Ablation Studies of ACSS-GCN on the Indian Pines and University of Pavia Data Sets} \scriptsize \begin{tabular}{p{0.80cm}<{\centering} |p{0.65cm}<{\centering} p{0.65cm}<{\centering} p{0.80cm}<{\centering} p{0.80cm}<{\centering}|p{0.65cm}<{\centering} p{0.65cm}<{\centering} p{0.80cm}<{\centering} p{0.80cm}<{\centering}} \specialrule{0.1em}{0pt}{0pt} \hline \multirow{3}{*}{Methods} & \multicolumn{4}{c|}{Indian Pines}& \multicolumn{4}{c}{University of Pavia} \\ &Se-& Sa- & ASS-&ACSS- & Se-& Sa- & ASS-&ACSS- \\ &GCN&GCN&GCN-A&GCN-A &GCN&GCN&GCN-A&GCN-A\\ \hline OA&50.52&93.87&94.08&\textbf{94.84} &43.54&95.69&95.96&\textbf{96.45}\\ \(\kappa\)&44.82&92.98&93.23&\textbf{94.10} &33.52&94.29&94.64&\textbf{95.30}\\ \toprule[1pt] \end{tabular} \end{table} \section{Conclusion} In this letter, a novel ACSS-GCN framework has been proposed for HSI classification. Firstly, a dual-branch GCN-based spatial-spectral structure is proposed as the backbone of ACSS-GCN to jointly learn the spatial and spectral information of HSI data. Then, the GCAFM is designed to explore the complementary information of Sa-GCN and Se-GCN for better HSI classification. Finally, an adaptive graph is proposed to dynamically update the spatial and spectral graphs during the backpropagation of the whole model. Experimental results on two HSI data sets show that our method offers better classification performance than other GCN-based methods. \ifCLASSOPTIONcaptionsoff \newpage \fi \input{main.bbl} \end{document}
\section{Introduction} \label{intro} The recent observation of a scalar Higgs particle as the final missing ingredient of the standard model in particle physics~\cite{PhysLettB.716.1, PhysLettB.716.30} has led to a resurgence of interest in the question of whether a well-defined excitation associated with an amplitude mode is present and observable in condensed matter systems with a broken continuous symmetry~\cite{PekkerVarma}. A case in point are neutral superfluids, in which a global $U(1)$ symmetry is broken as the simplest example of continuous symmetry breaking. The dynamics of the superfluid order parameter, however, is generically of the Gross-Pitaevskii type, i.e. first order in time. Despite the Mexican hat structure of the effective potential, there is thus only a Goldstone but no Higgs mode. Indeed, a necessary condition for a Higgs mode in the non-relativistic context of condensed matter systems is an emergent Lorentz invariance. This appears, for instance, in the superfluid phase of ultracold bosons in an optical lattice at integer filling, where the particle-hole symmetry close to the transition to a Mott-insulating phase gives rise to an effectively Lorentz invariant theory~\cite{PRB.75.085106, PRL.109.010401}. In this regime, a Higgs mode has indeed been observed from the absorbed energy in response to shaking the optical lattice~\cite{nature11255}. The associated effective Higgs mass vanishes in a continuous manner at the transition to a Mott insulator, essentially like a mirror image of the gapped particle-hole excitations in the incompressible phase. A Higgs mode has also been observed via neutron scattering in the antiferromagnetically ordered phase of TlCuCl$_3$ at high pressure~\cite{PRL.100.205701, naturephysics.10.373}. In both cases, the evolution of the Higgs mode can be followed into the regime where the order parameter vanishes near a quantum critical point. Much earlier, in superconductors, the presence of an amplitude mode had been inferred by Sooryakumar and Klein from the observation of a distinct peak in Raman scattering from NbSe$_2$ in a regime where superconductivity coexists with a charge density wave~\cite{PRL.45.660}. In this special situation, the superconducting order parameter directly couples to light because the modulation of the charge density wave by the electromagnetic field in the $A_{1g}$ symmetry of the Raman response leads to a periodic modulation of the density of states at the Fermi surface and thus of the superconducting gap~\cite{PRL.47.811}. The amplitude mode associated with oscillations of the gap magnitude can thus be directly seen in Raman scattering. This interpretation of the experiment has recently been confirmed within a detailed theoretical analysis by Cea and Benfatto~\cite{PRB.90.224515} and also in an independent experiment by M\'easson et al.~\cite{PRB.89.060503}. \\ In our present paper, we analyze the possibility of observing, via Raman scattering, a Higgs mode in quantum spin models whose ground state exhibits antiferromagnetic order in two dimensions. This problem is of interest for at least two different reasons: First of all, there are experimental data on clean samples of undoped cuprates~\cite{PRB.78.020511, EPJST.188.131} for which no theoretical description seems to have been given so far.
Second, due to their relevance for the parent compounds of high temperature superconductors, quantum antiferromagnets in two dimensions are among the most intensively studied examples of a broken continuous symmetry in condensed matter. In particular, the issue of a well-defined Higgs excitation is nontrivial in two dimensions (2D). Indeed, in this case the phase space for the decay of a Higgs mode into two Goldstone modes is very large. As a result, the Higgs spectrum at zero wave vector extends down to zero energy, diverging with a universal power law $\sim 1/\omega$~\cite{PRB.49.11919, PRL.92.027203}, which has in fact been observed in neutron scattering from the N\'eel state of undoped La$_2$CuO$_4$~\cite{PRL.105.247001}. To leading order, this low energy singularity is due to two-magnon processes. The presence of a large weight in the spectrum of the amplitude mode at low energies suggests that the Higgs mode is overdamped in 2D. As emphasized by Podolsky, Auerbach and Arovas~\cite{PRB.84.174522}, however, this is not the case in general. In particular, response functions which, unlike neutron scattering, couple to the square of the order parameter are expected to show a clear Higgs peak even in two dimensions because for them the low frequency response is suppressed by a factor $\omega^3$. Precisely this type of response is measured in the lattice shaking experiment near the superfluid to Mott-insulator transition performed by Endres et al.~\cite{nature11255}. As a result, a sharp peak due to a Higgs excitation appears even in 2D, as supported by a detailed theoretical analysis of this problem, both numerically~\cite{PRL.109.010401, PRL.110.170403, PRL.110.140401} and analytically~\cite{PRB.86.054508}. In particular, the Higgs mode remains well defined even near the quantum critical point because both its energy and width vanish with the same power of the distance from the critical point~\cite{PRL.109.010401, PRL.110.170403, PRL.110.140401, PRB.86.054508, PRB.89.180501}. Regarding the N\'eel ordered state of 2D quantum antiferromagnets, Raman scattering seems to be ideally suited to observe the amplitude mode since it couples to the square of the order parameter~\cite{PRB.84.174522}. Experimentally, however, no indications of a Higgs mode are found~\cite{PRB.78.020511}. Instead, there is a broad, asymmetric peak which appears to be due to a strongly renormalized two-magnon excitation. As will be shown below, this observation can be explained both in qualitative and quantitative terms within a rather simple O(3) model of the N\'eel ordered phase. In particular, the absence of a separate peak associated with the Higgs mode is a consequence of the fact that the leading order contributions from the amplitude mode vanish because of the nontrivial Raman vertex in the relevant $B_{1g}$ symmetry. Nevertheless, the Higgs mode turns out to have a crucial effect on the detailed form of the Raman spectrum. In the dominant two-magnon response, it shows up by mediating the magnon-magnon interactions, which lead both to a downward shift of the peak compared to a simple spin-wave analysis~\cite{J.Phys.C.2.2012, PRB.4.922} and to a characteristic asymmetric line shape. Moreover, the Higgs mode gives rise to a broad continuum above the two-magnon peak. Both features are found in experimental spectra for La$_2$CuO$_4$~\cite{EPJST.188.131}, YBa$_2$Cu$_3$O$_6$~\cite{PRB.78.020511} and also in still unpublished data on Sr$_2$IrO$_4$~\cite{Gretarsson}.
Using the well-known values for the antiferromagnetic exchange coupling $J$ in these systems, the data turn out to be in very good agreement with the theory up to frequencies of order $6J$ and beyond. As will be shown below, the agreement between experiment and theory relies crucially on whether the Higgs mode contribution to the Raman response is included or not. Raman scattering can therefore indeed be used to probe the existence of a Higgs mode in 2D quantum antiferromagnets. \section{Raman spectrum} \label{sec:Raman spectrum} \subsection{Effective light scattering operator} \label{sec:Effective light scattering operator} The differential cross section for Raman scattering is given by~\cite{RevModPhys.79.175} \begin{equation} \frac{d^2\sigma}{d\Omega d\omega} = \hbar r_0^2\frac{\omega_i}{\omega_f}R(i\rightarrow f) \label{eq:CrossSection} \end{equation} where $r_0=e^2/mc^2$ is the Thomson radius and $R(i \rightarrow f)$ is the transition rate from the initial photon state $\ket{\mathbf{k}_i, \mathbf{e}_i}$ to the final state $\ket{\mathbf{k}_f, \mathbf{e}_f}$. The rate is determined by Fermi's golden rule~\cite{RevModPhys.79.175, PRL.65.1068} \begin{equation} R(i\rightarrow f)=\frac{1}{\mathcal{Z}} \sum_{I, F} e^{-\beta E_I} |M_{F, I}|^2 \delta(E_F-E_I-\hbar\omega) \label{eq:TRate1} \end{equation} where $\ket{I}$ and $\ket{F}$ are the initial and final states of the sample, $\hat{M}$ is the effective light scattering operator and $\omega=\omega_i-\omega_f$ is the frequency transfer. For the specific example of the Mott-insulating, antiferromagnetic phase of undoped cuprates, an appropriate microscopic Hamiltonian is the nearest-neighbor Heisenberg model on a square lattice, which arises from a superexchange mechanism \cite{PhysRev.115.2}. The associated effective light scattering operator is then given by the Fleury-Loudon Hamiltonian in leading order of a moment expansion~\cite{PhysRev.166.514, PRL.65.1068, PhysLett.3.189}. Performing the common symmetry decomposition of the Raman scattering operator~\cite{RevModPhys.79.175}, it turns out that only the $B_{1g}$-mode is Raman active because the operators of the other symmetry modes commute with the Heisenberg Hamiltonian. Restricting the interaction to this mode, the reduced effective Hamiltonian is given by \begin{equation} \hat{H}_\text{FL}=2 B\, P(\mathbf{e}_i, \mathbf{e}_f) \sum_j \left(\hat{\mathbf{S}}_{\mathbf{x}_j}\cdot\hat{\mathbf{S}}_{\mathbf{x}_j+\mathbf{\hat{x}}} - \hat{\mathbf{S}}_{\mathbf{x}_j}\cdot\hat{\mathbf{S}}_{\mathbf{x}_j+\mathbf{\hat{y}}}\right) \label{eq:FL} \end{equation} with $P(\mathbf{e}_i, \mathbf{e}_f)=e_i^x e_f^x - e_i^y e_f^y$ and $B\sim t^2/(U-\hbar\omega_i)$. As has been shown by Shastry and Shraiman~\cite{PRL.65.1068}, there is also a response in other symmetry modes associated with ring exchanges on a plaquette or the fluctuations of a chiral spin operator \footnote{Regarding the latter, a contribution from a spin-chirality term $\sim\mathbf{S}_i\cdot(\mathbf{S}_j\wedge\mathbf{S}_k)$ is not present in Raman scattering from cuprates, where the magnetic order arises from electrons on a square lattice with nearest neighbor hopping. It may appear, however, in more complex systems like a Kagom\'e lattice where it may be relevant to detect spin-liquid phases, see~\cite{PRB.81.024414}. }. They require an extension of the microscopic Hamiltonian beyond the Heisenberg model with nearest neighbor exchange only and thus lead to corrections to the simple form of the Fleury-Loudon Hamiltonian~\eqref{eq:FL}.
In our present work, we do not consider such extensions. In fact, as will be shown below, already the leading order Hamiltonian~\eqref{eq:FL}, which applies to 2D quantum antiferromagnets with a well-defined N\'eel order, properly accounts for the essential features of the Raman spectrum. Apart from microscopic prefactors, the transition rate in Eq.~\eqref{eq:TRate1} can then be written as the Fourier transform of the van Hove function $S(t)$ of $\hat{H}_\mathrm{FL}$ \begin{align} R(i\rightarrow f)&=\frac{e^2}{\hbar^3 c^2}g(k_i)g(k_f)\int\limits_{-\infty}^{\infty}dt \text{ } e^{i\omega t} \langle \hat{H}_\text{FL}(t) \hat{H}_\text{FL}(0)\rangle_T\notag\\ &=\frac{e^2}{\hbar^3 c^2}g(k_i)g(k_f)S(\omega) \label{eq:TRate2} \end{align} The resulting Raman response function $S(\omega)$ is connected to an associated spectral function $\chi''_\text{Raman}(\omega)$ by the fluctuation-dissipation theorem. At zero temperature, this takes the simple form \begin{equation} S(\omega)=2\hbar\chi''_\text{Raman}(\omega)\theta(\omega)\equiv 2\hbar\text{ Im}\chi_\text{Raman}(\omega+i0)\theta(\omega) \end{equation} In practice, the spectral function will be determined from the imaginary-time ordered correlation function \begin{equation} \chi_\text{Raman}(\tau)=\frac{1}{\hbar}\left\langle T_\tau \left[\hat{H}_\text{FL}(\tau)\hat{H}_\text{FL}(0)\right]\right\rangle \end{equation} by analytic continuation to real frequencies. \subsection{Field theoretic formalism} \label{sec:Field theoretic formalism} In order to disentangle the contributions from magnons and the Higgs mode to the Raman spectrum, it is both necessary and convenient to replace the spin operators of the microscopic Heisenberg model by a coarse-grained description in which the relevant excitations appear more directly. Such an effective description is provided by the $O(3)$ nonlinear $\sigma$-model (NL$\sigma$M), which is the continuum field theory of the antiferromagnetic Heisenberg model as used by Chakravarty, Halperin and Nelson~\cite{PRB.39.2344}. The associated effective action can be written in an apparently relativistically invariant form \begin{equation} S\left[\mathbf{n}\right]=\frac{1}{2g}\int_\Lambda d^3x~\left(\partial_\mu\mathbf{n}\right)^2 \text{ , } g=\frac{\hbar c_s}{\rho_s} \label{eq:NLM} \end{equation} with a three-vector $(x^\mu)=(c_s\tau, \mathbf{x})$ and spin wave velocity $c_s$. Apart from $c_s$ and the momentum cutoff $\Lambda$, the dimensionless coupling constant $\tilde{g}=g\Lambda$ of the nonlinear $\sigma$-model contains the spin stiffness $\rho_s$, which is of the order of the exchange coupling constant of the underlying Heisenberg model. The slowly varying order parameter field $\mathbf{n}(x)$ is a three component field which describes deviations from perfect N\'eel order. It is normalized by $\mathbf{n}^2=1$. On the microscopic level, the physical meaning of the field $\mathbf{n}$ becomes evident by the mapping \begin{equation} \frac{\hat{\mathbf{S}}(\mathbf{x}_j)}{\hbar S} \rightarrow (-1)^{\mathrm{sgn}( \mathbf{x}_j)} \mathbf{n}(\mathbf{x}_j) \sqrt{1-a^{4}\mathbf{L}^2(\mathbf{x}_j)}+a^2 \mathbf{L}(\mathbf{x}_j) \label{eq:Haldane} \end{equation} between spin operators in the semiclassical limit $S\gg 1$ and bosonic fields, originally due to Haldane \cite{PRL.50.1153, PhysLettA.93.464}. Here $a$ is the lattice constant and the angular momentum field $\mathbf{L}$ is constrained by $\mathbf{n}\cdot\mathbf{L}=0$, $a^{4}\mathbf{L}^2\ll1$.
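As an aside, for small clusters the $T=0$ Raman spectrum associated with the transition rate~\eqref{eq:TRate1} can be obtained directly by exact diagonalization of the Heisenberg model together with the Fleury-Loudon operator~\eqref{eq:FL}, without any continuum approximation. The following schematic NumPy sketch does this for a $2\times 3$ square-lattice patch with open boundaries; the cluster size, the Lorentzian broadening and the omission of all prefactors are our own illustrative choices, not part of the analysis presented here.
\begin{verbatim}
# Exact-diagonalization sketch of the T=0 B_1g Raman spectrum of a small
# Heisenberg cluster (arbitrary units, Lorentzian broadening).
import numpy as np

Lx, Ly, J = 3, 2, 1.0
sites = [(x, y) for y in range(Ly) for x in range(Lx)]
idx = {s: i for i, s in enumerate(sites)}
n = len(sites)

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def embed(op, i):
    # single-site spin operator acting on site i of the 2^n dimensional space
    mats = [np.eye(2, dtype=complex)] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

Sx = [embed(sx, i) for i in range(n)]
Sy = [embed(sy, i) for i in range(n)]
Sz = [embed(sz, i) for i in range(n)]
bond = lambda i, j: Sx[i] @ Sx[j] + Sy[i] @ Sy[j] + Sz[i] @ Sz[j]

H = np.zeros((2**n, 2**n), dtype=complex)
O_B1g = np.zeros_like(H)          # x-bonds minus y-bonds, as in the B_1g operator
for (x, y) in sites:
    if (x + 1, y) in idx:
        b = bond(idx[(x, y)], idx[(x + 1, y)])
        H += J * b; O_B1g += b
    if (x, y + 1) in idx:
        b = bond(idx[(x, y)], idx[(x, y + 1)])
        H += J * b; O_B1g -= b

E, V = np.linalg.eigh(H)
amps = V.conj().T @ (O_B1g @ V[:, 0])       # <f|O|0> for all eigenstates
omega = np.linspace(0.0, 6 * J, 400)
eta = 0.1 * J                               # broadening
chi = np.zeros_like(omega)
for Ef, a in zip(E, amps):
    if Ef - E[0] > 1e-8:                    # inelastic part only
        chi += abs(a)**2 * eta / ((omega - (Ef - E[0]))**2 + eta**2)
\end{verbatim}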
Using Eq.~\eqref{eq:Haldane} to express the spin operators in the Fleury-Loudon Hamiltonian~\eqref{eq:FL} in terms of the order parameter field, it turns out that the Raman spectrum requires determining the connected correlation function of the following operator in the NL$\sigma$M \begin{align} O(\tau) &= 2B\,\hbar^2S^2 P(\mathbf{e}_i, \mathbf{e}_f) \int d^2x \text{ }\sigma_{ij}^z \partial_i \mathbf{n}(\mathbf{x}, \tau)\cdot\partial_j \mathbf{n}(\mathbf{x}, \tau)\notag \\ &=4B\, \hbar^2 S^2 P(\mathbf{e}_i, \mathbf{e}_f) \int\frac{d^2 k}{(2\pi)^2} \gamma(\mathbf{k}) \left|\mathbf{n}(\mathbf{k}, \tau)\right|^2 \label{eq:RamanOxy} \end{align} Here $\gamma(\mathbf{k})=\tfrac{1}{2}(k_x^2-k_y^2)$ is the continuum form of the $B_{1g}$-symmetry factor. Unfortunately, the combination of the nontrivial vertices in the NL$\sigma$M, which appear beyond a simple Gaussian approximation due to the constraint $\mathbf{n}^2=1$, and the complicated form of the operator~\eqref{eq:RamanOxy} makes it very hard to calculate the Raman spectrum directly within the NL$\sigma$M. Moreover, it is rather difficult to disentangle the contributions which arise from magnons or the Higgs excitation in the standard decomposition $\mathbf{n}=(\bm{\pi}, \sqrt{1-\bm{\pi}^2})$ which only involves the two-component field $\bm{\pi}$ associated with the magnons. It is therefore more convenient to use a soft-spin description of the order parameter within a linear $O(3)$ $\sigma$-model, whose effective action \begin{align} S\left[\bm{\Phi}\right] = \frac{1}{2g} \int_\Lambda d^{3} x \left[\left(\partial_\mu\bm{\Phi}\right)^2+\frac{m_0^2}{12}\left(\left|\bm{\Phi}\right|^2-3\right)^2\right] \label{eq:linearON} \end{align} contains the bare mass $m_0$ of the amplitude mode as an additional parameter. It determines the stiffness for longitudinal fluctuations in the N\'eel ordered phase, which arise from the fact that for quantum spins the operator $\hat{S}^z_{\mathbf{Q}_{\rm AF}}$, which determines the long range antiferromagnetic order at the wave vector $\mathbf{Q}_{\rm AF}$, has finite fluctuations along the $z$-direction\footnote{For a discussion of these fluctuations on a microscopic basis and in the context of neutron scattering, see~\cite{PRB.72.224511}.}. In general, the coupling constant $g$ and $m_0$ are independent parameters. It is only in the scaling regime close to the quantum critical point $\tilde{g}=\tilde{g}_c$ for the loss of N\'eel order that the value of the bare Higgs mass is irrelevant and the $O(3)$ $\sigma$-model becomes completely equivalent to the NL$\sigma$M~\cite{PRB.86.054508}. In fact, as will be shown below, the dominant features of the Raman spectrum are not very sensitive to the value of the Higgs mass $m_0$ even in the opposite regime $\tilde{g}\ll \tilde{g}_c$ of well-defined N\'eel order. They are determined essentially by the spin stiffness or the equivalent Heisenberg exchange energy $J$, which is known rather accurately from neutron scattering data. An intuitive argument for why it is possible to replace the fixed-length NL$\sigma$M with a soft-spin model relies on viewing the linear $\sigma$-model as an effective low-energy theory of the NL$\sigma$M, where short-wavelength fluctuations have been integrated out~\cite{Sachdev}. The N\'eel field $\mathbf{n}(\mathbf{x})$ is therefore averaged over some domain in real space and the resulting averaged field $\tilde{\bm{\Phi}}$ is no longer constrained to have a fixed length.
Instead, it is subject to a potential $V(\tilde{\bm{\Phi}})$ which exhibits a minimum at a finite vacuum expectation value (VEV), chosen to be $\langle\left|\bm{\Phi}\right|\rangle=\sqrt{3}$ in Eq.~\eqref{eq:linearON}. The magnitude of the fluctuations around the minimum is controlled by the dimensionless mass parameter $\tilde{m}_0=m_0/\Lambda$, which is expected to be large compared to one in the semiclassical limit $S\gg 1$, where fluctuations of the order parameter magnitude vanish. The dynamics of the averaged field is fixed by the condition that the magnon dispersion $\omega_k=c_s|\mathbf{k}|$ is linear in momentum, giving rise to an effectively Lorentz invariant action. In order to calculate the Raman spectrum within the linear $\sigma$-model~\eqref{eq:linearON}, we use the standard parametrization for the field $\bm{\Phi}$ in the symmetry-broken phase: \begin{equation} \bm{\Phi}=\left(\bm{\pi}, r\sqrt{3} + \sigma\right)\, . \label{eq:para} \end{equation} The parameter $r=r(g, \Lambda)$ incorporates the renormalization of the VEV by quantum fluctuations and has to be determined from the condition $\langle\sigma\rangle=0$. The components of $\bm{\pi}$ are the two massless antiferromagnetic magnons, while $\sigma$ denotes the massive amplitude or Higgs mode. Expressed in terms of the relativistic three-momentum $k=(\omega/c_s,\mathbf{k})$, the free propagators for the fields are \begin{align} &G_{\pi_i\pi_j}^0(k)=\langle\pi_i(k)\pi_j(-k)\rangle=\delta_{ij} G_{\pi\pi}^0(k)=\delta_{ij}\frac{g}{k^2}\label{eq:MPropLO}\\ &G_{\sigma\sigma}^0(k)=\langle\sigma(k)\sigma(-k)\rangle=\frac{g}{k^2+m_0^2}\label{eq:HPropLO} \end{align} These propagators will change with increasing strength $g$ of the quantum fluctuations. In particular, the exact propagator \begin{equation} G_{\sigma\sigma}(k)=\frac{g}{k^2+m_0^2}+\frac{g^2}{\left(k^2+m_0^2\right)^2}\frac{m_0^4}{24 |k|}+\ldots \label{eq:exactGsigma} \end{equation} of the Higgs mode acquires a contribution $\sim 1/|k|$ due to the decay into Goldstone modes. Instead of a simple pole, the Higgs propagator therefore exhibits a branch cut which starts at the threshold $\omega_k=c_s|\mathbf{k}|$ for the excitation of magnons~\cite{PRL.92.027203}. This low energy singularity first appears at order $g^2$ and is expected to remain valid to all orders, i.e. $G_{\sigma\sigma}(k\to 0)$ is proportional to $1/|k|$ at any finite value of $g$. Similarly, by the Goldstone theorem, the magnons are always massless. The structure $G_{\pi\pi}(k) =Zg/k^2$ therefore remains exact with a finite, renormalized coupling constant $Z g$ throughout the N\'eel ordered phase\footnote{For a recent discussion of infrared singularities in the context of neutral fermionic superfluids see \cite{PRB.88.144508}.}. Inserting Eq.~\eqref{eq:para} into Eq.~\eqref{eq:RamanOxy} we obtain \begin{align} &O(\tau)=O_\text{M}(\tau)+O_\text{H}(\tau)\label{eq:RamanO}\\ &O_\text{M}(\tau)\propto \int d^2x \left[\left(\partial_x\bm{\pi}\right)^2 - \left(\partial_y\bm{\pi}\right)^2 \right]\\ &O_\text{H}(\tau) \propto \int d^2x \left[ \left(\partial_x\sigma\right)^2 - \left(\partial_y\sigma\right)^2\right] \end{align} The operator to which light couples in the Raman response therefore involves squares of gradients of the magnon and Higgs fields. As a result, the response always involves at least two magnon or Higgs excitations.
Taken together, the full Raman susceptibility is the sum of three different contributions \begin{equation} \chi_\text{Raman}(\omega)=\chi_{\text{M}}(\omega) + \chi_{\text{H}}(\omega) + \chi_\text{Int}(\omega) \label{eq:RamanSusceptSum} \end{equation} which involve a two-magnon susceptibility $\chi_\text{M}$ and a two-Higgs susceptibility $\chi_\text{H}$. They are determined by the correlation functions of two magnon and two Higgs operators $O_\text{M}$ and $O_\text{H}$, respectively. In addition, an interference susceptibility $\chi_\text{Int}$ arises which consists of the two mixed correlation functions. A similar structure has been found by Canali and Girvin \cite{PRB.45.7127}, who have calculated the Raman spectrum of undoped cuprates by means of a Dyson-Maleev transformation of the underlying antiferromagnetic Heisenberg model. Compared to our present formulation, this approach does not allow one to separate the magnon and Higgs contributions to the spectrum, with the latter appearing as part of a four-magnon term. Moreover, the magnon-magnon interaction is replaced by an instantaneous one, thus missing the retardation effects associated with the Higgs-mediated interaction discussed below, which is treated properly by our numerical solution of the Bethe-Salpeter equation.\\ To leading order in the coupling $g$, only the two-magnon and two-Higgs susceptibilities are nonzero, while the interference susceptibility contributes only at order $g^3$. The corresponding Feynman diagrams in FIG.~\ref{fig:Bubbles} are just a two-magnon and a two-Higgs bubble, respectively, dressed with two Raman symmetry factors $\gamma(\mathbf{k})$. Dropping the polarization factor $P(\mathbf{e}_i, \mathbf{e}_f)$, the two-magnon response to lowest order in $g$ is given by \begin{equation} \chi_{\text{M}}''(\omega)=\frac{g^2B^2\hbar^3S^4}{12c_s}\omega^3 \theta(2 c_s \Lambda-\omega)+O(g^3)\, . \label{eq:2MagnonLeading} \end{equation} The spectrum is a pure power law $\sim\omega^3$ with a sharp cutoff at twice the zone-boundary energy of a magnon. If we assume that the Heisenberg model underlying the continuum theory contains only a nearest-neighbor coupling $J$, this corresponds to $2c_s\Lambda=2\pi\hbar J$. Regarding the contribution from longitudinal fluctuations of the order parameter, the spectrum to leading order in $g$ is determined by the diagram in FIG.~\ref{fig:Bubbles}(b), which gives \begin{align} \chi_{\text{H}}''(\omega)=&\frac{g^2 B^2\hbar^3S^4}{12c_s\omega}\left[\left(\frac{\omega}{2}\right)^2-c_s^2 m_0^2\right]^2 \theta(\omega-2c_sm_0)\notag\\ &\times\theta\left(2c_s\sqrt{\Lambda^2+m_0^2}-\omega\right) + O(g^3)\, . \label{eq:2HiggsLO} \end{align} It shows the expected threshold at twice the Higgs mass and peaks at the maximum energy of two Higgs excitations. Note that in contrast to neutron scattering, which couples linearly to the order parameter, the Raman spectrum does not contain the longitudinal propagator $G_{\sigma\sigma}$ directly. As a result, no sharp peak due to the Higgs mode appears even at leading order. In the following section, we will show that the sharp features in both the two-magnon and the two-Higgs response, which depend quite sensitively on the precise form of the momentum cutoff and the specific value of the Higgs mass $m_0$, are completely eliminated in a calculation which sums up the next-to-leading order diagrams in a consistent fashion.
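Since the leading-order expressions \eqref{eq:2MagnonLeading} and \eqref{eq:2HiggsLO} are completely explicit, they are easily evaluated numerically. The following short sketch does so in units $\hbar=c_s=1$, with the common prefactor $g^2B^2S^4/12$ set to one; the cutoff and the value of the bare Higgs mass are illustrative choices on our part.
\begin{verbatim}
# Schematic evaluation of the leading-order two-magnon and two-Higgs
# spectra, in units hbar = c_s = 1 and with the overall prefactor set to one.
import numpy as np

J = 1.0
Lam = np.pi * J          # cutoff chosen such that 2*c_s*Lambda = 2*pi*J
m0 = 2.0                 # illustrative bare Higgs mass

omega = np.linspace(1e-3, 3 * Lam, 600)

# two-magnon bubble: ~ omega^3 up to twice the zone-boundary energy
chi_M = omega**3 * (omega < 2 * Lam)

# two-Higgs bubble: threshold at 2*m0, upper edge at 2*sqrt(Lambda^2 + m0^2)
chi_H = ((omega / 2)**2 - m0**2)**2 / omega \
        * (omega > 2 * m0) * (omega < 2 * np.sqrt(Lam**2 + m0**2))
\end{verbatim}
Plotted as a function of $\omega$, these expressions reproduce the sharp two-magnon cutoff and the two-Higgs threshold behavior discussed above.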
\begin{figure} \centering \includegraphics[scale=0.6]{Bubbles} \caption{Leading order diagram for the two-magnon (a) and the two-Higgs susceptibility (b). The dashed line propagators are those of the bare magnons in Eq.~\eqref{eq:MPropLO}, while the full lines denote the Higgs propagator in Eq.~\eqref{eq:HPropLO}.} \label{fig:Bubbles} \end{figure} \subsection{Beyond leading order and Bethe-Salpeter equation} \label{sec:Beyond leading order and Bethe-Salpeter equation} The next-to-leading order diagrams, apart from self-energy insertions, for the two-magnon, two-Higgs and interference susceptibilities are shown in FIG.~\ref{fig:2MagnonNLO}, FIG.~\ref{fig:2HiggsNLO} and FIG.~\ref{fig:InterferenceNLO}, respectively. To see whether these diagrams may lead to sharp features associated with the Higgs mode, we start with a qualitative discussion based on symmetries alone. One immediately notices that the loop integrals in FIG.~\ref{fig:2MagnonNLO}(a) decouple at vanishing external wave vector, where $q=(\omega,0)$. These diagrams therefore give no contribution to the Raman response because of the antisymmetry of the $B_{1g}$-symmetry factor $\tilde{\gamma}(\mathbf{k})$ under the exchange $k_x\leftrightarrow k_y$. \begin{figure} \centering \includegraphics[scale=0.35]{2MagnonNLO-eps-converted-to.pdf} \caption{Next-to-leading order diagrams, apart from self-energy insertions, for the 2-magnon susceptibility $\chi_{\text{M}}$.} \label{fig:2MagnonNLO} \end{figure} The vanishing of diagram FIG.~\ref{fig:2MagnonNLO}(b) is particularly important to notice, as this diagram would otherwise lead to a peak at the Higgs mass due to the pole in the Higgs propagator, at least at leading order in $g$. The only non-vanishing diagram for the two-magnon susceptibility is FIG.~\ref{fig:2MagnonNLO}(c), in which the Higgs line is independent of the frequency transfer. As a result, no sharp peak at the Higgs mass arises. For the two-Higgs and the interference susceptibilities the situation is quite similar. Indeed, the diagrams (a) and (b) in FIG.~\ref{fig:2HiggsNLO} and FIG.~\ref{fig:InterferenceNLO} vanish due to the antisymmetry of the $B_{1g}$-symmetry factor under the exchange $k_x\leftrightarrow k_y$. Concerning the interference term, it turns out that up to order $g^3$ its contribution to the full Raman response is negligible. Thus, in the following, we only consider the two-magnon and two-Higgs contributions. \begin{figure} \centering \includegraphics[scale=0.35]{2HiggsNLO-eps-converted-to.pdf} \caption{Next-to-leading order diagrams, apart from self-energy insertions, for the 2-Higgs susceptibility $\chi_{\text{H}}$.} \label{fig:2HiggsNLO} \end{figure} \begin{figure} \centering \includegraphics[scale=0.35]{InterferenceNLO-eps-converted-to.pdf} \caption{Next-to-leading order diagrams for the interference susceptibility $\chi_{\text{Int}}$.} \label{fig:InterferenceNLO} \end{figure} In order to determine the Raman spectrum in quantitative terms, we now derive a formally exact equation for the two-magnon and the two-Higgs response.
Based on the diagrammatic expansion shown in FIG.~\ref{fig:MagnonBS}, the two-magnon contribution $\chi_{\text{M}}$ can be written in terms of the exact two-magnon Raman vertex function $\Gamma_\pi(\mathbf{k}, \mathbf{k}', \Omega, \Omega', \omega)$ in the following form \begin{align} \chi_{\text{M}}(\omega)=2\int\frac{d^3 k}{(2\pi)^3}&\int\frac{d^3 k'}{(2\pi)^3} \tilde{\gamma}(\mathbf{k})\tilde{\gamma}(\mathbf{k}') G_{\pi\pi}(\mathbf{k}, \Omega+\omega)\notag\\ &\times G_{\pi\pi}(\mathbf{k}, \Omega) \Gamma_{\pi}(\mathbf{k}, \mathbf{k}', \Omega, \Omega', \omega)\, . \label{eq:BS2MagnonF} \end{align} The vertex function obeys a Bethe-Salpeter equation \begin{align} \Gamma_\pi(\mathbf{k}, \mathbf{k}&', \Omega, \Omega', \omega)=2(2\pi)^3 \delta(k-k') + \int\frac{d^3k''}{(2\pi)^3} \mathcal{V}_\pi(k, k'', \omega)\notag\\ &\times G_{\pi\pi}(\mathbf{k}'', \Omega''+\omega)G_{\pi\pi}(\mathbf{k}'', \Omega'') \Gamma_\pi(\mathbf{k}'', \mathbf{k}', \Omega'', \Omega', \omega) \label{eq:BSMagnonVertexF} \end{align} which is depicted diagrammatically in FIG.~\ref{fig:MagnonVertex}. To determine the Raman spectrum, we do not need the full vertex function. Instead, it is sufficient to know the reduced one, which is defined as \begin{equation} \Gamma_\pi(\mathbf{k}, \Omega, \omega)\equiv \int\frac{d^3k'}{(2\pi)^3}\tilde{\gamma}(\mathbf{k}')\Gamma_\pi(\mathbf{k}, \mathbf{k}', \Omega, \Omega', \omega) \label{eq:2MagnonVert} \end{equation} Using this definition, the equations for the two-magnon susceptibility, Eqs.~\eqref{eq:BS2MagnonF} and \eqref{eq:BSMagnonVertexF}, can be recast in the form \begin{align} \chi_{\text{M}}(\omega)=2\int\frac{d^3 k}{(2\pi)^3} &\tilde{\gamma}(\mathbf{k})G_{\pi\pi}(\mathbf{k}, \Omega+\omega)\notag\\ &\times G_{\pi\pi}(\mathbf{k}, \Omega) \Gamma_{\pi}(\mathbf{k}, \Omega, \omega)\label{eq:BS2MagnonR} \end{align} \begin{align} \Gamma_\pi(\mathbf{k}, \Omega, \omega)= 2\tilde{\gamma}&(\mathbf{k}) + \int\frac{d^3k''}{(2\pi)^3} \mathcal{V}_\pi(k, k'', \omega) G_{\pi\pi}(\mathbf{k}'',\Omega'')\notag\\ &\times G_{\pi\pi}(\mathbf{k}'', \Omega''+\omega) \Gamma_\pi(\mathbf{k}'', \Omega'', \omega)\label{eq:BSMagnonVertexR} \end{align} Up to this point, there are no approximations involved. A solution of Eq.~\eqref{eq:BSMagnonVertexR}, however, requires knowledge of both the exact magnon propagator \begin{align} G_{\pi\pi}(p)=\frac{g}{p^2-g\Sigma_{\pi\pi}(p)} \end{align} including all self-energy corrections via the exact magnon self-energy $\Sigma_{\pi\pi}$ and the full irreducible two-magnon interaction $\mathcal{V}_\pi(k, k'', \omega)$. \begin{figure} \includegraphics[scale=0.75]{MagnonBS} \caption{Diagrammatic series for the 2-magnon susceptibility. Magnon lines have to be interpreted as full propagators. The shaded area represents the full 2-magnon Raman vertex $\Gamma_\pi$.} \label{fig:MagnonBS} \end{figure} \begin{figure} \includegraphics[scale=0.47]{MagnonVertex} \caption{Diagrammatic series for the full 2-magnon Raman vertex, giving the Bethe-Salpeter equation \eqref{eq:BSMagnonVertexF}.} \label{fig:MagnonVertex} \end{figure} For an explicit and quantitative solution, we truncate the irreducible magnon interaction at lowest order in $g$, which amounts to a ladder approximation.
Properly taking into account combinatorial factors, we find \begin{equation} \mathcal{V}_{\pi}(k, k'', \omega)=2\cdot 4 \cdot \frac{1}{2} \left(\frac{m_0^2}{2\sqrt{3}g}\right)^2 G_{\sigma\sigma}^0(k-k'') \label{eq:irredIntMagnonLadder} \end{equation} Thus, to lowest order, the interaction of magnons which is relevant for the $B_{1g}$ symmetry is mediated by the Higgs mode. Inserting Eq.~\eqref{eq:irredIntMagnonLadder} into Eq.~\eqref{eq:BSMagnonVertexR}, we obtain the ladder Bethe-Salpeter equation, which corresponds to a consistent next-to-leading order approximation in the irreducible magnon-magnon interaction. For the magnon propagator, the zeroth order result $G_{\pi\pi}^0$ captures the exact low energy behavior $\sim g/|k|^2$ due to the Goldstone theorem apart from a finite renormalization factor $g\to Zg$. Regarding the Higgs propagator, the leading order result contains a simple pole at the bare Higgs mass $m_0$ but misses the $1/|k|$-singularity due to the decay of the Higgs into two Goldstone modes, see Eq.~\eqref{eq:exactGsigma}. The final form of the Bethe-Salpeter equation in ladder approximation is given by \begin{align} \Gamma_\pi&^\text{Ladder}(\mathbf{k}, \Omega, \omega)= 2\tilde{\gamma}(\mathbf{k}) +\frac{m_0^4}{3g^2} \int\frac{d^3k''}{(2\pi)^3} G^0_{\sigma\sigma}(k-k'')\notag\\ &\times G^0_{\pi\pi}(\mathbf{k}'', \Omega'')G^0_{\pi\pi}(\mathbf{k}'', \Omega''+\omega)\Gamma_\pi^\text{Ladder}(\mathbf{k}'', \Omega'', \omega)\, . \label{eq:2MagnonLadderV} \end{align} This is still too complicated to be solved analytically, but it is amenable to an efficient numerical solution. Before doing so, however, a similar equation will be derived for the two-Higgs susceptibility. \\ The derivation of the integral equations for the two-Higgs susceptibility is completely analogous to the two-magnon case: one just needs to replace the magnon propagators by Higgs propagators. Hence we just write down the resulting equations, already containing the reduced two-Higgs Raman vertex function $\Gamma_\sigma(\mathbf{k}, \Omega, \omega)$: \begin{align} \chi_{\text{H}}(\omega)=2\int\frac{d^3 k}{(2\pi)^3} \tilde{\gamma}(\mathbf{k})&G_{\sigma\sigma}(\mathbf{k}, \Omega+\omega) G_{\sigma\sigma}(\mathbf{k}, \Omega)\notag\\ &\times \Gamma_{\sigma}(\mathbf{k}, \Omega, \omega) \end{align} \begin{align} \Gamma_\sigma(\mathbf{k}, \Omega, \omega)= 2\tilde{\gamma}&(\mathbf{k}) + \int\frac{d^3k''}{(2\pi)^3} \mathcal{V}_\sigma(k, k'', \omega)G_{\sigma\sigma}(\mathbf{k}'', \Omega'')\notag\\ &\times G_{\sigma\sigma}(\mathbf{k}'', \Omega''+\omega)\Gamma_\sigma(\mathbf{k}'', \Omega'', \omega) \label{eq:BSHiggsVertexR} \end{align} The leading diagrams for the full irreducible two-Higgs interaction $\mathcal{V}_{\sigma}(k, k'', \omega)$ are depicted in FIG.~\ref{fig:HiggsInt}. Truncating the irreducible two-Higgs interaction again at leading order in $g$, we obtain \begin{align} \mathcal{V}_{\sigma}(k, k'', \omega)=2\cdot(2\cdot 3)^2 \cdot \frac{1}{2} \left(\frac{m_0^2}{2\sqrt{3}g}\right)^2 G^0_{\sigma\sigma}(k-k'') \label{eq:irredIntHiggsLadder} \end{align} Note that this has precisely the same form as the irreducible two-magnon interaction, differing only in the combinatorial and coupling prefactors.
Inserting Eq.~\eqref{eq:irredIntHiggsLadder} into Eq.~\eqref{eq:BSHiggsVertexR} we obtain the Bethe-Salpeter equation for the reduced two-Higgs Raman vertex function \begin{align} \Gamma_\sigma&^\text{Ladder}(\mathbf{k}, \Omega, \omega)= 2\tilde{\gamma}(\mathbf{k}) +\frac{3m_0^4}{g^2} \int\frac{d^3k''}{(2\pi)^3} G^0_{\sigma\sigma}(k-k'')\notag\\ &\times G^0_{\sigma\sigma}(\mathbf{k}'', \Omega'')G^0_{\sigma\sigma}(\mathbf{k}'', \Omega''+\omega)\Gamma_\sigma^\text{Ladder}(\mathbf{k}'', \Omega'', \omega) \label{eq:2HiggsLadderV} \end{align} which again needs to be solved numerically. \begin{figure} \includegraphics[scale=0.6]{HiggsInt} \caption{Diagrammatic representation of the irreducible 2-Higgs interaction vertex.} \label{fig:HiggsInt} \end{figure} \subsection{Numerical solution of the Bethe-Salpeter equation and Raman spectra} \label{sec:Numerical solution of the Bethe-Salpeter equation and Raman spectra} For the numerical solution of the two independent Bethe-Salpeter equations~\eqref{eq:2MagnonLadderV} and~\eqref{eq:2HiggsLadderV}, it is convenient to first write them in dimensionless form. This is achieved by introducing $\tilde{k}=k/\Lambda$, $\tilde{m}_0=m_0/\Lambda$, $\tilde{g}=g\Lambda$, $\tilde{\chi}(\tilde{\omega}) = \chi(\tilde{\omega})/\Lambda$ and $\tilde{\Gamma}(\tilde{\mathbf{k}},\tilde{\mathbf{k}}', \tilde{\omega})=\Lambda^{-2}\Gamma(\tilde{\mathbf{k}}, \tilde{\mathbf{k}}', \tilde{\omega})$. The dimensionless frequency $\tilde{\omega}=\omega/(c_s\Lambda)$ is measured in units of $c_s\Lambda=\pi\hbar J \sim S$. Note that the dimensionless coupling $\tilde{g}=g\Lambda$ is related to the spin quantum number $S$ by $\tilde{g}=2\pi/S$, which is far from small in the relevant case $S=1/2$ discussed below. In practice, the relation $\tilde{g}=2\pi/S$, which arises from a semi-classical expansion valid for $S\gg 1$, does not enter our results for the Raman spectra in quantitative terms, except for determining the relative weight and frequency scales for different $S$ in Figs.~\ref{fig:2MagnonImaginary} and~\ref{fig:Spect2Magnon}. For a given value of $S$, our expansion to leading order in $g$ is therefore valid provided $\tilde{g}\ll\tilde{g}_c$, i.e. the antiferromagnet is far away from a complete destruction of N\'eel order by quantum fluctuations~\cite{PRB.39.2344}. The explicit numerical solution of the Bethe-Salpeter equation employs the standard approach to integral equations, namely discretizing the integrals and solving the resulting system of linear equations. For the dimensionless Brillouin zone $\left[-1,1\right]^2$, we use $20\times20$ points. Since an increase up to a grid size of $30\times30$ gave only very small corrections, we can assume that the results have converged to the continuum limit. While the frequency integration has no intrinsic microscopic cut-off, it is restricted in practice to values below $4c_s\Lambda$. Contributions beyond that do not affect our results. We have varied the number of sampling points for the frequency integral between $20$ and $40$. Similar to the situation for the momentum integration, the grid size had no influence on the results. \begin{figure} \includegraphics[width=\linewidth, height=200pt]{Pictures2Magnon} \caption{(Color online) Numerical results for the 2-magnon susceptibility as a function of imaginary frequency for $S=1/2$, $S=1$ and $S=2$. The solid curves are fits of Eq.
(\ref{eq:fitmodel}).} \label{fig:2MagnonImaginary} \end{figure} In FIG.~\ref{fig:2MagnonImaginary}, the two-magnon susceptibility $\tilde{\chi}_{\text{M}}(\tilde{\omega})$ obtained from the numerical solution of the Bethe-Salpeter equation is shown for three different values of the dimensionless coupling $\tilde{g}$, corresponding to $S=1/2, 1, 2$ and for an intermediate value of the Higgs mass $\tilde{m}_0=0.5$. To obtain the associated spectral function, which determines the Raman spectrum in real frequencies, an analytic continuation is required. Since our numerical data contain no noise, we use an Ansatz to fit the data, which can then be continued in analytical form. Specifically, as a fit model, we use \begin{equation} \chi(\tilde{\omega})=\frac{a_1+a_2q^2}{1+a_3q^2+a_4q^3+a_5q^4} \text{ , }q=\sqrt{\tilde{\omega}^2} \label{eq:fitmodel} \end{equation} This Ansatz is motivated by our leading order results for the two-magnon susceptibility, which obeys $\chi_{\text{M}}\sim\tilde{\omega}^{-2}$ for large $\tilde{\omega}$ and $\tilde{\chi}''_{\text{M}}\sim \tilde{\omega}^3$ for small $\tilde{\omega}$. The fitted curves are also shown in FIG.~\ref{fig:2MagnonImaginary}. Apparently, they reproduce the numerical results very well, with a variance of only $1.4\times10^{-6}$. Based on the explicit form \eqref{eq:fitmodel} of the susceptibility in terms of the imaginary frequency variable $q$, the analytic continuation $q\rightarrow -i\tilde{\omega}$ is easily done. The resulting two-magnon spectral functions $\tilde{\chi}_{\text{M}}''(\tilde{\omega})$ for $S=\frac{1}{2}$, $1$, $2$ are shown in FIG.~\ref{fig:Spect2Magnon}. For all values of $S$, a clear two-magnon peak appears. In the most relevant case $S=1/2$, its maximum is at $\tilde{\omega}_{\rm max}\simeq 1.29$. Expressed in terms of the exchange coupling $J$, this translates into a frequency $\omega_{\rm max}\simeq 2.44\, J$, somewhat smaller than the result $\omega_{\rm max}\simeq 2.76\, J$ obtained from an interacting spin-wave theory~\cite{J.Phys.C.2.2012}. As will be shown below, the experimental spectra are consistent with our present result. Indeed, the observed peaks in the Raman spectra are close to $2.44\, J$, with values of the exchange coupling which agree very well with those determined independently from neutron scattering. Taking into account the different frequency scales for different values of $S$, the peak is shifted towards lower frequencies for smaller spin. This behavior has previously also been observed by Canali and Girvin~\cite{PRB.45.7127}. A quite significant difference between their results and ours, however, is the pronounced asymmetry of the two-magnon peak. As will be shown in the following section, this asymmetry is essential for achieving a quantitative agreement with experiment.\\ \begin{figure} \includegraphics[width=\linewidth]{Pictures2MagnonSpectral} \caption{(Color online) 2-magnon spectral function $\chi''_{2\text{Magnon}}$ for $S=1/2$, $S=1$ and $S=2$.} \label{fig:Spect2Magnon} \end{figure} In order to see to what extent the value of the Higgs mass $\tilde{m}_0$ influences our results for the two-magnon response, we have varied this parameter in the range $0.1<\tilde{m}_0<0.9$. Surprisingly, we find no detectable change in the two-magnon line shape.
The physical reason why the two-magnon spectral function is essentially independent of the Higgs mass, even though the magnon interaction is mediated by the amplitude mode, is the following: The inverse Higgs mass is the range of the interaction in position space, while the interaction strength is controlled by the coupling $g$. As both magnons involved in the interaction process are created by a single photon, they are initially on neighboring lattice sites. For $\tilde{m}_0<1$ the range of the Higgs-mediated interaction is larger than one lattice spacing; hence we do not expect a sensitive dependence of the two-magnon peak on the Higgs mass. \begin{figure} \includegraphics[width=\linewidth, height=200pt]{Pictures2Higgs} \caption{(Color online) Numerical results for the 2-Higgs susceptibility as a function of imaginary frequency for $\tilde{m}_0=0.9$, $\tilde{m}_0=0.61$ and $\tilde{m}_0=0.25$. The solid curves are fits of Eq.~\eqref{eq:fitmodel}.} \label{fig:2HiggsImaginary} \end{figure} As a second step, we turn to the results for the two-Higgs susceptibility, where a similar dependence on the spin quantum number $S$ appears as for the two-magnon response. We therefore only show the results for $S=1/2$, which is relevant for the comparison with experiments below. In FIG.~\ref{fig:2HiggsImaginary} the imaginary frequency data for the two-Higgs susceptibility are shown for different Higgs masses. First of all, one notices that the absolute values of the two-Higgs susceptibility are smaller than those of the two-magnon susceptibility. As a result, the two-magnon peak will always dominate the Raman spectrum. From the leading order analysis, we again expect a $\tilde{\omega}^{-2}$-behavior for large frequencies. An Ansatz of the form used in Eq.~\eqref{eq:fitmodel} again reproduces the numerical data rather well, with a variance of $1.8\times10^{-5}$. The resulting contribution $\tilde{\chi}''_{\text{H}}(\tilde{\omega})$ to the Raman spectrum in real frequencies is shown in FIG.~\ref{fig:Spect2Higgs}. As expected, the spectra are shifted towards higher frequencies with increasing values of the bare Higgs mass $\tilde{m}_0$. In particular, the spectral weight is rather small below $2\tilde{m}_0$, reminiscent of the threshold behavior in leading order. \begin{figure} \includegraphics[width=\linewidth]{Pictures2HiggsSpectral} \caption{(Color online) 2-Higgs spectral function $\chi''_{\text{H}}$ for $S=1/2$ and three different values of the Higgs mass, $\tilde{m}_0=0.9$, $\tilde{m}_0=0.61$ and $\tilde{m}_0=0.25$.} \label{fig:Spect2Higgs} \end{figure} An exact result which determines the relative weight with which magnons or the Higgs mode appear in the Raman spectrum can be obtained by considering the ratio of the first moments of the two-magnon and two-Higgs susceptibilities $\chi''_\text{M}$ and $\chi''_\text{H}$. These moments can be expressed in terms of the exact propagators $G_{\pi\pi}(k)$ and $G_{\sigma\sigma}(k)$ in a way which is analogous to the standard f-sum rule for the dynamic structure factor.
In explicit form, the sum rules are \begin{align} &M_\text{M}^{(1)}=\int\limits_{0}^{\infty}\frac{d\omega}{2\pi}~ \omega \chi_\text{M}''(\omega)=2g \int \frac{d^3k}{(2\pi)^3} ~\gamma\left(\mathbf{k}\right) ^2 G_{\pi\pi}(k)\label{eq:MagnonSum}\\ &M_\text{H}^{(1)}=\int\limits_{0}^{\infty}\frac{d\omega}{2\pi} ~\omega \chi_\text{H}''(\omega)=2g \int \frac{d^3k}{(2\pi)^3} ~\gamma\left(\mathbf{k}\right) ^2 G_{\sigma\sigma}(k)\label{eq:HiggsSum} \end{align} Using the propagators to zeroth order, the ratio between the Higgs and the magnon part of the Raman response is controlled by the value of the Higgs mass $m_0$. As shown in FIG.~\ref{fig:FirstMoment}, the weight of the spectrum coming from the amplitude mode decreases continuously with increasing mass as expected on intuitive grounds. \begin{figure} \includegraphics[width=\linewidth]{FirstMoment} \caption{(Color online) Ratio of the first moments, $M_\text{H}^{(1)}$ and $M_\text{M}^{(1)}$, of the two-Higgs and two-magnon susceptibility, respectively, versus the Higgs mass $\tilde{m}_0=m_0/\Lambda$ in units of the momentum cut-off, at leading order.} \label{fig:FirstMoment} \end{figure} Finally, we consider the total Raman spectral function\linebreak $\tilde{\chi}''_\text{Raman}=\tilde{\chi}''_{\text{M}}+\tilde{\chi}''_{\text{H}}$ without the interference term, which is negligible in the regime $\tilde{g}\ll\tilde{g}_c$ studied here. We again concentrate on $S=1/2$ and show the Raman spectra for three different values of $\tilde{m}_0$ in FIG.~\ref{fig:ChiSpectRaman}. These values correspond to a light, an intermediate and a heavy Higgs mass. For both a light and a heavy Higgs with $\tilde{m}_0\lesssim 0.35$ or $\tilde{m}_0\gtrsim 0.8$, the full spectrum is dominated by the response associated with magnons. The contribution from $\tilde{\chi}''_{\text{H}}$ only appears as an additional spectral weight at frequencies above the two-magnon peak, while no distinct peak is associated with an excitation of the amplitude mode. It is only in the intermediate regime $0.5\lesssim \tilde{m}_0\lesssim 0.7$ that a shoulder appears above the two-magnon peak, as a remnant of the sharp onset of the Higgs mode in the leading order calculation of Eq.~\eqref{eq:2HiggsLO}. As will be discussed below, such a feature is present in the experimental Raman spectrum of La$_2$CuO$_4$. \begin{figure} \includegraphics[width=\linewidth]{RamanSpectral} \caption{(Color online) Total Raman spectrum, $\tilde{\chi}''_\text{Raman}(\tilde{\omega})=\tilde{\chi}''_{\text{M}}(\tilde{\omega})+\tilde{\chi}''_{\text{H}}(\tilde{\omega})$, for three different values of the Higgs mass $\tilde{m}_0$ and $S=1/2$. $\tilde{m}_0=0.25$ is an example of a light Higgs, $\tilde{m}_0=0.61$ of an intermediate Higgs and $\tilde{m}_0=0.9$ of a heavy Higgs.} \label{fig:ChiSpectRaman} \end{figure} \subsection{Comparison with experiment} \label{sec:Comparison with experiment} In the following, we compare our theoretical results for the Raman spectral function $\chi''_\text{Raman}$ with experimental data for the undoped cuprate materials YBa$_2$Cu$_3$O$_6$ (Y-123)~\cite{PRB.78.020511} and\linebreak La$_2$CuO$_4$ (LCO) \cite{EPJST.188.131}. Regarding the dominant two-magnon response, the relevant energy scale is set by the characteristic frequency $c_s\Lambda$ of the underlying linear $O(3)$ $\sigma$-model, which is related to the exchange coupling by $\hbar J=\pi^{-1}c_s\Lambda$ \footnote{In our units $\left[J\right]=(\mathrm{Joule})^{-1}s^{-2}$.
It is common to give $J$ in units of energy, which we will adopt in the following, i.e. effectively we calculate $\hbar^2 J$.} For undoped cuprates, the appropriate values for $J$ are known from inelastic neutron scattering. The agreement between the optimal values for $J$ obtained from our fit to the Raman data and the neutron scattering results is therefore a crucial consistency test for both the model and our approximations involved in calculating the Raman spectrum. Since neither experiments nor theory can reliably determine the Raman intensity in physical units of counts per second and per frequency interval, the results are plotted in arbitrary units. Correspondingly, the experimental data are fitted with a function of the form $A \tilde{\chi}''\left(\omega/(c_s\Lambda)\right)$, where an overall prefactor $A$ remains undetermined. It is important to note, however, that this arbitrary overall factor does not affect the relative weight of the magnon and Higgs contributions to the Raman spectrum. Apart from the dimensionless Higgs mass $\tilde{m}_0$, this leaves the frequency scale $c_s\Lambda=\pi\hbar J$ as the only physical parameter on which the spectra depend. An important and unexpected result of the comparison between theory and experiment below is that without the Higgs mode contribution, the slow decay of the spectrum above the two-magnon peak cannot be described properly. The amplitude mode of 2D quantum antiferromagnets is therefore directly visible in the Raman spectrum despite the fact that generically it does not show up as a separate peak. It is only in the particular case of LCO that the Higgs contribution gives rise to an additional shoulder in the tail above the two-magnon peak, which is likely due to an intermediate Higgs mass. \\ The experimental data for the bilayer compound Y-123 are shown in FIG.~\ref{fig:Y123Fit} together with theoretical fits of the full Raman susceptibility. Apparently, the non-trivial form of the dominant two-magnon peak is reproduced quite well by our theory with an exchange coupling $J=126\mathrm{meV}$, which is rather close to the value $120\mathrm{meV}$ obtained from inelastic neutron scattering~\cite{PRB.53.R14741}. The optimal value of the dimensionless Higgs mass for Y-123 is $\tilde{m}_0=0.25$. This is significantly smaller than the value $\tilde{m}_0\simeq 0.6$ which is found below for LCO. A possible origin for the small value of the Higgs mass, which entails large fluctuations of the magnitude of the N\'eel order, may be that Y-123 is a bi-layer system with an antiferromagnetic interlayer exchange coupling $J_\perp/J\approx 0.1$~\cite{PRB.53.R11930, PRB.53.R14741}. Such a system undergoes a quantum phase transition from a phase of two weakly coupled 2D antiferromagnetic layers to a gapped singlet phase with increasing $J_\perp$~\cite{PRB.52.3521, PRL.72.2777, EPL.42.559}. While Y-123 is clearly in the phase where the two layers possess antiferromagnetic order, the interlayer coupling is expected to increase quantum fluctuations in the N\'eel order compared to the single-layer compound LCO and hence to lead to a smaller Higgs mass. The presence of an amplitude mode directly shows up in the Raman spectrum via the slow decay above the two-magnon peak. Indeed, the dashed line in FIG.~\ref{fig:Y123Fit}, which is an optimal fit to the spectrum without the Higgs contribution, clearly misses a substantial part of the spectral weight in the range between $3000\mathrm{cm}^{-1}$ and $5000\mathrm{cm}^{-1}$.
A feature which is not properly described by our model is the slightly non-monotonic decay in this frequency range. Such a nontrivial structure may in principle be caused by a triple resonance discussed by Morr and Chubukov~\cite{PRB.56.9134}. This resonance appears in situations where the incoming photon energy $\hbar\omega_i$ is above the charge transfer gap $2\Delta$. The pure spin-photon interaction of the Fleury-Loudon Hamiltonian then needs to be extended by taking into account the generation of intermediate particle-hole pairs. While the experiments were indeed in a range with $\hbar\omega_i>2\Delta\simeq 2\mathrm{eV}$, they do not show a change of the additional feature with increasing values of the incident photon energy, as would be expected for a triple resonance~\cite{PRB.56.9134}. An explanation of the slightly non-monotonic decay of the spectrum in terms of a triple resonance therefore seems unlikely. \begin{figure} \includegraphics[width=\linewidth]{YBCOFit+Higgs} \caption{(Color online) Experimental data for the Raman spectrum of Y-123 together with a fit of the theoretical model with Higgs mass $\tilde{m}_0=0.25$ (solid line) and without $\chi_\text{H}''$ (dashed line).} \label{fig:Y123Fit} \end{figure} For the single-layer compound LCO, the data together with the theoretical fit are shown in FIG.~\ref{fig:LCOFit}. Again, the two-magnon peak is reproduced very well with a value $J=149\mathrm{meV}$ for the exchange coupling, which is in excellent agreement with the inelastic neutron scattering result $143\mathrm{meV}$~\cite{PRL.105.247001}. Contrary to the case of Y-123 and our theoretical result, the Raman spectrum of LCO exhibits a slight increase with frequency in the spectral range above $6000 \mathrm{cm}^{-1}$. This deviation is most likely due to the luminescence of defects in the LCO sample, an effect which is absent in the Y-123 spectra shown in FIG.~\ref{fig:Y123Fit} because of a higher sample quality~\cite{HacklPrivate}. Concerning the pronounced shoulder above the two-magnon peak at about $4500\mathrm{cm}^{-1}$, it turns out that this additional feature is reproduced quite well by choosing an intermediate value $\tilde{m}_0=0.595$ of the Higgs mass. The overall very good agreement between theory and experiment suggests that the additional feature seen in the Raman spectrum of LCO is indeed associated with an amplitude mode of the underlying N\'eel state. What remains to be understood, however, is the fact that the pronounced feature of the $B_{1g}$ mode near $4500\mathrm{cm}^{-1}$ in LCO also appears in the experimental data for the $A_{2g}$ mode, where no two-magnon response is visible. By contrast, there is essentially no Raman signal at any frequency in the $A_{2g}$ mode of Y-123~\cite{EPJST.188.131}. \begin{figure} \includegraphics[width=\linewidth]{LCOFit+Higgs} \caption{(Color online) Experimental data for the Raman spectrum of LCO together with a fit of the theoretical model with Higgs mass $\tilde{m}_0=0.595$ (solid line) and without $\chi_\text{H}''$ (dashed line).} \label{fig:LCOFit} \end{figure} \section{Conclusion and Open Problems} \label{sec:Conclusion and Open Problems} In summary, we have shown that the simple linear $O(3)$ $\sigma$-model is able to explain quantitatively the major features of the Raman spectrum of the N\'eel state of undoped cuprates. In particular, we have found that the detailed form of the dominant two-magnon peak is a result of magnon-magnon interactions which are mediated by the Higgs mode.
The relevant energy scale is fixed by the exchange coupling of the underlying Heisenberg model, with the peak located at $\omega\simeq 2.44\, J$. Apart from mediating interactions between the magnons, the Higgs mode also shows up more directly in an enhancement of the spectral weight above the two-magnon peak. It leads to a separate peak only for an intermediate value of the Higgs mass, a case which is apparently realized in LCO. The agreement between experiment and theory depends quite sensitively on the proper inclusion of the Higgs mode contribution. Raman scattering therefore appears to provide a direct signature for the existence of a Higgs mode in 2D quantum antiferromagnets.\\ While it is remarkable that a description based on the linear $O(3)$ $\sigma$-model, which only captures the low energy physics of quantum antiferromagnets, is able to account for the detailed form of the observed Raman spectrum in the full range of energies up to $6J$ and beyond, our present study of possible signatures of a Higgs mode in 2D quantum antiferromagnets is certainly only a first step towards a more detailed analysis of this problem. Among the open questions in this context we mention two: \begin{enumerate} \item[a)] from a quite general point of view, what are suitable and experimentally accessible correlation functions where the Higgs mode in 2D quantum antiferromagnets can be seen in direct form and can also be followed into a regime where N\'eel order disappears into some more complex spin liquid state? \item[b)] regarding the specific model studied above, can one calculate the mass parameter $m_0$ from an underlying microscopic model and, moreover, go beyond the perturbative calculation of the Raman spectrum, which is only reliable deep in the N\'eel phase? \end{enumerate} The complexity of the correlation functions which enter the Raman spectrum makes both problems quite challenging. In particular, to uniquely identify the presence of a Higgs mode from the experimental data available so far, which include the two cuprate examples discussed above and also the iridate compound Sr$_2$IrO$_4$ \cite{Gretarsson}, a better microscopic understanding of the Higgs mass parameter is needed, both for single- and double-layer compounds; for the latter, $m_0$ is apparently much smaller than in the single-layer case. Regarding the issue of correlation functions where the Higgs mode shows up more directly than in a standard Raman experiment, an interesting option appears to be resonant inelastic X-ray scattering (RIXS)~\cite{RevModPhys.83.705}. Of particular interest here is indirect RIXS, which allows one to measure a momentum-dependent four-spin correlation associated with the operator~\cite{EPL.80.47003} \begin{equation} \hat{O}_{\mathbf{q}}=\sum_{\mathbf{k}} J_{\mathbf{k}}\, \hat{\mathbf{S}}_{\mathbf{k}-\mathbf{q}}\cdot\hat{\mathbf{S}}_{-\mathbf{k}} \label{eq:RIXS} \end{equation} In the language of the NL$\sigma$M, Eq.~\eqref{eq:RIXS} corresponds to $\hat{O}_\mathbf{q}\sim\int d^2 x ~e^{i\mathbf{q}\cdot\mathbf{x}}\left(\partial_\mu\mathbf{n}\right)^2$. In comparison to Eq.~\eqref{eq:RamanOxy}, one notices that RIXS again couples to the square of field gradients, however in a symmetric form, without the antisymmetric tensor $\sigma^z_{ij}$ which is responsible for the vanishing of the leading diagrams associated with the Higgs mode.
It is therefore likely that the Higgs mode in 2D quantum antiferromagnets is directly observable in an indirect RIXS experiment, similar to the probe of the Higgs mode in the Bose-Hubbard model, which is essentially induced by a modulation of the hopping amplitude~\cite{nature11255}. By contrast, in {\it direct} RIXS one couples to a single spin operator $\hat{O}_\mathbf{q}^\text{(direct)} \sim (\mathbf{e}_i\times\mathbf{e}_f) \cdot \hat{\mathbf{S}}_\mathbf{q}$~\cite{PRL.108.177003}. This is similar to determining the dynamic spin structure factor in inelastic neutron scattering and allows one to measure the magnon dispersion. \begin{acknowledgements} We are very grateful for many insightful discussions with Assa Auerbach and Daniel Podolsky and for their kind hospitality at the Technion during a crucial stage of this work. We are also deeply indebted to Rudi Hackl, both for providing the motivation to start this project in the first place and for long discussions about the details of the experiments. Finally, it is a pleasure to acknowledge constructive comments from Tom Devereaux and from Eugene Demler. After completion of our work, we have also benefitted a lot from discussions with Hlynur Gretarsson, Bernhard Keimer, Ginat Khaliullin and Mathieu LeTacon on Raman scattering on iridates, which in fact provide further support for the presence of a Higgs mode in 2D quantum antiferromagnets, see~\cite{Gretarsson}. \end{acknowledgements}
\section{Introduction} For coordinated precoding \cite{OptResAllCoordMultiCellSys} in intermediate to large sized multicell networks, base station clustering \cite{Peters2012,Chen2014,Brandt2016bsubmitted} is necessary for reasons including channel state information (CSI) acquisition overhead, backhaul delays and implementation complexity constraints. In frequency-division duplex mode, the CSI acquisition overhead is due to the feedback required \cite{ElAyach2012,Bolcskei2009}, whereas in time-division duplex mode, the CSI acquisition overhead is due to pilot contamination and allocation \cite{Jose2011b,Mochaourab2015arxiv}. For the case of interference alignment (IA) precoding \cite{Cadambe2008}, suboptimal base station clustering algorithms have earlier been proposed in \cite{Peters2012}, where the clusters are orthogonalized and a heuristic algorithm for the grouping was given, in \cite{Chen2014}, where the clusters are non-orthogonal and a heuristic algorithm on an interference graph was given, and in \cite{Brandt2016bsubmitted}, where coalition formation and game theory were applied to a generalized frame structure. To the best of the authors' knowledge, however, no works in the literature have addressed the problem of finding the globally optimal base station clustering for \mbox{IA-based} systems. Naive exhaustive search over all possible clusterings is not tractable, due to its super-exponential complexity. Yet, the globally optimal base station clustering is important in order to benchmark the more practical schemes in e.g.~\cite{Peters2012,Chen2014,Brandt2016bsubmitted}. Therefore, in this paper, we propose a structured method based on branch and bound \cite{Land1960,GlobalOptimizationDeterministicApproaches} for finding the globally optimal base station clustering. We consider a generalized throughput model which encompasses the models in \cite{Peters2012,Chen2014,Brandt2016bsubmitted}. When the algorithm is evaluated using the throughput model of \cite{Brandt2016bsubmitted}, empirical evidence shows that it finds the global optimum at an average complexity which is orders of magnitude lower than that of exhaustive search. \vspace{-2ex} \section{Problem Formulation} We consider a symmetric multicell network where $I$ base stations (BSs) each serve $K$ mobile stations (MSs) in the downlink. A BS together with its served MSs is called a \emph{cell}, and we denote the $k$th MS served by BS $i$ as $i_k$. The BSs each have $M$ antennas and the MSs each have $N$ antennas. Each MS is served $d$ spatial data streams. BS $i$ allocates\footnote{Any \emph{fixed} power levels can be used, e.g. obtained from some single-cell power allocation method \cite[Ch.~1.2]{OptResAllCoordMultiCellSys}. Generalizing to \emph{adaptive} multicell power allocation would, however, lead to loss of tractability in the SINR bound of Thm.~\ref{thm:throughput_bound}, due to $\rho_{i_k}$ not being supermodular \cite{SubmodularFunctionMinimization} when the powers are adaptive.} a power of $P_{i_k}$ to MS $i_k$, in total using a power of $P_i = \sum_{k=1}^K P_{i_k}$, and MS $i_k$ has a thermal noise power of $\sigma_{i_k}^2$. The average large scale fading between BS $j$ and MS $i_k$ is $\gamma_{i_kj}$.
The cooperation between the BSs is determined by the BS clustering, which mathematically is described as a set partition: \begin{definition}[Set partition] \label{def:set_partition} A set partition $\setS = \{ \setC_1, \ldots, \setC_S \}$ is a partition of $\setI = \{ 1, \ldots, I \}$ into disjoint and non-empty sets called clusters, such that $\setC_s \subseteq \setI$ for all $\setC_s \in \setS$ and $\bigcup_{s = 1}^S \setC_s = \setI$. For a cell $i \in \setC_s$, we let $\setS(i) = \setC_s$. \end{definition} We assume that IA is used to completely cancel the interference within each cluster.\footnote{Within each cluster, both intra-cell and inter-cell interference is cancelled.} Thus only the intercluster interference remains, which is reflected in the long-term signal-to-interference-and-noise ratios (SINRs) of the MSs: \begin{assumption}[Signal-to-interference-and-noise ratio] Let \\$\rho_{i_k} \!: 2^\setI \rightarrow \realnumbers_+$ be the long-term SINR of MS $i_k$ defined as \begin{equation} \label{eq:rho} \rho_{i_k}(\setS(i)) = \frac{\gamma_{i_ki} P_{i_k}}{\sigma_{i_k}^2 + \sum_{j \in \setI \setminus \setS(i)} \gamma_{i_kj} P_j}. \end{equation} \end{assumption} We consider a general model for the MS throughputs, which depends on the cluster size and the long-term SINR. The cluster size determines the overhead, whereas the long-term SINR determines the achievable rate. \begin{assumption}[Throughput] \label{ass:throughput} For a cluster size~$\card{\setS(i)}$ and a long-term SINR~$\rho_{i_k}(\setS(i))$, the throughput of MS $i_k$ is given by $t_{i_k}(\setS) = v_{i_k}(\card{\setS(i)}, \rho_{i_k}(\setS(i)))$, where $v_{i_k} \!: \naturalnumbers \times \realnumbers_+ \rightarrow \realnumbers_+$ is unimodal in its first argument and non-decreasing in its second argument. \end{assumption} The structure of $\rho_{i_k}(\cdot)$ and the monotonicity properties of $v_{i_k}(\cdot,\cdot)$ will be used in the throughput bound to be derived below. The model in Assumption~\ref{ass:throughput} is quite general and is compatible with several existing throughput models: \begin{example} In \cite{Peters2012}, the clusters are orthogonalized using time sharing, and no intercluster interference is thus received. A coherence time of $L_c$ is available. Each BS owns $1/I$ of the coherence time, which is contributed to the corresponding cluster. Larger clusters give more time for data transmission but also require more CSI feedback, which in \cite{Peters2012} is modelled as a quadratic function, giving the throughput model as: \begin{equation} \label{eq:example_Peters2012} v_{i_k}(\card{\setS(i)}, \cdot) = \left( \frac{\card{\setS(i)}}{I} - \frac{\card{\setS(i)}^2}{L_c} \right) d \log \left( 1 + \varrho_{i_k} \right). \end{equation} where $\varrho_{i_k} = \frac{\gamma_{i_ki} P_{i_k}}{\sigma_{i_k}^2}$ is the constant \emph{signal-to-noise ratio} (SNR). The function in \eqref{eq:example_Peters2012} is strictly unimodal in its first argument and independent of its second argument. \end{example} \begin{example} In \cite{Chen2014}, the clusters are operating using spectrum sharing. The CSI acquisition overhead is not accounted for. A slightly modified\footnote{The original SINR model in \cite{Chen2014} includes the impact of the instantaneous IA filters, which we neglect here in order to avoid the cross-dependence between the IA solution and the clustering. 
This corresponds to how the approximated interference graph weights are derived in \cite{Chen2014}.} version of their throughput model is then: \begin{equation} \label{eq:example_Chen2014} v_{i_k}(\cdot, \rho_{i_k}(\setS(i))) = d \log \left( 1 + \rho_{i_k}(\setS(i)) \right). \end{equation} The function in \eqref{eq:example_Chen2014} is independent of its first argument, and strictly increasing in its second argument. \end{example} \begin{example} In \cite{Brandt2016bsubmitted}, intercluster time sharing and intercluster spectrum sharing are used in two different orthogonal phases. For the CSI acquisition overhead during the time sharing phase, a model similar to the one in \cite{Peters2012} is used. For the achievable rates during the spectrum sharing phase, \mbox{long-term} averages are derived involving an exponential integral. The model is thus \begin{equation} \label{eq:example_Brandt2016b} v_{i_k}(\card{\setS(i)}, \rho_{i_k}(\setS(i))) = \alpha_{i_k}^{(1)}(\card{\setS(i)}) \, r_{i_k}^{(1)} + r_{i_k}^{(2)}(\rho_{i_k}(\setS(i))) \end{equation} where \begin{align*} &\alpha_{i_k}^{(1)}(\card{\setS(i)}) = \frac{\card{\setS(i)}}{I} - \frac{(M + K(N+d)) \card{\setS(i)} + KM \card{\setS(i)}^2}{L_c}, \\ &r_{i_k}^{(1)} = d \, e^{1/\varrho_{i_k}} \int_{1/\varrho_{i_k}}^\infty t^{-1} e^{-t} \, \mathrm{d}t, \\ &r_{i_k}^{(2)}(\rho_{i_k}(\setS(i))) = d \, e^{1/\rho_{i_k}(\setS(i))} \int_{1/\rho_{i_k}(\setS(i))}^\infty t^{-1} e^{-t} \, \mathrm{d}t. \end{align*} The function in \eqref{eq:example_Brandt2016b} is strictly unimodal in its first argument and strictly increasing in its second argument. \end{example} Given the MS throughput model, we introduce the notion of a system-level objective: \begin{definition}[Objective] \label{def:objective} The performance of the entire multicell system is given by $f(\setS)~=~g(t_{1_1}(\setS), \ldots, t_{I_K}(\setS))$, where $g \!: \realnumbers_+^{I \cdot K} \rightarrow \realnumbers_+$ is an argument-wise non-decreasing function. \end{definition} The function $f(\setS)$ thus maps a set partition to the corresponding system-level objective. Typical examples of objective functions are the weighted sum $f_\text{WSR}(\setS)~=~\sum_{(i,k)} \lambda_{i_k} t_{i_k}(\setS)$ and the minimum weighted throughput $f_\text{min}(\setS)~=~\min_{(i,k)} \lambda_{i_k} t_{i_k}(\setS)$. \section{Globally Optimal Base Station Clustering} \label{sec:branchandbound} We will now provide a method for solving the following combinatorial optimization problem: \begin{equation} \label{opt:system} \begin{aligned} \setS^\star = \, & \underset{\setS}{\text{arg\,max}} & & f(\setS) \\ & \text{subject to} & & \text{$\setS$ satisfying Def.~\ref{def:set_partition}} \\ & & & \card{\setS(i)} \leq D, \; \forall \, i \in \setI. \end{aligned} \end{equation} The cardinality constraint is used to model cluster size constraints due to IA feasibility \cite{Liu2013}, CSI acquisition feasibility \cite{Brandt2016bsubmitted}, implementation feasibility, etc. \subsection{Restricted Growth Strings and Exhaustive Search} In the algorithm to be proposed, we use the following alternative representation of a set partition: \begin{definition}[Restricted growth string, {\cite[Sec.~7.2.1.5]{TAoCPVol4Apart1}}] A set partition $\setS$ can equivalently be expressed using a \emph{restricted growth string} $a = a_1a_2\ldots a_I$ with the property that $a_i~\leq~1~+~\max(a_1, \ldots, a_{i-1})$ for $i \in \setI$. Then $a_i \in \naturalnumbers$ describes which cluster cell $i$ belongs to.
We let $\setS_a$ denote the mapping from $a$ to the set partition $\setS$, and $a_\setS$ its inverse. \end{definition} For example, the set partition $\setS_a~=~\{ \{1, 3 \}, \{ 2 \} , \{ 4 \} \}$ would be encoded as $a_\setS~=~1213$. One approach to solving the optimization problem in \eqref{opt:system} is to enumerate all restricted growth strings of length $I$, using e.g. Alg.~H of \cite[Sec.~7.2.1.5]{TAoCPVol4Apart1}. The complexity of this approach is, however, $\mathbb{B}_I$, the $I$th \emph{Bell number}\footnote{The $I$th Bell number describes the number of set partitions of $\setI$ \cite[p. 287]{IntroductoryCombinatorics}, and can be bounded as $\mathbb{B}_I~<~\left( 0.792I/\log(1 + I) \right)^I$ \cite{Berend2010}. The first 17 Bell numbers are 1, 1, 2, 5, 15, 52, 203, 877, 4\,140, 21\,147, 115\,975, 678\,570, 4\,213\,597, 27\,644\,437, 190\,899\,322, 1\,382\,958\,545, 10\,480\,142\,147.}, which grows super-exponentially. \subsection{Branch and Bound Algorithm} Most of the possible set partitions are typically not interesting in the sense of the objective of \eqref{opt:system}. For example, most set partitions will include clusters whose members are placed far apart, thus leading to low SINRs. By prioritizing set partitions with a potential to achieve large throughputs, the complexity of finding the globally optimal set partition can be decreased significantly compared to that of exhaustive search. This is the idea of the branch and bound approach \cite{Land1960}, which entails bounding the optimal value $f(\setS^\star)$ from above and below for a sequence of \emph{partial solutions}. When the bounds converge, the optimal solution has been found. The partial solutions are described using \emph{partial restricted growth strings}: \begin{definition}[Partial restricted growth string] The restricted growth string $\bar{a} = \bar{a}_1\bar{a}_2\ldots \bar{a}_l$ is \emph{partial} if $l = \textsc{length}(\bar{a}) \leq I$. The corresponding partial set partition, where only the first $l$ cells are constrained into clusters, is denoted $\setS_{\bar{a}}$. \end{definition} The branch and bound method considers the sequence of partial solutions by dynamically exploring a search tree (see Fig.~\ref{fig:searchtree}, at the top of the page), in which each interior node corresponds to a partial restricted growth string. By starting at the root and traversing down the search tree\footnote{At level $i \leq I$ of the tree, there are $\mathbb{B}_i$ nodes.}, more and more cells are constrained into clusters, ultimately giving the leaves of the tree, which describe all possible restricted growth strings. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{illustrations/searchtree} \caption{Example of branch and bound search tree for $I = 4$.} \label{fig:searchtree} \end{figure} \subsubsection{Bounds} We now provide the bounds that will be used to avoid exploring large parts of the search tree. \begin{lemma}[Objective bound] Let $\check{t}_{i_k}(\setS_{\bar{a}})$ be an upper bound of the throughput of MS $i_k$ for all leaf nodes in the sub-tree below the node described by $\bar{a}$. Then the function \begin{equation*} \check{f}(\setS_{\bar{a}})~=~g(\check{t}_{1_1}(\setS_{\bar{a}}), \ldots, \check{t}_{I_K}(\setS_{\bar{a}})) \end{equation*} is an upper bound of the objective in \eqref{opt:system} for all leaves in the sub-tree below the node described by $\bar{a}$.
\end{lemma} \begin{IEEEproof} This follows directly from the argument-wise monotonicity of $g(t_{1_1}, \ldots, t_{I_K})$ in Def.~\ref{def:objective}. \end{IEEEproof} In order to describe the throughput bound, we will introduce three sets. Given a node described by $\bar{a}$, the cells in $\setP_{\bar{a}} = \{ 1, \ldots, \textsc{length}(\bar{a}) \} \subseteq \setI$ are constrained into clusters as given by $\setS_{\bar{a}}$. The remaining cells in $\setP^\bot_{\bar{a}} = \setI \setminus \setP_{\bar{a}}$ are still unconstrained.\footnote{In the sub-tree below the node described by $\bar{a}$, there is a leaf node for all possible ways of constraining the cells in $\setP^\bot_{\bar{a}}$ into clusters.} The set of cells which could accommodate more members in the corresponding clusters\footnote{For the sake of this definition, we consider the non-constrained cells in $\setP^\bot_{\bar{a}}$ to be in singleton clusters.} is written as $\setF_{\bar{a}} = \left\{ i \in \setP_{\bar{a}} : \card{\setS_{\bar{a}}(i)} < D \right\} \cup \setP^\bot_{\bar{a}}$. \begin{theorem}[Throughput bound] \label{thm:throughput_bound} Let $t_{i_k}(\setS_a)$ be the throughput of MS $i_k$ for some leaf node in the sub-tree below the node described by $\bar{a}$. It can be bounded as $t_{i_k}(\setS_a)~=~v_{i_k}(\card{\setS_{a}(i)}, \rho_{i_k}(\setS_{a}(i))) \leq v_{i_k}(\check{B}_{i_k}, \check{\rho}_{i_k})$ where \begin{align*} \check{B}_{i_k} &= \begin{cases} \card{\setS_{\bar{a}}(i)} & \textnormal{if} \; \card{\setS_{\bar{a}}(i)} \geq B_{i_k}^\star, \\ \min \left( \card{\setS_{\bar{a}}(i)} + \card{\setP^\bot_{\bar{a}}}, B_{i_k}^\star \right) & \textnormal{else if} \; i \in \setP_{\bar{a}}, \\ \min \left( \card{\setF_{\bar{a}}}, B_{i_k}^\star \right) & \textnormal{else if} \; i \in \setP^\bot_{\bar{a}}, \end{cases} \\ B_{i_k}^\star &= \argmax_{b \in \naturalnumbers, b \leq D} v_{i_k}(b, \check{\rho}_{i_k}), \end{align*} and \begin{align} \check{\rho}_{i_k} = \, &\underset{\card{\setE_{i_k}} \leq D}{\textnormal{maximize}} & & \rho_{i_k}(\setE_{i_k}) \label{opt:rho_bound} \\ & \textnormal{subject to} & & \textnormal{if} \; i \in \setP_{\bar{a}} \textnormal{:} \notag \\ & & & \hspace{2em} \setS_{\bar{a}}(i) \subseteq \setE_{i_k} \subseteq \left( \setS_{\bar{a}}(i) \cup \setP^\bot_{\bar{a}} \right) \notag \\ & & & \textnormal{else if} \; i \in \setP^\bot_{\bar{a}} \textnormal{:} \notag \\ & & & \hspace{2em} \setE_{i_k} \subseteq \setF_{\bar{a}}. \notag \end{align} \end{theorem} \begin{IEEEproof} First note that $\check{\rho}_{i_k}$ is an upper bound of the achievable long-term SINR for MS $i_k$ in the considered sub-tree, since the requirement of disjoint clusters is not enforced in the optimization problems\footnote{The optimal solution to the optimization problem in \eqref{opt:rho_bound} can be found by minimizing the denominator of $\rho_{i_k}(\setE_{i_k})$ in \eqref{eq:rho}, which is easily done using greedy search over the feasible set. The set-function $\rho_{i_k}(\setE_{i_k})$ is \emph{supermodular} \cite{SubmodularFunctionMinimization}, i.e. demonstrating ``increasing returns'', which is the structure that admits the simple solution of the optimization problem. Without changes, Thm.~\ref{thm:throughput_bound} would indeed hold for any other supermodular set-function $\rho_{i_k}(\setE_{i_k})$.} in \eqref{opt:rho_bound}. We therefore have that $v_{i_k}(\card{\setS_{a}(i)}, \rho_{i_k}(\setS_{a}(i))) \leq v_{i_k}(\card{\setS_{a}(i)}, \check{\rho}_{i_k})$, due to the monotonicity property of $v_{i_k}(\cdot,\cdot)$.
We now prove that $v_{i_k}(\card{\setS_{a}(i)}, \check{\rho}_{i_k})~\leq~v_{i_k}(\check{B}_{i_k}, \check{\rho}_{i_k})$ holds. Note that $B_{i_k}^\star$ is the optimal size of the cluster, in terms of the first parameter of $v_{i_k}(\cdot,\check{\rho}_{i_k})$. If $\card{\setS_{\bar{a}}(i)} \geq B_{i_k}^\star$, the cluster is already at least as large as the optimal size, and keeping the current size thus gives a bound for all leaves in the sub-tree. On the other hand, if $\card{\setS_{\bar{a}}(i)} < B_{i_k}^\star$ and $i \in \setP_{\bar{a}}$, $\check{B}_{i_k}$ is selected as close to $B_{i_k}^\star$ as possible, given the number of unconstrained cells that could conceivably be constrained into $\setS_{\bar{a}}(i)$ further down in the sub-tree. If $i \in \setP^\bot_{\bar{a}}$, however, we similarly bound $\check{B}_{i_k}$, except that we only consider cells in non-full clusters, with which cell $i$ could conceivably be clustered further down in the sub-tree. Due to the unimodality property of $v_{i_k}(\cdot,\check{\rho}_{i_k})$ and the fact that $\check{B}_{i_k}$ is selected optimistically, we have that $v_{i_k}(\card{\setS_{a}(i)}, \check{\rho}_{i_k}) \leq v_{i_k}(\check{B}_{i_k}, \check{\rho}_{i_k})$, which gives the bound. \end{IEEEproof} As the algorithm explores nodes deeper in the search tree, $\textsc{length} \left( \bar{a} \right)$ gets closer to $I$, and there is less freedom in the bounds. For $\textsc{length} \left( \bar{a} \right) = I$, the bounds are tight. \algrenewcommand\algorithmicindent{0.25em}% \begin{algorithm}[t] \caption{Branch and Bound for Base Station Clustering} \label{alg:branchandbound} \begin{algorithmic}[1] \Require Initial $a_\text{incumbent}$ from some heuristic, $\epsilon \geq 0$ \State $\texttt{live} \gets [1]$ \While{$\textsc{length}(\texttt{live}) > 0$} \State $\bar{a}_\text{parent} \gets \text{node from \texttt{live} with highest upper bound}$ \IIf{$\check{f}(\setS_{\bar{a}_\text{parent}}) - f(\setS_{a_\text{incumbent}}) < \epsilon$} \Goto{line \ref{alg:branchandbound:final}} \EndIIf \ForAll{$\bar{a}_\text{child}$ from $\textsc{branch}(\bar{a}_\text{parent})$} \If{$\check{f}(\setS_{\bar{a}_\text{child}}) > f(\setS_{a_\text{incumbent}})$} \If{$\textsc{length}(\bar{a}_\text{child}) = I$} \State $a_\text{incumbent} \gets \bar{a}_\text{child}$ \Else \State Append $\bar{a}_\text{child}$ to \texttt{live} \EndIf \EndIf \EndFor \EndWhile \State \Return globally optimal $a_\text{optimal} = a_\text{incumbent}$ \label{alg:branchandbound:final} \end{algorithmic} \end{algorithm} \algrenewcommand\algorithmicindent{1em}% \begin{figure}[t] \vspace{-1em} \hrule \vspace{0.5em} \begin{algorithmic}[1] \Function{branch}{$\bar{a}_\text{parent}$} \State Initialize empty list $\texttt{children} = []$ \For{$b = 1:(1 + \max(\bar{a}_\text{parent}))$} \State Append $[\bar{a}_\text{parent}, b]$ to \texttt{children} \EndFor \State \Return \texttt{children} \EndFunction \end{algorithmic} \vspace{0.5em} \hrule \vspace{-1em} \end{figure} \subsubsection{Algorithm} The proposed branch and bound method is described in Alg.~\ref{alg:branchandbound}. The algorithm starts by getting an initial incumbent solution from a heuristic (e.g. from Sec.~\ref{sec:heuristic} or \cite{Peters2012,Chen2014,Brandt2016bsubmitted}), and then sequentially studies the sub-tree which currently has the highest upper bound.
By comparing the upper bound $\check{f}(\setS_{\bar{a}})$ to the currently best lower bound $f(\setS_{a_\text{incumbent}}) \leq f(\setS^\star)$, the \emph{incumbent solution}, the sub-tree below $\bar{a}$ can be \emph{pruned} if it provably cannot contain the optimal solution, i.e. if $\check{f}(\setS_{\bar{a}})~<~f(\setS_{a_\text{incumbent}})$. If a node $\bar{a}$ cannot be pruned, all children of $\bar{a}$ are built by a branching function and stored in a list for future exploration by the algorithm. If large parts of the search tree can be pruned, few nodes need to be explicitly explored, leading to a complexity reduction. The algorithm ends when the optimality gap for the current incumbent solution is less than a pre-defined $\epsilon \geq 0$. \begin{theorem} Alg.~\ref{alg:branchandbound} converges to an $\epsilon$-optimal solution of the optimization problem in \eqref{opt:system} in at most $\sum_{i = 1}^I \mathbb{B}_i$ iterations. \end{theorem} \begin{IEEEproof} Only sub-trees which provably cannot contain the optimal solution are pruned. Since all non-pruned leaves are explored, the global optimum will be found. No more than all $\sum_{i = 1}^I \mathbb{B}_i$ nodes of the search tree can be traversed. \end{IEEEproof} In Sec.~\ref{sec:numerical_results} we empirically show that the average complexity is significantly lower than the worst case. \vspace{-2ex} \subsection{Heuristic Base Station Clustering} \label{sec:heuristic} We also provide a heuristic (see Alg.~\ref{alg:heuristic}) which can be used as the initial incumbent in Alg.~\ref{alg:branchandbound}, or as a low complexity clustering algorithm in its own right. The heuristic works by greedily maximizing a function of the average channel gains in the clusters while respecting the cluster size constraint. The heuristic is similar to Ward's method \cite{Ward1963}.
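As a complement to the pseudocode in Alg.~\ref{alg:heuristic} below, the following Python sketch illustrates the same greedy merging idea (a sketch under our own assumptions: the data layout \texttt{gamma[i][k][j]} for the large scale fading between BS $j$ and MS $i_k$ and all names are illustrative; since the pair metric does not depend on the current clustering, the candidate pairs can equivalently be ranked once up front):
\begin{verbatim}
import math
from itertools import permutations

def cluster_heuristic(gamma, P, sigma2, D, K):
    # gamma[i][k][j]: large scale fading between BS j and MS i_k (assumed layout)
    # P[j]: transmit power of BS j, sigma2[i][k]: noise power of MS i_k
    I = len(gamma)
    clusters = {i: {i} for i in range(I)}    # start from singleton clusters

    def metric(i, j):
        # merge metric based on average channel gains, cf. the argmax step of Alg. 2
        return sum(math.log(1.0 + gamma[i][k][j] * P[j] / sigma2[i][k])
                   for k in range(K))

    # rank all ordered BS pairs once; equivalent to repeated argmax over the pair set
    pairs = sorted(permutations(range(I), 2),
                   key=lambda ij: metric(*ij), reverse=True)
    for i, j in pairs:
        merged = clusters[i] | clusters[j]
        if len(merged) <= D:                 # respect the cluster size constraint
            for c in merged:
                clusters[c] = merged
    return list({frozenset(c) for c in clusters.values()})
\end{verbatim}
The returned list of clusters corresponds to the set partition $\setS$ returned by Alg.~\ref{alg:heuristic}.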
\begin{algorithm}[t]
\caption{Heuristic for Base Station Clustering} \label{alg:heuristic}
\begin{algorithmic}[1]
\Require $\setL = \{ (i,j) \in \setI \times \setI \mid i \neq j \}$, $\setS = \{ \{ 1 \}, \ldots, \{ I \} \}$
\While{$\card{\setL} > 0$}
\State $(i^\star, j^\star) = \argmax_{(i,j) \in \setL} \sum_{k=1}^K \log \left( 1 + \gamma_{i_kj} P_j/\sigma_{i_k}^2 \right)$
\State Let $\Xi_{\left( i^\star, j^\star \right)} \gets \setS(i^\star) \cup \setS(j^\star)$
\If{$\card{\Xi_{\left( i^\star, j^\star \right)}} \leq D$}
\State Let $\setS \gets \left( \setS \setminus \{ \setS(i^\star), \setS(j^\star) \} \right) \cup \{ \Xi_{\left( i^\star, j^\star \right)} \}$
\EndIf
\State Let $\setL \gets \setL \setminus \{ \left( i^\star, j^\star \right) \}$
\EndWhile
\State \Return heuristic solution $a_\text{heuristic} = a_\setS$
\end{algorithmic}
\end{algorithm}
\begin{figure}[t]
\centering
\includegraphics{plots/convergence_bounds}
\vspace{-5ex}
\caption{Example of convergence of the algorithm for one realization.} \label{fig:convergence_bounds}
\vspace{2ex}
\includegraphics{plots/convergence_fathom}
\vspace{-5ex}
\caption{Pruning evolution for the realization in Fig.~\ref{fig:convergence_bounds}.} \label{fig:convergence_fathom}
\vspace{2ex}
\includegraphics{plots/I}
\vspace{-5ex}
\caption{Average complexity as a function of $I$.} \label{fig:I}
\vspace{2ex}
\includegraphics{plots/SNR}
\vspace{-5ex}
\caption{Sum throughput performance as a function of \texttt{SNR}.} \label{fig:SNR}
\end{figure}
\vspace{-2ex}
\section{Numerical Results} \label{sec:numerical_results}

For the performance evaluation \cite{Brandt2015c_code}, we consider a network of $I = 16$ BSs, $K = 2$ MSs per cell, and $d = 1$ data stream per MS. We employ the throughput model from \eqref{eq:example_Brandt2016b} and let $f(\setS)~=~\sum_{(i,k)} t_{i_k}(\setS)$. We let the number of antennas be $M = 8$ and $N = 2$. This gives a hard cluster size constraint of $D = 4$ cells per cluster, due to IA feasibility \cite{Liu2013}. We consider a large-scale setting with path loss $15.3 + 37.6 \log_{10}(\text{distance} \, \text{[m]})$ dB, i.i.d. log-normal shadow fading with $8$ dB std. dev., and i.i.d. $\mathcal{CN}(0,1)$ small-scale fading. The BSs are randomly dropped in a $2000 \times 2000 \, \text{m}^2$ square and the BS-MS distance is $250 \, \text{m}$. We let $L_c = 2\,700$, corresponding to an MS speed of $30$ km/h at a typical carrier frequency and coherence bandwidth \cite{Jindal2010}.

In Fig.~\ref{fig:convergence_bounds} we show the convergence of the best upper bound and the incumbent solution, respectively, for one network realization with $\texttt{SNR} = P_{i_k}/\sigma_{i_k}^2 = 20 \, \text{dB}$. The number of iterations needed was $198$, and a total of $908$ nodes were bounded. Naive exhaustive search would have required exploring $\mathbb{B}_{16}~=~10\,480\,142\,147$ nodes, and the proposed algorithm was thus around $1 \cdot 10^7$ times more efficient for this realization\footnote{Also note that $\sum_{i=1}^{16} \mathbb{B}_i = 12\,086\,679\,035$, i.e., the actual running time of the algorithm was significantly lower than the worst-case running time.}. The number and fraction of nodes pruned during the iterations are shown in Fig.~\ref{fig:convergence_fathom}. At convergence, $99.99999 \%$ of the search tree had been pruned. We show the average number of iterations as a function of network size in Fig.~\ref{fig:I}. The complexity of the proposed algorithm is orders of magnitude lower than that of exhaustive search.
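The search-space sizes quoted above are easy to reproduce, assuming (consistently with the footnote) that $\mathbb{B}_i$ denotes the $i$-th Bell number; the following short Python snippet is only an illustration and is not part of the evaluation code of \cite{Brandt2015c_code}.
\begin{verbatim}
def bell_numbers(n):
    """Return [B_1, ..., B_n], the first n Bell numbers,
    computed with the Bell triangle."""
    bells, row = [], [1]
    for _ in range(n):
        bells.append(row[-1])    # B_i is the last entry of row i
        new_row = [row[-1]]      # next row starts with the previous last entry
        for entry in row:
            new_row.append(new_row[-1] + entry)
        row = new_row
    return bells

B = bell_numbers(16)
print(B[-1])   # B_16 = 10480142147, the number of leaves (full clusterings)
print(sum(B))  # sum_{i=1}^{16} B_i = 12086679035, the total number of tree nodes
\end{verbatim}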
In Fig.~\ref{fig:SNR} we show the sum throughput performance as a function of $\texttt{SNR}$, averaged over $250$ network realizations. The heuristic algorithm performs well: it is close to the optimum\footnote{For this network size, exhaustive search is not tractable. Without Alg.~\ref{alg:branchandbound}, we would thus not know that Alg.~\ref{alg:heuristic} performs so well. This shows the significance of Alg.~\ref{alg:branchandbound} as a benchmarking tool for practical but suboptimal clustering algorithms such as Alg.~\ref{alg:heuristic}, or the algorithms in \cite{Peters2012,Chen2014,Brandt2016bsubmitted}.}, and it achieves about twice the throughput of the case without clustering, in which $\setS = \{ \{ 1 \}, \ldots, \{ I \} \}$. The grand cluster $\setS = \{ \setI \}$ has zero sum throughput since $I > D$, and is therefore not shown.

\section{Conclusions} With a structured branch and bound approach, the otherwise intractable base station clustering problem has been solved to global optimality. The algorithm is intended for benchmarking of suboptimal base station clustering heuristics in intermediate-size networks.

\vfill\pagebreak

\bibliographystyle{IEEEtran}
\section{Introduction}

In the framework of general relativity, two theoretical aspects of cosmic strings have mainly been studied: the dynamical aspect, in which the equations of motion of the infinitely thin cosmic string are governed by the Nambu-Goto action, and the self-gravitating aspect, in which the straight cosmic string in particular is considered as the source of a gravitational field. In the latter case one finds that the asymptotic metric generated by a straight cosmic string is a conical metric \cite{she}, the case of a singular line being obtained in a limiting process. The study of line sources in gravitation can be traced back to a paper by Israel \cite{isr1}, in which the author concluded that there existed ``no simple general prescription [...] for obtaining the physical characteristics of an arbitrary line source''. However, Vilenkin \cite{vil} gave the conical singularity its physical meaning as a cosmic string. Attention has recently been devoted to understanding the dynamics of a conical-type line source of the Einstein equations \cite{vic,fro,unr,cla}. The conclusions, summarised in \cite{isr2}, are as follows. A two-dimensional timelike worldsheet whose points are conical singularities of the 4-geometry cannot in general be the dynamical evolution of a Nambu-Goto string of arbitrary initial shape. The conical singularity requires the worldsheet to be totally geodesic. This restricts the initial shape of the string and forces each of its points to evolve along a geodesic of the 4-geometry.

In this paper, we take up again the question of the dynamics, but for a self-gravitating extended string. For a straight cosmic string, it is known that the exterior metric may be matched with an interior metric whose source is an energy-momentum tensor with suitable properties. We consider a string of arbitrary shape, but we restrict ourselves to the case of a thin tube of matter, adopting the assumption that the exterior metric is locally the one describing a straight cosmic string. The interior metric is constrained accordingly, and the energy-momentum tensor can be derived from the metric. The aim of the present work is to find the equations of motion of the central line of this thin tube of matter in the limit where its radius tends to zero. We emphasize that we do not directly consider a self-gravitating line. In our method, the conical points are smoothed out on a scale comparable to the radius of the string. The central line of this tube sweeps a timelike 2-worldsheet whose points are perfectly regular. A local coordinate system can then be attached to our spacetime by taking the two parameters of the worldsheet as the first two coordinates and the other two as geodesic coordinates pointing in a direction orthogonal to the worldsheet. This local coordinate system affixed to the worldsheet allows the natural introduction of the extrinsic curvature and other geometric parameters of the worldsheet. The interior metric of the string is essentially characterised by a function $f(l)=\epsilon h(l/\epsilon)$, in which $l$ is the radial coordinate whose origin lies on the worldsheet and $\epsilon$ is a length typical of the thickness of the string. The function $h$ is arbitrary, subject only to certain conditions expressing the smoothness of the spacetime on the worldsheet, as well as the matching conditions on the boundary between the interior of the string and the vacuum.
If no other conditions are imposed, we shall prove that the worldsheet tends to a totally geodesic surface, {\em i.e.} one of vanishing extrinsic curvature, when $\epsilon$ goes to zero. However, we have found another possibility. If we impose on the function $h$ a specific supplementary constraint, then the extrinsic curvature is no longer forced to vanish. Nevertheless, the mean curvature tends to zero when $\epsilon$ does, and thus the worldsheet tends to be locally extremal, which is precisely the behavior of the Nambu-Goto string. Since the function $h$ characterises the metric in the interior of the string, the imposed constraint is interpreted as a specification of the matter of the string.

The paper is organised in the following manner. We recall in Sec. II the basic results on self-gravitating straight strings with some thickness and we describe the formalism that we shall use. In Sec. III, a self-gravitating string of arbitrary shape is introduced and the coordinate system adapted to its study is defined. In Sec. IV, we expand in powers of $1/\epsilon$ the geometrical quantities appearing in the Einstein equations. In Sec. V, taking the limit of the Einstein equations as $\epsilon$ goes to zero on the boundary between the string and the vacuum, we obtain the constraints that the worldsheet swept by the central line of the string must satisfy. Finally, in Sec. VI, we find the Nambu-Goto energy-momentum tensor in the zero limit of $\epsilon$, confirming the choice of the definitions adopted in Sec. III.

\section{Straight strings as smoothed cones}

In a general relativistic context, it is possible to portray a straight string as a thin cylinder of matter \cite{mar,got,his,lin}. We give some basic features of a self-gravitating straight string since they will be used later on in the general case of strings of arbitrary shape. In the coordinate system $(t,z,l,\phi)$ with $l\geq0$ and $0\leq\phi<2\pi$, the energy-momentum tensor has the form
\begin{equation} \label{1} T^{t}_{t}=T^{z}_{z}=-\sigma(l)\qquad T^{l}_{l}=T^{\phi}_{\phi}=0 \qquad 0\leq l<l_{\small{0}} \end{equation}
where $l_{\small{0}}$ is the given radius of the cylinder. The energy density $\sigma$ is a positive regular function on $\Re^{2}$. Taking into account the Einstein equations, the interior metric can be written as
\begin{equation} \label{2} ds_{\small{INT}}^{2}=-dt^{2}+dz^{2}+dl^{2}+f^{2}(l)d\phi^{2}, \qquad 0\leq l<l_{\small{0}} \end{equation}
where the positive function $f$ determines $\sigma$ by the formula
\begin{equation} \label{3} \sigma=- \frac{1}{8\pi G}\frac{f^{\prime\prime}}{f}, \end{equation}
$G$ being the Newtonian gravitational constant. In order to ensure a regular behavior of metric (\ref{2}) at $l=0$, one must choose $f$ such that
\begin{equation} \label{4} f(l)\sim l+a_{3} l^{3}+a_{5} l^{5}+\cdots \qquad \mbox{as} \qquad l\rightarrow 0. \end{equation}
We can also require that the function $f$ be increasing.
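As a short consistency check, not spelled out in the text, the expansion (\ref{4}) ensures that the energy density (\ref{3}) is finite on the axis:
\[
\frac{f^{\prime\prime}}{f}=\frac{6a_{3}l+20a_{5}l^{3}+\cdots}{l+a_{3}l^{3}+\cdots}
\;\longrightarrow\;6a_{3}\quad\mbox{as}\quad l\rightarrow 0,
\qquad\mbox{so}\qquad
\sigma(0)=-\frac{3a_{3}}{4\pi G},
\]
and positivity of $\sigma$ near the axis requires $a_{3}\leq 0$. For the constant-density solution recalled below, $f(l)=\epsilon\sin(l/\epsilon)$ gives $a_{3}=-1/(6\epsilon^{2})$ and hence $\sigma(0)=1/(8\pi G\epsilon^{2})$.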
The exterior metric which can be matched to the interior metric (\ref{2}) can be expressed in the form
\begin{equation} \label{5} ds_{\small{EXT}}^{2}=-dt^{2}+dz^{2}+dl^{2}+\sin^{2}\alpha\, (l-\stackrel{-}{l_{\small{0}}})^{2}d\phi^{2}, \qquad l>l_{\small{0}} \end{equation}
where the constants $\alpha$ and $\stackrel{-}{l_{\small{0}}}$ are determined from the following matching conditions on the metric and its first derivatives at $l=l_{0}$
\begin{equation} \label{6} f(l_{\small{0}})=\sin\alpha(l_{\small{0}}-\stackrel{-}{l_{\small{0}}}) \quad {\rm and} \quad f^{\prime}(l_{\small{0}})=\sin\alpha. \end{equation}
The linear mass density $\mu$ of the straight string is defined by
\begin{equation} \label{8} \mu=\int_{l<l_{\small{0}}}\sigma f\, dl\, d\phi. \end{equation}
By using (\ref{3}), (\ref{4}) and (\ref{6}), it is easy to see that it is related to the angular deficit $\Delta$ of metric (\ref{5}) by
\begin{equation} \label{9} \Delta=2\pi (1-\sin\alpha)=8\pi G \mu. \end{equation}
From (\ref{9}), we note that the exterior metric (\ref{5}) is independent of the details of the internal structure of the string as well as of its radius. At this point, it is enlightening to recall the solution for a straight string with $\sigma=$ constant. The interior metric is characterised by
\begin{equation} \label{10} f(l)=\epsilon\sin\frac{l}{\epsilon}\qquad\mbox{and}\qquad\sigma= \frac{1}{8\pi G\epsilon^{2}} \end{equation}
and the exterior metric is again (\ref{5}). The geometrical interpretation of the parameter $\epsilon$ will be given below. The matching conditions (\ref{6}) give the relation $l_{\small{0}}/\epsilon =\pi /2 -\alpha$ and thereby
\begin{equation} \label{11} \epsilon=(l_{\small{0}}-\stackrel{-}{l_{\small{0}}}) \frac{\sin\alpha}{\cos\alpha}. \end{equation}
We point out that $l_{\small{0}}/\epsilon$ depends only on $\alpha$. Returning to the general case, this particular solution suggests imposing in the generic interior metric (\ref{2}) the form
\begin{equation} \label{12} f(l)=\epsilon h(\frac{l}{\epsilon}) \end{equation}
where $h$ is a smooth function and $\epsilon$ is a parameter which again takes the value (\ref{11}). Now the matching conditions (\ref{6}) yield simply
\begin{equation} \label{13} h(\frac{l_{\small{0}}}{\epsilon})=\cos\alpha \quad {\rm and} \quad h^{\prime}(\frac{l_{\small{0}}}{\epsilon})=\sin\alpha. \end{equation}
For a given function $h$, the quotient $l_{0}/\epsilon$ depends only on the constant angle $\alpha$, {\em i.e.} on the linear mass density $\mu$ by virtue of relation (\ref{9}). One can give a geometrical interpretation. By performing the change of coordinate $\rho=l-\stackrel{-}{l_{\small{0}}}$ in metric (\ref{5}), we recognise the so-called conical metric representing a cone of half-angle $\alpha$. It is useful to give a representation in a Euclidean 3-space of the 2-surface $t=$ constant and $z=$ constant for the interior and exterior metrics (cf. fig. 1). It can be visualised as a cone whose top is cut out at a distance $\rho_{\small{0}}=l_{\small{0}}-\stackrel{-}{l_{\small{0}}}$ from the vertex and replaced by an axisymmetric cap which joins the cone tangentially along the junction circle of radius $\rho_{0}\sin \alpha$. The coordinate $l$ represents the length of the radial geodesic of the 2-surface originating at the top of the cap. It takes the value $l_{\small{0}}$ at the boundary between the smooth cap and the cone and then continues along a generatrix of the cone. The coordinate $\phi$ is the azimuthal angle. Referring to fig.
1, we see that $\epsilon =\rho_{0}\sin \alpha /\cos \alpha$, which represents the distance from the junction to the central axis, coincides with choice (\ref{11}). For the particular solution (\ref{10}) we have a spherical cap of radius $\epsilon$.
\begin{figure}[htbp]
\epsfxsize=14cm \epsfysize=14cm
$$ \epsfbox{Smooth.eps} $$
\caption{Smoothed cone} \label{fig:Smoothed cone}
\end{figure}
In the generic case, $h$ is characteristic of the form of the cap and $\epsilon$ of its size. This explains the choice of the form (\ref{12}) of $f$, which allows us to fit into the cone a cap of a given form and of any size. In short, the 2-surface of the Euclidean 3-space will be called a {\it smoothed cone} of half-angle $\alpha$. The corresponding spacetime will be said to have a {\it smooth conical point}. We emphasize that we can choose $l_{\small{0}}$ and $\epsilon$ as small as we want, thus making the smoothed cone as sharp as we want, while keeping the quotient $l_{\small{0}}/\epsilon$ fixed. Finally, we shall also require the continuity of the second derivatives of the metric at the junction. This is simply achieved by putting
\begin{equation} \label{16} f^{\prime\prime}(l_{\small{0}})=0\qquad \mbox{or} \qquad h^{\prime\prime}(\frac{l_{\small{0}}}{\epsilon})=0. \end{equation}
As a consequence, the Ricci tensor is continuous at the junction. Let us note that the supplementary condition (\ref{16}) is not satisfied by the particular solution (\ref{10}). This was obvious from the beginning since the energy density $\sigma$ is constant in the interior of a straight string and drops sharply to zero at the boundary. The supplementary condition (\ref{16}) is physically natural for a thick relativistic cosmic string but not absolutely necessary. Its real usefulness is that the continuity of the Ricci tensor simplifies the calculations in the neighborhood of the boundary between the cap and the cone.

\section{Self-gravitating string of arbitrary shape and coordinate system}

In the previous section we described a straight self-gravitating string with some thickness. The aim of this section is to extend this construction to a string of arbitrary shape. If the string thickness is sufficiently small and if the central line of the string spans a sufficiently smooth worldsheet, we can suppose that the string approaches a straight one, and thus it can be locally characterised by a smooth conical point. This suggests determining the spacetime in the neighborhood of a small portion of the string in the following way: \\
- the interior metric is given by
\begin{equation} \label{17} ds_{\small{INT}}^{2}=g_{AB}(\tau^{A},l,\phi)d\tau^{A}d\tau^{B}+dl^{2}+ f^{2}(l)d\phi^{2} \qquad 0\leq l\leq l_{\small{0}} \end{equation}
where $f(l)=\epsilon h(l/\epsilon)$ was specified in Sec. II and the metric components $g_{AB}$ ($A,B=0,3$) are smooth; \\
- in the vicinity of the string, the exterior metric is given by
\begin{equation} \label{18} ds_{\small{EXT}}^{2}=g_{AB}(\tau^{A},l,\phi)d\tau^{A}d\tau^{B}+dl^{2}+ \sin^{2}\alpha\, (l-\stackrel{-}{l_{\small{0}}})^{2} d\phi^{2}\qquad l>l_{\small{0}}. \end{equation}
We note that {\em a priori} we have omitted cross terms in (\ref{17}) and (\ref{18}) in order to simplify the calculations. Metric (\ref{17}) is regular, but the coordinate system breaks down at $l=0$. We introduce coordinates $\rho^{a}$ $(a=1,2)$ which are well defined,
\begin{equation} \label{19} \rho^{1}=l\cos\phi\qquad {\rm and} \qquad\rho^{2}=l\sin\phi.
\end{equation}
Then, metrics (\ref{17}) and (\ref{18}) become
\begin{equation} \label{20} ds^{2}=g_{AB}(\tau^{A},\rho^{a})d\tau^{A}d\tau^{B}+ g_{ab}(\rho^{a})d\rho^{a}d\rho^{b}. \end{equation}
Let us introduce the function
\begin{equation} \label{21} r(l)=\frac{f^{2}(l)}{l^{4}}-\frac{1}{l^2} \quad {\rm with} \; l=\sqrt{(\rho^{1})^{2}+(\rho^{2})^{2}} \end{equation}
which is well defined and smooth in the interval $0\leq l\leq l_{\small{0}}$ from (\ref{4}). It has vanishing odd derivatives at $l=0$. The components $g_{ab}$ have the simple expression
\begin{equation} \label{22} g_{11}=1+r(l)(\rho^{2})^{2},\qquad g_{12}=-r(l)\rho^{1}\rho^{2},\qquad g_{22}=1+r(l)(\rho^{1})^{2} \end{equation}
in the interval $0\leq l \leq l_{\small{0}}$. It is interesting to note that $g_{ab}=\delta_{ab}+O(\rho^{2})$. There will be no need in the following to express explicitly $g_{ab}$ for $l>l_{0}$. The central line of the string defined by $l=0$ spans a timelike worldsheet parametrised by $\tau^{A}$ whose induced metric is simply
\begin{equation} \label{24} \gamma_{AB}(\tau^{A})=g_{AB}(\tau^{A},\rho^{a}=0). \end{equation}
A radial geodesic on the 2-surface $\tau^{A}=$ constant is also a geodesic of the spacetime, normal to the worldsheet at the point $P(\tau^{A})$. The coordinate $l$ represents the length along this geodesic measured from the point $P(\tau^{A})$. So, the 2-surface $\tau^{A}=$ constant is generated by the spacetime geodesics tangent at $P(\tau^{A})$ to a 2-plane orthogonal to the worldsheet. Hence we recognise in the coordinate system $(\tau^{A},\rho^{a})$ a known system (see for example \cite{boi}) in which $\rho^{a}$ are geodesic coordinates and from which it is easy to extract the extrinsic curvature $K_{aAB}$ of the worldsheet by expanding the metric components
\begin{equation} \label{25} g_{AB}(\tau^{A},\rho^{a})=\gamma_{AB}(\tau^{A}) +2K_{aAB}(\tau^{A})\rho^{a}+O(\rho^{2}). \end{equation}
As we shall see in Sec. V, the extrinsic curvature is involved in the dynamics of a self-gravitating string.

\section{Expansion of geometrical quantities}

The aim of this section is to expand in powers of $1/\epsilon$ geometrical quantities such as the metric, the connection and the Ricci tensor when $\epsilon$, which measures the thickness of the string, is small but nonzero. Let us point out that all the length parameters $\rho_{\small{0}}^{a},l_{\small{0}},\epsilon$ are of the same order since $l_{\small{0}}/\epsilon$ is constant. In the interval $0\leq l\leq l_{\small{0}}$ all these quantities depend on the function $r(l)$ through the metric components $g_{ab}(\rho^{a})$. However, since $r(l_{\small{0}})=O(1/\epsilon^{2})$, it is preferable to replace $r(l)$ by a function of $l/\epsilon$,
\begin{equation} \label{26} q( \frac{l}{\epsilon})=\epsilon^2 r(l)=\frac{\epsilon^{4}}{l^{4}}h^{2}( \frac{l}{\epsilon})-\frac{\epsilon^{2}}{l^{2}}. \end{equation}
It is evident that $q(l_{\small{0}}/\epsilon)$ is fixed and depends only on the angle $\alpha$. The same holds for its first and second derivatives $q^{\prime}(l_{\small{0}}/\epsilon)$ and $q^{\prime\prime}(l_{\small{0}}/\epsilon)$.
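As an illustration, not given in the text, for the constant-density profile $h(x)=\sin x$ of (\ref{10}) the definition (\ref{26}) gives
\[
q(x)=\frac{\sin^{2}x}{x^{4}}-\frac{1}{x^{2}}=-\frac{1}{3}+\frac{2x^{2}}{45}+O(x^{4}),
\]
which is smooth at $x=0$ with $q(0)=-1/3$, whereas $r(l)=q(l/\epsilon)/\epsilon^{2}$ itself is of order $1/\epsilon^{2}$ at $l=l_{\small{0}}$; this is precisely why the substitution (\ref{26}) is convenient for the expansions below.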
The metric components (\ref{22}), their inverse, and the determinant can now be written
\begin{equation} \label{27} g_{ab}=\delta_{ab}+q(\frac{l}{\epsilon})\epsilon_{a}^{\mbox{ }c} \epsilon_{b}^{\mbox{ }d}\frac{\rho_{c}\rho_{d}}{\epsilon^{2}}, \end{equation}
\begin{equation} \label{28} g^{ab}=\frac{\delta^{ab}+q(\frac{l}{\epsilon})\frac{\rho^{a}\rho^{b}} {\epsilon^{2}}}{1+\frac{l^{2}}{\epsilon^{2}}q(\frac{l}{\epsilon})}, \end{equation}
\begin{equation} \label{29} \stackrel{\wedge}{g}=\det(g_{ab})=1+\frac{l^{2}}{\epsilon^{2}} q(\frac{l}{\epsilon}) \end{equation}
where $\epsilon_{a}^{\mbox{ }c}$ is the totally antisymmetric Levi-Civita symbol and $\stackrel{\wedge}{(\mbox{ } )}$ stands for the induced geometrical quantities of the 2-dimensional smoothed cone. Then one immediately gets
\begin{equation} \label{30} (g_{ab})_{\small{l=l_{\small{0}}}}=O(1),\qquad (g^{ab})_{\small{l=l_{\small{0}}}}=O(1),\qquad (\stackrel{\wedge}{g})_{\small{l=l_{\small{0}}}}=O(1). \end{equation}
By direct calculation of the first derivatives one obtains
\begin{equation} \label{31} (g_{ab,c})_{\small{l=l_{\small{0}}}}=O(\frac{1}{\epsilon}),\qquad (g_{,\mbox{ }c}^{ab})_{\small{l=l_{\small{0}}}}=O(\frac{1}{\epsilon}). \end{equation}
It is also easy to see that
\begin{equation} \label{32} (g_{AB})_{\small{l=l_{\small{0}}}}=O(1),\qquad (g_{AB,d})_{\small{l=l_{\small{0}}}}=O(1),\qquad (g_{AB,C})_{\small{l=l_{\small{0}}}}=O(1). \end{equation}
Now we can calculate the order of the non-vanishing connection coefficients for $l=l_{\small{0}}$
\begin{equation} \label{33} \Gamma_{BC}^{A}=\frac{1}{2}g^{AD}(g_{DB,C}+g_{DC,B}-g_{BC,D})=O(1), \end{equation}
\begin{equation} \label{34} \Gamma_{aB}^{D}=\frac{1}{2}g^{AD}g_{AB,a}=O(1), \end{equation}
\begin{equation} \label{35} \Gamma_{AB}^{a}=-\frac{1}{2}g^{ab}g_{AB,b}=O(1), \end{equation}
\begin{equation} \label{36} \Gamma_{ab}^{c}=\frac{1}{2}g^{cd}(g_{da,b}+g_{db,a}-g_{ab,d})= \stackrel{\wedge}{\Gamma}_{ab}^{c}(\rho^{a})=O(\frac{1}{\epsilon}). \end{equation}
We can now deal with the Ricci tensor for $l=l_{\small{0}}$. The derivatives of (\ref{35}) will give terms of the order of $1/\epsilon$, whereas the derivatives of (\ref{31}) and (\ref{36}) will give terms of the order $1/\epsilon^{2}$. In obvious notation we obtain
\begin{equation} \label{37} R_{AB}=R_{AB}(\frac{1}{\epsilon})+R_{AB}(1) \end{equation}
in which we have
\begin{equation} \label{38} R_{AB}(\frac{1}{\epsilon})=\partial_{a}\Gamma_{AB}^{a}+\Gamma_{ad}^{a} \Gamma_{AB}^{d}, \end{equation}
\begin{equation} \label{39} R_{AB}(1)=\stackrel{*}{R}_{AB}+\Gamma_{aD}^{D}\Gamma_{AB}^{a}- \Gamma_{aB}^{C}\Gamma_{AC}^{a}-\Gamma_{BD}^{a}\Gamma_{Aa}^{D} \end{equation}
with $\stackrel{*}{R}_{AB}=\partial_{D}\Gamma_{AB}^{D}-\partial_{B}\Gamma_{AD}^{D} +\Gamma_{CD}^{D}\Gamma_{AB}^{C}-\Gamma_{BD}^{C}\Gamma_{AC}^{D}$. For the lower-case indices we get
\begin{equation} \label{41} R_{ab}=R_{ab}(\frac{1}{\epsilon^{2}})+R_{ab}(\frac{1}{\epsilon})+R_{ab}(1) \end{equation}
in which we have
\begin{equation} \label{42} R_{ab}(\frac{1}{\epsilon^{2}})=\stackrel{\wedge}{R}_{ab} \end{equation}
where $\stackrel{\wedge}{R}_{ab}$ is the Ricci tensor associated with the connection (\ref{36}) of the smoothed cone,
\begin{equation} \label{43} R_{ab}(\frac{1}{\epsilon})=\Gamma_{dD}^{D}\Gamma_{ab}^{d} \end{equation}
and
\begin{equation} \label{44} R_{ab}(1)=-\partial_{b}\Gamma_{Da}^{D}-\Gamma_{bD}^{C}\Gamma_{aC}^{D}.
\end{equation}
Finally, for the mixed indices we get
\begin{equation} \label{45} R_{aB}=R_{aB}(1)=\partial_{A}\Gamma_{aB}^{A}-\partial_{B}\Gamma_{Da}^{D}+\Gamma_{CD}^{C}\Gamma_{aB}^{D}-\Gamma_{BC}^{D}\Gamma_{aD}^{C}. \end{equation}
The curvature scalar $R$ for $l=l_{\small{0}}$ is given by
\begin{equation} \label{46} R=R(\frac{1}{\epsilon^{2}})+R(\frac{1}{\epsilon})+R(1) \end{equation}
where
\begin{equation} \label{47} R(\frac{1}{\epsilon^{2}})=\stackrel{\wedge}{R}_{ab}g^{ab} =\stackrel{\wedge}{R}, \end{equation}
\begin{equation} \label{48} R(\frac{1}{\epsilon})=\partial_{a}\Gamma_{AB}^{a}g^{AB}+\Gamma_{ad}^{a} \Gamma_{AB}^{d}g^{AB}+\Gamma_{ab}^{d}\Gamma_{Dd}^{D}g^{ab}, \end{equation}
\begin{equation} \label{49} R(1)=\stackrel{*}{R}+\Gamma_{aD}^{D}\Gamma_{AB}^{a}g^{AB}-\Gamma_{aB}^{C} \Gamma_{AC}^{a}g^{AB}-\Gamma_{BD}^{a}\Gamma_{Aa}^{D}g^{AB}-\partial_{b} \Gamma_{aD}^{D}g^{ab}-\Gamma_{bD}^{C}\Gamma_{Ca}^{D}g^{ab} \end{equation}
with $\stackrel{*}{R}=\stackrel{*}{R}_{AB}g^{AB}$. We have calculated the above quantities for $l=l_{\small{0}}$ since we need these estimates precisely at the boundary in order to obtain the equations of motion in the following section.

\section{Equations of motion of the string}

We now focus attention on the self-gravitating string of arbitrary shape described by metric (\ref{20}) in the coordinate system $(\tau^{A},\rho^{a})$, where the function $h$ is specified in Sec. II. We suppose that the exterior spacetime is a vacuum solution: $R_{\alpha\beta}=0$ $(\alpha=A,a)$ for $l>l_{\small{0}}$. The energy-momentum tensor $T_{\alpha\beta}$ of the extended string is the source of the Einstein equations
\begin{equation} \label{52} R_{\alpha\beta}-\frac{1}{2}g_{\alpha\beta}R=8\pi G T_{\alpha\beta},\qquad 0\leq l\leq l_{\small{0}}. \end{equation}
As a consequence of the matching conditions adopted, the Ricci tensor must be continuous at the junction $l=l_{\small{0}}$. So, the interior Einstein equations (\ref{52}) coincide with the vacuum Einstein equations at $l=l_{0}$,
\begin{equation} \label{53} (R_{\alpha\beta})_{l_{\small{0}}}=0. \end{equation}
These boundary conditions impose some constraints on the timelike worldsheet swept by the central line of the string. We shall examine (\ref{53}) when the parameter $\epsilon$ is arbitrarily small. According to (\ref{41})-(\ref{44}), the components $(a,b)$ of equations (\ref{53}) are written
\begin{equation} \label{54} (R_{ab})_{l_{\small{0}}}=(\Gamma_{dD}^{D})_{l_{\small{0}}}( \Gamma_{ab}^{d})_{l_{\small{0}}}+(R_{ab}(1))_{l_{\small{0}}}=0, \end{equation}
equation (\ref{54}) being obtained by noting from (\ref{42}) that $[R_{ab}(\frac{1}{\epsilon^{2}})]_{l_{\small{0}}}= (\stackrel{\wedge}{R}_{ab})_{l_{\small{0}}}=0$ since the junction $l=l_{0}$ belongs to the cone. We need to calculate the limit of the connection coefficient $\Gamma_{dD}^{D}$ appearing in equation (\ref{54}). From (\ref{34}), (\ref{24}), (\ref{25}) and since $l_{\small{0}}/\epsilon$ is constant, we obtain
\begin{equation} \label{56} \lim_{\epsilon\rightarrow 0}(\Gamma_{Dd}^{D})_{l_{\small{0}}}= \lim_{l_{\small{0}}\rightarrow 0}(\Gamma_{Dd}^{D})_{l_{\small{0}}}= \gamma^{AB}K_{dAB}=K_{d} \end{equation}
where $K_{d}$ is the mean curvature of the timelike worldsheet. On the other hand, we know from (\ref{36}) that $(\Gamma_{ab}^{d})_{l_{\small{0}}}$ is of order $1/\epsilon$.
Thus, the limit $\epsilon\rightarrow 0$ of equation (\ref{54}) multiplied by $\epsilon$ gives
\begin{equation} \label{57} F_{ab}^{d}(k^{a})K_{d}=0\quad {\rm with} \quad k^{a}= \frac{\rho_{\small{0}}^{a}}{l_{\small{0}}} \end{equation}
where
\begin{equation} \label{58} F_{ab}^{d}(k^{a})=\lim_{\epsilon\rightarrow 0}[\epsilon( \Gamma_{ab}^{d})_{l_{\small{0}}}] \end{equation}
is finite. Equation (\ref{57}) is independent of $\epsilon$ and $l_{\small{0}}$ and depends only on the azimuthal angle $\phi$ on the boundary or, equivalently, as indicated, on the unit 2-vector $k^{a}$. In the same way, using (\ref{37})-(\ref{39}) along with (\ref{35}), (\ref{36}), we obtain the zero limit of $\epsilon$ for the $(A,B)$ components of equations (\ref{53}),
\begin{equation} \label{59} F^{b}(k^{a})K_{bAB}=0\quad {\rm with} \quad k^{a}= \frac{\rho_{\small{0}}^{a}}{l_{\small{0}}} \end{equation}
where
\begin{equation} \label{60} F^{b}(k^{a})=\lim_{\epsilon\rightarrow 0}[\epsilon(\partial_{a}g^{ab}+ \Gamma_{ad}^{a}g^{db})_{l_{\small{0}}}] \end{equation}
is finite. By introducing the quantities independent of $\epsilon$
\begin{equation} \label{61} A=q(\frac{l_{\small{0}}}{\epsilon})\frac{l_{\small{0}}^{2}}{\epsilon^{2}} \quad {\rm and} \quad B=q^{\prime}(\frac{l_{\small{0}}}{\epsilon}) \frac{l_{\small{0}}^{3}}{\epsilon^{3}}, \end{equation}
after some algebra we can express $F^{b}(k^{a})$ in the following way
\begin{equation} \label{62} F^{b}=\frac{\epsilon}{l_{\small{0}}}\, \frac{1}{1+A}\, [2A+ \frac{1}{2}B] k^{b}. \end{equation}
Let us note that the mixed components $(a,A)$ of (\ref{53}) give no constraint. If the coefficients $F_{ab}^{d}(k^{a})$ and $F^{b}(k^{a})$ are non-vanishing, which is the general case, equations (\ref{57}) and (\ref{59}) yield respectively
\begin{equation} \label{63} K_{d}=0, \end{equation}
\begin{equation} \label{64} K_{bAB}=0. \end{equation}
Equation (\ref{64}) means that, in the zero limit of $\epsilon$, the worldsheet swept by the central line of the string is totally geodesic. This of course implies (\ref{63}). We recover the situation described in the literature \cite{vic,fro,unr,cla,isr2} when the string is a self-gravitating singular line. However, we see from (\ref{62}) that if we impose the condition
\begin{equation} \label{65} B=-4A, \end{equation}
then the coefficient multiplying the extrinsic curvature vanishes and thus constraint (\ref{64}) disappears. Hence we are left with constraint (\ref{63}), which expresses that in the zero limit of $\epsilon$ the worldsheet is minimal. In other words, the worldsheet has the same evolution as a Nambu-Goto string. We must verify, however, that when condition (\ref{65}) is imposed, equations (\ref{57}) do not also disappear. Equations (\ref{57}) are explicitly expressed as
\begin{eqnarray} \label{66} \nonumber & & k^{2}[P+Q(k^{1})^{2}]K_{1}+k^{1}[P+Q(k^{2})^{2}]K_{2}=0, \\ & & -k^{1}Q(k^{2})^{2}K_{1}-k^{2}[2P+Q(k^{2})^{2}]K_{2}=0, \\ \nonumber & & -k^{1}[2P+Q(k^{1})^{2}]K_{1}-k^{2}Q(k^{1})^{2}K_{2}=0 \end{eqnarray}
for all $k^{a}$, where $P=B+2A$ and $Q=-B+AB+4A^{2}$. If $A$ and $B$ satisfy condition (\ref{65}), then the three equations (\ref{66}) reduce to a single one, $k^{2}K_{1}-k^{1}K_{2}=0$ for all $k^{a}$, and hence (\ref{63}) is again obtained. By their definition (\ref{61}), the quantities $A$ and $B$ are independent of the length parameters and depend only on the angle $\alpha$. Thus we can express condition (\ref{65}) in terms of the angle $\alpha$ by using (\ref{13}) and (\ref{26}).
We obtain the simple expression
\begin{equation} \label{71} \sin\alpha\cos\alpha=\frac{l_{\small{0}}}{\epsilon}. \end{equation}
This relation is a supplementary constraint on $h$, as will be explained in the conclusion.

\section{Energy-momentum tensor of the string}

The energy-momentum tensor $S_{\alpha\beta}$ of the string is obtained by integrating $T_{\alpha\beta}$ over the section $\tau^{A}=$ constant of the string,
\begin{equation} \label{72} S_{\alpha\beta}=\int_{l\leq l_{\small{0}}}T_{\alpha\beta} \sqrt{\stackrel{\wedge}{g}}\, d\rho^{1}\, d\rho^{2} \end{equation}
where now $\epsilon$ and thus $l_{\small{0}}$ are small but fixed. From the Einstein equations (\ref{52}) and the algebraic expressions of the Ricci tensor given in Sec. IV, we can write in obvious notation
\begin{equation} \label{73} T_{\alpha\beta}(\tau^A,\rho^a)=T_{\alpha\beta}(\frac{1}{\epsilon^{2}})+T_{\alpha\beta}(\frac{1}{\epsilon})+T_{\alpha\beta}(1)\qquad l\leq l_{\small{0}}. \end{equation}
We first apply formula (\ref{72}) to the $(A,B)$ components. Only the first term of (\ref{73}) will yield a non-vanishing finite integral when $\epsilon$ tends to $0$, since the volume element of the smoothed cone is of the order $l_{\small{0}}^{2}$:
\begin{equation} \label{74} S_{AB}=\int_{l\leq l_{\small{0}}}T_{AB}(\frac{1}{\epsilon^{2}}) \sqrt{\stackrel{\wedge}{g}}\, d\rho^{1}\, d\rho^{2}+O(\epsilon ). \end{equation}
Now we have
\begin{equation} \label{75} T_{AB}(\frac{1}{\epsilon^{2}})=-\frac{1}{16\pi G}g_{AB} \stackrel{\wedge}{R}=-\frac{1}{8\pi G}g_{AB}K \end{equation}
where $K$ is the Gauss curvature, therefore
\begin{equation} \label{76} S_{AB}=-\frac{1}{8\pi G} \int_{l\leq l_{\small{0}}}g_{AB}K\sqrt{\stackrel{\wedge}{g}}\, d\rho^{1}\, d\rho^{2}+O(\epsilon). \end{equation}
Since $l_{\small{0}}$ is small, we can also approximate the metric $g_{AB}$ by the expansion (\ref{25}). We finally obtain
\begin{equation} \label{77} S_{AB}=-\frac{1}{8\pi G}\gamma_{AB}\int_{l\leq l_{\small{0}}}K\sqrt{\stackrel{\wedge}{g}}\, d\rho^{1}d\rho^{2}+O(\epsilon)=-\mu\gamma_{AB}+O(\epsilon) \end{equation}
where $\mu$ is the linear mass density. The Gauss-Bonnet formula gives
\begin{equation} \label{78} \int_{l\leq l_{0}}K\sqrt{\stackrel{\wedge}{g}}d\rho^{1}d\rho^{2}=2\pi (1-\sin \alpha ) \end{equation}
and consequently $\mu$ is related to $\alpha$ by formula (\ref{9}). This result is not obvious a priori. It originates specifically in the choice of the form of the metric (\ref{17}), where $f$ is given by (\ref{12}). A general discussion of the problem of obtaining the energy-momentum tensor for a concentrated distribution of matter can be found in \cite{ger}. The integral of the $(a,A)$ components obviously vanishes as $\epsilon^{2}$. For the integral of the $(a,b)$ components, the finite part vanishes since ${\stackrel{\wedge}{R}}_{ab}-\frac{1}{2}g_{ab}{\stackrel{\wedge}{R}}=0$ for a 2-dimensional manifold; thus this integral vanishes as $\epsilon$. Discarding terms in $\epsilon$, $S_{\alpha\beta}$ reduces to $S_{AB}$ given by (\ref{77}), which is the energy-momentum tensor of the Nambu-Goto string. This last result, which is proved independently of the equations of motion, is consistent with them. It can justify {\em a posteriori} the definition of a self-gravitating string as a smoothed cone with metrics (\ref{17}) and (\ref{18}).

\section{Conclusion}

In this paper we have investigated the dynamics of a self-gravitating string in the limit where the thickness $\epsilon$ becomes negligible.
In the generic case, the worldsheet swept by the string (more precisely, by the central line of the string) is a totally geodesic surface (the extrinsic curvature of the worldsheet vanishes). This result could have been anticipated, since strings defined as singular lines of conical points have precisely this property \cite{vic,fro,unr,cla}. However, we have found another interesting possibility: by imposing relation (\ref{71}), the extrinsic curvature is no longer constrained and only the mean curvature vanishes. This expresses that the worldsheet is extremal, which is the behaviour of the Nambu-Goto string. If we suppose that the linear mass density $\mu$ is given, then the angle $\alpha$ is fixed by equation (\ref{78}). It is easily seen that relation (\ref{71}) is a constraint on the function $h$; it can be rewritten
\begin{equation} \label{90} h(\frac{l_{0}}{\epsilon})h'(\frac{l_{0}}{\epsilon})=\frac{l_{0}}{\epsilon}. \end{equation}
The above equation is an algebraic relation taken at the fixed value $l_{0}/\epsilon$, which is added to the matching conditions (\ref{13}) and (\ref{16}). It is easily shown that such a function $h$ can be found in polynomial form. The simplest solution is an odd polynomial (cf. (4)) of degree seven:
\[ h(\frac{l_{0}}{\epsilon})=\frac{l_{0}}{\epsilon}+b_{3}(\frac{l_{0}}{\epsilon})^3+b_{5}(\frac{l_{0}}{\epsilon})^5+b_{7}(\frac{l_{0}}{\epsilon})^7. \]
The four unknown quantities $l_{0}/\epsilon$, $b_3$, $b_5$ and $b_7$ can be determined as functions of the angle $\alpha$ (or, equivalently, as functions of the linear mass density $\mu$ of the string) by the four equations (\ref{13}), (\ref{16}) and (\ref{90}). Of course, by taking a polynomial of higher degree one obtains an infinity of solutions. It would probably be interesting to investigate the physical meaning of the constraint (\ref{90}) on the matter of the string. By using the method described in Sec. IV, we can evaluate the Riemann tensor $(R_{\alpha\beta\gamma\delta})_{l_{\small{0}}}$ at the junction $l=l_{0}$, or equivalently the Weyl tensor since the Ricci tensor (\ref{53}) vanishes there. This gives the magnitude of the gravitational field near the thin cosmic string. In the generic case, where the extrinsic curvature $K_{aBC}$ vanishes, the Riemann tensor remains bounded as the radius $l_{0}$ becomes arbitrarily small. On the contrary, when only the mean curvature $K_a$ vanishes, the Riemann tensor components $(R_{aBCd})_{l_{\small{0}}}$ blow up as $1/\epsilon$, {\it i.e.} as $1/l_{0}$, when the radius $l_{0}$ tends to zero. We see that the gravitational aspect is completely different in the latter case. Let us add that if we require that the Riemann tensor remain bounded, then we necessarily fall into the generic case.

\newpage
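As a closing numerical illustration, not contained in the original text: once $\alpha$ is given, (\ref{71}) fixes $l_{0}/\epsilon=\sin\alpha\cos\alpha$, and the matching conditions (\ref{13}), (\ref{16}) then form a linear system for $(b_{3},b_{5},b_{7})$. A Python sketch, with illustrative names:
\begin{verbatim}
import numpy as np

def cap_profile_coefficients(alpha):
    """Coefficients (b3, b5, b7) of h(x) = x + b3 x^3 + b5 x^5 + b7 x^7
    satisfying h(x0) = cos(alpha), h'(x0) = sin(alpha), h''(x0) = 0,
    where x0 = l0/eps = sin(alpha) cos(alpha) follows from h(x0) h'(x0) = x0."""
    s, c = np.sin(alpha), np.cos(alpha)
    x0 = s * c
    M = np.array([[x0**3,     x0**5,      x0**7],        # h(x0)   = cos(alpha)
                  [3 * x0**2, 5 * x0**4,  7 * x0**6],    # h'(x0)  = sin(alpha)
                  [6 * x0,    20 * x0**3, 42 * x0**5]])  # h''(x0) = 0
    rhs = np.array([c - x0, s - 1.0, 0.0])
    return x0, np.linalg.solve(M, rhs)

x0, (b3, b5, b7) = cap_profile_coefficients(alpha=np.pi / 3)
\end{verbatim}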